On Monday 20 February 2006 11:51, brettholcomb@×××××××××.net wrote about
'Re: Re: [gentoo-user] raid/partition question':
> As an extension of this question, since I'm working on setting up a
> system now:
>
> What is better to do with LVM2 after the RAID is created? I am using
> EVMS also.
>
> 1. Make all the RAID freespace a big LVM2 container and then
> create LVM2 volumes on top of this big container.
>
> or
>
> 2. Parcel out the RAID freespace into LVM2 containers for each partition
> (/, /usr, etc.).

3. Neither. See below. First, a discussion of the two options.

1. Is fine, but it forces you to choose a single raid level for all your
data. I like raid 0 for filesystems that are used a lot but can easily
be reconstructed given time (/usr), and especially for filesystems that
don't need to be reconstructed at all (/var/tmp); raid 5 or 6 for large
filesystems that I don't want to lose (/home, particularly); and raid 1
for critical, but small, filesystems (/boot, maybe).
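
For reference, if you're using Linux software raid, the per-purpose
arrays might be created along these lines (just a sketch; the device
names and disk counts are made up for illustration, adjust to your
hardware):
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3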

2. Is a little silly, since LVM is designed so that you can treat multiple
pvs as a single pool of data OR you can allocate from a certain pv --
whatever suits the task at hand. So, it rarely makes sense to have
multiple volume groups; you'd only do this when you want a fault-tolerant
"air-gap" between two filesystems.
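
To illustrate the pooling (vg and device names are hypothetical), both
arrays become pvs in one vg, and the lvcreate forms further down let you
pick which pv backs each lv:
pvcreate /dev/md0 /dev/md1
vgcreate myvg /dev/md0 /dev/md1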

Failure of a single pv in a vg will require some damage control, maybe a
little, maybe a lot, but having production encounter any problems just
because development had a disk go bad is unacceptable in many
environments. So, you have a strong argument for separate vgs there.
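
If you do want that air-gap, it's just a matter of separate vgs, one per
array (names hypothetical again):
vgcreate prodvg /dev/md0
vgcreate devvg /dev/md1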

3. My approach: While I don't use EVMS (the LVM tools are fine with me, at
least for now), I have a software raid 0 and a hw raid 5 as separate pvs
in a single vg. I create and expand lvs on the pv that suits the data. I
also have a separate (not under lvm) hw raid 0 for swap and hw raid 6 for
boot. I may migrate my swap to LVM in the near future; during my initial
setup, I feared it was unsafe. Recent experience tells me that's (most
likely) not the case.

For the uninitiated, you can specify the pv to place lv data on like so:
lvcreate -L <size> -n <name> <vg> <pv>
lvresize -L <size> <vg>/<lv> <pv>
The second command only affects where new extents are allocated; it will
not move old extents; use pvmove for that.
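
For example, to move an existing lv's extents off one pv and onto another
(same placeholder syntax as above):
pvmove -n <lv> <old pv> <new pv>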

--
Boyd Stephen Smith Jr.
bss03@××××××××××.com
ICQ: 514984 YM/AIM: DaTwinkDaddy
--
gentoo-user@g.o mailing list