Thank you very much. I'll need to go back and reread this and digest it some more. I hadn't thought of doing multiple RAID types on the drives. I have two drives and did RAID1 for /boot, and I was going to RAID1 the rest. However, I really want RAID0 for speed and capacity on some filesystems. The swap comment is interesting, too. I have two small partitions for swap, one on each drive, and I was going to parallel them per one of DRobbins' articles.
|
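In case it helps anyone else reading the archives: as I understand that
article, the parallel swap setup is just both swap partitions given equal
priority in /etc/fstab, so the kernel stripes across them. Untested
sketch; sda2/sdb2 are only my guesses at device names:

  /dev/sda2   none   swap   sw,pri=1   0 0
  /dev/sdb2   none   swap   sw,pri=1   0 0

swapon -s should then show both active at priority 1.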
> |
> From: "Boyd Stephen Smith Jr." <bss03@××××××××××.com>
> Date: 2006/02/20 Mon PM 01:30:59 EST
> To: gentoo-user@l.g.o
> Subject: Re: [gentoo-user] raid/partition question
> |
> On Monday 20 February 2006 11:51, brettholcomb@×××××××××.net wrote about
> 'Re: Re: [gentoo-user] raid/partition question':
> > As an extension of this question since I'm working on setting up a
> > system now.
> > |
>
> 3. Neither. See below. First, a discussion of the two options.
> |
> 1. Is fine, but it forces you to choose a single raid level for all your
> data. I like raid 0 for filesystems that are used a lot, but can easily
> be reconstructed given time (/usr) and especially filesystems that don't
> need to be reconstructed (/var/tmp), raid 5 or 6 for large filesystems
> that I don't want to lose (/home, particularly), and raid 1 for critical,
> but small, filesystems (/boot, maybe).
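That mix makes sense for me. With only two drives, raid 5/6 isn't an
option anyway (raid 5 needs at least 3 devices, raid 6 at least 4), so
my version would come down to something like this. Untested, and the
partition names are just guesses at my layout:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3

with md0 (raid 1) for /boot and md1 (raid 0) for the speed/capacity
filesystems.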
> |
> 2. Is a little silly, since LVM is designed so that you can treat multiple
> pvs as a single pool of data OR you can allocate from a certain pv --
> whatever suits the task at hand. So, it rarely makes sense to have
> multiple volume groups; you'd only do this when you want a fault-tolerant
> "air-gap" between two filesystems.
> |
> Failure of a single pv in a vg will require some damage control, maybe a
> little, maybe a lot, but having production encounter any problems just
> because development had a disk go bad is unacceptable in many
> environments. So, you have a strong argument for separate vgs there.
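So for a production/development split, the "air-gap" layout would be
roughly the following, assuming one pv per environment (all names made
up):

  pvcreate /dev/md2 /dev/md3
  vgcreate vg_prod /dev/md2
  vgcreate vg_dev /dev/md3

as opposed to a single "vgcreate vg0 /dev/md2 /dev/md3" pool, where one
failed pv can take lvs from both environments with it.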
> |
> 3. My approach: While I don't use EVMS (the LVM tools are fine with me, at
> least for now), I have a software raid 0 and a hw raid 5 as separate pvs in
> a single vg. I create and expand lvs on the pv that suits the data. I
> also have a separate (not under lvm) hw raid 0 for swap and hw raid 6 for
> boot. I may migrate my swap to LVM in the near future; during my initial
> setup, I feared it was unsafe. Recent experience tells me that's (most
> likely) not the case.
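If I follow, the single-vg version of that is roughly this, with made-up
names (md0 as the software raid 0, sda1 as how the kernel sees the hw
raid 5):

  pvcreate /dev/md0 /dev/sda1
  vgcreate vg0 /dev/md0 /dev/sda1

and then each lv gets created on whichever pv suits it, per the commands
below.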
> |
> For the uninitiated, you can specify the pv to place lv data on like so:
>   lvcreate -L <size> -n <name> <vg> <pv>
>   lvresize -L <size> <vg>/<lv> <pv>
> The second command only affects where new extents are allocated; it will
> not move old extents. Use pvmove for that.
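For the archives, pvmove usage would be something like:

  pvmove -n <lv> /dev/md0 /dev/sda1

to shift an existing lv's extents from the raid 0 pv to the raid 5 pv;
a bare "pvmove /dev/md0" evacuates everything off that pv.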
> |
> --
> Boyd Stephen Smith Jr.
> bss03@××××××××××.com
> ICQ: 514984 YM/AIM: DaTwinkDaddy
> --
> gentoo-user@g.o mailing list
|
--
gentoo-user@g.o mailing list |