Boyd Stephen Smith Jr. posted <200604252300.53210.bss03@××××××××××.net>,
excerpted below, on Tue, 25 Apr 2006 23:00:47 -0500:

> On Tuesday 25 April 2006 20:53, Duncan <1i5t5.duncan@×××.net> wrote about
> '[gentoo-amd64] Re: Re: Re: Giving up 64 platform':
>> Of course, folks like me would then want to rearrange the partitions a
>> bit, but given today's 200 gig plus hard drives, copying a few
>> partitions around is easier than setting up a new installation from
>> stage-X.
>
> Don't use partitions! Use LVM!
>
> Okay, sure, you still need a partition for /boot (and then a partition for
> the rest of the drive, which is controlled by LVM), but that's only on one
> drive.

Well, I am using LVM, but not for / either, because that seriously
complicates the boot process. I have / (and my rootbak partition as well)
on kernel md RAID (partitioned RAID, actually), which is acceptable, as
the kernel itself (with a few added command line parameters, if necessary)
handles md/RAID. However, the kernel can't yet mount LVM volumes directly
without help from userspace, so if / is LVM managed, an initramfs/initrd
is required, which makes things MUCH more complicated.
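
For illustration, the "few added command line parameters" amount to
something like the grub.conf entry below. Only a sketch: the kernel image
name, which md device the root array ends up as, and the exact member
partitions are assumptions to adjust to your own setup.

  # root is the first partition of the partitioned md array assembled
  # from partition two of each drive -- no initramfs/initrd involved
  title  Gentoo, root on partitioned md RAID
  root   (hd0,0)
  kernel /bzImage root=/dev/md_d0p1 md=d0,/dev/sda2,/dev/sdb2,/dev/sdc2,/dev/sdd2
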
Layout here:

4-SATA-hard-drive RAID, with each physical drive identically partitioned.
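
In outline, each drive looks like this (calling the drives sda through
sdd here; only the sizes I actually mention below are listed):

  sdX1  md RAID-1, unpartitioned   /boot, mirrored on all four drives
  sdX2  md RAID-6, partitioned     /, rootbak, then the LVM volumes
  sdX3  swap, 4 gig                equal priority on all four = striping
  sdX4  extended partition         just the container for #5
  sdX5  md RAID-0, partitioned     ex-/tmp; portage tree, usrsrc, ccache
        ~100+ gig                  left entirely unpartitioned
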
The first partition on each drive is an unpartitioned RAID-1 for /boot.
Because RAID-1 is mirroring, /boot is mirrored across all four drives,
and after installing grub to the boot sector of each one, I can boot grub
from any of them, independent of the others.
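
Roughly, that part of the setup looks like this -- a sketch only, assuming
the drives are sda through sdd and the mirror ends up as /dev/md0:

  # 4-way mirror across the first partition of each drive, for /boot
  mdadm --create /dev/md0 --level=1 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # install grub's stage1 to the boot sector of *each* drive, so any
  # one of them boots on its own; repeat for sdb, sdc and sdd
  grub
  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit
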
Partition two on each drive is part of a partitioned RAID-6, so any two
of the four drives can drop out without disruption. The first two
partitions on the RAID-6 are / and rootbak, identically sized and
containing /usr and /var (minus selected subdirs of each), such that the
OS image on each is self-consistent with the portage database in
/var/db/pkg on each. (I learned that lesson the hard way, when I had
/usr on a separate partition and lost it, and the backup /usr was old, so
the portage database was tracking the wrong versions of everything
installed to /usr, saying they were all up to date when they weren't.) The
third partition on the RAID-6 is LVM, which contains logical volumes for
the packages tree (by default /usr/portage/packages, IIRC, with
FEATURES=buildpkg) and its backup image, /home and its backup, a mail
volume and backup, a news volume (no backup), a media volume and backup, a
log volume (/var/log, no backup), etc. The arrangement is such that LVM
need not be started until after / (or the backup root, if I have problems)
is mounted, so no initramfs is needed. Further, should there ever be a
problem with LVM, I can at least boot to a functional root filesystem for
maintenance -- I appreciated the wisdom of this recently, as there were
some problems with ~arch LVM updates, causing folks with root on LVM to
have problems that either wouldn't touch me or would be far less severe.
(Again, note that RAID is kernel dependent only, so as long as I keep a
known working RAID kernel around, there's no problem getting / mounted.
/ on LVM is an entirely different story!)
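
The RAID-6 plus LVM side, again only as a sketch (the array name, volume
group name and sizes are made up, and mkfs and friends are left out):

  # partitionable RAID-6 over partition two of all four drives; once
  # created, it gets partitioned with fdisk just like a plain disk,
  # and its partitions show up as /dev/md_d1p1, p2, p3 ...
  mdadm --create /dev/md_d1 --auto=mdp --level=6 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

  # p1 = /, p2 = rootbak, p3 = the LVM physical volume
  pvcreate /dev/md_d1p3
  vgcreate vg /dev/md_d1p3
  lvcreate -L 20G -n home vg      # /home
  lvcreate -L 20G -n homebak vg   # its backup
  lvcreate -L 10G -n log vg       # /var/log, no backup
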
The third partition on each physical drive is my swap partition, 4 gig
each drive, 16 gig total. With swap priority specifically set equal for
all four, the kernel manages them as striped swap, and yes, that /does/
make swap /much/ faster! (Not that I use it since I upgraded to 8 gig of
RAM anyway, but before that, when I had only a gig of real RAM, I
definitely appreciated the performance benefits of striped swap.)
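
Setting the priorities equal is just the pri= option in /etc/fstab
(device names assumed); with all four the same, the kernel round-robins
pages across them, which is where the striping effect comes from:

  /dev/sda3   none   swap   sw,pri=1   0 0
  /dev/sdb3   none   swap   sw,pri=1   0 0
  /dev/sdc3   none   swap   sw,pri=1   0 0
  /dev/sdd3   none   swap   sw,pri=1   0 0
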
Partition 4 on each drive is reserved as the extended partition, in case I
need to add more than the one additional partition I actually use. That
makes the additional partition #5, partitioned RAID-0, striped across the
four drives for speed at the cost of redundancy. Until I upgraded to 8 gig
RAM, my /tmp was the first RAID-0 partition. It now remains unmounted by
default, as my /tmp is now tmpfs based, limited to 6 gig max. I don't
believe I've ever reached that, or caused swapping of any kind for that
matter; I've had 6 or 7 gig of the 8 gig in use at times, but mostly as
cache, which is how tmpfs usage registers until it starts swapping. The
second partition on the RAID-0 contains three subdirs, each representing
some place in the system holding data that's either easily recovered or
easily rebuilt. p contains the portage tree (default /usr/portage); of
course, that's easily redownloadable in the event of a disk malfunction
that would take out the RAID-0. Likewise, usrsrc is symlinked from
/usr/src, and contains the various kernel trees I'm currently using or
have kept since my last cleanup, plus a tarballs dir containing the kernel
tarballs. Again, this data is easily redownloadable should the RAID-0
fail, so lack of redundancy isn't an issue. The third subdir on RAID-0's
partition two is ccache: nice, but rebuildable from scratch if necessary.
It's a very nice coincidence that all three of these happen to be
performance critical as well, particularly the portage tree and ccache,
and striped RAID-0 optimizes performance, at the cost of data redundancy
safety, which isn't needed for these anyway.
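
The tmpfs /tmp with its 6 gig cap is a single fstab line, something like
this (the extra options are just what I'd consider sensible, not gospel):

  tmpfs   /tmp   tmpfs   size=6g,nodev,nosuid   0 0
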
Even allowing for plenty of spare room on all the RAIDs and each RAID
partition, with the disks being 300 gig each, I've a bit over 100 gig left
entirely unpartitioned on each one. That gives me plenty of room to move
partitions around and/or expand them, if necessary. Naturally, if I need
to expand the LVM beyond its current bounds, that will be easiest: just
add a new set of RAID-6 partitions and add them to the LVM. However, with
the area unpartitioned, it remains flexible enough to allow moving the
other partitions around if necessary, or to add more. Further, while I
have well over 16 data volumes (partitions or logical volumes), nothing is
getting close to the limit of 16 partitions per SCSI disk imposed by the
kernel device numbering, or the similar limit on RAID (I don't know if
it's 16 or higher, but it's not lower). Back on IDE/PATA, the limit is 64
partitions. I think I had 20-some at one point, before I switched to
RAID-0/1/6+LVM2-on-4-300G-SATA-disk. I'm glad I made the switch. It has
been well worth it!
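
For the record, growing the LVM that way is only a handful of commands;
roughly, with the same assumed names as above and a hypothetical new
RAID-6 partition p4:

  # turn the new RAID-6 partition into a physical volume and grow the
  # volume group with it
  pvcreate /dev/md_d1p4
  vgextend vg /dev/md_d1p4

  # then grow whichever logical volume needs the room, plus the
  # filesystem on it (ext2/3 here; other filesystems have their own tools)
  lvextend -L +10G /dev/vg/home
  resize2fs /dev/vg/home
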
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman in
http://www.linuxdevcenter.com/pub/a/linux/2004/12/22/rms_interview.html


--
gentoo-amd64@g.o mailing list