Gentoo Archives: gentoo-amd64

From: Duncan <1i5t5.duncan@×××.net>
To: gentoo-amd64@l.g.o
Subject: [gentoo-amd64] Re: SATA2 Recommendations
Date: Tue, 24 Jan 2006 14:40:23
Message-Id: pan.2006.01.24.14.36.41.737901@cox.net
In Reply to: Re: [gentoo-amd64] SATA2 Recommendations by Jordi Molina
Jordi Molina posted
<647a40580601231504v7d5ca83sf383df73f5857b42@××××××××××.com>, excerpted
below, on Tue, 24 Jan 2006 00:04:06 +0100:

> I installed gentoo from another livecd and then compiled the kernel
> and the initrd image to support sata_nv.
>
> It boots fine for me. Forget about using the nvraid, it's not
> hardware, so if you need it, go sw raid or buy a decent RAID card.

As someone else stated, my info doesn't quite fit your scenario, but it
can add to the list. I agree with the above: go software RAID,
preferably the kernel's built-in RAID.

I'm running an older (I believe SATA-1) Silicon Image 3114, on a dual
Opteron Tyan s2885. Attached to it, I have four Seagate SATA-2 300 gig
drives, in a mixed-RAID configuration, all using the kernel's software
RAID. Because I'm using kernel RAID, I don't have to worry about
hardware-RAID compatibility or the like when this system dies or I
simply decide to upgrade. Building a new kernel with the standard SATA
chipset drivers and installing it to /boot, before unplugging my drives
and plugging them into the new system, should be all I need to do to
port to a new SATA chipset.

As I mentioned, four drives, mixed RAID, arranged as follows. A small
RAID-1 to boot off of. Since RAID-1 is a direct mirror, I can install
GRUB to the MBR of all four drives, and can boot to GRUB from any of
the four by just switching the BIOS to the one I want to boot. GRUB
doesn't do RAID, but it sees each of the mirrors individually, which is
all it needs to find the kernel mirrored on each one and boot it.

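For reference, installing GRUB to each drive's MBR can be done from the
GRUB-legacy shell; a minimal sketch, assuming the BIOS device names
(hd0) through (hd3) map to the four SATA drives and the /boot mirror is
the first partition on each -- those mappings are assumptions, not from
the post:

```shell
# Repeat for each of the four drives: (hd0)/sda, (hd1)/sdb, etc.
# Check actual mappings in /boot/grub/device.map first.
grub> device (hd0) /dev/sda
grub> root (hd0,0)        # the /boot mirror on this drive
grub> setup (hd0)         # write GRUB to this drive's MBR
```

With that done on all four drives, any one of them is bootable from the
BIOS boot-order menu.
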
My main system is RAID-6 over the four drives, which means any one of
the four can die and I'll remain up with little speed degradation; a
second one can die and I'll still have my data, though at significantly
reduced speed, until I recover to at least three drives. I was
originally thinking about RAID-5 with a hot spare, but decided RAID-6
without a hot spare is effectively the same thing, only with more
protection, because the second drive can die before the hot spare could
have been brought online, and I'll still be fine.

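Creating a four-disk kernel RAID-6 like that is a one-liner with mdadm;
a sketch only, with the md number and partition names assumed rather
than taken from the post:

```shell
# 4-disk RAID-6: capacity of two disks, any two disks may fail.
# Partition names are hypothetical.
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Watch the initial sync (and any later recovery) progress.
cat /proc/mdstat
```
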
Stuff like /tmp, /var/tmp, and the portage tree and distdir are on
RAID-0, to maximize speed and space usage, because that's either
non-critical data or stuff that can be redownloaded off the net fairly
quickly in any case. I have swap distributed across the four drives as
effectively a RAID-0 as well: the swap partitions are all set to the
same priority, which allows the kernel to manage them effectively as
RAID-0. If a drive or two dies, therefore, I'll go down, but can come
right back up by simply reconfiguring the swap and the RAID-0 for two
or three drives instead of four, remaking the RAID-0, and running that
way if necessary until I can procure another 300 gig drive or two to
get back to normal operation.

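The equal-priority swap trick is just identical pri= values in
/etc/fstab; partition names here are hypothetical:

```shell
# /etc/fstab -- four swap partitions at the same priority, so the
# kernel stripes swap across them much like a RAID-0.
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
/dev/sdc2   none   swap   sw,pri=1   0 0
/dev/sdd2   none   swap   sw,pri=1   0 0
```
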
I don't have to use an initrd at all. With a couple of kernel
parameters, the kernel can find and assemble the RAID-6 upon which my
root filesystem is based, without an initrd. I did choose to use
partitioned RAID-6 (partitioned RAID is possible on 2.6, with an
additional kernel command-line append telling it which RAIDs to load
partitioned) for my root and root-backup-image filesystems, rather than
LVM, thus avoiding the complication of an initrd/initramfs, which LVM
would require. However, the rest of my data is on another RAID-6
partition, which is split with LVM2, in order to be more dynamically
manageable, into my other logical volumes (home and home-backup-image,
media and media-backup-image, log, which I decided I didn't need a
backup image for, mail and mail-backup-image, etc.).

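On a 2.6 kernel, that command-line append looks roughly like the
following grub.conf fragment; the md number, partition names, and
kernel image name are assumptions for illustration:

```shell
# grub.conf kernel line: md=d<n>,... tells the kernel to assemble a
# *partitionable* array /dev/md_d0 from the listed components, and
# root= then points at its first partition -- no initrd needed.
kernel /kernel-2.6 md=d0,/dev/sda4,/dev/sdb4,/dev/sdc4,/dev/sdd4 \
       root=/dev/md_d0p1
```
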
The backup images are there to guard against the one thing RAID
redundancy does NOT protect against -- fat-fingered admins! Of course,
the root backup also protects against the occasional bad update leaving
my working root unbootable, or without a working gcc or portage or
whatever, giving me an emergency root-backup boot option, whether the
main root boot failure is due to my own fat-fingering or to the
occasional bad upgrade one might get with ~amd64 plus pulling in stuff
like modular-X and gcc-4 before it's even stable enough for ~arch!

I've been VERY impressed with the speed improvement of the system over
bog-standard single-disk PATA. Now that I know how much more responsive
the system is with 2-4-way striped RAID (a four-disk RAID-6 is
effectively 2-way striped; the RAID-0 is of course 4-way striped, as is
swap), I wish I had done it earlier!

As I mentioned at the top, I recommend kernel RAID, for two reasons.
One, it massively decreases porting and upgrade worries, as it's not
dependent on specific hardware, only on standard SATA hardware. Two,
the mixed-RAID implementation I've set up as described above isn't
possible, to my knowledge, on hardware RAID. The two combined, plus the
fact that I've got a dual-processor system already, so the rather small
CPU hit of software RAID matters even less, PLUS the fact that I could
direct-boot it, something I had thought was only possible with hardware
RAID, made this by FAR the best choice possible for me.

Some of that may apply to your current RAID-1 situation, some not. If
you are only going all RAID-1 because you didn't realize you could do
mixed RAID, then depending on your usage you may wish to reconsider,
now that you know it's an option. With a two-physical-drive solution,
you can at least implement RAID-0 for /tmp and the like, /provided/
that it's not absolutely critical to keep it from going down, period.
Or you can throw another drive in and make it RAID-5, with a small
RAID-1 for /boot and possibly a RAID-0 for non-critical data.

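A two-drive mixed-RAID layout along those lines might look like this
with mdadm; the partition layout here is hypothetical, not from the
post:

```shell
# Small RAID-1 mirror for /boot -- GRUB can read either half directly.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID-0 stripe for /tmp and other expendable data: full combined
# speed and space, but no redundancy at all.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
```
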
** Something that *WILL* apply to your situation, even (especially) if
you are sticking with RAID-1 only -- for installation, you can do a
conventional single-drive installation, if necessary, with no RAID
drivers needed on the LiveCD. When you build your kernel, just ensure
that it includes software RAID built in, along with the regular SATA
chipset drivers. Then, after you are up and running on the single
drive, create a "degraded" RAID-1 on the second drive, activate it,
partition it if you have it set up as partitionable RAID, create your
filesystems on it, mount them, and copy your system over from the
single drive to the degraded-but-operational RAID-1. Once that's done
and GRUB is installed to the MBR of the degraded RAID-1, reboot onto
the degraded RAID-1, and then add what WAS your single drive as the
second RAID-1 member. It'll take some time to mirror everything over in
its recovery cycle, destroying the single-drive installation in the
process, but when it's done you'll have a fully active, non-degraded
RAID-1 going, all without requiring RAID drivers on the LiveCD, only
the standard SATA drivers. This process of installing a RAID system --
installing to a single drive, activating the RAID in degraded mode to
copy everything over, then bringing in the single drive as the missing
member and letting it recover -- is covered in more detail in the
various RAID HOWTOs and the Gentoo documentation.

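The degraded-RAID-1 migration above can be sketched with mdadm as
follows; every device name and the filesystem choice are assumptions,
so follow the actual RAID HOWTOs for the real procedure:

```shell
# 1. Create the mirror with one member deliberately 'missing'.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0              # filesystem on the degraded array
mount /dev/md0 /mnt/newroot     # copy the running system over here

# 2. After rebooting onto the degraded array, add the old single
#    drive as the second mirror; the recovery pass overwrites it.
mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat                # watch the rebuild progress
```
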
-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman in
http://www.linuxdevcenter.com/pub/a/linux/2004/12/22/rms_interview.html


-- 
gentoo-amd64@g.o mailing list