Lindsay Haisley posted on Fri, 25 Mar 2011 07:48:12 -0500 as excerpted:

> Yeah, I missed it the first time around.
>
> I posted to linuxforums.org and referenced the thread to the Central TX
> LUG list, and Wayne Walker, one of our members, jumped on and saw it
> right away. We have some very smart and Linux-savvy folks here in
> Austin! The CTLUG tech list is really good. We have IBM, AMD, Dell,
> and a bunch of other tech companies in the area and the level of Linux
> tech expertise here is exceptional. You'd be welcome to join the list
> if you're interested. See <http://www.ctlug.org>. We have people on
> the list from all over the world.
>
> I expect the LVM system will come up now, and it only remains to be seen
> if I can get the legacy nVidia driver to build.

This might be a bit more of the "I don't want to hear it" stuff, which you
can ignore if so, but for your /next/ system, consider the following,
speaking from my own experience...

I ran LVM(2) on md-RAID here for a while, but ultimately decided that the
case of lvm on top of md-raid was too complex to get my head around well
enough to be reasonably sure of recovery in the event of a problem.
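
For reference, the stacked setup I'm talking about looks something like
the following sketch. Device and volume names here are purely
illustrative, not from any real box:

```shell
# Build the md-raid layer first (two disks, RAID-1):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Then stack LVM on top of the array:
pvcreate /dev/md0              # the md array becomes an LVM physical volume
vgcreate vg0 /dev/md0          # volume group on top of the array
lvcreate -L 20G -n root vg0    # logical volumes carved out of the group
mkfs.ext3 /dev/vg0/root

# Recovery now means keeping TWO toolsets straight:
# mdadm --assemble / --add / --fail for the raid layer,
# vgscan / vgchange -ay / lvs for the lvm layer above it.
```

That last comment is the whole point: every operation, routine or
emergency, now happens at two layers instead of one.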

Originally, I (thought I) needed lvm on top because md-raid didn't support
partitioned-RAID all that well. I migrated to that setup just after what
was originally separate support for mdp, partitioned md-raid, was
introduced, and the documentation for it was scarce indeed! But md-raid's
support for partitions has VASTLY improved, as has the documentation, with
partitions now supported just fine on ordinary md-raid, making the
separate mdp device legacy.

So at some point I backed everything up and reorganized, cutting out the
lvm. Now I simply run partitioned md-raid. Note that here, I have / on
md-raid as well. That was one of the problems with lvm: I could put / on
md-raid and even have /boot on md-raid as long as it was RAID-1, but lvm
requires userspace, so either / had to be managed separately, or I had to
run an initrd/initramfs to manage the early userspace and do a pivot_root
to my real / after lvm had brought it up. I was doing the former, not
putting / on lvm, but that defeated much of the purpose for me, since I
was then missing out on the flexibility of lvm for my / and root-backup
partitions!
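
A sketch of the simpler setup, again with illustrative device names.
On a reasonably current kernel and mdadm, an ordinary md array can be
partitioned directly, no separate mdp device type needed:

```shell
# One RAID-1 array across two whole disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Partition the array itself, just as you would a plain disk; the
# partitions show up as /dev/md0p1, /dev/md0p2, etc:
fdisk /dev/md0

mkfs.ext3 /dev/md0p1    # say, /boot
mkfs.ext3 /dev/md0p2    # /
mkswap    /dev/md0p3    # swap

# With RAID-1, / and even /boot can live on the array without an
# initramfs, since the kernel itself can assemble the array at boot.
```

One toolset (mdadm), one layer, and nothing to pivot_root through.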

So especially now that partitioned md-raid is well supported and
documented, you may wish to consider dropping the lvm layer, thus avoiding
the complexity of having to recover both the md-raid and the lvm if
something goes wrong. With the extra complexity of additional layers, and
having to keep straight which commands to run at each layer, the non-zero
chance of admin-flubbing the recovery increases dramatically, and that's
in an emergency situation when you're already under pressure because
things aren't working!

Here, I decided that extra layer was simply NOT worth the extra worry and
hassle in a serious recovery scenario, and I'm glad I did. I'm *FAR* more
confident in my ability to recover from disaster now than I was before,
because I can actually get my head around the whole, not just a step at a
time, and thus am FAR less likely to screw things up with a stupid fat-
finger mistake.

But YMMV, as they say. Given that the capabilities of LVM2 have been
improving as well (including its own RAID support, in some cases sharing
code with md-raid), AND the fact that the device-mapper services used by
lvm2 (now part of the lvm2 package on the userspace side) are also used by
udisks and friends, the replacements for hal for removable-disk detection
and automounting, switching to lvm exclusively instead of md-raid
exclusively is another option. Of course lvm still requires userspace
while md-raid doesn't, so it's a tradeoff: an initr* if you put / on it
too, vs using the same device-mapper technology for both lvm and udisks/
auto-mount.
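
For completeness, a minimal sketch of the lvm-only alternative, using
lvm2's own mirroring in place of md-raid. Names are illustrative; the
--mirrorlog core option is an assumption to keep the example to two
devices (the default mirror log otherwise wants a third):

```shell
# Both disks go straight into LVM, no md layer underneath:
pvcreate /dev/sda1 /dev/sdb1
vgcreate vg0 /dev/sda1 /dev/sdb1

# A mirrored (RAID-1-like) logical volume across the two PVs:
lvcreate -L 20G -m 1 --mirrorlog core -n root vg0

# Only the one lvm toolset to keep straight in a recovery:
lvs -a -o +devices vg0    # shows the mirror legs and which PV each is on
```

Same single-layer benefit as md-raid alone, just from the other
direction, with the initr*-for-/ caveat noted above.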

There's a third choice as well, or soon will be, as the technology is
available but still immature. btrfs has built-in raid support (as with
lvm2, sharing code at the kernel level, where it makes sense). The two
biggest advantages to btrfs are that (1) it's the designated successor to
ext2/3/4 and will thus be EXTREMELY well supported when it matures, AND
(2) because it's a filesystem as well, (2a) you're dealing with just the
one (multi-faceted) technology, AND (2b) it knows what's valuable data and
what's not, so recoveries are shorter, because it doesn't have to deal
with "empty" space the way md-raid does, sitting as md-raid is on a layer
of its own, not knowing what's valuable data and what's simply empty
space. The biggest disadvantage of course is that btrfs isn't yet mature.
In particular, (1) the on-disk format isn't officially cast in stone yet.
Changes now are backward compatible, so you should have no trouble loading
an older btrfs with newer kernels, but you might not be able to mount it
with the older kernel afterward, if there was a change; AFAIK there have
been two disk-format changes so far, one with the no-old-kernels
restriction, the latest without, as long as the filesystem was created
with the older kernel. AND (2) as of now there's not yet a proper
fsck.btrfs, tho that's currently very high priority and there very likely
will be one within months, within a kernel or two, so likely available for
2.6.40.
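
For the curious, btrfs exposes its built-in raid right at mkfs time, so
filesystem and redundancy are one step (a sketch; device names
illustrative, and older btrfs-progs may spell subcommands differently):

```shell
# Data AND metadata mirrored across two devices, no md or lvm layer:
mkfs.btrfs -d raid1 -m raid1 /dev/sda1 /dev/sdb1

btrfs device scan         # let the kernel find all members of the fs
mount /dev/sda1 /mnt      # either member device mounts the whole fs

# Because the filesystem knows what's data and what's empty space, a
# resync only touches actual data, unlike a block-level md rebuild:
btrfs filesystem df /mnt
```

That data-awareness is exactly the shorter-recovery advantage (2b)
above.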

Booting btrfs can be a problem currently as well. As you may well know,
grub-1 (0.9x) is officially legacy and hasn't had any official new
features for years, /despite/ the fact that last I knew, grub2's on-disk
format wasn't set in stone either. However, grub-1 being GPLv2-ed, the
various distributions have been applying feature patches to bring it up to
date for years, including a number of patches used routinely for ext2/3
filesystems. There is a grub-1 patch adding btrfs support, but I'm not
sure whether it's in gentoo's version yet or not. (Of course, that
wouldn't be an issue for your new system, as you mentioned it probably
won't be gentoo-based anyway, but others will be reading this too.)

The newer grub-2 that many distributions are now using (despite the fact
that it's still immature) has a btrfs patch as well. However, there's an
additional complication there, as grub-2 is GPLv3, while the kernel and
thus btrfs is GPLv2, specifically /without/ the "or later version"
clause. The existing grub-2 btrfs support patch is said to have worked
around that thru reverse engineering, etc, but that remains an issue for
further updates, given that btrfs' on-disk format is NOT yet declared
final. Surely the issue will eventually be resolved, but these are the
sorts of "teething problems" that the immature btrfs is having as it
matures.

The other aspect of booting to btrfs that I've not yet seen covered in any
detail is the extent to which "advanced" btrfs features such as built-in
RAID and extensible sub-volumes will be boot-supported. It's quite
possible that only a basic and limited btrfs will be supported for /boot,
with advanced features only supported from the kernel (thus on /) or even,
possibly, userspace (thus not on / without an initr*).
120 |
|
121 |
Meanwhile, btrfs is already the default for some distributions despite all |
122 |
the issues. How they can justify that even without a proper fsck.btrfs |
123 |
and without official on-disk format lock-down, among other things, I don't |
124 |
know, but anyway... I believe at present, most of them are using |
125 |
something else (ext2 or even vfat) for /boot, tho, thus eliminating the |
126 |
grub/btrfs issues. |

But I do believe 2011 is the year for btrfs, and by year-end (or say the
first 2012 kernel, so a year from now, leaving a bit more wiggle room),
the on-disk format will be nailed down, a working fsck.btrfs will be
available, and the boot problems solved to a large extent. With those
three issues gone, people will start the mass migration, altho
conservative users will remain on ext3/ext4 for quite some time, just as
many haven't yet adopted ext4 today. (FWIW, I'm on reiserfs, and plan on
staying there until I can move to btrfs and take advantage of both its
tail-packing and built-in RAID support, so the ext3/ext4 thing doesn't
affect me that much.)

That leaves two choices for now, md-raid and lvm2 (including its raid
features), with btrfs as a maturing third choice, likely reasonable by
year-end. Each has its strong and weak points, and I'd recommend
evaluating the three and making one's choice of /one/ of them. By
avoiding layering one on the other, significant complexity is avoided,
simplifying both routine administration and disaster recovery, with the
latter a BIG factor since it reduces by no small factor the chance of
screwing things up /in/ that recovery.

Simply my experience-educated opinion. YMMV, as they say. And of course,
it applies to new installations more than your current situation, but as
you mentioned that you are planning such a new installation...

-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman