Gentoo Archives: gentoo-amd64

From: Duncan <1i5t5.duncan@×××.net>
To: gentoo-amd64@l.g.o
Subject: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Sat, 22 Jun 2013 15:45:31
Message-Id: pan$7c60a$34f695e$f0595300$b5612b1@cox.net
In Reply to: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value? by Rich Freeman
Rich Freeman posted on Sat, 22 Jun 2013 07:12:25 -0400 as excerpted:

> Multiple-level redundancy just seems to be past the point of diminishing
> returns to me. If I wanted to spend that kind of money I'd probably
> spend it differently.

My point was that for me, it wasn't multiple-level redundancy. It was
simply device redundancy (raid), and fat-finger redundancy (backups), on
the same set of drives, so I was protected from either scenario.

The fire/flood scenario would certainly get me if I didn't have offsite
backups, but just as you call multiple redundancy past your point of
diminishing returns, I call the fire/flood scenario past mine. If that
happens, I figure I'll have far more important things to worry about than
rebuilding my computer for a while. And chances are, when I do get around
to it, things will have progressed enough that much of the data won't be
worth so much any more anyway. Besides, the real /important/ data is in
my head. What's worth rebuilding will be nearly as easy to rebuild from
what's in my head as it would be to go thru the by-then historical data
and try to pick up the pieces, sorting thru what's still worth keeping
around and what's not.

Tho as I said, I do/did keep an additional level of backup on that 1 TB
drive, but it's on-site too, and while not in the computer, it's
generally near enough that it'd be lost as well in case of flood/fire.
It's more a convenience than a real backup, and I don't really keep it
up to date, but if it survived and what's in the computer itself didn't,
I'd have old copies of much of my data, simply because they're still
there from the last time I used that drive as convenient temporary
storage while I switched things around.

> However, I do agree that mdadm should support more flexible arrays. For
> example, my boot partition is raid1 (since grub doesn't support anything
> else), and I have it set up across all 5 of my drives. However, the
> reality is that only two get used and the others are treated only as
> spares. So, that is just a waste of space, and it is actually more
> annoying from a config perspective because it would be really nice if my
> system could boot from an arbitrary drive.

Three points on that. First, obviously you're not on grub2 yet. It
handles all sorts of raid, lvm, and newer filesystems like btrfs (and
zfs for those so inclined) natively, thru its modules.

Second, /boot is an interesting case. Here, originally (with grub1 and
the raid6s across 4 drives) I set up a 4-drive raid1. But I actually
installed grub to the boot sector of all four drives, and tested booting
each one just to grub by itself (the other drives off), so I knew it was
using its own grub, not pointing somewhere else.

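For anyone wanting to do the same, a minimal sketch of what that looks
like from the grub1 shell, assuming /boot is the first partition on each
drive (the (hdN,0) mappings are examples; adjust to your layout):

  grub> root (hd0,0)   # point at this drive's copy of /boot
  grub> setup (hd0)    # install the boot sector on the same drive
  grub> root (hd1,0)
  grub> setup (hd1)
  grub> root (hd2,0)
  grub> setup (hd2)
  grub> root (hd3,0)
  grub> setup (hd3)
  grub> quit
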
But I was still worried, because while I could boot from any of the
drives, they were all one raid1, which meant no fat-finger redundancy,
and doing a usable backup of /boot isn't so easy.

So I think it was when I switched from raid6 to raid1 for almost the
entire system that I switched to dual dual-drive raid1s for /boot as
well, and of course tested booting to each one alone again, just to be
sure. That gave me fat-finger redundancy, as well as added convenience:
since I run git kernels, I was able to update just the one dual-drive
raid1 /boot with the git kernels, then update the backup with the
releases once they came out, which made for a nice division of stable
vs pre-release kernels there.

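In case it helps picture it, a minimal sketch of that arrangement with
mdadm, assuming hypothetical /boot partitions sda1/sdb1 and sdc1/sdd1,
and 0.90 metadata so grub1 can read the member partitions directly:

  # working /boot: two-drive raid1, gets the git kernels
  mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        --metadata=0.90 /dev/sda1 /dev/sdb1
  # backup /boot: a second, independent two-drive raid1, gets releases
  mdadm --create /dev/md2 --level=1 --raid-devices=2 \
        --metadata=0.90 /dev/sdc1 /dev/sdd1
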
That dual dual-drive raid1 setup proved very helpful when I upgraded to
grub2 as well, since I was able to play around with it on the one dual-
drive raid1 /boot while the other stayed safely bootable with grub1.
Only once I had grub2 working the way I wanted on the working /boot did
I install and test it on both component hard drives, each booting to
grub and then to the full raid1 system from that one drive by itself,
with the others entirely shut off.

Only when I had both drives of the working /boot up and running grub2
did I mount the backup /boot as well and copy over the now-working
config, before running grub2-install on those two drives.

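Roughly like this, with hypothetical device and mountpoint names (on
gentoo, the grub:2 slot installs the tool as grub2-install):

  mount /dev/md2 /mnt/boot.backup      # the backup /boot raid1
  cp -a /boot/grub /mnt/boot.backup/   # copy the working grub2 config
  grub2-install --boot-directory=/mnt/boot.backup /dev/sdc
  grub2-install --boot-directory=/mnt/boot.backup /dev/sdd
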
Of course somewhere along the way, IIRC at the same time as the raid6 to
raid1 conversion, I had also upgraded from traditional mbr to gpt
partitions. When I did, I had the foresight to create BOTH dedicated
BIOS boot partitions AND EFI partitions on each of the four drives.
grub1 wasn't using them, but that was fine; they were small (tiny). That
made the upgrade to grub2 even easier, since grub2 could install its core
into the dedicated BIOS partitions. The EFI partitions remain unused to
this day, but as I said, they're tiny, and with gpt they're specifically
typed and labeled so they can't mix me up, either.

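For reference, a sketch of creating that pair with sgdisk; the sizes,
partition numbers, label names and /dev/sda are all just examples:

  # BIOS boot partition (type EF02), holds grub2's core image on gpt
  sgdisk -n 1:0:+2M   -t 1:EF02 -c 1:bios.sda /dev/sda
  # EFI system partition (type EF00), idle for now but ready if needed
  sgdisk -n 2:0:+128M -t 2:EF00 -c 2:efi.sda  /dev/sda
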
(BTW, talking about data integrity, if you're not on GPT yet, do consider
it. It keeps a second partition table at the end of the drive as well as
the one at the beginning, and unlike mbr they're checksummed, so
corruption is detected. It also kills the primary/extended/logical
distinction, so no more worrying about that, and allows partition labels,
much like filesystem labels, which makes tracking and managing what's
what **FAR** easier. I GPT-partition everything now, including my USB
thumbdrives, if I partition them at all!)

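(A quick way to see those labels, using util-linux's lsblk:

  lsblk -o NAME,SIZE,PARTTYPE,PARTLABEL

blkid reports them too, as PARTLABEL=.)
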
When that machine slowly died and I transferred to a new half-TB drive,
thinking the problem was the aging 300-gigs (it wasn't; caps were dying
on the by-then 8-year-old mobo), and then transferred that into my new
machine without raid, I did the usual working/backup partition
arrangement, but got frustrated without the ability to have a backup
/boot, because with just one device, the boot sector could point just one
place, at the core grub2 in the dedicated BIOS boot partition, which in
turn pointed at the usual /boot. Now grub2's better in this regard than
grub1, since that core grub2 has an emergency mode that would give me a
limited ability to load a backup /boot, but that's an entirely manual
process in a comparatively limited grub2 emergency shell without
additional modules available, and I didn't actually take advantage of it
to configure a backup /boot that it could reach.

But when I switched to the SSDs, I again had multiple devices, the pair
of SSDs, which I set up with individual /boots, with the original one
still on the spinning rust. Again I installed grub2 to each one, pointed
at its own separately configured /boot, so now I actually have three
separately configured and bootable /boots, one on each of the SSDs and a
third on the spinning rust half-TB.

(FWIW the four old 300-gigs are sitting on the shelf. I need to
badblocks or dd them to wipe, and I have a friend that'll buy them off
me.)

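The sort of wipe I have in mind, for the record (destructive, and the
/dev/sdX name is of course a placeholder):

  badblocks -wsv /dev/sdX              # four write+verify passes
  dd if=/dev/zero of=/dev/sdX bs=1M    # or a single zero-fill pass
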
Third point: /boot partition raid1 across all five drives and three are
wasted? How? I believe if you check, all five will have a mirror of the
data (not just two, unless it's btrfs raid1 rather than mdadm raid1, but
btrfs is /entirely/ different in that regard). Either they're all wasted
but one, or none are wasted, depending on how you look at it.

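Easy enough to verify (array name hypothetical):

  cat /proc/mdstat          # a 5-way mdadm raid1 shows [5/5] [UUUUU]
  mdadm --detail /dev/md0   # all five members active/in-sync, no spares
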
Meanwhile, do look into installing grub on each drive, so you can boot
from any of them. I definitely know it's possible, as that's what I've
been doing, tested, for quite some time.

> Oh, as far as raid on partitions goes - I do use this for a different
> purpose. If you have a collection of drives of different sizes it can
> reduce space waste. Suppose you have 3 500GB drives and 2 1TB drives.
> If you put them all directly in a raid5 you get 2TB of space. If you
> chop the 1TB drives into 2 500GB partitions then you can get two raid5s
> - one 2TB in space, and the other 500GB in space. That is 500GB more
> data for the same space. Oh, and I realize I wrote raid5. With mdadm
> you can set up a 2-drive raid5. It is functionally equivalent to a
> raid1 I think,

You better check. Unless I'm misinformed, which I could be as I've not
looked at this in a while and both mdadm and the kernel have changed
quite a bit since then, that'll be set up as a degraded raid5, which
means if you lose one...

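Something like this should settle it (devices hypothetical, and
obviously only on scratch partitions):

  mdadm --create /dev/md9 --level=5 --raid-devices=2 /dev/sdX1 /dev/sdY1
  cat /proc/mdstat          # [2/2] [UU] is healthy; [2/1] [U_] is degraded
  mdadm --detail /dev/md9   # State should read clean, not degraded
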
But I do know raid10 can be set up like that, on fewer drives than it'd
normally take, with the mirrors in "far" mode I believe, and it just
arranges the stripes as it needs to. It's quite possible that they fixed
it so raid5 works similarly and can do the same thing now, in which case
that degraded thing I knew about is obsolete. But unless you know for
sure, please do check.

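For the raid10 case I'm sure of, it looks something like this (again,
hypothetical devices):

  # 2-device raid10 with the far-2 layout; reads stripe raid0-style
  mdadm --create /dev/md10 --level=10 --layout=f2 \
        --raid-devices=2 /dev/sdX1 /dev/sdY1
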
> and I believe you can convert between them, but since I generally intend
> to expand arrays I prefer to just set them up as raid5 from the start.
> Since I stick lvm on top I don't care if the space is chopped up.

There's a lot of raid conversion ability in modern mdadm. I think most
levels can be converted between, given sufficient devices. Again, a lot
has changed in that regard since I set my originals up, I'd guess
somewhere around 2008.

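As one example of the sort of thing mdadm --grow can do these days, a
sketch of reshaping a 2-device raid1 into a 3-device raid5 (array and
device names hypothetical):

  mdadm --grow /dev/md0 --level=5           # 2-device raid1 -> raid5
  mdadm --add /dev/md0 /dev/sdZ1            # add the third device
  mdadm --grow /dev/md0 --raid-devices=3    # reshape onto all three
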
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman