Gentoo Archives: gentoo-amd64

From: Rich Freeman <rich0@g.o>
To: gentoo-amd64@l.g.o
Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Fri, 21 Jun 2013 15:13:58
Message-Id: CAGfcS_nCzMWWJdLc_3jPOWZdwgmV_YTLqks2+WT6yqhD1hTW5g@mail.gmail.com
In Reply to: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value? by Duncan <1i5t5.duncan@cox.net>
On Fri, Jun 21, 2013 at 10:27 AM, Duncan <1i5t5.duncan@×××.net> wrote:
> Rich Freeman posted on Fri, 21 Jun 2013 06:28:35 -0400 as excerpted:
>
>> That is what is keeping me away. I won't touch it until I can use it
>> with raid5, and the first commit containing that hit the kernel weeks
>> ago I think (and it has known gaps). Until it is stable I'm sticking
>> with my current setup.
>
> Question: Would you use it for raid1 yet, as I'm doing? What about as a
> single-device filesystem? Do you believe my estimates of reliability in
> those cases (almost but not quite stable for single-device, kind of in
> the middle for raid1/raid0/raid10, say a year behind single-device and
> raid5/6/50/60 about a year behind that) reasonably accurate?

If I wanted to use raid1 I might consider using btrfs now. I think it
is still a bit risky, but the established use cases have gotten a fair
bit of testing now. I'd be more confident in using it with a single
device.

>
> Because if you're waiting until btrfs raid5 is fully stable, that's
> likely to be some wait yet -- I'd say a year, likely more given that
> everything btrfs has seemed to take longer than people expected.

That's my thought as well. Right now I'm not running out of space, so
I'm hoping I can wait until the next time I need to migrate my data
(from 1TB to 5+TB drives, for example). In that scenario I don't need
to have 10 drives mounted at once to migrate the data - I can copy the
existing data to 1-2 new drives, remove the old ones, and expand the
new array onto them. Migrating today would require finding someplace
to dump all the data offline and then rebuilding the drives, as there
is no in-place way to convert multiple ext3/4 logical volumes on top
of mdadm into a single btrfs on bare metal.

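Roughly, the plan would look like the sketch below - device names,
mount points, and the md array name are all placeholders, and I'm just
wrapping the stock commands (mkfs.btrfs, rsync, mdadm, btrfs device
add, btrfs balance) in Python to show the order of operations:

# Sketch only: every device/path below is a placeholder, and every
# step is destructive - this is the shape of the plan, not a script
# to run as-is.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Build a new btrfs pool on the first couple of big drives.
run("mkfs.btrfs -L pool /dev/sdx /dev/sdy")
run("mount /dev/sdx /mnt/pool")

# 2. Copy the existing data over from the old ext3/4-on-mdadm volumes.
run("rsync -aHAX /mnt/oldlv/ /mnt/pool/")

# 3. Retire the old array, hand its drives to btrfs, and rebalance so
#    the data spreads across the whole set (wiping the old superblocks
#    is omitted here).
run("mdadm --stop /dev/md0")
run("btrfs device add /dev/sda /dev/sdb /mnt/pool")
run("btrfs balance start /mnt/pool")

The point is that the old mdadm array only has to go away after its
contents are already sitting on the new pool, so the data never has to
live offline.
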
Without replying to anything in particular, both you and Bob have
mentioned the importance of multiple redundancy.

Obviously risk goes down as redundancy goes up. If you protect 25
drives of data with 1 drive of parity, then only 2 of the 26 drives
need to fail to hose 25 drives of data. If you protect 1 drive of
data with 25 drives of parity (call them mirrors or parity or
whatever - they're functionally equivalent), then all 26 drives have
to fail before you lose that 1 drive of data. RAID 1 is actually less
effective - if you protect 13 drives of data with 13 mirrors, then 2
of the 26 drives failing can lose 1 drive of data (they just have to
be the wrong 2). However, you do need to consider that RAID is not
the only way to protect data, and I'm not sure that
multiple-redundancy raid-1 is the most cost-effective strategy.

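To put numbers on that, here is a quick back-of-the-envelope count
(Python, purely illustrative) of how many two-drive failures lose data
in each of those hypothetical 26-drive layouts:

# Count how many of the possible 2-drive failures lose data in each
# of the 26-drive layouts discussed above.
from math import comb

total_pairs = comb(26, 2)   # 325 possible 2-drive failure combinations

# 25 data + 1 parity (raid5-style): any 2 failures lose data.
raid5_bad = total_pairs

# 13 mirror pairs (raid1/10-style): data is lost only when both
# members of the same pair die - 13 of the 325 combinations.
raid1_bad = 13

# 1 data drive + 25 mirrors: no 2-drive failure loses anything;
# every copy has to go, i.e. all 26 drives.
mirror26_bad = 0

for name, bad in [("25 data + 1 parity", raid5_bad),
                  ("13 mirror pairs", raid1_bad),
                  ("1 data + 25 mirrors", mirror26_bad)]:
    print(f"{name}: {bad}/{total_pairs} two-drive failures lose data")
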
If I had 2 drives of data to protect and 4 spare drives to do it with,
I doubt I'd set up a 3x raid-1/5/10 setup (or whatever you want to
call it - imho raid "levels" are poorly named, as there is really just
striping, mirroring, and adding RS parity, and everything else is just
combinations of those). Instead I'd probably set up a
RAID1/5/10/whatever with single redundancy for faster storage and
recovery, plus an offline backup (compressed and with
incrementals/etc). The backup gets you more security, and you only
need it in the very unlikely case of a double failure. I'd only invest
in multiple redundancy if the risk-weighted cost of having the node go
down exceeds the cost of the extra drives. Frankly, in that case RAID
still isn't the right solution - you need a backup node someplace else
entirely, as hard drives aren't the only thing that can break in your
server.

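As a toy version of that comparison (every number here is invented -
plug in your own failure odds, drive prices, and downtime costs):

# Toy risk-weighted comparison; all figures are made-up placeholders.
p_array_loss     = 0.02      # chance of losing the array over its lifetime
downtime_cost    = 2000.0    # cost of the outage plus restoring from backup
extra_drive_cost = 3 * 150.0 # price of the extra drives for more redundancy

expected_loss = p_array_loss * downtime_cost
if expected_loss > extra_drive_cost:
    print("the extra redundancy pays for itself")
else:
    print(f"expected loss ${expected_loss:.0f} < drives ${extra_drive_cost:.0f};"
          " spend the money elsewhere (or on Disney)")
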
This sort of rationale is why I don't like arguments like "RAM is
cheap" or "HDs are cheap" or whatever. The fact is that wasting money
on any component means investing less in some other component that
could give you more space/performance/whatever-makes-you-happy. If
you have $1000 that you can afford to blow on extra drives then you
have $1000 you could blow on RAM, CPU, an extra server, or a trip to
Disney. Why not blow it on something useful?

Rich
