On Fri, Dec 1, 2017 at 11:58 AM, Wols Lists <antlists@××××××××××××.uk> wrote:
> On 27/11/17 22:30, Bill Kenworthy wrote:
>> Hi all,
>> I need to expand two bcache-fronted 4-disk btrfs raid 10 arrays - this
>> requires purchasing 4 drives (and one system does not have room for two
>> more drives), so I am trying to see if using raid 5 is an option.
>>
>> I have been trying to find out whether btrfs raid 5/6 is stable enough
>> to use, but while there is mention of improvements in kernel 4.12, and
>> fixes for the write hole problem, I can't see any reports that it's
>> "working fine now", though there is a Phoronix article saying Oracle
>> has been using it since the fixes.
>>
>> Is anyone here successfully using btrfs raid 5/6? What is the status of
>> scrub and self-healing? The btrfs wiki is woefully out of date :(
>>
> Or put btrfs over md-raid?
>
> Thing is, with raid-6 over four drives, you have a 100% certainty of
> surviving a two-disk failure. With raid-10 you have a 33% chance of
> losing your array.
>
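
(The 33% above is easy to sanity-check - a quick sketch, assuming the
four disks form two mirrored pairs:)

from itertools import combinations

# A 4-disk raid-10 as two mirrored pairs, (0,1) and (2,3). A two-disk
# failure kills the array only when both failed disks are the same pair.
pairs = [{0, 1}, {2, 3}]
failures = list(combinations(range(4), 2))   # 6 possible two-disk failures
fatal = [f for f in failures if set(f) in pairs]
print(len(fatal), "/", len(failures))        # 2 / 6 -> 33% chance of losing it
# raid-6 tolerates any two failures, so the same count there is 0 / 6.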

I tend to be a fan of parity raid in general for these reasons. I'm
not sure the performance gains with raid-10 are enough to warrant the
waste of space.
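
(Back-of-the-envelope, on four equal disks - raid-5 is the only option
here that gets you past half the raw capacity:)

n = 4
print("raid-10:", (n // 2) / n)   # 0.50 - half the disks are mirrors
print("raid-5: ", (n - 1) / n)    # 0.75 - one disk's worth of parity
print("raid-6: ", (n - 2) / n)    # 0.50 - two disks' worth of parity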

With btrfs, though, I don't really see the point of "raid-10" versus
just a pile of individual disks in raid1 mode. Btrfs will do a so-so
job of balancing the IO across them already (they haven't really
bothered to optimize this yet).
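
(By "so-so" I mean something like the sketch below - last I looked,
btrfs raid1 picked which mirror to read from by the parity of the
submitting process's PID, not by queue depth or disk load, so a
single-threaded reader hammers one copy:)

# Rough model of btrfs raid1 read scheduling, not the actual kernel code:
def pick_mirror(pid, num_copies=2):
    return pid % num_copies

print([pick_mirror(pid) for pid in (4242,) * 4])      # [0, 0, 0, 0] - one reader, one disk
print([pick_mirror(pid) for pid in range(100, 104)])  # [0, 1, 0, 1] - spreads only by luck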

I've moved away from btrfs entirely until they sort things out.
However, I would not use btrfs for raid-5/6 under any circumstances.
That has NEVER been stable, and if anything it has gone backwards. I'm
sure they'll sort it out sometime, but I have no idea when. RAID-1 on
btrfs is reasonably stable, but I've still had it run into issues -
nothing that kept me from reading the data off the array, but by the
time I finally moved it to ZFS it was in a state where I couldn't run
it in anything other than degraded mode.

You could run btrfs over md-raid, but other than the snapshots I think
you lose a lot of the benefit of btrfs in the first place. You are
still vulnerable to the write hole, btrfs's ability to recover data
from soft errors is compromised (though it can still detect them), and
you're potentially faced with more read-modify-write cycles when raid
stripes are modified. Both zfs and btrfs were really designed to work
best on raw block devices without any layers below. They still work,
of course, but you don't get some of those optimizations, since they
don't have visibility into what is happening at the disk level.
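
(Concretely, the self-heal loss looks like this. A toy model, using
CRC-32 as a stand-in for btrfs's crc32c - with its own mirror, btrfs
can verify the checksum, grab the other copy, and rewrite the bad one;
on top of md-raid it sees a single logical device, so a bad checksum
is detectable but there is no second copy to heal from:)

import zlib

def read_block(copies, expected_csum):
    # Return a copy whose checksum matches, repairing any bad copies
    # from a good one along the way (btrfs-style self-heal).
    good = next((c for c in copies if zlib.crc32(c) == expected_csum), None)
    if good is None:
        raise IOError("checksum mismatch and no good copy to heal from")
    for i, c in enumerate(copies):
        if zlib.crc32(c) != expected_csum:
            copies[i] = good
    return good

block = b"some file data"
csum = zlib.crc32(block)
print(read_block([b"garbage", block], csum))  # raid1 on raw disks: detected and healed
# read_block([b"garbage"], csum)              # on md-raid: detected, but IOError - no heal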

--
Rich