On 21/12/2022 06:19, Frank Steinmetzger wrote:
> On Wed, Dec 21, 2022 at 05:53:03AM +0000, Wols Lists wrote:
>
>> On 21/12/2022 02:47, Dale wrote:
>>> I think if I can hold out a little while, something really nice is going
>>> to come along. It seems there is a good bit of interest in having a
>>> Raspberry Pi NAS that gives really good performance. I'm talking a NAS
>>> that is about the same speed as an internal drive. Plus the ability to
>>> use RAID and such. I'd like to have a 6-bay unit with 6 drives set up in
>>> pairs for redundancy. I can't recall what number RAID that is.
>>> Basically, if one drive fails, another copy still exists. Of course,
>>> two independent NASs would be better in my opinion. Still, any of this
>>> is progress.
>>
>> That's called either Raid-10 (linux), or Raid-1+0 (elsewhere). Note that 1+0
>> is often called 10, but linux-10 is slightly different.
>
> In layman’s terms, a stripe of mirrors. Raid-1 is the mirror, Raid-0 a (JBOD)
> pool. So mirror + pool = mirrorpool, hence the 1+0 → 10.

Except raid-10 is not a stripe of mirrors. Rather, each block is saved to
two different drives (or 3, or more, so long as you have more drives
than copies).

Linux will happily give you a 2-copy mirror across 3 drives - 3x6TB
drives will give you 9TB of useful storage ...
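
That layout can be created with md's raid10 "near" layout; a minimal
sketch, with the array and device names purely as examples:

    # 2 copies of every block, rotated across 3 drives: 3x6TB -> 9TB usable
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 \
          /dev/sda1 /dev/sdb1 /dev/sdc1
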
>
>> I'd personally be inclined to go for raid-6. That's 4 data drives, 2 parity
>> (so you could have an "any two" drive failure and still recover).
>> A two-copy 10 or 1+0 is vulnerable to a two-drive failure. A three-copy is
>> vulnerable to a three-drive failure.
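
Spelling that six-drive raid-6 out as an md command - a sketch only, the
array and device names are examples:

    # 6 drives at raid-6: 4 data + 2 parity, survives any two failures
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
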
>
> At first, I had only two drives in my 4-bay NAS, which were of course set up
> as a mirror. After a year, when it became full, I bought the second pair of
> drives and deliberated at length over what to choose. I went for raid-6
> (or RaidZ2 in ZFS parlance). With only four disks, it has the same net
> capacity as a pair of mirrors, but with the advantage that *any* two drives
> may fail, not just two particular ones. A raid of mirrors has performance
> benefits over a parity raid, but who cares on a simple Gbit storage device.
>
> With an increasing number of disks, a mirror setup is at a disadvantage in
> storage efficiency – it’s always 50 % or less, and less still if you mirror
> over more than two disks. But with only four disks, that was irrelevant in
> my case. On the plus side, each mirror can have a different physical disk
> size, so you can more easily mix’n’match what you’ve got lying around, or
> do upgrades in smaller increments.
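
To put numbers on that: six 6TB disks as three 2-way mirrors give 18TB
usable (50 %), while the same six disks at raid-6 give 24TB (67 %). With
four disks the two layouts tie at 12TB either way.
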
>
> If I wanted to increase my capacity, I'd have to replace *all* drives with
> bigger ones. With a mirror, only the drives in one of the mirrors need
> replacing. And the rebuild process would be quicker and less painful, as
> each drive will only be read once to rebuild its partner, and there is no
> parity calculation involved. In a parity RAID, each drive is replaced one
> by one, and each replacement requires a full read of all drives’ payload.

If you've got a spare SATA connection or whatever, each replacement does
not need a full read of all drives: "mdadm /dev/mdX --add /dev/sdx" followed
by "mdadm /dev/mdX --replace /dev/sdy". That'll stream sdy onto sdx, and
only hammer the other drives if sdy complains ...
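
Spelled out in full (mdX, sdx, sdy are placeholders):

    # add the new drive to the array as a spare
    mdadm /dev/mdX --add /dev/sdx
    # copy sdy onto sdx in the background; the other members are only
    # read if sdy throws errors
    mdadm /dev/mdX --replace /dev/sdy --with /dev/sdx
    # once the copy completes, sdy is marked faulty and can be removed
    mdadm /dev/mdX --remove /dev/sdy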

> With older
> drives, this is cause for some concern as to whether the disks will
> survive that. That’s why, with increasing disk capacities, raid-5 is said
> to be obsolete. Because if another drive fails during rebuild, you are
> officially screwed.
>
> Fun, innit?

They've always said that. Just make sure you don't have multiple drives
from the same batch; then they're statistically less likely to fail at
the same time. I'm running raid-5 over 3TB partitions ...
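
A quick way to spot same-batch drives is to compare serial numbers -
consecutive serials usually mean one batch (assuming smartmontools is
installed; device names are examples):

    # print identity info, including the serial number, for each member
    smartctl -i /dev/sda | grep -i 'serial'
    smartctl -i /dev/sdb | grep -i 'serial'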

Cheers,
Wol