On Wed, Dec 21, 2022 at 08:03:36PM +0000, Wol wrote:
> On 21/12/2022 06:19, Frank Steinmetzger wrote:
> > On Wed, Dec 21, 2022 at 05:53:03AM +0000, Wols Lists wrote:
> > 
> > > On 21/12/2022 02:47, Dale wrote:
> > > > I think if I can hold out a little while, something really nice is going
> > > > to come along. It seems there is a good bit of interest in having a
> > > > Raspberry Pi NAS that gives really good performance. I’m talking a NAS
> > > > that is about the same speed as an internal drive. Plus the ability to
> > > > use RAID and such. I’d like to have a 6-bay with 6 drives set up in
> > > > pairs for redundancy. I can’t recall what number RAID that is.
> > > > Basically, if one drive fails, another copy still exists. Of course,
> > > > two independent NASs would be better in my opinion. Still, any of this
> > > > is progress.
> > > 
> > > That’s called either Raid-10 (Linux) or Raid-1+0 (elsewhere). Note that 1+0
> > > is often called 10, but Linux’s raid-10 is slightly different.
> > 
> > In layman’s terms, a stripe of mirrors. Raid-1 is the mirror, Raid-0 a (JBOD)
> > pool. So mirror + pool = mirrorpool, hence the 1+0 → 10.
> 
> Except raid-10 is not a stripe of mirrors.
> It’s that each block is saved to two different drives. (Or 3, or more, so
> long as you have more drives than mirrors.)

Yes? In a mirror setup, all member drives of a mirror have the same content
(at least in ZFS).

Raid-10 distributes its content across several mirrors. This is the reason
for its increased performance. So when one of the mirrors (not a single
drive, but a whole set of mirrored drives) fails, the pool is gone.
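
To make Dale’s 6-bay idea concrete, a minimal sketch with mdadm (device
names are placeholders, not a recommendation):

    # six drives as raid-10, i.e. 2 copies of each block across 3 pairs
    mdadm --create /dev/md0 --level=10 --raid-devices=6 \
        /dev/sd[b-g]1
    cat /proc/mdstat    # watch the initial sync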

> Linux will happily give you a 2-copy mirror across 3 drives - 3x6TB drives
> will give you 9TB useful storage ...

I admit, I’ve never heard of that. (Though it sounds like raid-5 to me.)
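
(Presumably that is md’s raid-10 with an odd number of devices; a sketch,
device names again hypothetical:

    # 2 copies of every block, rotated across 3 drives;
    # 3 x 6 TB raw / 2 copies = 9 TB usable
    mdadm --create /dev/md0 --level=10 --raid-devices=3 \
        /dev/sd[bcd]1

Unlike raid-5, there is no parity here, only plain copies.)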

> > If I wanted to increase my capacity, I’d have to replace *all* drives with
> > bigger ones. With a mirror, only the drives in one of the mirrors need
> > replacing. And the rebuild process would be quicker and less painful, as
> > each drive will only be read once to rebuild its partner, and there is no
> > parity calculation involved. In a parity RAID, each drive is replaced one
> > by one, and each replacement requires a full read of all drives’ payload.
> 
> If you’ve got a spare SATA connection or whatever, each replacement does not
> need a full read of all drives: "mdadm /dev/mdX --add /dev/sdx --replace
> /dev/sdy". That’ll stream sdy onto sdx, and only hammer the other drives if
> sdy complains ...

Strange that I didn’t think of that, even though it’s a perfectly clear
concept. In ZFS there is also a replace function which would do just that.
Currently I plan on keeping my old drives (who would want to buy them off
me anyway?) and just reorganise them from Z1 to Z2. I’ll just have to move
all data off to temporary external drives.
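
For completeness, a sketch of both replace mechanisms (array, pool and
device names are placeholders):

    # mdadm: add a spare, then migrate the old drive onto it in one pass;
    # only sdy is read in full unless it throws errors
    mdadm /dev/md0 --add /dev/sdx
    mdadm /dev/md0 --replace /dev/sdy --with /dev/sdx

    # ZFS: attach the new disk and resilver, reading from the old one
    # while it is still online
    zpool replace tank /dev/sdy /dev/sdx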

> > With older
> > drives, this raises some concern about whether the disks will survive
> > that. That’s why, with increasing disk capacities, raid-5 is said to be
> > obsolete: if another drive fails during rebuild, you are officially
> > screwed.
> > 
> > Fun, innit?
> > 
> They’ve always said that. Just make sure you don’t have multiple drives from
> the same batch, then they’re statistically less likely to fail at the same
> time. I’m running raid-5 over 3TB partitions ...

Yeah, I bought my drives from different shops back then for that reason.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

When the going gets tough, the tough get going.
... and so do I. – Alf