On 4/5/20 3:50 pm, hitachi303 wrote:
> On 04.05.2020 at 02:46, Rich Freeman wrote:
>> On Sun, May 3, 2020 at 6:50 PM hitachi303
>> <gentoo-user@××××××××××××××××.de> wrote:
|
>> ...
> So you are right. This is the way they do it. I used the term raid too
> broadly.
> But they still run into limitations: the size of the room, what the air
> conditioning can handle, and so on.
>
> Anyway, I only wanted to point out that there are different approaches
> across industries, and saving the data at any price is not always
> necessary.
>
I would suggest that once you go past a few drives, there are better ways.

I had two 4-disk, bcache-fronted RAID 10 arrays on two PCs, with critical
data backed up between them. When an SSD bcache failed in one, and two
backing stores in the other failed almost simultaneously, I nearly had to
resort to offline backups to restore the data ... downtime was still a
major pain.
|
I now have 5 cheap ARM systems and a small x86 master with 7 disks
spread across them. Response time is good, and power use seems lower
(the boxes run much cooler and quieter) than running two over-the-top
older PCs. Reliability/recovery time (at least when I tested by manually
failing drives and causing power outages) is much better.
|
I am using MooseFS, but LizardFS looks similar. Both offer erasure
coding, which gives you still more usable storage space, with recovery
possible if you have enough disks.
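For anyone unfamiliar with the space/recovery trade-off mentioned above: a
rough sketch of the idea behind erasure coding, using simple XOR parity
(the k data + 1 parity case). This is illustrative only - MooseFS and
LizardFS actually use Reed-Solomon codes that can tolerate multiple lost
chunks, and the function names here are my own, not either project's API.

```python
# Toy erasure-coding demo: split data into k chunks plus one XOR parity
# chunk; any single lost chunk (data or parity) can be rebuilt by
# XORing the survivors. Real EC (Reed-Solomon) generalises this to m
# parity chunks surviving any m losses.

def split_with_parity(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal chunks plus one XOR parity chunk."""
    data += b"\x00" * ((-len(data)) % k)      # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(size)                       # all-zero start
    for c in chunks:
        parity = bytes(x ^ y for x, y in zip(parity, c))
    return chunks + [parity]

def recover(chunks: list) -> list:
    """Rebuild the single missing (None) chunk by XORing the rest."""
    missing = chunks.index(None)
    size = len(next(c for c in chunks if c is not None))
    rebuilt = bytes(size)
    for i, c in enumerate(chunks):
        if i != missing:
            rebuilt = bytes(x ^ y for x, y in zip(rebuilt, c))
    chunks[missing] = rebuilt
    return chunks

# Storage overhead: k data + m parity chunks cost (k + m) / k times the
# data size - e.g. 5+2 uses 1.4x, versus 2x for plain replication.
```

So with 7 disks and a 5+2 layout you keep roughly 71% of raw capacity as
usable space while surviving two disk losses, where mirroring the same
data would leave you only 50%.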
|
The downside is maintaining more systems, more complex networking and
the like. It's been a few months now, and I won't be going back to RAID
for my main storage.
BillK |