On Fri, Jun 21, 2013 at 2:50 PM, Gary E. Miller <gem@××××××.com> wrote:
> On Fri, 21 Jun 2013 11:38:00 -0700
> Mark Knecht <markknecht@×××××.com> wrote:
>> Or maybe you're saying it's RAID1 and I don't know if anything bad is
>> happening _unless_ I do a scrub and specifically check all the drives
>> for consistency?
>
> No. A simple read will find the problem. But given it is RAID1 the only
> way to be sure to read from both drives is a raid rebuild.

Keep in mind that a read will only find the problem if it is visible
to the hard drive's ECC; a silent error would not be detected. A
rebuild could detect such an error, but with RAID1 it could not
reliably fix it, since there is no way to tell which mirror holds the
good copy. With RAID5, a silent error on a single drive per stripe
could be fixed in a rebuild.
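To make the distinction concrete, here is a small illustrative sketch (the block values and helper names are made up for this example, not anything from md or btrfs): a mirror comparison can detect that two copies disagree but says nothing about which copy is good, while XOR parity can rebuild a block once you know which device is the bad one.

```python
# Hypothetical sketch of mirror vs. parity recovery. Blocks are plain
# byte strings; no real RAID tool's API is being modeled here.

def mirror_check(copy_a: bytes, copy_b: bytes) -> bool:
    """RAID1-style check: a mismatch is detectable, but nothing here
    tells you which of the two copies is the good one."""
    return copy_a == copy_b

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte strings together (RAID5-style parity)."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# Three data blocks of one stripe, plus their parity block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# A silent corruption of d1 -- a flip the drive's ECC missed.
bad_d1 = b"BBBA"

# The stripe is now known to be inconsistent: data XOR parity != 0.
assert xor_blocks(d0, bad_d1, d2) != parity

# Given an identified bad device (e.g. the drive was kicked out),
# XOR of the surviving blocks with parity reconstructs the block.
rebuilt_d1 = xor_blocks(d0, d2, parity)
assert rebuilt_d1 == d1
```

The same XOR property is why a RAID5 rebuild can regenerate one missing block per stripe, and why a two-way mirror with no parity cannot arbitrate a disagreement on its own.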

>
> Your only protection against a full RAIDx failure is an offsite backup.

++

That's why I'm not big on crazy levels of redundancy. RAID is first
and foremost a restoration avoidance tool, not a backup solution. It
reduces the risk of needing restoration, but it does not cover as many
failure modes as an offline backup. If btrfs eats your data, it really
won't matter how many platters it had to chew on in the process. So,
by all means use RAID, but if you're going to spend a lot of money on
redundant disks, spend it on a backup solution instead (which might
very well involve disks, though you should move them offsite).

Rich