On Tue, Jun 24, 2014 at 4:02 PM, Alan McKinnon <alan.mckinnon@×××××.com> wrote:
>
> External drives have a much higher failure rate than internal drives.
> People don't expect them to fail, or be dropped, or be plugged in in
> the wrong order so that the wrong one gets mkfs'ed (until it does
> happen). These are real risks that you can't ignore, whereas with a
> good internal drive you can often get away with it.
>

++

Don't ignore the potential for logical errors. If you have some
script that magically rsyncs stuff, then don't make the mistake of
moving data over and rsyncing the old copy over the new, or of
mounting the devices in a way that isn't robust when udev changes all
your device names. That seems like the most likely way your data is
going to get scrambled, unless you have both drives in your car and
end up in a crash.
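A minimal sketch of that kind of guard (function name, paths, and the UUID are all illustrative, not from any real script): mount by filesystem UUID rather than /dev/sdX, and refuse to rsync unless the destination really is a mounted filesystem.

```shell
#!/bin/sh
# Hypothetical backup wrapper -- names and paths are examples only.
# Mounting by UUID means a udev reshuffle of /dev/sdX names can't
# point the script at the wrong disk.

# safe_backup SRC DST: rsync SRC into DST only if DST is a real
# mountpoint, so an unplugged or missing drive aborts the run
# instead of silently filling the root filesystem under an empty
# mount directory.
safe_backup() {
    src=$1
    dst=$2
    if ! mountpoint -q "$dst"; then
        echo "refusing: $dst is not a mounted filesystem" >&2
        return 1
    fi
    rsync -a --delete "$src"/ "$dst"/
}

# Example usage (UUID is illustrative):
#   mount UUID=0000-EXAMPLE /mnt/backup
#   safe_backup /home /mnt/backup/home
```

The `mountpoint -q` check is the important part: it fails for a plain directory, so a backup drive that never got plugged in (or got mounted somewhere else after a udev rename) stops the rsync before it can do damage.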

That was one of the reasons I went with btrfs for my offline copy. If
it unmounts cleanly, then I know I have two copies of everything. If
it mounts, I know it found both mirrors. If I scrub and there are no
errors, then I know both copies are good. You can do that in other
ways, but make sure you actually catch the failure modes.
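For reference, the kind of setup being described might look like the following. Device names and the mountpoint are illustrative, these commands need root and real disks, and this is a sketch of the general approach rather than anyone's actual script.

```shell
# Illustrative admin sketch: two-device btrfs raid1 for both data and
# metadata, so every block exists on both drives.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # device names are examples

# A normal mount succeeds only if both mirrors are present; with a
# device missing it fails unless you explicitly pass -o degraded.
mount /dev/sdb /mnt/offline

# Scrub reads every copy and verifies checksums against the stored
# csums; -B keeps it in the foreground so the exit status reflects
# whether errors were found.
btrfs scrub start -B /mnt/offline
btrfs scrub status /mnt/offline   # summary of read/checksum errors
```

This matches the three checks in the paragraph above: mount success implies both mirrors were found, a clean unmount implies writes reached both, and a clean scrub implies both copies still checksum correctly.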

Rich