Neil Bothwick wrote:
>> - If your local backup becomes corrupt, then so does your remote
>> backup, except if you are quick enough to disable the rsync step.
>
> That's a potential problem with any form of backup, local or remote. The
> truly paranoid would use two different backup methods on two physically
> separate destinations.

Well, it's not quite the same. In the two-step case (a local backup, e.g.
using rdiff-backup, followed by an rsync of that backup to a remote
location), if your local backup gets corrupted, then so does your remote
one.

If you instead do two independent backups, even with the same method
(one local, one remote), then if one gets corrupted, chances are the
other one is still ok.
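For what it's worth, that can be as simple as two separate cron jobs
against the same source, neither of which ever reads the other's
destination (paths and host below are made up):

```
# crontab sketch (hypothetical paths/host): two independent
# rdiff-backup runs, neither derived from the other
0 1 * * * rdiff-backup /home /var/backup/home
0 3 * * * rdiff-backup /home backuphost::/var/backup/home
```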

>> - If you have a disconnection during the rsync step (happened to me
>> last night), your remote backup is temporarily corrupted.
>
> That should be fixable by having the script that runs rsync check the
> return value and try again if it fails.
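Something along these lines, perhaps (a sketch only; the destination,
retry count, and pause are made up):

```shell
#!/bin/sh
# Sketch: rerun a command until it succeeds, up to a limit.
retry() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep 2   # short pause for illustration; a real script would wait longer
    done
}

# Hypothetical invocation; --partial keeps the partial file so a retry can resume:
# retry 5 rsync -a --partial /var/backup/ remote:/var/backup/
```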

You're right, of course. I would still be more comfortable keeping the
"window of vulnerability" (the time during which the remote file is
inconsistent) as small as possible, and independent of network
connectivity. That's why I was thinking along the lines of "calculate the
diff, send the diff and store it remotely, update the remote copy once the
connection has closed".
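For text files, the idea can be sketched with plain diff and patch (a
binary delta tool such as rdiff or xdelta would play the same role; the
file names here are made up). The point is that the delta is applied to
a copy which is then renamed into place, so the live remote file is
never half-written, no matter when the connection drops:

```shell
#!/bin/sh
# Stand-ins for the new local copy and the old remote copy:
printf 'one\ntwo\n' > new.txt
printf 'one\nTWO\n' > old.txt

# 1. Calculate the diff locally:
diff -u old.txt new.txt > update.diff || true  # diff exits 1 when files differ

# 2. Ship only the diff (hypothetical):
# rsync --partial update.diff remote:/incoming/

# 3. Remotely, once the connection has closed: patch a copy, then
#    rename it into place, so the live file is replaced in one step.
cp old.txt old.txt.tmp
patch -s old.txt.tmp < update.diff
mv old.txt.tmp old.txt
```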

-- Remy