On Sat, Oct 15, 2016 at 1:35 AM, Ian Zimmerman <itz@×××××××.net> wrote:
> On 2016-10-15 05:42, Kai Krakow wrote:
>
>> The backup source is my btrfs subvol 0. I can put the systemd units
>> for backup to github if you're interested.
>
> I'm not a systemd fan, so they wouldn't help me, but thanks for
> offering.

I'd be curious to see them; whether you use systemd or not, units are
pretty trivial to read and make use of in scripts, cron entries, etc.

I just use snapper to manage snapshots if all I'm worried about is
casual deletion of files. Since I don't fully trust btrfs, I also keep
a full rsnapshot (basically an rsync wrapper) of my btrfs volumes on a
local ext4.
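
A minimal sketch of what such an rsnapshot config can look like (the
paths and retention levels here are illustrative placeholders, not my
actual setup; note rsnapshot requires tab-separated fields):

```
# /etc/rsnapshot.conf excerpt -- fields MUST be separated by tabs.
# snapshot_root is where the rotated copies land (an ext4 mount here);
# the retain counts and backup paths below are examples only.
snapshot_root	/mnt/ext4/rsnapshot/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/
```

rsnapshot then hard-links unchanged files between rotations, so the
daily copies cost little extra space.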
|
>
> My priorities are different, and there are constraints resulting from my
> priorities as well as others.
>
> I am mostly worried about physical catastrophic damage (I live in
> earthquake country) and losing my personal data, which could not be
> recreated. So it has to be offsite, and because it's personal data it
> has to be encrypted. And I cannot make the trip to where it's stored
> every day.
>
> Given the time (including the trip) it takes to restore, I don't see the
> point of backing up static files which can be reinstalled from the
> distribution. Of course, I'm still learning Gentoo and so it was easy
> for me to make the mistake of forgetting that files under /usr/portage
> aren't really in that category.
>
|
++

What I really consider my "backups" are stored encrypted on Amazon S3
using duplicity (which I highly recommend for this purpose).
Basically it amounts to /etc and /home, with a number of exclusions
(media and cache). For media I care about, like photos, I include new
stuff in my duplicity backups, but as I accumulate reasonable chunks of
it I make a separate backup and store it offsite, and remove that
chunk from my duplicity backup. This prevents the daily-updated
backup pool from getting insanely large, while still maintaining an
offsite copy (since these files don't change over time).
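
For anyone curious, the duplicity invocation is nothing fancy; a
sketch along these lines works (the bucket name, GPG key ID, and
exclude paths are placeholders, not my real ones -- the script only
echoes the command, so it's safe to run as-is):

```shell
#!/bin/sh
# Sketch of a daily encrypted duplicity run to S3. BUCKET and GPG_KEY
# are placeholders. The command is echoed rather than executed.
# The trailing --exclude '**' makes everything outside the --include
# paths fall out of the backup set (duplicity's first-match rule).
BUCKET="s3+http://example-backup-bucket"
GPG_KEY="0xEXAMPLE"

CMD="duplicity --encrypt-key $GPG_KEY \
 --include /etc --include /home \
 --exclude /home/*/.cache --exclude '**' \
 / $BUCKET"

echo "$CMD"
```

Run unattended from cron, duplicity does a full backup the first time
and space-efficient incrementals after that.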
|
I'd never spend the money to be doing cloud backup of /usr (other than
/usr/local). I have all my static configuration backed up, so I could
just restore that onto a stage 3 and run emerge -uDN world to get all
of that back.
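
The restore path, sketched (bucket and paths are again placeholders,
and the commands are echoed rather than run -- on a real restore these
would execute from a freshly unpacked stage 3):

```shell
#!/bin/sh
# Sketch of the restore sequence: pull /etc back out of the duplicity
# pool, then let portage rebuild everything from the restored config.
# BUCKET is a placeholder; commands are echoed only.
BUCKET="s3+http://example-backup-bucket"

RESTORE="duplicity restore --file-to-restore etc $BUCKET /etc"
REBUILD="emerge -uDN world"

echo "$RESTORE"
echo "$REBUILD"
```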
|
If I did have an offsite server somewhere where storage costs weren't
a big deal, then I'd probably be setting up replicas using btrfs or
zfs send/receive. You can do that at minimal cost with incrementals,
and I could probably do the first clone on the local LAN.
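
The idea, roughly, for the btrfs case (hostname and snapshot paths are
invented, and the commands are echoed rather than executed):

```shell
#!/bin/sh
# Sketch of initial + incremental btrfs replication. "backuphost" and
# the snapshot paths are placeholders; commands are echoed only.

# One-time full clone of a read-only snapshot, cheapest over the LAN:
FULL="btrfs send /data/.snapshots/day1 | ssh backuphost btrfs receive /backup/.snapshots"

# Afterwards only the delta between snapshots crosses the wire
# (-p names the parent snapshot already present on the receiver):
INCR="btrfs send -p /data/.snapshots/day1 /data/.snapshots/day2 | ssh backuphost btrfs receive /backup/.snapshots"

echo "$FULL"
echo "$INCR"
```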
|
--
Rich