From: Kai Krakow <hurikhan77@×××××.com>
To: gentoo-user@l.g.o
Subject: [gentoo-user] Re: Backup [Was: Old Firefox ebuild?]
Date: Sat, 15 Oct 2016 19:18:33
Message-Id: 20161015211803.2c1577b9@jupiter.sol.kaishome.de
In Reply to: Re: [gentoo-user] Backup [Was: Old Firefox ebuild?] by Rich Freeman
On Sat, 15 Oct 2016 07:22:06 -0400, Rich Freeman <rich0@g.o> wrote:

> On Sat, Oct 15, 2016 at 1:35 AM, Ian Zimmerman <itz@×××××××.net>
> wrote:
> > On 2016-10-15 05:42, Kai Krakow wrote:
> >
> >> The backup source is my btrfs subvol 0. I can put the systemd units
> >> for backup to github if you're interested.
> >
> > I'm not a systemd fan, so they wouldn't help me, but thanks for
> > offering.
>
> I'd be curious to see them; whether you use systemd or not units are
> pretty trivial to read and make use of in scripts, cron entries, etc.

https://gist.github.com/kakra/7637555528a54a0c7aaca6f68338418c

My system automatically sleeps after 2 hours, so I'm using WakeSystem
to wake it up at night for the backup.
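
A sketch of what such a timer looks like (names and time here are made
up; the real units are in the gist above):

  # backup.timer
  [Unit]
  Description=Nightly borg backup

  [Timer]
  OnCalendar=*-*-* 03:00:00
  WakeSystem=true
  Persistent=true

  [Install]
  WantedBy=timers.target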

/mnt/btrfs-pool is my pool. I also put a list of all subvolumes on the
backup drive so I can more easily recreate them before restoring the
backup.
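
Creating that list is basically a one-liner (paths are only examples,
not necessarily what my units use):

  # dump the current subvolume layout next to the backup
  btrfs subvolume list /mnt/btrfs-pool > /mnt/backup/subvolume-list.txt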

My initial backup took around 24 hours; subsequent backups run in about
15 minutes. I'm using bcache for both my system storage and my backup
storage to speed up the process, but with Eric Wheeler's ionice patches
applied to reduce SSD wear and improve overall performance:

"bcache: introduce per-process ioprio-based bypass/writeback hints"
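
With those patches applied, the idea is to run the backup at idle I/O
priority so bcache bypasses the SSD cache for that process. A sketch
(the exact bypass/writeback thresholds depend on the patch version and
your sysfs settings):

  # either wrap the backup command ...
  ionice -c 3 borg create ...
  # ... or set the priority in the systemd service
  [Service]
  IOSchedulingClass=idle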

Also, you can change the environment settings to support encrypted
backups. I'm not using them, as I only use trusted storage and don't
want to bother with keeping the encryption key in a safe place.
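
If you do want encryption, it roughly comes down to this (a sketch,
not part of my units; the repository path is an example and
BORG_PASSPHRASE is borg's standard environment variable):

  # create the repository once with encryption enabled
  borg init --encryption=repokey /mnt/backup/borg-repo
  # then let the service provide the passphrase, e.g. via
  # Environment=BORG_PASSPHRASE=...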

If I reintroduce duplication to a remote location, I'd simply rsync the
backup archive there with an additional ExecStartPost line. My daily
diff is around 500 MB to 2 GB. (My external USB disk died, so I'll get
a NAS sometime later instead.)
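
Something along these lines would do (host and paths are made up for
the example):

  # mirror the finished backup repository to a remote machine
  ExecStartPost=/usr/bin/rsync -a --delete /mnt/backup/ backup@nas.example.com:/srv/backup/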

Actually, I'm doing this with a remote location which is rsync'ed to my
local backup every night.

> I just use snapper to manage snapshots if all I'm worried about is
> casual deletion of files. Since I don't fully trust btrfs I also keep
> a full rsnapshot (basically an rsync wrapper) of my btrfs volumes on a
> local ext4.

That is why I chose borgbackup. But I prefer XFS as a more reliable and
faster file system for the backup storage. Also, I use the systemd
automounter so the backup drive isn't mounted all the time but only on
demand.
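
For the automounter a single fstab line is enough (label, mount point
and timeout are examples, not my actual setup):

  # mount on first access, unmount again after some idle time
  LABEL=backup  /mnt/backup  xfs  noauto,x-systemd.automount,x-systemd.idle-timeout=10min  0 0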

> > My priorities are different, and there are constraints resulting
> > from my priorities as well as others.
> >
> > I am mostly worried about physical catastrophic damage (I live in
> > earthquake country) and losing my personal data, which could not be
> > recreated. So it has to be offsite, and because it's personal data
> > it has to be encrypted. And I cannot make the trip to where it's
> > stored every day.
> >
> > Given the time (including the trip) it takes to restore, I don't
> > see the point of backing up static files which can be reinstalled
> > from the distribution. Of course, I'm still learning gentoo and so
> > it was easy for me to make the mistake of forgetting that files
> > under /usr/portage aren't really in that category.
> >
>
> ++
>
> What I really consider my "backups" are stored encrypted on amazon s3
> using duplicity (which I highly recommend for this purpose).

Borgbackup supports strong encryption. I'm pretty sure you could even
manage storing to Amazon S3 directly.

> Basically it amounts to /etc, and /home, with a number of exclusions
> (media and cache). For media I care about like photos I include new
> stuff in my duplicity backups, but as I accumulate reasonable chunks of
> it I make a separate backup and store it offsite, and remove that
> chunk from my duplicity backup. This prevents the daily-updated
> backup pool from getting insanely large, while still maintaining an
> offsite copy (since these files don't change over time).

Borgbackup supports cache tags to exclude cache directories. I placed
some of them manually, e.g. in $HOME/.cache. Borgbackup's deduplication
makes full backup snapshots insanely small: 30 TB of backups are stored
in 1.8 TB for me. Thanks to deduplication it also handles file moves,
which rsync doesn't (and duplicity probably doesn't either).
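
Placing such a tag by hand is a one-liner, and borg then skips the
directory when run with --exclude-caches (the signature is the
standard cache directory tag; repository and archive name are just
examples):

  printf 'Signature: 8a477f597d28d172789f06886806bc55\n' > ~/.cache/CACHEDIR.TAG
  borg create --exclude-caches /mnt/backup/borg-repo::home-2016-10-15 /home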

> I'd never spend the money to be doing cloud backup of /usr (other than
> /usr/local). I have all my static configuration backed up, so I could
> just restore that onto a stage 3 and run emerge -uDN world to get all
> of that back.

I figured that reinstalling plus restoring is much more time-consuming
than simply putting everything back from the backup. Reinstalling
Gentoo from scratch normally takes me 3-4 days until everything is back
in place, while restoring my data from borgbackup took around 24 hours
the last time I had to do it.

Restoring onto a stage3 may also introduce orphan files because you are
overwriting what the package manager thinks is installed. And by the
way: how big is /usr usually (excluding portage)? Maybe a few
gigabytes. It's a tiny fraction of my complete backup.

Instead, I'm using a tiny USB stick with an SD card and a rescue system
that I keep updated from time to time (simply by booting a
systemd-nspawn container from it). That USB stick is just as small as
the Logitech nano receivers (the SD card is inserted within the
connector), so I keep it plugged in all the time. This installation
also includes a script to recreate my btrfs including its subvolumes
and then restore the backup onto that. It also works as a backup for
my /boot partition as it essentially carries a copy of that (to be
bootable).
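
Refreshing that rescue system boils down to something like this (label
and mount point are examples):

  # boot the installation on the stick as a container and update it
  # from the inside with the usual emerge run
  mount /dev/disk/by-label/rescue /mnt/rescue
  systemd-nspawn -bD /mnt/rescue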

> If I did have an offsite server somewhere where storage costs weren't
> a big deal then I'd probably be setting up replicas using
> btrfs+zfs-send/receive. You can do that at minimal cost with
> incrementals, and I could probably do the first clone on the local
> LAN.

I'd also take a look at borgbackup's remote capabilities. I'm backing
up some remote machines by simply using the prebuilt single binary from
the website (so the versions easily stay in sync). Borgbackup uses ssh
tunnels for transferring data and a remote cache for speeding up delta
transfers. It's much, much faster than rsync. I never tried btrfs
send/receive, and it's probably still less space-efficient than
borgbackup.
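
For the remote machines this essentially looks like the following
(host, repository path and archive name are examples; the remote end
only needs the borg binary in its $PATH):

  borg create --compression lz4 \
      ssh://backup@nas.example.com/srv/borg/host1::etc-2016-10-15 \
      /etc /home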


--
Regards,
Kai

Replies to list-only preferred.