Gentoo Archives: gentoo-user

From: William Kenworthy <billk@×××××××××.au>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
Date: Sun, 01 Aug 2021 03:37:34
Message-Id: faf7bf15-ec9e-ba6f-a840-fa062faccf18@iinet.net.au
In Reply to: Re: [gentoo-user] [OT] SMR drives (WAS: cryptsetup close and device in use when it is not) by Frank Steinmetzger
On 1/8/21 8:50 am, Frank Steinmetzger wrote:
> On Sat, Jul 31, 2021 at 12:58:29PM +0800, William Kenworthy wrote:
>
>> It's not RAID, just btrfs single on the disk (no partition). It contains
>> a single borgbackup repo for an offline backup of all the online
>> borgbackup repos I have for a three-times-a-day backup rota of individual
>> machines/data stores.
> So you are borg’ing a repo into a repo? I am planning on simply rsync’ing
> the borg directory from one external HDD to another. Hopefully SMR can cope
> with this adequately.
>
> And you are storing several machines into a single repo? The docs say this
> is not officially supported. But I have one repo each for /, /home and data
> for both my PC and laptop. Using a wrapper script, I create snapshots that
> are named $HOSTNAME_$DATE in each repo.

Basically yes: I take an hourly snapshot of approximately 500 GiB of
data on moosefs, plus borgbackups three times a day to individual repos
on moosefs for each host.  Three times a day, the latest snapshot is
stuffed into a borg repo on moosefs and the old snapshots are deleted.  I
currently push all the repos into a borg repo on the USB3 SMR drive
manually, once a day or so.
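
For concreteness, one leg of that rota might look roughly like this (every
path, host name and the snapshot layout below is a placeholder, not my
actual setup - it only echoes the commands so it is safe to try):

```shell
#!/bin/sh
# Hypothetical per-host leg of the 3x/day rota: snapshot the live data
# on moosefs, then borg the latest snapshot into that host's repo.
# All paths and names are placeholders.
HOST=hostA
STAMP=$(date +%Y%m%d%H%M)
SNAP="/mnt/moosefs/snapshots/$HOST-$STAMP"
REPO="/mnt/moosefs/borg-repos/$HOST"
# Print the commands instead of running them:
echo "mfsmakesnapshot /mnt/moosefs/live/$HOST $SNAP"
echo "borg create $REPO::$STAMP $SNAP"
# mfsmakesnapshot "/mnt/moosefs/live/$HOST" "$SNAP"
# borg create "$REPO::$STAMP" "$SNAP"
```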

1. rsync (and cp etc.) are dismally slow on SMR - use them where you have
to, avoid them otherwise.

2. borgbackup with small updates is very quick.

3. Run borgbackup often to keep the changes between updates small - the
time to back up will stay short.

4. Borg'ing a repo into a repo works extremely well - however, there are
catches around backup set names and the file-change tests used.
(ping me if you want the details)
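
As a sketch of point 4 (all paths hypothetical): the two catches are keeping
archive names unique per push, and choosing the file-change test - borg's
files cache defaults to ctime,size,inode, and --files-cache=mtime,size is
one way to cope with ctime churn on the repo's segment files. This only
echoes the command; check borg create --help against your version first.

```shell
#!/bin/sh
# Hypothetical nested-borg run: back up the directory holding the
# online repos into the offline repo on the SMR drive.
REPOS=/mnt/moosefs/borg-repos         # online repos (plain files to borg)
OFFLINE=/mnt/usb-smr/offline-repo     # borg repo on the USB3 SMR drive
NAME="repos-$(date +%Y-%m-%d-%H%M)"   # unique archive name per push
echo "borg create --files-cache=mtime,size $OFFLINE::$NAME $REPOS"
# borg create --stats --files-cache=mtime,size "$OFFLINE::$NAME" "$REPOS"
```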

5. Yes, I have had disasters (e.g., a poorly thought-out rm -rf in a
moosefs directory, unstable power that took a while to cure, ...)
requiring under-fire restoration of both large and small datasets - it works!

6. Be careful of snapshot resources on moosefs - moosefs uses a defined
amount of master memory for each file stored.  Even with the lazy snapshot
method, taking a snapshot will roughly double the master's memory usage
for that portion of the filesystem, and taking too many snapshots
multiplies the effect.  Once you go into swap, it becomes a recovery
effort.  Also keep in mind that trashtime is carried into the snapshot,
so the data may still exist in trash even after deletion - it's actually
easy to create a DoS condition by not paying attention to this.
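
A quick way to check for that (the path is a placeholder; the mfs tools take
the trash retention in seconds, and again this only echoes the commands):

```shell
#!/bin/sh
# Hypothetical trashtime check on a moosefs snapshot tree.  Deleted
# snapshot data is kept in trash for trashtime seconds, so a long
# setting here eats master memory long after the snapshots are gone.
TREE=/mnt/moosefs/snapshots           # placeholder path
echo "mfsgettrashtime -r $TREE"       # inspect current retention
echo "mfssettrashtime -r 0 $TREE"     # drop retention to zero on this tree
# mfsgettrashtime -r "$TREE"
# mfssettrashtime -r 0 "$TREE"
```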

BillK

>
>> - I get an insane amount of de-duplication that way, for a slight
>> decrease in convenience!
> And thanks to the cache, a new snapshot is usually done very fast. But
> for a yet unknown reason, sometimes Borg re-hashes all files, even though
> I didn’t touch the cache. In that case it takes 2½ hours to go through my
> video directory.
>
