On Tuesday 27 May 2014 11:28:17 Rich Freeman wrote:
> On Tue, May 27, 2014 at 11:12 AM, J. Roeleveld <joost@××××××××.org> wrote:
> > On Tuesday, May 27, 2014 10:31:26 AM Rich Freeman wrote:
> >> btrfs wouldn't have any issues with this at all. You'd have an
> >> advantage in that you wouldn't have to unmount the filesystem to
> >> cleanly create the snapshot (which you have to do with lvm).
> >
> > That, or a "sync" prior to creating the snapshot. :)
>
> If the filesystem is still mounted, I'm not sure that a sync is
> guaranteed to give you a clean remount. It only flushes the
> caches/etc. You need to remount read-only or unmount before doing the
> sync (and the sync probably isn't actually necessary, as I'd think LVM
> would snapshot the contents of the cache as well).

I do this for the OS partitions of my VMs:
in the VM, run "sync", then on the host, take an LVM snapshot and mount
that snapshot on the host.
I have not had any errors from this.

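For reference, the host-side sequence looks roughly like this. It is only a sketch: the volume group (vg0), LV names, snapshot size, and mount points are all placeholders for whatever your setup uses.

```shell
# Inside the guest: flush dirty pages down to the virtual disk.
ssh guest sync

# On the host: take a copy-on-write snapshot of the guest's LV.
# vg0/guest-root and the 1G CoW area are placeholders.
lvcreate --snapshot --size 1G --name guest-root-snap /dev/vg0/guest-root

# Mount the snapshot read-only and back it up with any ordinary tool.
mount -o ro /dev/vg0/guest-root-snap /mnt/snap
rsync -a /mnt/snap/ /backup/guest-root/

# Clean up: the snapshot only needs to live as long as the backup run.
umount /mnt/snap
lvremove -f /dev/vg0/guest-root-snap
```

The guest-side sync narrows the window of un-flushed data, but as noted above it does not guarantee a clean filesystem on the snapshot; fsck or a journal replay on mount may still be needed.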
> > I have a yearly (full), monthly, weekly and daily. Each incremental is
> > against the most recent one of itself or a longer period.
> > That means having to keep multiple snapshots active, which I prefer to
> > avoid.
>
> You only need to store snapshots for use with incremental backups.
> So, if all your backups are full, then you don't need to retain any
> snapshots (and you wouldn't use btrfs send anyway). If your yearly is
> full and your monthlies are incremental against the yearly, then you
> need to keep your yearly snapshot for a year. If your yearly is full
> and your monthlies are incremental against the last month, then you
> only need to keep the yearly until the next monthly. If your
> monthlies are full, then you only need to keep the current monthly,
> assuming your dailies are incremental against those; but if they're
> incremental from the last daily, then you never need to keep anything
> for more than a day.

That makes for an interesting option. I'm not sure I would implement it that
way.

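As a sketch of the scheme described above, assuming a subvolume at /data and hypothetical snapshot names: an incremental btrfs send only needs its parent snapshot kept on disk, so once a newer snapshot has been sent successfully, the older parent can go.

```shell
# Full send: the snapshot must be read-only (-r) to be sendable.
btrfs subvolume snapshot -r /data /data/.snap-monthly
btrfs send /data/.snap-monthly > /backup/monthly.btrfs

# First incremental, using the monthly as the parent (-p).
btrfs subvolume snapshot -r /data /data/.snap-daily1
btrfs send -p /data/.snap-monthly /data/.snap-daily1 > /backup/daily1.btrfs

# Next incremental against the previous daily; once it succeeds,
# daily1 is no longer needed as a parent and can be deleted.
btrfs subvolume snapshot -r /data /data/.snap-daily2
btrfs send -p /data/.snap-daily1 /data/.snap-daily2 > /backup/daily2.btrfs
btrfs subvolume delete /data/.snap-daily1
```

With dailies chained like this, at most one snapshot has to stay online at any time, which is the "never keep anything for more than a day" case above.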
> > But, it is a good idea for backing up desktops and laptops.
>
> It is really intended more for something like datacenter replication.
> Snapshot every 5 min, send the data to the backup datacenter, delete
> the snapshots upon confirmation of successful receipt. In such a
> scenario you wouldn't retain the sent files but just keep playing them
> against the replica filesystem.
>
> They'd be fine for backups as well, as long as you can store the
> snapshots online until no longer needed for incrementals.

"app-backup/dar" uses catalogues for the incrementals. I think I will stick to
that for the foreseeable future.

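For comparison, dar's differential mode works roughly like this (paths are illustrative): -A points at the reference archive, and -C isolates a small catalogue from it, so the incremental run only needs the catalogue online rather than the whole previous archive.

```shell
# Full backup of /data (creates /backup/full.1.dar, possibly more slices).
dar -c /backup/full -R /data

# Isolate the catalogue: a small file holding just the archive's metadata.
dar -C /backup/full-cat -A /backup/full

# Differential backup: saves only files changed since the reference.
dar -c /backup/diff1 -R /data -A /backup/full-cat
```

That is the practical difference from btrfs send: dar compares against a catalogue file, so no filesystem snapshot has to be retained between runs.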
> >> But, you can always just create a snapshot, write it to backup with
> >> your favorite tool (it is just a directory tree), and then remove it
> >> as soon as you're done with it. Creating a snapshot is atomic at the
> >> filesystem level, though again if you want application-level
> >> consistency you need to deal with that until somebody comes up with a
> >> transactional way to store files on Linux that is more elegant than
> >> fsyncing on every write.
> >
> > That would require a method to keep database and filesystem perfectly in
> > sync when they are not necessarily on the same machine.
>
> Well, right now we can't even guarantee consistency when everything is
> written by a single process on the same machine. The best we have is
> a clunky fsync operation which kills the write cache and destroys
> performance, and even that doesn't do anything if you have more than
> one file that must be consistent.

Yep, and that's why those filesystems are actually unmounted prior to creating
the LVM snapshot. Unmounting forces the filesystem into a consistent
state.

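The unmount-first variant, sketched with placeholder names, keeps the service offline only for the instant between umount and remount, since the actual backup reads from the snapshot:

```shell
# Quiesce: unmounting flushes everything and marks the filesystem clean.
umount /srv/data

# Snapshot the now-consistent volume (vg0/data and 2G are placeholders).
lvcreate --snapshot --size 2G --name data-snap /dev/vg0/data

# Bring the original back into service immediately.
mount /dev/vg0/data /srv/data

# Back up from the snapshot at leisure, then drop it.
mount -o ro /dev/vg0/data-snap /mnt/snap
rsync -a /mnt/snap/ /backup/data/
umount /mnt/snap
lvremove -f /dev/vg0/data-snap
```

Note this only guarantees filesystem-level consistency; an application with its own state (a database, say) still needs its own quiesce or dump step.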
> The result is journals on top of journals, as nobody can trust the next
> layer down to do its job correctly.
>
> Going across machines does complicate things further, as there are more
> failure modes that take out one part of the overall system but not
> another. However, I'd like to think that an OS that natively supports
> transactions could at least standardize things so that every layer
> along the path isn't storing its own journal.
>
> In fact, many of the optimizations possible with zfs and btrfs are due
> to the fact that they eliminate all those layers.

One of those two, probably btrfs as I prefer native support in the kernel,
will be implemented when I get the opportunity to put the NAS on bare metal
and remove the virtualization for that component. I need to find a different
host for the other services first.
That might take a while.

--
Joost