On Tue, 14 Aug 2018 16:06:21 -0400,
J. Roeleveld wrote:
>
> On August 14, 2018 11:42:18 AM UTC, John Covici <covici@××××××××××.com> wrote:
>
> >I use sanoid/syncoid to back up using zfs. It's great, keeps snapshots
> >for as long as I want them (I use 80 days for now). And it keeps
> >hourlies for the last couple of days as well, so I could roll back in
> >case of a problem. Very nice if you use zfs.
>
> I tried sanoid, but it has a few problems which really become annoying when you have a lot of datasets:
> 1) every dataset is handled separately; it makes no use of recursive snapshots when datasets are inside the same tree
> 2) it keeps separate hourly, daily, ... snapshots, which means it will happily create multiple snapshots only a few seconds apart for every dataset around midnight.
> 3) when rolling back several snapshots, multiple errors are reported because the cache (where does it store that?) does not match reality.
>
> Have these been resolved yet?
>
> I ended up writing my own system for this, with some extra intelligence to work around every error condition I have encountered.
>
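For context, a retention policy like the one described above (hourlies for the last couple of days, dailies for 80 days) would look roughly like this in /etc/sanoid/sanoid.conf. The dataset name and exact counts are illustrative assumptions on my part, not taken from either poster's actual setup:

```ini
[tank/data]
        use_template = production
        recursive = yes

[template_production]
        # keep 48 hourly and 80 daily snapshots (illustrative values)
        hourly = 48
        daily = 80
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes
```
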
Well, I got around your second point by having a special cron job at
11:59 pm to create the dailies, and the one at midnight works well. I
only run the cron jobs hourly, not every minute as they recommend.
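
For anyone wanting to replicate that workaround, the crontab entries
would look something like this. The sanoid path and the exact flags are
my assumptions about a typical install, not a copy of the setup above:

```
# Run sanoid once an hour instead of the every-minute cron job the
# sanoid docs suggest; --cron takes due snapshots and prunes expired
# ones in a single pass.
0 * * * *   root  /usr/sbin/sanoid --cron

# Extra run at 23:59 so the dailies are taken just before midnight,
# keeping them clear of the 00:00 hourly run.
59 23 * * * root  /usr/sbin/sanoid --take-snapshots
```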

If your script is not specific to your setup, I would like to see it;
maybe I would use it instead. Things do seem to work for now, though,
with those modifications.

--
Your life is like a penny. You're going to lose it. The question is:
How do you spend it?

John Covici wb2una
covici@××××××××××.com