On Tue, May 27, 2014 at 7:49 AM, Neil Bothwick <neil@××××××××××.uk> wrote: |
> I have zfs-snapshot making snapshots at 15 minute, hourly, daily, monthly
> and weekly intervals - and it cleans up after itself. There isn't
> anything quite like that for btrfs, so I'm knocking up a python script to
> take care of it. I want automated snapshots before I risk it on my
> desktop.
|
There is snapper, which is even in the tree now. It isn't 100%
flexible, but it supports any number of hourly, daily, monthly, and
yearly snapshots, with retention policies for each.
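For reference, snapper's retention limits live in its per-subvolume
config (typically /etc/snapper/configs/root). A sketch of the timeline
settings, with illustrative values (exact keys may vary by snapper
version):

```shell
# /etc/snapper/configs/root (excerpt) - values are illustrative.
# The timeline cleanup algorithm keeps this many snapshots per
# interval and prunes the rest on each cleanup run.
TIMELINE_CREATE="yes"        # create hourly timeline snapshots
TIMELINE_CLEANUP="yes"       # enable timeline-based pruning
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_YEARLY="2"
```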
|
My problem was that the snapshots were created hourly but cleaned up
daily, which meant 24 of them were deleted in parallel at a time.
|
In general I've noticed that btrfs suffers from hurry-up-and-wait
issues. It will happily accept a ton of writes (even from an ioniced
process), which I imagine go into the log, and then the whole
filesystem comes to a halt at the next checkpoint (every 30s by
default). That makes ionice just about useless: the filesystem
accepts more data than it can handle, then blocks even
realtime-priority processes while it catches up.
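One knob that may help at the margin, though it's a trade-off rather
than a fix, is the btrfs commit= mount option on recent kernels, which
sets that checkpoint interval: a shorter interval means each commit
has less accumulated work to flush, so stalls are more frequent but
smaller. A hypothetical fstab line (device is a placeholder):

```shell
# /etc/fstab (excerpt) - hypothetical device; commit=10 shortens the
# default 30s btrfs transaction commit interval so each checkpoint
# flushes less accumulated work.
UUID=0123-example  /data  btrfs  defaults,commit=10  0 0
```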
|
I suspect it was having a related issue with snapshot removals: 24
huge snapshot removal commands complete in almost zero time, and then
30 seconds later the debt comes due.
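A workaround sketch for the cleanup side: serialize the removals and
pause between them, so the deferred work is spread across many commits
instead of landing on one. The delete command is passed in as a
parameter (in real use it would be "btrfs subvolume delete"); the
paths and the 60-second pause are arbitrary examples:

```shell
# stagger_delete CMD SNAP... : run CMD on each snapshot path in turn,
# sleeping between removals so the deferred cleanup is spread across
# many filesystem commits rather than queued up all at once.
stagger_delete() {
    local cmd=$1; shift     # cmd is word-split on purpose (sketch)
    local snap
    for snap in "$@"; do
        $cmd "$snap" || return 1
        sleep 60            # let at least one commit pass first
    done
}

# Real use would look something like (paths hypothetical):
#   stagger_delete "btrfs subvolume delete" /snapshots/hourly.2014-05-27.*
```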
|
To be a bit more ionice-friendly, the filesystem needs to figure out
what it can sustain and throttle writes to a reasonable rate. I'm
fine with some allowance for bursting, but having all disk access
block for 10-20 seconds isn't acceptable.
|
Oh, and chromium just loves its disk cache - it will happily wait 20
seconds to read a cache entry that it could have downloaded again in
less than a second.
|
Rich |