And honestly, expanding on what Rich said... given that your particular
circumstances, with the extensive number of hardlinks, are pretty
specific, I reckon you might be best off just setting up a small-scale
test of some options and profiling them. Converting it all to a btrfs
subvolume might be a realistic option, or it might take an order of
magnitude more time than just waiting for it all to delete; the same
goes for the various move tricks mentioned previously.
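
For a rough sketch of what such a small-scale test could look like (the
tree size and paths here are made up; scale them toward your real data
before trusting the numbers):

```shell
# Hypothetical benchmark: build a small tree of hardlinks, then time a
# plain delete. Repeat the same idea for each strategy you want to
# compare (rm -rf, mv-then-delete-later, subvolume delete, ...).
testdir=$(mktemp -d)
mkdir -p "$testdir/tree"
echo data > "$testdir/tree/file0"
# Create 500 hardlinks to the same inode, standing in for the backup tree.
for i in $(seq 1 500); do
    ln "$testdir/tree/file0" "$testdir/tree/file$i"
done
time rm -rf "$testdir/tree"
rmdir "$testdir"
```

Run it on the actual btrfs volume in question, since deletion speed is
exactly the filesystem-specific behaviour you're trying to measure.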

If this were an "I know I need to do this in the future, what should I
do" question, then you'd either put it all in a subvolume to begin
with, or select the file system specifically for its speed at deleting
small files... (Certainly don't quote me here, but wasn't JFS the king
of that back in the day? I can't quite recall.)
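
For the plan-ahead case, the subvolume route would look roughly like
this (sketch only: assumes root privileges and a btrfs mount at
/mnt/backup, which is a made-up path):

```shell
# Create the backup area as its own subvolume up front.
btrfs subvolume create /mnt/backup/prev-backup
# ... populate it with the backup over time ...
# Later, dropping the whole generation is one subvolume operation
# rather than a file-by-file unlink walk:
btrfs subvolume delete /mnt/backup/prev-backup
```

The delete command itself returns quickly; as I understand it the
actual space reclamation happens in the background, which is exactly
the garbage-collection behaviour Rich describes below.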

On Fri, 22 Oct 2021 at 22:29, Rich Freeman <rich0@g.o> wrote:
>
> On Fri, Oct 22, 2021 at 7:36 AM Helmut Jarausch <jarausch@××××××.be> wrote:
> >
> >
> > There are more than 55,000 files on some <PREVBackUp> which is located
> > on a BTRFS file system.
> > Standard 'rm -rf' is really slow.
> >
> > Is there anything I can do about this?
> >
>
> I don't have any solid suggestions as I haven't used btrfs in a while.
> File deletion speed is something that is very filesystem specific, but
> on most it tends to be slow.
>
> An obvious solution would be garbage collection, which is something
> used by some filesystems but I'm not aware of any mainstream ones.
> You can sort-of get that behavior by renaming a directory before
> deleting it. Suppose you have a directory created by a build system
> and you want to do a new build. Deleting the directory takes a long
> time. So, first you rename it to something else (or move it someplace
> on the same filesystem which is fast), then you kick off your build
> which no longer sees the old directory, and then you can delete the
> old directory slowly at your leisure. Of course, as with all garbage
> collection, you need to have the spare space to hold the data while it
> gets cleaned up.
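
The rename trick Rich describes here works on any filesystem, and a
minimal sketch of it looks like this (directory names made up):

```shell
# Move the old tree aside: a rename within one filesystem is instant,
# so the new build can start immediately.
mv build "build.trash.$$"
mkdir build
# The slow deletion becomes background garbage collection.
rm -rf "build.trash.$$" &
wait    # or just let it finish on its own
```

The `$$` suffix is only there so repeated runs don't collide; any
unique name on the same filesystem works.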
>
> I'm not sure if btrfs is any faster at deleting snapshots/reflinks
> than hard links. I suspect it wouldn't be, but you could test that.
> Instead of populating a directory with hard links, create a snapshot
> of the directory tree, and then rsync over it/etc. The result looks
> the same but is COW copies. Again, I'm not sure that btrfs will be
> any faster at deleting reflinks than hard links though - they're both
> similar metadata operations. I see there is a patch in the works for
> rsync that uses reflinks instead of hard links to do it all in one
> command. That has a lot of benefits, but again I'm not sure if it
> will help with deletion.
>
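
A concrete version of the snapshot experiment Rich suggests might look
like this (paths are hypothetical, and it needs root on a btrfs mount):

```shell
# Instead of an rsync --link-dest hardlink farm: snapshot the previous
# backup subvolume (writable by default), then rsync the changes over
# the snapshot, so unchanged files stay as COW-shared extents.
btrfs subvolume snapshot /mnt/backup/current /mnt/backup/2021-10-22
rsync -a --delete /home/ /mnt/backup/2021-10-22/
# Expiring an old generation is then a subvolume delete, not rm -rf:
btrfs subvolume delete /mnt/backup/2021-10-15
```

Whether the subvolume delete actually beats rm -rf on your hardware is
the thing the small-scale test would tell you.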
> You could also explore other filesystems that may or may not have
> faster deletion, or look to see if there is any way to optimize it on
> btrfs.
>
> If you can spare the space, the option of moving the directory to make
> it look like it was deleted will work on basically any filesystem. If
> you want to further automate it you could move it to a tmp directory
> on the same filesystem and have tmpreaper do your garbage collection.
> Consider using ionice to run it at a lower priority, but I'm not sure
> how much impact that has on metadata operations like deletion.
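
On the ionice point, the lowest-impact variant would be the idle class
(path made up):

```shell
# -c 3 selects the idle I/O scheduling class, so the delete only gets
# disk time that nothing else wants.
ionice -c 3 rm -rf /tmp/trash/old-backup
```

As Rich says, how much this helps depends on the I/O scheduler in use,
and it may do little for metadata-heavy operations like mass unlinks.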
>
> --
> Rich
>