On Sun, 26 Apr 2020 18:15:51 +0200
tuxic@××××××.de wrote:

> Filesystem Size Used Avail Use% Mounted on
> /dev/root 246G 45G 189G 20% /

Given that (Size - Used) is roughly 200G, it suggests to me that
perhaps some process somewhere is creating and deleting a lot of
temporary files on this device (or maybe simply re-writing the same
file multiple times).

From userspace, this would be invisible: the "new" file would be in a
new location on the disk, and the "old" file would be hidden and its
blocks marked "can be overwritten".

So if you did:

for i in {0..200}; do
    cp a b
    rm a
    mv b a
done

where "a" is a 1G file, I'd expect this to have a *ceiling* of 200G
that would turn up in fstrim output, as once you reached iteration
201, "can be overwritten" would allow the SSD to go back and rewrite
over the space used in iteration 1.

While the whole time, the visible disk usage in df -h would never
exceed 46G.
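
That flat-visible-usage-versus-growing-writes effect can be sketched
with small files (1M instead of 1G; the temp dir and sizes here are
just for illustration):

```shell
# Re-write the same file a few times: visible usage stays flat while
# cumulative writes grow with every iteration.
dir=$(mktemp -d)
dd if=/dev/urandom of="$dir/a" bs=1024 count=1024 2>/dev/null  # 1M file
written=0
for i in 1 2 3 4 5; do
    cp "$dir/a" "$dir/b"
    rm "$dir/a"
    mv "$dir/b" "$dir/a"
    written=$((written + 1))          # 1M of new writes per iteration
done
visible=$(du -k "$dir/a" | cut -f1)   # still ~1024K, not 5120K
echo "written=${written}M visible=${visible}K"
rm -r "$dir"
```

Five iterations report 5M of cumulative writes while du still sees a
single 1M file, which is the ceiling argument in miniature.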

I don't know if this is what is happening; I don't have an SSD and
don't get to use fstrim.

But based on what you've said, the results aren't *too* surprising.

Though it's possible the hardware has some internal magic to elide
some writes, potentially making the "cp" action incur very few writes.
That would show up in the smartctl data, but ext4 might not know
anything about it, so perhaps fstrim only indicates what ext4
*tracked* as being cleaned, while much less cleanup was actually
required on the hardware.

That would explain the difference between the smartctl and fstrim
results.

Maybe compare smartctl output over time with
/sys/fs/ext4/<device>/session_write_kbytes and see if one grows faster
than the other? :)
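
A rough way to put the two counters side by side, assuming the drive
reports Total_LBAs_Written in 512-byte units (many do, but check your
model); the sample numbers below are made up:

```shell
# Convert both counters to GiB so they're comparable.
# Sample values for illustration; on a real box you'd pull them from
#   smartctl -A /dev/sdX        (Total_LBAs_Written raw value)
#   /sys/fs/ext4/<device>/session_write_kbytes
lbas_written=1234567890
session_kbytes=743440384
smart_gib=$((lbas_written * 512 / 1024 / 1024 / 1024))
ext4_gib=$((session_kbytes / 1024 / 1024))
echo "smartctl: ${smart_gib}G  ext4 session: ${ext4_gib}G"
```

If the smartctl-derived figure grows much more slowly than the ext4
one, that would point at the hardware eliding writes.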

My local session_write_kbytes is currently at 709G; the partition it's
for is only 552G with 49G free, and it's been booted 33 days, so "21G
of writes a day".
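
That per-day figure is just integer division over the uptime:

```shell
# 709G of session writes over 33 days of uptime.
session_g=709
uptime_days=33
per_day=$((session_g / uptime_days))
echo "${per_day}G of writes a day"   # rounds down from ~21.5
```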

And uh, lifetime_write_kbytes is about 18TB. Yikes.

( compiling things involves a *LOT* of ephemeral data )

Also, probably don't assume the amount of free space on your partition
is all the physical device has at its disposal. It seems possible that
on the hardware, the total pool of "free blocks" is arbitrarily usable
by the device for wear levelling, and a TRIM command to that device
could plausibly report more blocks trimmed than your current partition
size, depending on how it's implemented.

But indeed, lots of speculation here on my part :)