On Sat, Oct 31, 2015 at 10:37:55AM +0100, Remy Blank wrote:
> I'm trying to make sense of the disk usage reported by "zfs list".
> Here's what I get:
>
> $ zfs list \
>     -o name,used,avail,refer,usedbydataset,usedbychildren,usedbysnapshots \
>     -t all
>
> NAME                   USED  AVAIL  REFER  USEDDS  USEDCHILD  USEDSNAP
> pool/data             58.0G   718G  46.7G   46.7G          0     11.3G
> pool/data@2015-10-03      0      -  46.5G       -          -         -
> ...
> pool/data@2015-10-12      0      -  46.5G       -          -         -
> pool/data@2015-10-13   734M      -  46.7G       -          -         -
> pool/data@2015-10-14      0      -  46.7G       -          -         -
> ...
> pool/data@2015-10-28      0      -  46.7G       -          -         -
> pool/data@2015-10-29   755M      -  46.7G       -          -         -
> pool/data@2015-10-30   757M      -  46.7G       -          -         -
> pool/data@2015-10-31      0      -  46.7G       -          -         -
>
> What I don't understand: I have 29 snapshots, only three of them use
> ~750M, but in total they take 11.3G. Where do the excess 9.1G come from?
>
|
I'm going to go out on a limb and assume that zfs works in a similar way
to btrfs here (my quick googling suggests that, at least in this case,
it does). You then have to understand the numbers in the following way:

USEDSNAP refers to _data_ that is not in pool/data but in the snapshots.
The value for USED is _data_ that is only present in *this one* snapshot,
and not in any other snapshot or in pool/data. _data_ that is shared
between at least two snapshots is not shown as USED because removing one
of the snapshots would not free it (it is still referenced by another
snapshot).

So in your case you have three snapshots which each hold ~750 MB
exclusively, and the remaining ~9 GB is shared in some way among all the
snapshots. So if you were to delete any one of those three snapshots, you
would free about 750 MB; if you were to delete all of the snapshots, you
would free 11.3 GB. Note that deleting any one snapshot can change the
USED count of any other snapshot.
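To make the accounting above concrete, here is a toy sketch in Python
(all block IDs and sizes are invented for illustration; this models the
idea, it is not how zfs or btrfs actually store anything). The dataset
and each snapshot are sets of equally-sized blocks:

```python
def used(name, live, snaps):
    """USED of one snapshot: blocks referenced by it and by nothing
    else, i.e. the space that deleting just this snapshot would free."""
    others = set(live).union(*(s for n, s in snaps.items() if n != name))
    return len(snaps[name] - others)

def usedsnap(live, snaps):
    """USEDSNAP of the dataset: blocks kept alive only by snapshots."""
    return len(set().union(*snaps.values()) - set(live))

live = {1, 2, 3}
snaps = {
    "a": {1, 2, 8, 9},   # blocks 8 and 9 were deleted from the dataset
    "b": {1, 2, 8, 9},   # ...after snapshot b, so a and b share them
    "c": {1, 2, 7},      # block 7 exists only in snapshot c
}

# Shared blocks show up in USEDSNAP but in nobody's USED:
print({n: used(n, live, snaps) for n in snaps})  # {'a': 0, 'b': 0, 'c': 1}
print(usedsnap(live, snaps))                     # 3 (blocks 7, 8, 9)

# Deleting a snapshot can raise another snapshot's USED: once "a" is
# gone, "b" becomes the sole owner of blocks 8 and 9.
del snaps["a"]
print(used("b", live, snaps))                    # 2
```

This reproduces the puzzle in miniature: the per-snapshot USED values
sum to far less than USEDSNAP whenever blocks are shared between
snapshots but no longer referenced by the live dataset.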
|
This is one of the problems with copy-on-write filesystems - they make
disk space accounting more complicated, especially with snapshots.
Perhaps zfs has something similar to btrfs qgroups, which allow you to
group snapshots in arbitrary ways to find out how much space any group
of snapshots uses. Here's the example output of 'btrfs qgroup show' on
my machine:
48 |
|
qgroupid     rfer      excl  parent  child
--------     ----      ----  ------  -----
0/5       0.00GiB   0.00GiB  ---     ---
0/262     6.37GiB   0.03GiB  ---     ---
0/265     3.52GiB   2.38GiB  1/0     ---
0/270     6.38GiB   0.16GiB  ---     ---
0/275     0.00GiB   0.00GiB  ---     ---
0/276     4.38GiB   0.35GiB  1/0     ---
0/277     0.00GiB   0.00GiB  ---     ---
0/278     4.98GiB   0.40GiB  1/1     ---
0/279     4.62GiB   0.12GiB  1/0     ---
0/285     5.59GiB   0.01GiB  1/0     ---
0/286     5.69GiB   0.01GiB  1/0     ---
0/289     6.34GiB   0.42GiB  1/1     ---
0/290     6.35GiB   0.01GiB  1/0     ---
0/291     6.38GiB   0.15GiB  1/1     ---
1/0      10.02GiB   3.68GiB  ---     0/265,0/276,0/279,0/285,0/286,0/290
1/1       7.20GiB   0.98GiB  ---     0/278,0/289,0/291
|
0/262 is /
0/270 is /home
1/0 contains all snapshots of /
1/1 contains all snapshots of /home
|
but I could also have grouped a subset of the snapshots in some other
way to find out how much space they take exclusively, and thus how much
space would be freed if they were deleted.
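
The group-exclusive number that qgroups report can be sketched the same
way (a toy Python model with invented block IDs, not a real btrfs or zfs
interface): the exclusive space of a *group* of snapshots is the set of
blocks referenced only within the group, i.e. what deleting the whole
group at once would free.

```python
def group_excl(group, live, snaps):
    """Blocks referenced by some snapshot in `group` but by nothing
    outside it (neither the live dataset nor any other snapshot)."""
    inside = set().union(*(snaps[n] for n in group))
    outside = set(live).union(*(s for n, s in snaps.items() if n not in group))
    return len(inside - outside)

live = {1, 2}
snaps = {
    "mon": {1, 8, 9},
    "tue": {1, 8, 9},
    "wed": {1, 2, 5},
}

# Neither mon nor tue frees anything alone, but together they own
# blocks 8 and 9 exclusively:
print(group_excl({"mon"}, live, snaps))         # 0
print(group_excl({"mon", "tue"}, live, snaps))  # 2
```

This is why group exclusive usage is not the sum of the members'
individual exclusive usage: blocks shared only within the group count
for the group but for none of its members alone.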