On Wed, 24 Apr 2013 19:07:05 +0100, Stroller wrote:

> > That only works on small systems. I have systems here where a 'du' on
> > /home would take hours and produce massive IO wait, because there's so
> > much data in there.
>
> Of course. Excuse me.
>
> My original idea was in respect of the previous respondent's desire to
> offer hard limits of a gigabyte - allocating each user a partition and
> running `du`, which returns immediately, on it.

I said "by the gigabyte", not "of a gigabyte"; a user could have hundreds
of them.

> I don't understand how a hard limit could be enforced if it's
> impractical to assess the size of used data.

Because the filesystem keeps track of the usage, just like it does for
the whole filesystem, which is why "df ." is so much faster than
"du .". ZFS does this too, it just doesn't have a concept of a soft limit.
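A quick sketch of the difference (the ZFS dataset name and quota size
below are made up for illustration):

```shell
# The filesystem updates its usage counters as blocks are allocated,
# so reading them back is effectively instant:
df -h .

# Summing sizes by walking the whole tree scales with the number of
# files, which is why this can take hours on a large /home:
du -sh .

# ZFS enforces hard quotas from the same per-dataset accounting
# (dataset name and size are illustrative):
#   zfs set quota=100G tank/home/alice
#   zfs get used,quota tank/home/alice
```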


--
Neil Bothwick

Please rotate your phone 90 degrees and try again.