On Thursday, September 01, 2016 09:35:15 AM Rich Freeman wrote:
> On Thu, Sep 1, 2016 at 8:41 AM, Michael Mol <mikemol@×××××.com> wrote:
> > The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> > badly broken and an insidious source of problems for both regular Linux
> > users and system administrators.
>
> It depends on whether you tend to yank out drives without unmounting
> them,

The sad truth is that many (most?) users don't understand the idea of
unmounting. Even Microsoft largely gave up, defaulting flash drives to
"optimized for data safety" rather than "optimized for speed". While it'd be
nice if the average John Doe would follow instructions, anyone who's worked
in IT understands that the average John Doe...doesn't. And above-average
ones assume they know better and don't have to.

As such, queuing up that much data while reporting to the user that the copy
is already complete violates the principle of least surprise.

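For a sense of scale -- if memory serves, the stock kernel defaults are
percentage-of-RAM knobs, not byte counts:

    $ sysctl vm.dirty_ratio vm.dirty_background_ratio
    vm.dirty_ratio = 20
    vm.dirty_background_ratio = 10

On a 16GB desktop that's over 3GB of dirty pages before writers get
throttled, and 1.6GB before background writeback even kicks in. Check your
own boxes.
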
> or if you have a poorly-implemented database that doesn't know
> about fsync and tries to implement transactions across multiple hosts.

I don't know off the top of my head which database implementations would do
that, though I can think of a dozen that could be vulnerable if they didn't
sync properly.

The real culprits that come to mind, for me, are copy tools. Whether it's
dd, mv, cp, or a copy dialog in GNOME or KDE. I would love to see
CoDel-style time-based buffer sizes applied throughout the stack. On the
face of it, the user may not care how many milliseconds it takes for a read
to turn into a completed write, but they do like accurate time estimates and
a low-latency UI.

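To make that concrete, here's a toy sketch -- mine, not anything cp or a
desktop copy dialog actually does -- of a copy loop that only reports
progress after an fsync, so "done" means "on disk". The chunk and flush
sizes are arbitrary:

    import os, sys

    CHUNK = 1 << 20          # read 1 MiB at a time
    FLUSH_EVERY = 16 << 20   # fsync after ~16 MiB of writes

    def copy(src_path, dst_path):
        with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
            written = since_flush = 0
            while True:
                buf = src.read(CHUNK)
                if not buf:
                    break
                dst.write(buf)
                written += len(buf)
                since_flush += len(buf)
                if since_flush >= FLUSH_EVERY:
                    dst.flush()
                    os.fsync(dst.fileno())  # durable before we brag about it
                    since_flush = 0
                    sys.stderr.write("\r%d MiB on disk" % (written >> 20))
            dst.flush()
            os.fsync(dst.fileno())          # final flush: now it's really done

You pay for the periodic fsync in throughput, but the progress bar stops
lying.
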
>
> The flip side of all of this is that you can save-save-save in your
> applications and not sit there and watch your application wait for the
> USB drive to catch up. It also allows writes to be combined more
> efficiently (less of an issue for flash, but you probably can still
> avoid multiple rounds of overwriting data in place if multiple
> revisions come in succession, and metadata updating can be
> consolidated).

I recently got bitten by vim's easytags plugin causing saves to take a
couple dozen seconds, which led me to stop saving as often as I used to. And
then a bunch of code I wrote Monday...wasn't there any more. I was sad.

>
> For a desktop-oriented workflow I'd think that having nice big write
> buffers would greatly improve the user experience, as long as you hit
> that unmount button or pay attention to that flashing green light
> every time you yank a drive.

Realistically, users aren't going to pay attention. You and I do, but that's
because we understand the *why* behind the importance.

I love me some fat write buffers for write combining, page caching, etc.
But, IMO, it shouldn't take longer than 1-2s (barring a spun-down
spinning-rust disk waking up) for full buffers to flush to disk; at modern
write speeds, even a slow spinning disk can clear a dozen or so megabytes in
that window, and that's plenty big for write-combining purposes.

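If you want to play with it, the byte-based knobs can be set directly. The
numbers below are illustrative, not a recommendation; they assume a device
that can sustain a few tens of MB/s, and note that setting the _bytes knobs
zeroes their _ratio counterparts:

    # /etc/sysctl.conf -- illustrative sizes only
    vm.dirty_background_bytes = 16777216   # background writeback at 16 MiB
    vm.dirty_bytes = 50331648              # throttle writers at 48 MiB
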
--
:wq