On Sun, Sep 24, 2017 at 2:11 PM, Martin Vaeth <martin@×××××.de> wrote:
> Rich Freeman <rich0@g.o> wrote:
>> On Sun, Sep 24, 2017 at 4:24 AM, Martin Vaeth <martin@×××××.de> wrote:
>>> Tim Harder <radhermit@g.o> wrote:
>>>
>>> It is the big advantage of overlay that it is implemented in
>>> the kernel and does not involve any time-consuming checks during
>>> normal file operations.
>>
>> Why would you expect containers to behave any differently?
>
> For overlay, there is only one additional directory to be checked
> for every file access.
>
> For containers, at least a dozen bind mounts are required
> (/usr /proc /sys /dev ...).

I wouldn't be surprised if it worked with a single bind mount, with
/proc, /dev, and so on mounted on top of that. You really don't
want to be passing these directories through to the host filesystem
anyway.

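For concreteness, that layout might be set up roughly as follows (a sketch only; the container root /srv/ctroot and the exact mount set are hypothetical, and these commands need root):

```shell
# Hypothetical sketch: one bind mount of the host tree, made read-only,
# with fresh instances of the API filesystems mounted on top of it
# instead of being passed through from the host.
mkdir -p /srv/ctroot
mount --bind / /srv/ctroot
mount -o remount,ro,bind /srv/ctroot  # bind mounts need a second step to become read-only
mount -t proc  proc /srv/ctroot/proc  # private proc, not the host's
mount -t sysfs sys  /srv/ctroot/sys   # private sys
mount -t tmpfs dev  /srv/ctroot/dev   # empty dev; populate device nodes as needed
mount -t tmpfs tmp  /srv/ctroot/tmp   # scratch space for the build
```

So the host tree is reachable through one bind mount, and the special filesystems never leak back to the host.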
>
>> Now, I am concerned about the time to create the container, if we're
>> going to specify individual files, but the same would be true of an
>> overlay. [...]
>> to populate an overlayfs with just that specific list of files.
>
> No. For overlay you need only one mount (not even a bind)
> and only one directory traversal at the end to check for
> violations.

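Sketched as commands (the /tmp/sb paths are hypothetical, and root is required), that is one overlay mount up front and one walk of the upper directory afterwards:

```shell
# One overlayfs mount: reads fall through to the real tree (lowerdir),
# while every write lands in the private upper directory instead.
mkdir -p /tmp/sb/upper /tmp/sb/work /tmp/sb/root
mount -t overlay overlay \
    -o lowerdir=/,upperdir=/tmp/sb/upper,workdir=/tmp/sb/work \
    /tmp/sb/root

# ... run the sandboxed build chrooted into /tmp/sb/root ...

# Single traversal at the end: anything under upper/ is an attempted
# modification (deletions show up as character-device "whiteout" entries).
find /tmp/sb/upper -mindepth 1
```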
You say "not even a bind" as if that were a benefit. I suspect bind
mounts operate faster than an overlayfs, if anything.

> The nice thing is that this is practically independent of
> the number or structure of directories/files you want to protect,
> i.e. it scales perfectly well.
> For the more fine-grained approach, you just delete the files
> you do not want to have in the beginning. Not sure how quickly this
> can be done, but once it is done, the slowdown when running the
> sandbox is independent of the number of deleted files (because
> here certainly only one hash lookup is required).

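The deletion step being described could look like this (assuming an overlay root mounted at a hypothetical /tmp/sb/root; each removal only creates a whiteout entry in the upper directory, so the real files underneath are untouched):

```shell
# Prune files the sandbox should not see. Overlayfs records each removal
# as a whiteout in upperdir rather than touching the lower (real) tree,
# so a later lookup of the path is still a single upper-directory check.
rm -rf /tmp/sb/root/usr/share/doc      # hypothetical example paths
rm -f  /tmp/sb/root/etc/private.conf
```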
Honestly, you can't really claim that overlayfs is superior to bind
mounts when it comes to access times without looking into how
fast bind mounts actually operate. I'd have to read up on the kernel
VFS myself, but people run hosts with lots of containers all the time,
and those usually contain a ton of mountpoints. The kernel obviously
has an efficient way to figure out which filesystem a path is actually
on, and it has to work this out even if you're using overlayfs, since
it first has to determine that the path is on the overlayfs at all.

It is possible that bind mount performance is inferior when you've
removed all but a thousand files from your overlayfs, and it is
equally possible that overlayfs performance is inferior.

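That is easy enough to measure directly; a crude sketch (point it once at a tree reached through a bind mount and once at the same tree through an overlayfs, and compare the numbers):

```shell
#!/bin/sh
# Crude path-lookup benchmark: stat every file under $1 several times
# and report elapsed wall-clock time. The directory and pass count are
# whatever you want to compare; defaults are just for illustration.
dir=${1:-.}
passes=${2:-3}
start=$(date +%s%N)        # GNU date: nanoseconds since the epoch
i=0
while [ "$i" -lt "$passes" ]; do
    # -xdev keeps the walk on one filesystem; stat forces a path lookup
    find "$dir" -xdev -type f -exec stat {} + >/dev/null 2>&1
    i=$((i + 1))
done
end=$(date +%s%N)
echo "$(( (end - start) / 1000000 )) ms for $passes passes over $dir"
```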
>
>> If you just replicate the current sandbox
>> functionality then setup time is tiny
>
> I am not so much concerned about the setup time but more about the
> delay caused for file operations once the sandbox is set up.
> Perhaps even a dozen bind directories already give a considerable
> slowdown...
>

I run builds in Gentoo containers all the time, and the host is
juggling dozens of bind mounts already. Before I started using
containers I'd use bind mounts fairly often on monolithic hosts. I
certainly haven't noticed any overhead. There are certainly people
running FAR more containers per host than I am. I wouldn't be
concerned about a couple of bind mounts. I have a ton of zfs
mountpoints as well and no issues. (Bind mounts shouldn't have any
more cost than any other type of mount, and probably less.)

I wouldn't assume that thousands of bind mounts would have zero impact
without testing it, but I also have no reason to be concerned.

--
Rich