On Sat, Jan 17, 2015 at 7:56 AM, lee <lee@××××××××.de> wrote:
> Rich Freeman <rich0@g.o> writes:
>>
>> Depends on how you run it, but yes, you might have multiple instances
>> of fail2ban running this way consuming additional RAM. If you were
>> really clever with your container setup they could share the same
>> binary and shared libraries, which means they'd share the same RAM.
>> However, it seems like nobody bothers running containers this way
>> (obviously way more work coordinating them).
>
> And they wouldn't be much separated anymore.

Yes and no. You can run 45 containers off of the same read-only root
filesystem and they're just as separated as 45 containers running off
of 45 copies of the same filesystem. The work comes from needing to
design your containers so that they can share stuff. The processes
are still isolated in memory, and they can't see outside of their
container. If they share a writable filesystem then they are not
separated with regard to that - you'd need to be careful about what
you shared and what you didn't.

Typically in these kinds of setups you're going to use a gold image
for most of your filesystem, and then use tmpfs or the like for
anything writable. This is one of the drivers for the /usr move - the
goal is to consolidate the areas of a Linux filesystem that an
instance really needs to touch so that you don't end up having
read-write areas all over the place.
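As a rough sketch of what that looks like (the image path and machine names here are made up for illustration), systemd-nspawn can boot several containers from one read-only gold image, overlaying tmpfs on just the directories each instance needs to write:

```shell
# Illustrative only - /srv/images/gold and the machine names are
# hypothetical. Both containers share the same read-only image (and
# hence the same page cache for binaries/libraries), but each gets
# its own private tmpfs for the writable bits.
systemd-nspawn --read-only \
    --directory=/srv/images/gold \
    --tmpfs=/var/tmp \
    --tmpfs=/run \
    --machine=web1

systemd-nspawn --read-only \
    --directory=/srv/images/gold \
    --tmpfs=/var/tmp \
    --tmpfs=/run \
    --machine=web2
```

The processes in web1 and web2 are still fully namespaced from each other; only the immutable filesystem (and therefore RAM for shared pages) is common.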

>
>> I doubt it would take more CPU - 1 process scanning 5 logs probably
>> doesn't use more CPU than 5 processes scanning 1 log each.
>
> Isn't there some sort of scheduling and/or other overhead involved when
> you run more processes? I mean the overhead of "just being there": A
> process scheduler that needs to consider 500 processes might require
> more CPU itself than a scheduler considering 150 processes.

The overhead is very minimal as far as I'm aware. Processes that are
asleep aren't on the run queue at all, and the scheduler's per-switch
cost barely grows with the number of runnable tasks, so no, it doesn't
take meaningfully longer to task-switch between 5 processes and 5M
mostly-idle ones. I'm sure there are some CPU costs I'm not
accounting for, but it should be pretty low. RAM is really the issue
here. Maybe disk IO if all those processes are updating different
files as well.

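One way to see that idle processes don't burden the scheduler: the kernel only picks among runnable tasks, and the fourth field of /proc/loadavg reports runnable/total scheduling entities:

```shell
# The fourth field of /proc/loadavg is "currently runnable / total
# tasks". On a typical box only a handful of the hundreds of existing
# processes are runnable at any instant; the rest are asleep waiting
# on I/O, timers, or sockets, and cost the scheduler nothing.
awk '{print "runnable/total:", $4}' /proc/loadavg
```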
>
>> You would get a security benefit from just running fail2ban on the
>> host, since a failure on one container would apply a block to all the
>> others.
>
> Plus when running fail2ban on the host, you can block connections from
> a particular IP for everyone.
>

Also, if you have a container running apache and another running
postfix, then a failed access attempt in postfix would result in
access to your apache server being blocked as well.

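A sketch of that host-side setup (the jail names and container log paths are made up for illustration): a jail.local on the host can watch each container's logs while banning at the host firewall, so a ban triggered by either jail blocks the offending IP for every container at once:

```ini
# /etc/fail2ban/jail.local on the host - paths and jail names are
# illustrative. Bans land in the host's iptables, in front of all
# containers, so one service's failure protects the others too.

[sshd-containers]
enabled   = true
filter    = sshd
logpath   = /var/lib/containers/*/var/log/auth.log
banaction = iptables-allports

[postfix-containers]
enabled   = true
filter    = postfix
logpath   = /var/lib/containers/*/var/log/mail.log
banaction = iptables-allports
```

The sshd and postfix filters ship with fail2ban; iptables-allports is a stock banaction that drops the IP on every port rather than just the matched service's.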
--
Rich