On Friday, July 22 at 19:55 (+0100), Peter Humphrey said:

> > Wouldn't a sufficiently large swap (100GB for example) completely
> > prevent out of memory conditions and the oom-killer?
>
> Of course, on any system with more than a few dozen MB of RAM, but I
> can't imagine any combination of running programs whose size could add
> up to even a tenth of that, with or without library sharing (somebody
> will be along with an example in a moment).

The *prime* example is a program with a memory leak (omg, we have
programs with memory leaks?).

On a system with only, say, 2GB of swap, that program will trigger the
OOM killer fairly quickly; on a system with 100GB of swap, it will have
to churn through all 100GB of swap before the OOM killer kicks in. By
then your system will probably be thrashing like hell.

There is no way you can completely guarantee a system won't run out of
virtual memory, unless you can guarantee that there are no misbehaving
applications and that some clueless guy isn't going to try to open a
database dump in vi.*

* Well, you could set process/user limits so that a process gets an
error once it tries to allocate more than a set amount of memory.