Gentoo Archives: gentoo-user

From: Jarry <mr.jarry@×××××.com>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Apache forked itself to death...
Date: Mon, 17 Sep 2012 14:58:00
Message-Id: 50573949.8050406@gmail.com
In Reply to: Re: [gentoo-user] Apache forked itself to death... by Michael Hampicke
On 16-Sep-12 20:06, Michael Hampicke wrote:
>> * Each Apache process is consuming 80-100MB of RAM.
>> * Squid is consuming 666MB of RAM
>> * memcached is consuming 822MB of RAM
>> * mysqld is consuming 886MB of RAM
>> * The kernel is using 110MB of RAM for buffers
>> * The kernel is using 851MB of RAM for file cache (which benefits squid).
>>
>
> As Jarry did not specify which content his apache is serving, I used
> 12MB of RAM per apache process (as a general rule of thumb). But if it's
> dynamic content generated by a scripting language like php, it could be
> a lot more. So I think 80-100MB of RAM with php in the back should be a
> good guess.
>
> The important thing is:
>
> MaxClients x memory footprint per apache process < available memory :-)
>
> If you have lots of concurrent requests, you may be better suited with
> something lighter... like lighttpd. Or start caching of some sort, like
> Michael does.

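The sizing rule quoted above can be sketched as a quick back-of-the-envelope check. This is a minimal sketch; the memory figures are illustrative assumptions, not numbers from this thread:

```shell
#!/bin/sh
# Back-of-the-envelope MaxClients sizing (assumed numbers, adjust to taste):
# memory left over for apache after other daemons, in MB
AVAIL_MB=600
# measured RES per apache worker process, in MB
PER_PROC_MB=40
# MaxClients x per-process footprint must stay below available memory,
# so the ceiling for MaxClients is their quotient:
echo "MaxClients <= $(( AVAIL_MB / PER_PROC_MB ))"
```

With these assumed numbers the ceiling works out to 15; a real setup would plug in its own measured per-process RES and leave some headroom.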
Thank you for all the tips & tweaks. My apache is serving mostly dynamic
content (drupal cms), and a single apache process has ~35-40MB RES.
It is on a VPS with 1GB/2GB soft/hard RAM limits, with only apache & mysql
running. Mysqld needs ~100-200MB, and caching is covered by apc.
I reduced MaxClients to 40, so it should never run out of memory.

BTW, how is it that someone's apache process is 10-20MB, while mine is 40MB?
I'd like to reduce its size, but I don't know how...
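For comparing per-process footprints, RES/RSS can be read straight from ps. A minimal sketch, assuming the worker processes are named apache2 (on some systems the name is httpd instead); note that RSS counts shared pages (shared libraries, a built-in mod_php, etc.) once per process, so figures from differently-built apaches are not directly comparable:

```shell
# List RSS (resident set size, in KB) of each apache worker, largest first
# (assumes the process name is apache2; use httpd where applicable):
ps -o pid,rss,comm -C apache2 --sort=-rss

# Average RSS across all workers, reported in MB:
ps -o rss= -C apache2 \
  | awk '{ sum += $1; n++ }
         END { if (n) printf "%.1f MB avg over %d procs\n", sum/n/1024, n }'
```

Differences of this size usually come down to which modules are compiled in or loaded, and whether php runs embedded (mod_php) or out-of-process (fastcgi).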

Jarry

--
_______________________________________________________________
This mailbox accepts e-mails only from selected mailing-lists!
Everything else is considered to be spam and therefore deleted.

Replies

Subject Author
Re: [gentoo-user] Apache forked itself to death... Michael Mol <mikemol@×××××.com>