>> I ran into an out of memory problem. The first mention of it in the
>> kernel log is "mysqld invoked oom-killer". I haven't run into this
>> before. I do have a swap partition, but I don't activate it, based on
>> something I read previously that I later found out was wrong, so I
>> suppose I should activate it. Is fstab the way to do that? I have a
>> commented line in there for swap.
>
> Yes, just uncomment it and it should be automatic. (You can use
> "swapon" to enable it without rebooting.)
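>
> For example, the uncommented swap line in /etc/fstab usually looks
> something like this (the device name is whatever your commented line
> already says):
>
>   /dev/sda2  none  swap  sw  0  0
>
> Then "swapon -a" as root activates everything listed there, and
> "swapon -s" shows what's currently active.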

Got it.

>> Can anyone tell me how much swap this is:
>>
>> /dev/sda2 80325 1140614 530145 82 Linux swap / Solaris
>>
>> If it's something like 512MB, that may not have prevented me from
>> running out of memory since I have 4GB RAM. Is there any way to find
>> out if there was a memory leak or other problem that should be
>> investigated?
>
> That's 512MB. You can also create a swap file to supplement the swap
> partition if you don't want to or aren't able to repartition.
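>
> (The fourth field in that fdisk line is the size in 1K blocks:
> 530145 x 1024 bytes is about 518MB, hence "512MB" give or take.)
> Creating a swap file goes something like this, assuming /swapfile
> is a path you're free to use:
>
>   dd if=/dev/zero of=/swapfile bs=1M count=1024
>   chmod 600 /swapfile
>   mkswap /swapfile
>   swapon /swapfile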

So I'm sure I have the concept right: is adding a 1GB swap partition
functionally identical to adding 1GB RAM with regard to the potential
for out-of-memory conditions?

> I'd check the MySQL logs to see if they show anything. Maybe check the
> settings with regard to memory upper limits (Google it, there's a lot
> of info about MySQL RAM management).
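>
> For example, the big memory knobs usually live in /etc/mysql/my.cnf;
> the values below are purely illustrative, not recommendations:
>
>   [mysqld]
>   max_connections         = 100
>   key_buffer_size         = 16M
>   innodb_buffer_pool_size = 128M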

Nothing in the MySQL log, and from what I read online, an error should
be logged if I reach MySQL's memory limit.

> If you're running any other servers that use MySQL, like Apache or
> something, check their access logs to see if you had an abnormal
> number of connections. Brute-force hacking or some kind of
> flooding/DoS attack might cause it to use more memory than it
> ordinarily would.
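>
> A quick way to check that in Apache's access log is to count requests
> per client IP (the log path below assumes the Debian/Ubuntu default):
>
>   awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head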

It runs Apache, and I found some info there.

> A basic "what's using up my memory?" technique is to log the output of
> "top" by using the -b flag. Something like "top -b > toplog.txt".
> Then you can go back to the time when the OOM occurred and see what
> was using a lot of RAM at that time.
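>
> One refinement: plain "top -b" writes a full snapshot every few
> seconds, so the log grows quickly. Sampling once a minute keeps it
> manageable (the filename is arbitrary):
>
>   while true; do top -b -n 1; sleep 60; done >> toplog.txt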

The kernel actually logged some top-like output, and it looks like I
had a large number of apache2 processes running, likely 256, which is
the default MaxClients. The total_vm reported for each process was
about 67000; since that figure is in 4KB pages, that's roughly 260MB
per process, which would mean 256 x 260MB = something like 65GB of
virtual memory???
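
In case it helps anyone else, that dump is easy to find with something
like the following (/var/log/kern.log on Debian-style systems):

  grep -i -A 40 oom /var/log/kern.log

since the per-process table follows the "invoked oom-killer" line.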

I looked over my apache2 log, and I was hit hard by a single IP right
as the server went down. However, that IP looks to be a residential
customer in the US who engaged in normal browsing behavior both before
and after the disruption. I think that IP may have done the
refresh-100-times thing out of frustration as the server started to go
down.

Does it sound like apache2 was using up all the memory? If so, should
I look further for a catalyst, or did this likely happen slowly? What
can I do to prevent it from happening again? Should I switch apache2
from the prefork MPM to a threaded MPM like worker?
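
For concreteness, here's the sort of prefork section in apache2.conf I
assume I'd be tuning; the values are illustrative, and capping
MaxClients looks like what bounds the worst-case memory use:

  <IfModule mpm_prefork_module>
      StartServers          5
      MinSpareServers       5
      MaxSpareServers      10
      MaxClients           64
      MaxRequestsPerChild   0
  </IfModule>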

- Grant