Rich Freeman wrote:

> On Sat, Feb 29, 2020 at 9:13 AM Dale <rdalek1967@×××××.com> wrote:
>> Runaway processes are one reason I expanded my memory to 32GB. It gives
>> me more wiggle room for portage to be on tmpfs.
>>
> That is my other issue. 99% of the time the OOM killer is preferable
> when this happens versus having the system just grind to a complete
> halt. Either way some service is going to stop working, but at least
> with the OOM killer it will probably be only one service.
>
> The OOM killer doesn't always kill the right thing, but that happens so
> infrequently that I haven't bothered to address it.
>
> Setting limits on VM use for each service would of course be a good
> idea. You can also tune the OOM priority of any process. With
> systemd these are unit config settings. I haven't looked at openrc
> recently, but you can certainly just edit the init.d script to set
> these if there isn't a config option.
>
> I've found that the OOM killer guessing wrong is more of an issue when
> you have a lot of medium-sized processes rather than one large one. If
> one process is using 10GB of RAM and goes haywire, it is very likely
> the one the OOM killer will go after. On the other hand, if you're
> building with make -j16, hit some really intensive part of a build, and
> get 16 processes demanding half a GB each, then it is more likely that
> the OOM killer will first target some service that is RAM-hungry but
> not usually a problem, because that service is using more than any one
> of the gcc processes.
>
> I wonder if you can make the OOM killer cgroup-aware. Services are
> generally in separate cgroups while the make processes would all be in
> another, so if it looked at the total use of the cgroup rather than the
> individual process, it would weigh something that forks heavily much
> higher.
>


I have noticed the OOM killer killing the wrong thing as well. In a way,
how does it know what it should kill, really? After all, the process
using the most memory may not be the problem; another one, or several,
could be. I guess in most cases the one using the most is the bad one,
but that may not always be the case. I'm not sure how the OOM killer
could determine that, tho. Maybe having some setting like you mentioned
would help. It's a thought.
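
For what it's worth, the kernel does expose how it ranks victims: every
process has a "badness" score in /proc/<pid>/oom_score (shifted by
/proc/<pid>/oom_score_adj), and the highest score is killed first. If I
have it right, something like this shows the current ranking (the head
count is just my choice):

```shell
# Show the kernel's current OOM ranking, worst offenders first.
# oom_score is roughly "share of memory freed if this dies",
# adjusted by oom_score_adj; the OOM killer picks the highest.
for p in /proc/[0-9]*; do
    score=$(cat "$p/oom_score" 2>/dev/null) || continue  # process may have exited
    comm=$(cat "$p/comm" 2>/dev/null)
    printf '%6s  %6s  %s\n' "$score" "${p#/proc/}" "$comm"
done | sort -rn | head -n 10
```

Watching that while a big build runs would show whether some innocent
service is sitting at the top of the list.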

A while back I had issues with Firefox getting hoggy on memory. One or
two websites would just make it go nuts. I block all that crypto-mining
crap, tho. Still, it would normally take a couple of GB with a lot of
tabs open, but when those sites got hold of it, it could swell to 5, 6,
8GB or more in fairly short order. The thing is, at times OOM would kill
the whole GUI, not just Firefox, and I'd be back at a login screen. Of
course, when it does that, it kills not only Firefox but also Firefox
running other profiles, Seamonkey, and any other programs running within
the GUI. As you say, it doesn't always do the "right thing" when it
starts killing processes.
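
If it's always Firefox that balloons, one thing that should tilt the
odds is bumping its oom_score_adj so it dies before the session does
(raising a score needs no root; only lowering does). The pgrep pattern
here is just a guess at the process name:

```shell
# Make every firefox process the OOM killer's preferred victim.
# oom_score_adj ranges from -1000 (never kill) to 1000 (kill first),
# and children inherit it, so content processes are covered too.
for pid in $(pgrep firefox); do
    echo 900 > "/proc/$pid/oom_score_adj"
done
```

util-linux also ships `choom -p <pid> -n 900` for the same thing. And on
your cgroup-aware idea: newer kernels with cgroup v2 have a
memory.oom.group knob that makes the OOM killer take out a whole cgroup
at once, which sounds like exactly what you were wondering about.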

I don't recall ever actually running out of memory while emerging. I
know what the big packages are and point their build directories at
spinning rust instead of tmpfs. Of course, after the upgrade, I think
only LOo is on that list. I'm not sure why, but when LOo updates, so
does either Firefox or Seamonkey, and sometimes both. Given their size,
they end up compiling together, which can take up some space on tmpfs if
it's in memory instead of on spinning rust. Then there is that HUGE qt
package. O_O
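
In case anyone wants that per-package setup: portage can override
PORTAGE_TMPDIR per package through /etc/portage/package.env. The file
names and the on-disk path here are just examples:

```
# /etc/portage/env/notmpfs.conf  (name is arbitrary)
PORTAGE_TMPDIR="/var/tmp/portage-disk"   # a directory on spinning rust

# /etc/portage/package.env
app-office/libreoffice   notmpfs.conf
dev-qt/qtwebengine       notmpfs.conf
```

That keeps the huge builds off tmpfs while everything else stays in RAM.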

While I see swap as a necessary evil, I only want it used on very rare
occasions. The OOM tool just doesn't get it right; me killing the
offender in htop does, tho.

BTW, Firefox stopped doing the memory-hog thing so badly a few upgrades
ago. I never did figure out exactly what triggered it. :/

Dale

:-) :-)