On Thursday, 16 February 2023 12:23:52 GMT Rich Freeman wrote:

--->8 Much useful detail.

That all makes perfect sense, and is what I'd assumed, but it's good to have
it confirmed.

> The load average setting is definitely useful and I would definitely
> set it, but when the issue is swapping it doesn't go far enough. Make
> has no idea how much memory a gcc process will require. Since that is
> the resource likely causing problems it is hard to efficiently max out
> your cores without actually accounting for memory use. The best I've
> been able to do is just set things conservatively so it never gets out
> of control, and underutilizes CPU in the process. Often it is only
> parts of a build that even have issues - something big like chromium
> might have 10,000 tasks that would run fine with -j16 or whatever, but
> then there is this one part where the jobs all want a ton of RAM and
> you need to run just that one part at a lower setting.

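For what it's worth, the per-package tuning described above can be
approximated in Portage itself via per-package environment files. A minimal
sketch, assuming the stock /etc/portage layout (the file name lowjobs.conf
is made up for illustration):

```shell
# /etc/portage/env/lowjobs.conf -- illustrative; tune for your own RAM.
# Overrides MAKEOPTS only for packages that opt in below.
MAKEOPTS="-j4"

# /etc/portage/package.env -- map memory-hungry packages to that file.
www-client/chromium  lowjobs.conf
```

This keeps the global -j high for the thousands of well-behaved tasks and
drops it only for the builds known to want a ton of RAM per job.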
I've just looked at 'man make', from which it's clear that -j equals --jobs,
and that both of those and --load-average are passed through to
/usr/bin/make, presumably untouched unless portage itself has identically
named variables. So I wonder how feasible it would be for make to
incorporate its own checks to ensure that the load average is not exceeded.
I am not a programmer (not for at least 35 years, anyway), so I have to
leave any such suggestion to the experts.
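For reference, GNU make's -l/--load-average option does already implement
such a check: make will not start a new job while other jobs are running and
the system load average is at or above the given limit. A sketch of
combining it with -j in make.conf (the numbers are illustrative, not
recommendations):

```shell
# /etc/portage/make.conf -- illustrative values; tune for your machine.
# -j8: never more than 8 make jobs; -l6: defer starting new jobs while
# the 1-minute load average is 6 or higher.
MAKEOPTS="-j8 -l6"

# The same idea at the emerge level, limiting concurrent package builds.
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=6"
```

As Rich notes, though, this throttles on CPU load only; neither flag
accounts for memory pressure, which is usually the resource that actually
triggers swapping.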

--
Regards,
Peter.