On Thu, Feb 16, 2023 at 5:32 AM Andreas Fink <finkandreas@×××.de> wrote:
>
> On Thu, 16 Feb 2023 09:53:30 +0000
> Peter Humphrey <peter@××××××××××××.uk> wrote:
>
> > Yes, I was aware of that, but why didn't --load-average=32 take precedence?
> This only means that emerge would not schedule an additional package job
> (where a package job means something like `emerge gcc`) when the load
> average is > 32; however, once a job is scheduled it keeps running,
> independently of the current load.
> With it in MAKEOPTS, it would be handled by the make system, which
> schedules single build jobs and would stop scheduling additional jobs
> when the load is too high.
>
> Extreme case:
> emerge chromium firefox qtwebengine
> --> your load when you do this is pretty much close to 0, i.e. all 3
> packages are being merged simultaneously and each will be built with
> -j16.
> I.e. for a long time you will have about 3*16=48 single build jobs
> running in parallel, i.e. you should see the load going towards 48 when
> you have nothing in your MAKEOPTS.
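
To make the distinction concrete, this is roughly what the two knobs
look like in /etc/portage/make.conf (the values are illustrative, not
recommendations):

```shell
# /etc/portage/make.conf (illustrative values)

# Handled by emerge itself: don't *start* another package build
# (another `emerge gcc`-style job) while the load average is above 32.
EMERGE_DEFAULT_OPTS="--jobs=3 --load-average=32"

# Handled by make inside each package build: don't *spawn* another
# compiler process while the load average is above 32.
MAKEOPTS="-j16 -l32"
```

With only the first knob set, each of the three merges still runs its
own make at full -j16, which is the 3*16=48 situation above.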

TL;DR - the load-average option results in underdamping, because the
load average is measured with a delay.

27 |
Keep in mind that load averages are averages and have a time lag, and |
28 |
compilers that are swapping like crazy can run for a fairly long time. |
29 |
So you will probably have fairly severe oscillation in the load if |
30 |
swapping is happening. If your load is under 32, each of your 16 |
31 |
parallel makes, even if running with the limit in MAKEOPTS, will feel |
32 |
free to launch another 256 jobs, because it will take seconds for the |
33 |
1 minute load average to creep above 32. At that point you have WAY |
34 |
more than 32 tasks running and if they're swapping then half of the |
35 |
processes on your system are probably going to start blocking. So now |
36 |
make (if configured in MAKEOPTS) will hold off on launching anything, |
37 |
but it could take minutes for those swapping compiler jobs to complete |
38 |
the amount of work that would normally take a few seconds. Then as |
39 |
those processes eventually start terminating (assuming you don't get |
40 |
OOM killing or PANICs) your load will start dropping, until eventually |
41 |
it gets back below 32, at which point all those make processes that |
42 |
are just sitting around will wake up and fire off another 50 gcc |
43 |
instances or whatever they get up to before the brakes come back on. |
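
A toy feedback loop makes the overshoot visible. All constants here
are made up purely for illustration (the real kernel load average and
job durations differ); the point is only that a lagging average lets
the task count blow far past the limit before the brakes engage:

```python
# Toy simulation: "make" launches a burst of jobs whenever the
# *reported* 1-minute load average is under the limit, but that
# average lags the instantaneous task count, so tasks overshoot.
import math

LIMIT = 32          # the -l / --load-average threshold
BURST = 16          # jobs each make fires off when it sees load < LIMIT
MAKES = 3           # parallel package builds
JOB_SECONDS = 30    # how long each (swapping) compile job survives
DECAY = math.exp(-5 / 60)   # 1-minute exponential average, 5 s samples

tasks, load_avg, peak = 0, 0.0, 0
for t in range(0, 300, 5):          # simulate 5 minutes in 5 s steps
    if load_avg < LIMIT:            # the lagging average says "go"
        tasks += MAKES * BURST      # every make launches a burst
    peak = max(peak, tasks)
    tasks -= tasks * 5 // JOB_SECONDS           # some jobs finish
    load_avg = load_avg * DECAY + tasks * (1 - DECAY)  # lagging EMA

print(peak)  # far above LIMIT: the average reacted too late
```

Running this, the peak task count lands several times higher than the
configured limit before the measured average ever crosses it.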

The load average setting is definitely useful and I would definitely
set it, but when the issue is swapping it doesn't go far enough. Make
has no idea how much memory a gcc process will require. Since that is
the resource likely causing problems, it is hard to max out your cores
efficiently without actually accounting for memory use. The best I've
been able to do is set things conservatively so it never gets out of
control, which underutilizes the CPU in the process. Often only parts
of a build even have issues - something big like chromium might have
10,000 tasks that would run fine with -j16 or whatever, but then there
is this one part where the jobs all want a ton of RAM and you need to
run just that one part at a lower setting.
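
One partial workaround on Gentoo is to give only the worst offenders a
reduced MAKEOPTS via Portage's package.env mechanism (the mechanism is
standard Portage; the filenames and values below are just a sketch):

```shell
# /etc/portage/env/heavy-build.conf -- reduced parallelism for
# RAM-hungry builds; illustrative values.
MAKEOPTS="-j4 -l8"

# /etc/portage/package.env -- apply it only where it's needed, so
# everything else still builds at full -j16.
# www-client/chromium  heavy-build.conf
# dev-qt/qtwebengine   heavy-build.conf
```

That still throttles the whole package rather than just the one
RAM-hungry phase, but it avoids crippling every other build.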

--
Rich