Rich Freeman posted on Thu, 22 Dec 2011 11:09:16 -0500 as excerpted:

> On Thu, Dec 22, 2011 at 10:55 AM, Michał Górny <mgorny@g.o>
> wrote:
>>> Just wanted to point out that (if there is enough memory) recent
>>> kernels manage much better parallelism, even excess of it, once
>>> reached the maximum load augmenting threads only bring minimal loss of
>>> "real" time.
>>
>> Does that include handling complete lack of memory and heavy swapping?
>>
>>
> I think the key was the "if there is enough memory" - which I think is a
> pretty big issue.

That's just user experience, but it matters too...

The above memory limit-factor being the real limiter was true here when I
used to run unlimited local jobs, too.  (I've moderated a fair bit since:
portage itself parallelizes now, the fascination phase is over, and the
kernel no longer requires a double load-factor to stay 95-100% CPU
utilized.)  If the make is set up such that it can sufficiently
parallelize, even back on a dual-core I could run hundreds of parallel
jobs (say, with the kernel build) surprisingly well -- as long as they
didn't require much memory.

As I've moderated since then, it's mostly to keep memory usage (and I/O,
especially when it compounds with swap) sensible, far more than it is any
real problem scheduling the actual loads.  Plus, the kernel has gotten
rather better at fully utilizing CPU resources; it used to be that you
had to run about double your cores (sometimes more, but generally
slightly less) in load average to avoid idle CPUs.  That's no longer the
case: 25% over-scheduling now seems to do what 100% over-scheduling used
to in terms of keeping CPU usage at 95-100%, so by 50% over you're just
wasting cycles and bytes.

What I /really/ wish is that there was a make (and portage) switch,
parallel to -l, that responded to memory instead of load average!  Then
those single-job gig-plus links could limit to, say, 4 jobs on an 8-gig
machine, even if it's an 8-way that would otherwise run -j16 -l10 or
-l12 (regardless of the balance, or lack thereof, of an 8-way with only a
gig of RAM per core... 2 gigs/core seems to be about the sweet-spot I've
found for a general-purpose gentooer build-station/desktop).

That'd be an interesting and very useful parallel-domain task: adding a
make switch that responds to memory just as -l responds to load.  Maybe
a make other than gmake already has such an option?
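
In the absence of such a switch, the heuristic above could be approximated
with a small wrapper around make.  This is only a sketch of the idea from
this post -- the helper name, the one-job-per-2-gigs ratio, and the job
floor are illustrative assumptions, not an existing make or portage
feature:

```shell
#!/bin/sh
# Hypothetical helper: pick a make -j value from installed RAM instead of
# load average, assuming ~2 GiB per job (the sweet-spot suggested above).
jobs_for_mem_gib() {
    j=$(( $1 / 2 ))          # one parallel job per 2 GiB of RAM
    [ "$j" -lt 1 ] && j=1    # always allow at least one job
    echo "$j"
}

# An 8-gig machine would get -j4, matching the example in the post:
echo "make -j$(jobs_for_mem_gib 8)"
```

On a real box you'd feed it MemTotal from /proc/meminfo rather than a
hard-coded figure, but unlike -l this is a one-shot guess at startup, not
the dynamic throttle the post is actually asking for.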

--
Duncan - List replies preferred.  No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman