Rich Freeman posted on Thu, 22 Dec 2011 11:09:16 -0500 as excerpted:
> On Thu, Dec 22, 2011 at 10:55 AM, Michał Górny <firstname.lastname@example.org> wrote:
>>> Just wanted to point out that (if there is enough memory) recent
>>> kernels manage parallelism much better, even an excess of it; once
>>> the maximum load is reached, additional threads bring only a minimal
>>> loss of "real" time.
>> Does that include handling complete lack of memory and heavy swapping?
> I think the key was the "if there is enough memory" - which I think is a
> pretty big issue.
What follows is just user experience, but it matters too...
The above memory limit-factor being the real limiter was true here too,
back when I used to run unlimited local jobs. (I've moderated a fair bit
since then: portage itself parallelizes now, the fascination phase is
over, and the kernel no longer requires a double-load-factor to stay
95-100% CPU utilized.) If the make is set up in such a way that it can
sufficiently parallelize, even back on dual-core I could run hundreds of
parallel jobs (say, with the kernel build) surprisingly well -- as long
as they didn't require much memory.
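For concreteness, that's the sort of configuration meant above. The
numbers here are purely illustrative (they'd depend on cores and RAM),
assuming a Gentoo-style /etc/make.conf:

```shell
# Illustrative /etc/make.conf fragment: a high job count works for builds
# that parallelize well (like the kernel), while -l tells make not to
# start new jobs once the one-minute load average reaches the given value.
MAKEOPTS="-j128 -l16"
```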
As I've moderated since then, it's mostly to keep memory usage (and I/O,
especially when it compounds with swap) sensible, far more than because
of any real problem scheduling the actual loads. Plus, the kernel has
gotten rather better at fully utilizing CPU resources; it used to be
that you had to run a load average of about double your core count
(sometimes more, but generally slightly less) to avoid idle CPUs.
That's no longer the case: 25% over-scheduling now keeps CPU usage at
95-100%, where it used to take 100% over-scheduling, so by 50% over
you're just wasting cycles and bytes.
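The 25%-over rule of thumb above can be sketched like so; over_schedule
is a hypothetical helper, not any real make or portage feature:

```shell
# Hypothetical helper: pick a job/load target roughly 25% above the core
# count, per the observation that 25% over-scheduling now keeps the CPUs
# 95-100% busy where it used to take 100% over.
over_schedule() {
  local cores=$1
  echo $(( cores + cores / 4 ))   # integer math, rounds down
}

over_schedule 8    # prints 10, e.g. MAKEOPTS="-j10 -l10" on an 8-way
```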
What I /really/ wish is that there were a make (and portage) switch,
parallel to -l, that responded to memory instead of load average! Then
those single-job gig-plus links could be limited to, say, 4 jobs on an
8-gig machine, even if it's an 8-way that would otherwise run -j16 -l10
or -l12 (regardless of the balance, or lack thereof, of an 8-way with
only a gig of RAM per core... 2 gigs/core seems to be about the sweet
spot I've found for a general-purpose gentooer build-station/desktop).
That'd be an interesting and very useful parallel-domain task: adding a
make switch that responds to memory just as -l responds to load. Maybe
a make other than gmake already has such an option?
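Lacking such a switch, something similar can be faked outside of make.
A rough sketch, on Linux only: wait_for_mem is a hypothetical wrapper
that reads MemFree from /proc/meminfo (which understates truly available
memory, since it ignores reclaimable cache):

```shell
# Hypothetical stand-in for a memory-based -l: block until at least the
# requested amount of memory (in GiB) is free, then return, so a caller
# can gate heavy single-job link steps on free memory.
wait_for_mem() {
  local need_kib=$(( $1 * 1024 * 1024 ))   # GiB -> KiB (/proc/meminfo units)
  local free_kib
  while :; do
    free_kib=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
    [ "$free_kib" -ge "$need_kib" ] && return 0
    sleep 5                                # poll until memory frees up
  done
}

# e.g.: wait_for_mem 2 && make -j4 bigtarget
```

This only approximates what a real make-side option could do, of course,
since make knows exactly when it's about to spawn a job and an external
wrapper doesn't.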
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman