On 09/12/2012 09:33 AM, Hans de Graaff wrote:
> On Wed, 2012-09-12 at 08:58 -0400, Ian Stakenvicius wrote:
>
>> So essentially what you're saying here is that it might be worthwhile
>> to look into parallelism as a whole and possibly come up with a
>> solution that combines 'emerge --jobs' and build-system parallelism
>> together to maximum benefit?
>
> Forget about jobs and load average, and just keep starting jobs all
> around until there is only 20% (or whatever tuneable amount) free memory
> left. As far as I can tell this is always the real bottleneck in the
> end. Once you hit swap overall throughput has to go down quite a bit.

Well, I think it's still good to limit the number of jobs at least,
since otherwise you could become overloaded with processes that consume
little memory at first but, by the time they complete, have consumed
much more memory than desired (pushing the system into swap).
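For illustration, the heuristic being discussed could be sketched roughly like this (a minimal, hypothetical Python sketch, not Portage code; the function names are made up, and the /proc/meminfo reader is Linux-specific):

```python
# Sketch of the memory-based throttle Hans suggests: keep launching build
# jobs while more than `reserve` (20% by default, tuneable) of memory is
# still free, rather than relying on a fixed --jobs count alone.

def meminfo():
    """Return (MemTotal, MemAvailable) in kB from /proc/meminfo (Linux)."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields[key] = int(rest.split()[0])
    return fields["MemTotal"], fields.get("MemAvailable", fields.get("MemFree", 0))

def should_start_job(total_kb, available_kb, reserve=0.20):
    """Allow another job only while more than `reserve` of memory is free."""
    return available_kb / total_kb > reserve
```

As the reply above notes, a real scheduler would still want a hard cap on job count on top of this check, since a job that passes the threshold when it starts can grow well beyond it before it finishes.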
-- 
Thanks,
Zac