2011/12/22 Rich Freeman <rich0@g.o>:
> On Wed, Dec 21, 2011 at 11:43 PM, Donnie Berkholz <dberkholz@g.o> wrote:
>> I looked into this 6 or 7 years ago. It wasn't feasible unless you were
>> on an extremely high-speed, low-latency network, beyond what was
>> typically accessible at the time outside of universities and LANs. Could
>> be worth exploring again now that 25-100 Mbps connections are becoming
>> more common.
>
> I tried messing around with this with Amazon EC2. The problem was
> that due to latency I only really saw the benefit for VERY high levels
> of parallelization (think -j25+). However, make isn't actually
> "distcc-aware", so it just runs 25 jobs of anything in parallel. So,
> any time a makefile launched a ton of Java or Python jobs, the host
> ground to a halt, since that work wasn't distributed and was far more
> than the host could handle (especially Java, which swapped like there
> was no tomorrow).
>
> If somebody were to do a distcc-ng for a large cluster, one of the
> problems to solve would be having it not run jobs in parallel if it
> couldn't actually distribute them.
>
> Rich
>
|
Just wanted to point out that, given enough memory, recent kernels
handle parallelism much better, even an excess of it: once the maximum
load is reached, adding further threads brings only a minimal loss of
"real" (wall-clock) time.