On 12/18/2010 07:15 AM, Peter Humphrey wrote:
> On Saturday 18 December 2010 10:18:43 Neil Bothwick wrote:
>
>> I've found there's just too much overhead with distcc, plus much of
>> the work is still done locally.
>
> I expected that but I wanted to try it to see.
>
>> I have a couple of Atom boxes, a server and a netbook, and I've set up
>> a chroot for each on my workstation. In the chroot I have
>> FEATURES=buildpkg, using an NFS mounted PKGDIR available to both
>> computers, then I emerge -k on the Atom box.
>
> Maybe I'll go this way instead. Thanks for the idea, which is similar to
> one from YoYo Siska three days ago.

I had my Atom 330 running as a distcc client for a long time. I have
several other speedy CPUs alongside it, so it could spray plenty of
compilation requests out its gigabit NIC to various much beefier
machines. But as Neil stated, a lot of the processing still occurs
locally, so as you add nodes you need to decrease the amount of
compilation done locally. With such a disparity between CPUs, it takes
less time overall to just do it the way Neil describes: make a chroot,
build everything there, and let the slow CPUs install the binary
packages.
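In make.conf terms, Neil's chroot setup boils down to roughly this (the chroot path and PKGDIR location here are just examples, not his actual paths):

```shell
# In the chroot on the fast workstation -- /etc/make.conf:
#   FEATURES="buildpkg"    # write a binary package for every emerge
#   PKGDIR="/mnt/pkgs"     # an NFS share also mounted on the Atom box
# Build inside the chroot; binary packages land in PKGDIR:
chroot /mnt/atom-chroot emerge --update --deep world

# On the Atom box, with the same PKGDIR mounted over NFS:
# -k/--usepkg installs the prebuilt binary package when one exists,
# falling back to a local compile only when it doesn't.
emerge -uDk world
```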
You still need lots of CPU to compile, so a slow machine will still
compile slowly. If your client is a pokey 1.6GHz Atom and you're sending
jobs to two quad-core 3GHz CPUs on your subnet, you'll soon see the
Atom's load climb toward 8 as it tries to bring those remote jobs back
in. So the four threads on my 330 get completely filled up and it's dog
slow. And it's even more painful when you use the preprocessor, because
the client must compress the preprocessed "construction" before it
ships it out, so you have even less CPU available for compilation
(although you get some of that back).
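The knob for that is the distcc hosts list plus MAKEOPTS: keep local slots to a minimum so the Atom mostly preprocesses and ships jobs out. A sketch (the addresses are made up):

```shell
# /etc/distcc/hosts on the Atom client.
# The /N suffix caps concurrent jobs per host, and hosts are tried
# in order, so the fast boxes come first and localhost/1 limits
# local compilation to a single job.
192.168.0.10/8 192.168.0.11/8 localhost/1

# /etc/make.conf -- -j should roughly match the total slots above:
MAKEOPTS="-j17"
```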
All said and done, my back-of-the-napkin and seat-of-the-pants
calculation tells me that I still get a _minimum_ 25% reduction in
overall compile times with distcc. That's my experience after using
distcc for almost ten years with various configurations of networks and CPUs.