On Fri, 2006-01-13 at 19:53 +0900, Kalin KOZHUHAROV wrote:
> > Make this distributed tool for tar, zip, bzip2 and gzip and I'm in, I
> > don't think it would be useful with anything other than Gigabit Ethernet.
One 2GHz CPU can't even saturate a 100Mbit line with bzip2, as far as I
can tell.
Although the speedups won't be extreme, it might just work.
|
> > We might want to have in make.conf 2 separate variables, one of them
> > saying how many threads can be run on the machine, the other how many
> > threads/processes across a cluster.
> >
> > For example, my Dual Xeon EM64T file server can do make -j4 locally,
> > as in make install, make docs etc., but for compiling I can use
> > -j20 - really not useful over -j8 anyway. But the point is, it would be
> > useful to separate the load distribution on the local machine and
> > cluster nodes.
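A hedged sketch of what that split might look like in make.conf - MAKEOPTS, FEATURES and DISTCC_HOSTS are real Gentoo variables, but the local/cluster split itself is hypothetical, nothing portage supports today:

```shell
# /etc/make.conf (sketch)
MAKEOPTS="-j4"                  # what the local box alone can handle
MAKEOPTS_CLUSTER="-j20"         # hypothetical: used only when distcc hosts are up
FEATURES="distcc"
DISTCC_HOSTS="localhost node1 node2"
```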
>
> As the discussion started...
>
> I would like to be able to limit the -jN when there is no distcc host
> available or when compiling C++ code; otherwise my poor laptop is dead with
> -j5 compiling pwlib when the network is down...
As far as I can tell distcc isn't smart enough for dynamic load balancing.
One could hack portage to "test" each server in the distcc host list and
remove missing servers on each run - that doesn't look elegant to me.
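Such a "test each host" pass could be sketched in shell. This is only an illustration: the host names, the use of distcc's default TCP port 3632, and probing with nc are all assumptions, not anything portage or distcc does on its own:

```shell
#!/bin/sh
# Sketch: drop unreachable hosts from the distcc host list before a run.
# CANDIDATES and port 3632 (distcc's default) are illustrative assumptions.
CANDIDATES="localhost node1 node2"
ALIVE=""
for h in $CANDIDATES; do
    # nc -z: probe the port without sending data; -w 1: give up after a second
    if nc -z -w 1 "$h" 3632 2>/dev/null; then
        ALIVE="$ALIVE $h"
    fi
done
DISTCC_HOSTS="${ALIVE# }"   # strip the leading space
export DISTCC_HOSTS
echo "usable distcc hosts: $DISTCC_HOSTS"
```

A cleaner variant would fall back to a plain local -jN when the list comes out empty, which would also cover the dead-laptop case above.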
|
> It is a particular example, but being able to limit portage in some way as to
> total CPU, total MEM might be interesting (just nice-ing is not enough)
Very difficult - usually gcc uses ~25M per process (small source files), but
I've seen >100M (mostly larger C++ files) and have heard of ~600M per process for MySQL.
|
Limiting that is beyond the scope of portage. |
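That said, a crude cap is possible outside portage with the shell's ulimit builtin. A sketch, where the 1GB figure and the emerge invocation are illustrative assumptions, not portage behavior:

```shell
#!/bin/sh
# Sketch: cap per-process virtual memory before starting the build, so a
# runaway compile fails with out-of-memory instead of swapping the box dead.
# 1GB (above the ~600M worst case mentioned above) is an arbitrary choice.
ulimit -v 1048576    # value is in KB
ulimit -v            # print the limit now in effect
# Guarded so the sketch is a no-op on machines without portage:
if command -v emerge >/dev/null 2>&1; then
    nice -n 19 emerge --oneshot pwlib
fi
```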
|
wkr, |
Patrick |
-- |
Stand still, and let the rest of the universe move |