Hi,

On Fri, 5 May 2006 18:49:29 +0530 Farhan Ahmed
<farhanahmed06@×××××.com> wrote:

> > The difference is: with -j2 the box is slow. A lot of packages do
> > not compile because of OOMs.
> >
> > With -j1 the box is normal. No OOMs. Compiling does not take longer
> > than with -j2.
>
> This is the first time I'm hearing this. Even with -j2 my system
> functions normally. Has anyone encountered the same problem?

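(For anyone following along: on Gentoo the parallelism being compared above is what Portage passes to make via MAKEOPTS. A minimal sketch of the relevant line, assuming a uniprocessor box like the one described:)

```shell
# /etc/make.conf -- Portage build settings (sketch)
# -j1 builds one job at a time; bump to -j2 only if RAM+swap allow it.
MAKEOPTS="-j1"
```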
Not really. GCC does eat some memory, but it's not that bad. Well,
this does absolutely depend on RAM+swap. Whenever I had OOM conditions
in the past 4 years, it was because of a leaky, long-running
application. I've yet to see a gcc process that claims 100MB of
physical memory. I did see Apache eat that much memory after running
for some days and calling leaky scripts (integrated as a module, of
course).

Another way to get OOMs here would probably be to limit maximum memory
usage too much.

The only thing that breaks -j<N> for make is Makefiles that rely on
broken assumptions. Those Makefiles are buggy and should be reported.

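(A typical broken assumption, sketched with hypothetical targets: a rule that needs a generated header but doesn't declare it as a prerequisite. Under -j1 it works by accident because of the order the targets are listed; under -j2 both rules can run at once and the compile may fail:)

```make
# Hypothetical Makefile showing a missing-prerequisite bug.
# With -j1, gen.h happens to be built before prog (it's listed first).
# With -j2, both rules may run in parallel and prog.c can fail to
# find gen.h.
all: gen.h prog

gen.h:
	./generate-header.sh > gen.h

prog: prog.c          # BUG: gen.h is missing from the prerequisites
	$(CC) -o prog prog.c

# Fix: declare the dependency explicitly:
#   prog: prog.c gen.h
```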
Running -j<N> for N>1 on a UP (uniprocessor) machine won't likely
bring much extra speed. It may have some small positive effects due to
better distribution of IO load and process creation. That's the real
culprit when compiling: lots of context switches for each make
process. So what probably gives a little more speed is setting the
scheduler HZ value to >100 (but this isn't the default any more).
"vmstat" is a good tool to monitor the amount of context switches.

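(Besides vmstat, you can read the kernel's cumulative context-switch counter straight from the ctxt line of /proc/stat; a small sketch, assuming a Linux /proc filesystem:)

```shell
# Sample the kernel's cumulative context-switch counter twice,
# one second apart, and print the per-second rate.
before=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
after=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((after - before))"
```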

-hwh
--
gentoo-user@g.o mailing list