Mark Knecht wrote:
> On Wed, Sep 14, 2011 at 4:46 PM, Neil Bothwick<neil@××××××××××.uk> wrote:
>> On Wed, 14 Sep 2011 15:37:15 -0700, Mark Knecht wrote:
>>
>>> 3) Do the rest of the work
>>>
>>> emerge -fDuN @world
>>> emerge -pvDuN @world
>>>
>>> Fix USE flag issues, if any
>>>
>>> 4) Do the build
>>>
>>> emerge -DuN -j13 @world
>> There's no point in doing the fetch first; portage has done parallel
>> fetching for some time - it's faster to let the distfiles download while
>> the first package is compiling.
>>
>> emerge -auDN @world covers all of that - except the -j, which is
>> system-dependent.
>>
>>
>> --
>> Neil Bothwick
> Quite true about the parallel fetch, but I still do this anyway
> because I want to know all the code is local before I start. With 12
> processor cores I often build the first file before the second has
> been downloaded. Also, I don't want to start a big build, say 50-70
> updates, and then find out an hour later when I come back that some
> portage mirror choked on finding a specific file and the whole thing
> died 10 minutes in. This way I have a better chance of getting to the
> end in one pass.
>
> Anyway, it works well for this old dog, and in my mind there is a good
> reason to fetch before building, but I can see how others might not
> want to do that.
>
> - Mark

tail -f /var/log/emerge-fetch.log

That way you can be compiling and watching the fetching process at the
same time. If something fails, it'll be printed there. I use it all
the time here. I only have 4 cores here tho. :-P

Dale

:-)  :-)