On Wed, Sep 14, 2011 at 4:46 PM, Neil Bothwick <neil@××××××××××.uk> wrote:

> On Wed, 14 Sep 2011 15:37:15 -0700, Mark Knecht wrote:
>
>> 3) Do the rest of the work
>>
>> emerge -fDuN @world
>> emerge -pvDuN @world
>>
>> Fix USE flag issues, if any
>>
>> 4) Do the build
>>
>> emerge -DuN -j13 @world
>
> There's no point in doing the fetch first, portage has done parallel
> fetching for some time - it's faster to let the distfiles download while
> the first package is compiling.
>
> emerge -auDN @world covers all of that - except the -j which is
> system-dependent.
>
>
> --
> Neil Bothwick

Quite true about the parallel fetch, but I still do this anyway
because I want to know all the code is local before I start. With 12
processor cores I often build the first package before the second has
been downloaded. Also, I don't want to start a big build, say 50-70
updates, and then find out an hour later when I come back that some
portage mirror choked on finding a specific file and the whole thing
died 10 minutes in. This way I have a better chance of getting to the
end in one pass.
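
For what it's worth, the fetch-first habit described above can be captured
in a small wrapper script, so a failed download stops everything before any
compiling starts (just a sketch using the options quoted earlier; the -j
value is specific to my 12-core box, adjust to taste):

```shell
#!/bin/sh
# Sketch: fetch all distfiles first and abort before building if any
# download fails, so a flaky mirror can't kill the build partway in.
set -e  # exit immediately if any command returns non-zero

emerge -fDuN @world       # fetch-only pass; a failed download stops here
emerge -pvDuN @world      # pretend/verbose pass to review USE flag changes
emerge -DuN -j13 @world   # real build, with all sources already local
```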

Anyway, it works well for this old dog, and in my mind there is a good
reason to fetch before building, but I can see how others might not
want to do that.

- Mark