On Wednesday 03 November 2004 22:07, TRauMa wrote:
> Hi there,

Hello. You've got a couple of answers already, but I'll add my two bobs...

> I decided against an NFS-mounted distfiles dir b/c I didn't want two boxes
> downloading the same file to conflict.

This should no longer be an issue with portage-2.0.51. A lot of work went into
support for distfile locking to preserve sanity even when the emerge
instances are on separate machines.
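For the curious, the NFS-safe trick is hardlink locking: link() is atomic
even over NFS, where flock() historically was not. A rough sketch of the
idea only, not portage's actual code, and the filenames are made up:

```shell
#!/bin/sh
# Hardlink-based locking: each process creates a uniquely named file,
# then tries to hardlink it to the shared lock name. link() either
# succeeds atomically or fails, so exactly one emerge wins.
DISTDIR=${DISTDIR:-$(mktemp -d)}
file="foo-1.0.tar.gz"

lock="$DISTDIR/.${file}.lock"
mine="$lock.$(hostname).$$"        # unique per host and process
echo $$ > "$mine"

until ln "$mine" "$lock" 2>/dev/null; do
    sleep 1                        # another emerge holds the lock
done

# --- critical section: fetch the distfile here ---
touch "$DISTDIR/$file"

rm -f "$lock" "$mine"              # release the lock
```

The losing process just sleeps and retries until the winner removes the
lock, so concurrent fetches of the same file serialize cleanly.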

> For syncing the portage tree, I "emerge sync" manually on the master box
> and then use rsync over ssh locally to get the latest ebuilds on the
> clients.

> Now I have two questions:
> 1. is it safe to generate the metadata on the server and then just pull
> it in on the clients?

If you are rsync'ing the entire tree, then your clients should be getting the
metadata directory as well. "emerge metadata" on the clients will do the
cache updates that normally occur at the end of an "emerge sync".
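To spell out the flow you described (the hostname and tree path here are
just placeholders, adjust for your site):

```shell
# On the master: update the tree (including its metadata/ directory).
ssh root@master emerge sync

# On each client: mirror the whole tree, then rebuild the local cache
# from the metadata that came along with it.
rsync -a --delete -e ssh root@master:/usr/portage/ /usr/portage/
emerge metadata
```

The --delete is important so that removed ebuilds don't linger on the
clients and confuse dependency resolution.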

> 2. How can I tell portage to ignore fetch restrictions? If I encounter
> one, I manually pull in the file as portage tells me to, but I'm not
> keen on pulling it on every client, too. So it would be perfect if I
> could tell the clients to just try and fetch the restricted files from
> the local mirror.

Portage will ignore fetch restrictions for local mirrors that are a
file-system path. Yes, it is possible to hack it into fetch() to support
local http/ftp mirrors, but the function really needs a cleanup first (which
is already in progress). Of course, you don't have to deal with these issues
if you just NFS mount your distfiles. :)
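That is, if I remember the lookup right, something like this in each
client's /etc/make.conf (the mount point is just an example):

```shell
# A plain filesystem path in GENTOO_MIRRORS is consulted even for
# fetch-restricted packages; http/ftp mirrors currently are not.
GENTOO_MIRRORS="/mnt/master/distfiles http://distfiles.gentoo.org"
```

Then you only ever pull a restricted file onto the master, and the
clients pick it up from the shared path.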

Regards,
Jason Stubbs