On Mon, 29 Jan 2007 11:50:34 +0200, Alan McKinnon wrote: |
|
> I already use a fairly complicated solution with emerge -pvf and wget in
> a cron on one of the fileservers, but it's getting cumbersome. And I'd
> rather not maintain an entire gentoo install on a server simply to act
> as a proxy. Would I be right in saying that I'd have to keep
> the "proxy" machine up to date to avoid the inevitable blockers that
> will happen in short order if I don't?
>
> I've been looking into kashani's suggestion of http-replicator; this
> might be a good interim solution till I can come up with something
> better suited to our needs.
|
I was suggesting emerge -uDNf world in combination with
http-replicator. The first request forces http-replicator to download the
files; all subsequent requests for those files are then handled locally. So if
you run this on a suitable cross-section of machines overnight,
http-replicator's cache will be primed by the time you stumble
bleary-eyed into the office.
|
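A minimal sketch of the pieces involved, assuming http-replicator
listens on replicator.example.lan port 8080 (hostname, port and log
path are all made up here):

  # /etc/make.conf on every client, so portage fetches go via the cache:
  http_proxy="http://replicator.example.lan:8080"

  # root crontab entry on the machines that prime the cache overnight:
  0 3 * * * emerge -uDNf world >> /var/log/fetch-world.log 2>&1

If several machines prime the cache, stagger their start times so the
first has already pulled most files through before the others begin.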

If all your machines run a similar mix of software, say KDE desktops, you
only need to run the cron task on one of them.
|
I use a slightly different approach here, with an NFS-mounted $DISTDIR
for all machines and one of them doing emerge -f world each morning. It's
simpler to set up than http-replicator but less scalable, since you'll
get problems if one machine tries to download a file while another is
partway through downloading it.
|
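In case it helps, the moving parts of that setup look roughly like this
(the fileserver name and export path are invented):

  # /etc/fstab on each client, mounting the shared distfiles directory:
  fileserver:/export/distfiles  /usr/portage/distfiles  nfs  rw  0 0

  # root crontab on the one machine that does the fetching:
  0 4 * * * emerge -f world >> /var/log/fetch-world.log 2>&1

DISTDIR defaults to /usr/portage/distfiles, so make.conf needs no
change unless you mount the share somewhere else.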

--
Neil Bothwick |

Most software is about as user-friendly as a cornered rat!