On Tue, Oct 16, 2012 at 2:39 PM, Peter Stuge <peter@×××××.se> wrote:
> Maybe someone with infra access can describe a few of them in detail.
> I'm not saying to randomly add SSH keys to servers, but to explain
> what needs doing, talk about how they need to be done, and all of a
> sudden there may be new infra contenders entering the tournament.
|
There is a tracker bug. Mainly just some plugins to manage git
commits, get the data to the rsync mirrors, and so on.
|
On Tue, Oct 16, 2012 at 2:37 PM, Michael Mol <mikemol@×××××.com> wrote:
> Bittorrent may help. I'll admit not following too closely (closing on
> a house next week. Then moving.), but if it's possible to lump the
> data and roll a torrent, you could have the ec2 instances feed each
> other.
|
Getting the data around EC2 isn't a big problem - that is fast. The
issue is getting it there in the first place.
|
Once it is there I'm sticking it on an EBS volume, snapshotting it,
and then attaching a copy to each node. Starcluster actually has NFS
set up out-of-the-box, but I think that is going to bog down until all
the caches are full. So, the real rate-limiters are copying it over,
and then the time to create the snapshot (which does worry me a
little). Creating volumes from a snapshot and attaching them to nodes
is fast, and I have it half-scripted.
|
The real issue is getting the data from wherever the git repository is
created over to the cluster, again assuming we stick with EC2. It is
about 1GB, so it doesn't take forever. If my issue is only my own
upload bandwidth, then perhaps it won't be a big problem if the
conversion system is on a faster pipe.
|
Rich |