On Sat, Sep 20, 2014 at 9:27 PM, Peter Stuge <peter@×××××.se> wrote:
>
> I've so far gotten zero feedback on my hosting offer, intended to
> help find some starting processes.
>
|
hasufell's repository on GitHub should be more than adequate:
https://github.com/gentoo/gentoo-gitmig
|
If we need history, we can always replace it with one that has the full history.
|
On a side note, my migration scripts take a few hours to run on a
fairly old Phenom II quad-core. From the looks of things,
single-threaded performance seems to be what matters most. I did
try running a migration on EC2 for kicks, and it only reinforced the
need for good single-threaded performance (which is awful on EC2).
With 32 cores it made it through the majority of the tree in very
short order, but the longest categories (especially profiles) still
took something like 1.5 hours to convert, and the second stage of
the migration doesn't really use more than a core or two. Overall it
was faster than the old Phenom, but not by all that much.
|
If somebody has a beefy server they'd like to test a migration on,
let me know. I have a tarball of a chroot that is all set up to run a
migration, and a squashfs of the cvs tree which just needs to be
mounted on top of it. It is fairly trivial to transport, so the whole
process can be rehearsed ahead of time; for the actual migration, all
you would need is a current cvsroot to mount on top.
|
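To make the setup concrete, a test run would look roughly like the
following sketch. The mount point, archive names, and the migrate.sh
entry point are hypothetical placeholders, not the actual names in my
tarball:

  # unpack the prepared chroot (placeholder filename)
  mkdir -p /mnt/gitmig
  tar xpf gitmig-chroot.tar.xz -C /mnt/gitmig

  # mount the cvs tree snapshot read-only inside the chroot
  mount -t squashfs -o loop,ro gentoo-cvsroot.squashfs /mnt/gitmig/cvsroot

  # for the real migration, bind-mount a current cvsroot over it instead:
  # mount --bind /path/to/current/cvsroot /mnt/gitmig/cvsroot

  # enter the chroot and kick off the conversion
  chroot /mnt/gitmig /root/migrate.sh
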
I suspect that more than 4 cores would help, but I'd put the highest
priority on single-thread performance. It didn't really seem to be
IO-bound on writes (and this is on btrfs on spinning disks, which is
less than stellar for performance), but with a large number of cores
IO might become a bottleneck. I don't think reads are going to be
much of an issue at all - the starting tree is only a few hundred MB
in squashfs, and the disks didn't look busy for reads in the later
steps at all.
|
Even a few hours for such a huge migration isn't really that big a
deal, I would think, but I've heard that on a good system this can be
cut down to under an hour (plus all the time spent moving trees
around, setting up the back end, etc).
|
-- |
Rich |