On 01/21/2015 07:47 AM, Sam Bishop wrote:
> So I've been thinking crazy thoughts.
>
> Theoretically it can't be that hard to do a complete package binhost for gentoo.

I love that you qualify this with "theoretically."

>
> To be clear, when i say complete, Im referring to building, all
> versions of all ebuilds marked stable or unstable on amd64, with every
> combination of use flags.

Every ebuild with every combination of USE flags? That is likely
impossible, and definitely not feasible. With ~17,000 ebuilds in the
portage tree, and assuming each has only 2 USE flags, you'd be building
17,000 * 2^2 = 68,000 packages. At an average build time of 20 seconds
(nice server with an SSD and enough RAM to build in /tmp), an initial
build of the tree would take ~378 hours. I guess that isn't so bad. Of
course, there are outliers like www-client/firefox: 19 non-language USE
flags, so 2^19 different firefox permutations at a fast 5 minutes apiece
would take ~43,700 hours. I haven't looked at REQUIRED_USE, so there
could be fewer than 2^19 valid combinations of flags; getting it down to
2^10 combinations would be only 85 hours or so.
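For what it's worth, those back-of-the-envelope numbers can be checked in a few lines (same assumptions as above: ~17,000 ebuilds, 2 USE flags each, 20-second average builds, 5 minutes per firefox build):

```python
# Rough build-time estimates using the figures from the paragraph above.
EBUILDS = 17_000           # approximate size of the portage tree
AVG_BUILD_SEC = 20         # optimistic average build time

packages = EBUILDS * 2**2              # 2 USE flags -> 2^2 combos per ebuild
tree_hours = packages * AVG_BUILD_SEC / 3600

FIREFOX_FLAGS = 19                     # non-language USE flags on firefox
firefox_hours = 2**FIREFOX_FLAGS * 5 / 60   # 5 minutes per permutation

reduced_hours = 2**10 * 5 / 60         # if REQUIRED_USE prunes it to 2^10

print(packages)               # 68000
print(round(tree_hours))      # 378
print(round(firefox_hours))   # 43691
print(round(reduced_hours))   # 85
```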
>
> This pretty much boils down to bytes and bytes of storage + compute
> resources. Both of which are easily available to me. So I began
> pondering and here I am, thinking to myself "is this really all there
> is too it"?

A full CentOS mirror is ~600GB iirc, so you're gonna need a ton of storage.

> Does it really come down to CPU cycles and repeatedly running through
> the following commands for each combination of ebuild, version and use
> flags
> emerge --emptytree --onlydeps ${name}
> emerge --emptytree --buildpkgonly --buildpkg ${name}
>
> Obviously running them in a clean environment each time, either by
> chroot or other means.
> Then just storing the giant binhost somewhere suitable such as an AWS
> s3 bucket setup to work via HTTP so the normal tools work fine with
> it.
>
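The loop you're describing could be sketched roughly like this. Everything here is a stand-in (the package and its flag list are hypothetical), and the emerge invocations are only generated as strings, not executed:

```python
import itertools

# Hypothetical sample data: package -> its USE flags (stand-ins, not real data).
packages = {
    "app-editors/vim": ["X", "acl"],
}

def emerge_commands(name, flags):
    """Yield the two emerge commands for every combination of USE flags."""
    for bits in itertools.product([True, False], repeat=len(flags)):
        use = " ".join(f if on else f"-{f}" for f, on in zip(flags, bits))
        env = f'USE="{use}"'
        # Install deps first, then build just the binary package, per the plan above.
        yield f"{env} emerge --emptytree --onlydeps {name}"
        yield f"{env} emerge --emptytree --buildpkgonly --buildpkg {name}"

cmds = [c for name, flags in packages.items() for c in emerge_commands(name, flags)]
# 2 USE flags -> 2^2 combos -> 8 command lines for this one package
print(len(cmds))
```

In practice each pair of commands would run inside a fresh chroot or container, and each combination would fan out to a build worker.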
I haven't used binpkgs in a long time, but I think USE on the client
machine has to match the USE of the package being installed. Managing
all of this would be a nightmare unless you wrote your own special
portage server that served up binpkgs in a USE-aware way, so that a
portage host could request a binpkg with a certain USE.
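At its core, that hypothetical USE-aware server would just be a lookup keyed on package name plus the exact USE set. A toy sketch (none of this is a real portage interface; names and paths are made up):

```python
# Hypothetical index: (package, frozenset of enabled USE flags) -> binpkg path.
# This is just the lookup idea, not an existing portage API.
index = {}

def publish(package, use_flags, path):
    """Register a binpkg built with exactly this set of enabled USE flags."""
    index[(package, frozenset(use_flags))] = path

def request(package, use_flags):
    """Return the binpkg matching this exact USE set, or None if absent."""
    return index.get((package, frozenset(use_flags)))

publish("www-client/firefox", {"dbus", "pulseaudio"}, "firefox-dbus-pulse.tbz2")

print(request("www-client/firefox", {"pulseaudio", "dbus"}))  # order-insensitive hit
print(request("www-client/firefox", {"dbus"}))                # no exact match -> None
```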
Theoretically, it's a great idea. I think this would be possible if you
had maybe 3 or 4 different USE combos (e.g. one for servers, one for KDE
client machines, one for GNOME clients, etc.).

Alec

P.S. I'm reasonably sure my math is correct, but I would appreciate
corrections.