I've been doing more or less the same here for a few years already; it
works pretty well, and I think others are as well.
I do the sync only on one system (applying patches to the tree where
needed) and then locally copy the image to the other systems.

The big advantage I see is more efficient access to the portage tree and
less disk usage on all systems (even the master system, except for the
duration of the sync).

On the master system I do the following:

dd if=/dev/zero of=/tmp/portage.img bs=1M count=384
mkfs.ext2 -F /tmp/portage.img
mkdir /tmp/portage
mount /tmp/portage.img /tmp/portage -o loop
cp -a /usr/portage/* /tmp/portage
umount /usr/portage
mount --move /tmp/portage /usr/portage
emerge --sync
(apply patches)
mksquashfs /usr/portage /tmp/portage.sqsh $options
umount /usr/portage
rm -f /tmp/portage.img
mv /tmp/portage.sqsh $target
mount $target /usr/portage -o loop

This is pretty quick (and an ext2 image on tmpfs uses less memory at
the end of the day than putting the whole portage tree on tmpfs would).
After that I can fetch the sqsh image from any client system when I do
updates over there (I even keep a few weeks' worth of images on the
master in case one is needed).
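
For illustration, the client-side fetch could look roughly like this
(the server path and image locations are assumptions, not my actual
setup, and RUN=echo keeps it a dry run since the real commands need
root):

```shell
#!/bin/sh
# Sketch of a client update: pull the current squashfs image from the
# master and loop-mount it over /usr/portage.  With RUN=echo (the
# default here) the commands are only printed, not executed.
fetch_portage_image() {
    RUN="${RUN:-echo}"
    SRC="master:/srv/portage.sqsh"     # hypothetical path on the master
    DST="/var/tmp/portage.sqsh"
    $RUN umount /usr/portage || true   # ignore "not mounted"
    $RUN scp "$SRC" "$DST"
    $RUN mount -t squashfs -o loop,ro "$DST" /usr/portage
}

fetch_portage_image
```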

The only thing to take care of is mounting a RW filesystem
over /usr/portage/distfiles and /usr/portage/packages, or adjusting
make.conf so portage can download distfiles and, if asked to, save
binpkgs.

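One concrete way to do the make.conf variant (the directory paths here
are just an example) is to point DISTDIR and PKGDIR outside the
read-only mount:

```
# /etc/portage/make.conf -- keep the writable directories off the
# read-only squashfs mount (example paths)
DISTDIR="/var/cache/distfiles"
PKGDIR="/var/cache/binpkgs"
```

With that, the squashfs image itself can stay mounted read-only.
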
If there is some interest, I can share my script which does the whole
process.

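Until then, here is a minimal dry-run sketch of what such a script
might look like (it just strings the steps above together; $options is
trimmed to -noappend, the target path is a placeholder, and with
RUN=echo nothing is actually executed):

```shell
#!/bin/sh
# Dry-run sketch of the master-side process: build the tree on a
# loop-mounted ext2 image, sync, squash, swap the new image in.
# RUN=echo (the default here) prints the commands instead of running
# them; the real run needs root.
master_sync() {
    RUN="${RUN:-echo}"
    IMG="/tmp/portage.img"
    MNT="/tmp/portage"
    SQSH="/tmp/portage.sqsh"
    TARGET="/var/cache/portage.sqsh"   # hypothetical final location

    $RUN dd if=/dev/zero of="$IMG" bs=1M count=384
    $RUN mkfs.ext2 -F "$IMG"
    $RUN mkdir -p "$MNT"
    $RUN mount -o loop "$IMG" "$MNT"
    $RUN cp -a /usr/portage/. "$MNT"
    $RUN umount /usr/portage
    $RUN mount --move "$MNT" /usr/portage
    $RUN emerge --sync
    # (apply local patches here)
    $RUN mksquashfs /usr/portage "$SQSH" -noappend
    $RUN umount /usr/portage
    $RUN rm -f "$IMG"
    $RUN mv "$SQSH" "$TARGET"
    $RUN mount -o loop "$TARGET" /usr/portage
}

master_sync
```
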
Having the tree snapshots that are currently available as tarballs also
available as squashfs images would be nice: more or less the same
download size, but easier to access (e.g. no need to unpack).

Bruno

On Wed, 12 August 2009 Francesco R <vivo75@×××××.com> wrote:
> Proposal: create snapshots of portage as a squashfs image, to be used
> in place of the /usr/portage directory.
> prior art: see #1
>
> Already working here: one central server which keeps the full portage
> tree and, after a sync, creates "portage.sqsh", a squashfs 4.0 image.
>
> Advantages are mainly:
> - cleaner root directory (ext4: du -sh /usr/portage ~= 600M | find
>   /g/portage | wc -l ~= 130000)
> - faster `emerge --sync` with fast connections
> - faster `emerge -uDpvN world`
> - less cpu/disk load on the server (if not serving from memory)
>
> Disadvantages:
> - need to mount portage, or to use autofs
> - need kernel >= 2.6.30
> - bigger rsync transfer size (?= 2x) #2
> - bigger initial transfer size: the lzma snapshot currently weighs
>   30.8M, portage.sqsh 45M
68 |
> |
69 |
> How it's done here: |
70 |
> Currently on the dispatcher the following run after every emerge |
71 |
> --sync: |
72 |
> |
73 |
> mksquashfs /usr/portage /srv/portage.sqsh \ |
74 |
> -noappend -no-exports -no-recovery -force-uid 250 |
75 |
> -force-gid 250 |
76 |
> |
77 |
> The clients run from cron the following: |
78 |
> umount /g/portage 2>/dev/null \ |
79 |
> ; cp /srv/server/portage.sqsh /var/tmp/portage.sqsh \ |
80 |
> && mount /usr/portage |
81 |
> |
82 |
> /etc/fstab: |
83 |
> /srv/server/portage.sqsh /usr/portage squashfs loop,ro,noauto 1 1 |
84 |
> |
> some real data:
>
> stats for a portage sync, one day
>
> Number of files: 136429
> Number of files transferred: 326
> Total file size: 180345216 bytes
> Total transferred file size: 1976658 bytes
> Literal data: 1976658 bytes
> Matched data: 0 bytes
> File list size: 3377038
> File list generation time: 0.001 seconds
> File list transfer time: 0.000 seconds
> Total bytes sent: 47533
> Total bytes received: 4120255
>
> sent 47533 bytes  received 4120255 bytes  124411.58 bytes/sec
> total size is 180345216  speedup is 43.27
>
> stats for a portage.sqsh sync, one day
>
> Number of files: 1
> Number of files transferred: 1
> Total file size: 46985216 bytes
> Total transferred file size: 46985216 bytes
> Literal data: 8430976 bytes
> Matched data: 38554240 bytes
> File list size: 27
> File list generation time: 0.001 seconds
> File list transfer time: 0.000 seconds
> Total bytes sent: 48096
> Total bytes received: 8454837
>
> sent 48096 bytes  received 8454837 bytes  5668622.00 bytes/sec
> total size is 46985216  speedup is 5.53
>
>
> #1 http://forums.gentoo.org/viewtopic-p-2218914.html
>    http://www.mail-archive.com/gentoo-dev@g.o/msg05240.html
>
> #2 May be mitigated by the mksquashfs '-sort' option, to be tested
>
> - francesco (vivo)