On Tuesday 22 March 2005 13:55, Patrick Lauer wrote:
>
> A few problems:
> - that .iso and the .zsync metadata need to be generated. More load on
> master server
> - isos don't allow easy access, e.g. writing a few bytes for a trivial
> bugfix
> - mkisofs might shuffle the data so that transferring one large file
> might cause more traffic than rsync does now
>
> I don't see the advantages over tar + binary diffs.
>
|
You make valid points, but notice:
- The .zsync metadata doesn't have to be generated on the master server.
  Anyone can do it right now.
- ISOs would have to be regenerated periodically. This could vary from every
  30 minutes to only once per day; we'd have to see how it works.
  Personally I think every 30 minutes would be viable, but it's not really
  necessary. Once per day would be enough and better than emerge-webrsync.
|
The advantages over tar + binary diffs:
- The client doesn't have to remove the entire portage tree and extract the
  tar file on every sync.
- I think xdelta might be possible, but bsdiff would be impossible due to the
  memory requirements for a tar this large. I don't really know how xdelta
  performs CPU-wise and memory-wise.
- It's simpler (only 2 files on the server and very few commands
  necessary) :-)
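To make "very few commands" concrete, here is a rough sketch of the whole
workflow. The paths, the mirror URL, and the regeneration interval are purely
illustrative, not a proposal for actual mirror layout:

```shell
#!/bin/sh
# Server side (could run from cron, e.g. every 30 minutes or once a day):
# build an ISO image of the portage tree. -R keeps Rock Ridge attributes
# so permissions and long file names survive.
mkisofs -R -quiet -o /srv/www/portage.iso /usr/portage

# zsyncmake reads the ISO once and writes portage.iso.zsync, the block
# checksum metadata that clients fetch first. -u records the URL the
# full file should be downloaded from. This step can run anywhere the
# ISO is available, not only on the master server.
zsyncmake -u http://mirror.example.org/portage.iso \
          -o /srv/www/portage.iso.zsync /srv/www/portage.iso

# Client side: pass the previous image as a seed with -i, so zsync only
# transfers the blocks that changed since the last sync.
zsync -i /var/tmp/portage.iso \
      -o /var/tmp/portage.iso \
      http://mirror.example.org/portage.iso.zsync

# The client can then loop-mount the image read-only instead of removing
# the old tree and unpacking a tarball:
#   mount -o loop,ro /var/tmp/portage.iso /usr/portage
```

That is the whole server side: two files (the ISO and its .zsync control
file) and two commands per regeneration.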
-- |
gentoo-dev@g.o mailing list |