On Tue, 18 Dec 2018 12:55:55 +0300
Andrew Savchenko <bircoph@g.o> wrote:

> My main concern with git is downlink fault tolerance. If rsync
> connection is broken, it can be easily restored without much data
> retransmission. If git download connection is broken, it has to
> start all over again. So there are cases where rsync will be always
> much more preferable than git.

I suspect there's a mechanism available to get git to sync forward only
"n-much", but I'm not entirely sure.

I'll have to re-read and re-comprehend `git help fetch` to be sure,
though.
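
One mechanism that does exist for the initial-sync case is shallow
history: `git clone --depth` followed by repeated `git fetch --deepen=<n>`
pulls history in bounded chunks, so a flaky link only costs you the
current chunk rather than the whole transfer. A minimal local sketch
(the toy repository here is purely illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Toy "server" repository with three commits standing in for the tree.
git init -q server
git -C server config user.email dev@example.com
git -C server config user.name dev
for i in 1 2 3; do
    echo "$i" > server/file
    git -C server add file
    git -C server commit -qm "commit $i"
done

# Shallow clone: only the newest commit is transferred.
# (file:// is needed so --depth is honoured for a local path.)
git clone -q --depth 1 "file://$tmp/server" shallow
git -C shallow rev-list --count HEAD    # 1 commit so far

# Deepen one commit at a time; each step is a small, restartable fetch.
git -C shallow fetch -q --deepen=1
git -C shallow rev-list --count HEAD    # now 2
```

This only walks history backwards from the tip, though, so it helps with
the first sync more than with keeping up afterwards.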

But if there were, an alternative for "I have problems with links
flaking" would be to do batches of smaller fast-forwards.

This option would *theoretically* be equivalent to having published
bundles, except of course allowing you to jump forward by an arbitrary
step size.

I suspect a published list of SHA1s, broken down by time, might also
help here in conjunction with passing the required ones as "refspec"
values to fetch, which would also approximate the bundle strategy,
albeit using substantially less server-side storage space.
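
As a rough sketch of that idea: assuming the server allows fetching
arbitrary commits by SHA1 (`uploadpack.allowAnySHA1InWant`), a client
could walk such a published list one step at a time, with each fetch
transferring only the delta since the previous step. The local "server"
repository below just stands in for the real remote:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Toy "server" repository with three commits.
git init -q server
git -C server config user.email dev@example.com
git -C server config user.name dev
for i in 1 2 3; do
    echo "$i" > server/file
    git -C server add file
    git -C server commit -qm "commit $i"
done
# Let clients request arbitrary commits by SHA1.
git -C server config uploadpack.allowAnySHA1InWant true

# Stand-in for the published list of SHA1s, oldest first.
shas=$(git -C server rev-list --reverse HEAD)

# Client fetches one SHA1 at a time; an interrupted link only loses
# the current step, not everything transferred so far.
git init -q client
for sha in $shas; do
    git -C client fetch -q "$tmp/server" "$sha"
done

# The full history is now present locally.
git -C client log --oneline FETCH_HEAD
```

Whether the server operators would want to enable fetch-by-SHA1 at scale
is a separate question, but the client side needs nothing beyond stock
`git fetch`.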