On Mon, Jun 04, 2012 at 09:27:10AM +0200, Michał Górny wrote:
> On Sun, 3 Jun 2012 09:48:26 +0000
> "Robin H. Johnson" <firstname.lastname@example.org> wrote:
> > On Sun, Jun 03, 2012 at 11:34:07AM +0200, Michał Górny wrote:
> > > I mean using a separate proto for metadata, not necessarily git. In
> > > any case, if it comes to transferring a lot of frequently-changing
> > > files, rsync is not that efficient...
> > It does NOT send any of the intermediate states.
> But it does have to check all the files.
Which is a pretty minimal cost in the grand scheme of things. You
also need to figure out what kind of 'efficiency' you're going to talk
about here: network I/O, disk I/O, CPU time, etc. Most people in this
case care about network I/O; rsync's not perfect, but for reasons
described below, it's the best of breed for this usage scenario.
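To make the "check all the files" cost concrete: rsync's per-file work is cheap because its weak checksum can be *rolled* one byte at a time in O(1), rather than recomputed per window. Below is a minimal sketch of that rolling checksum in the simplified form from the rsync technical report; the modulus and block size are illustrative, not rsync's exact implementation.

```python
# Sketch of rsync's weak rolling checksum (Adler-32-style).
# Simplified for illustration; real rsync combines this with a
# strong (MD4/MD5) checksum to confirm candidate block matches.

M = 65536  # modulus; rsync uses 2^16 in the original paper

def weak_checksum(block):
    """Direct two-part weak checksum over a block of bytes."""
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    """Slide the window one byte: drop out_byte, add in_byte, in O(1)."""
    a = (a - out_byte + in_byte) % M
    b = (b - block_len * out_byte + a) % M
    return a, b

# Rolling from one window to the next matches direct recomputation.
data = bytes(range(1, 40))
blk = 8
a, b = weak_checksum(data[:blk])
a, b = roll(a, b, data[0], data[blk], blk)
assert (a, b) == weak_checksum(data[1 : blk + 1])
```

This is why scanning a large tree of mostly-unchanged files is dominated by disk reads and stat() calls, not checksum arithmetic.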
> Did I mention I'm not talking necessarily about git?
Git would be sanest if you were after this; it already does point-to-point
delta transformations sanely. No point in reinventing a VCS; if
you can't force the tree back to a known good state (aka, a distributed
VCS), you can't apply deltas to it, in which case you need an rsync-like
mechanism anyway.
> Rather anything which would just
> look up our timestamp, revision or whatever and just send what has
> changed, in a packed manner.
This would be reinventing git/a VCS, or more likely, pretending that a
timestamp file automatically means the repository is *unmodified*, and
trying to do a point-to-point transformation on it. Where your
notion breaks down is that fun little bit about "unmodified".
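A tiny sketch of that failure mode: a matching revision stamp says nothing about whether the tree's contents still match that revision, whereas a content digest does. Everything here (tree layout, revision values) is hypothetical, just to show the distinction.

```python
# Hypothetical illustration: revision stamp vs. actual tree state.
import hashlib

def digest(tree):
    """Content hash over the whole tree (path -> bytes)."""
    h = hashlib.sha256()
    for path in sorted(tree):
        h.update(path.encode())
        h.update(tree[path])
    return h.hexdigest()

# Server's pristine view of revision 41.
rev41 = {"metadata/timestamp": b"41", "app/foo": b"original"}

# Client tree *claims* revision 41, but a file was modified locally.
client = {"metadata/timestamp": b"41", "app/foo": b"locally-edited"}

# The stamp check passes, so a point-to-point delta would be applied
# on top of a tree that is not actually in the assumed base state.
assert client["metadata/timestamp"] == rev41["metadata/timestamp"]

# A state check exposes the drift the stamp hides.
assert digest(client) != digest(rev41)
```

Any delta protocol keyed purely on the stamp would happily patch that drifted tree and leave it silently inconsistent.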
This is why rsync is used: it's not limited to a point-to-point
transformation; it's able to work from any starting point.
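The "any starting point" property boils down to syncing *state* rather than applying a *diff*: the transfer makes the client match the server regardless of what the client held before. A naive whole-file sketch (the names are made up; real rsync transfers only the differing blocks):

```python
# Sketch of state-based sync: converge on the server's tree from any
# starting state, rather than patching from an assumed base revision.

def rsync_like_sync(client, server):
    """Make the client tree (path -> bytes) exactly match the server."""
    # Delete anything the server doesn't have.
    for path in list(client):
        if path not in server:
            del client[path]
    # Copy anything missing or differing.
    for path, data in server.items():
        if client.get(path) != data:
            client[path] = data
    return client

server = {"app/foo": b"new", "app/bar": b"data"}

# Empty tree, stale tree, or corrupted tree: all converge identically.
for start in ({}, {"app/foo": b"stale"}, {"junk/file": b"x"}):
    assert rsync_like_sync(dict(start), server) == server
```

A revision-keyed delta stream can only do this if it first verifies (or forces) the base state, which is exactly what a distributed VCS like git provides.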
Either way, I suggest you do some research into this, including the
efficiencies of rsync, git, and the existing snapshot-delta machinery
(tarsync, diffball, etc.), and study the trade-offs inherent in each. Your
initial email frankly reeks of NIH, hence my suggestion to go
investigate what exists now.