Gentoo Archives: gentoo-dev

From: Enrico Weigelt <weigelt@×××××.de>
To: gentoo-dev@l.g.o
Subject: Re: [gentoo-dev] [ [Bug 322157] [mail-filter/procmail] new ebuild + autocreate maildirs]
Date: Thu, 15 Jul 2010 13:51:20
Message-Id: 20100715134339.GF6557@nibiru.local
In Reply to: Re: [gentoo-dev] [ [Bug 322157] [mail-filter/procmail] new ebuild + autocreate maildirs] by Nirbheek Chauhan
* Nirbheek Chauhan <nirbheek@g.o> wrote:

> Ah, so you want us to use your git repos as patch managers? That
> clears up a few things.
I don't want you to use *my* repos. But I'd like to advocate git-based workflows (e.g. downstream branches w/ rebase, etc.) instead of loose patches. And I'm offering you a proven model for that.
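To make that workflow concrete, here is a minimal throwaway-repo sketch (all repo, branch, and file names are invented for illustration; `git init -b` needs git >= 2.28). A distro keeps its changes as commits on a downstream branch and simply rebases onto each new upstream release, instead of re-diffing loose patches:

```shell
set -e
work=$(mktemp -d); cd "$work"

# "Upstream" repo with one release commit.
git init -q -b main upstream && cd upstream
git config user.email u@example.com
git config user.name upstream
echo 'v1' > app.c
git add app.c && git commit -qm 'release 1'
cd ..

# The distro clones it and keeps its changes on a downstream branch.
git clone -q upstream distro && cd distro
git config user.email d@example.com
git config user.name distro
git checkout -qb gentoo
echo '/* gentoo fix */' > gentoo-fix.c
git add gentoo-fix.c && git commit -qm 'gentoo: local fix'

# Upstream releases again ...
cd ../upstream
echo 'v2' >> app.c && git commit -qam 'release 2'

# ... and the distro rebases its branch onto the new release.
cd ../distro
git fetch -q origin
git rebase -q origin/main
git log --format='%s'   # downstream fix now sits on top of release 2
```

After the rebase, the downstream branch's history is "release 1, release 2, gentoo: local fix" -- the distro's delta stays cleanly separated on top.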
> > > No, it does not. The security problems come because you are the single
> > > point of failure.
> >
> > Which SPOF ?
> >
> > man 1 git-clone ;-p
>
> So you don't want us to use your tarballs?
Why still use tarballs at all (whether the UPSTREAM.* branches actually come from tarballs is a different topic)? Even on projects with heavy change rates, e.g. the Linux kernel, the git repo isn't much larger than a release tarball (and there, the vast majority of objects is shared between several kernel types, e.g. vanilla <-> ovz <-> xen). Tarballs only work on the granularity of one big tree at once, so they support only one operation: fetch a whole tree. No differential transmission or storage, no changesets, etc.
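The sharing point can be demonstrated with a throwaway local repo (names invented here): several "kernel type" branches reference one object store, so the big tree blob exists exactly once no matter how many variants point at it:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q kernel && cd kernel
git config user.email k@example.com
git config user.name kernel
seq 1 1000 > tree.txt            # stand-in for a big shared source tree
git add tree.txt && git commit -qm 'vanilla tree'
git branch ovz
git branch xen
# Three branch refs, but one blob, one tree, one commit in the store:
git cat-file --batch-all-objects --batch-check='%(objecttype)' | sort | uniq -c
```

With tarballs, vanilla/ovz/xen would each be a full copy; here the extra branches cost only a ref each.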
> >   BTW: as long as not all upstreams sign their releases, our trust
> >   relies just on their server's integrity (and the connection
> >   to them).
>
> Difference is that there are multiple upstreams while you are one, and
> the larger upstreams (such as GNOME/KDE/FDO) have professional
> admins devoted to the security of their machines, while you're using a
> free service on a public git website.
I never said that I want to be your (only) upstream. Again: all I'm offering is a generic model (and a first reference implementation).
> > d) some vendor (possibly myself) adds crappy changes: you'll most
> >    likely have a look at the changes before cherry-picking them.
>
> Makes sense. If we use your git repos for pulling patches we'll verify
> them before applying them locally.
Right. And you would put your changes in your repos and automatically push them back to the central repository, and other distros would do the same. So in the end, everybody still has his own repo, but everything is also collected in a central place.
> > That wouldn't affect your clones. You simply won't get anything
> > new from my site anymore.
>
> Of course, since now I understand that you don't want us to use your
> tarballs, the hosting problem is a non-issue.
> Oh wait, I just remembered. All the ebuilds that you submitted use
> *your* tarballs. And since you want us to use snapshot tarballs, the
> same old problems of trust, security, redundancy come into play.
Just make your own repo, maybe put it into a simple DNS-based cluster (multiple A records), and send me the pointer - then I'll give you a fixed ebuild. Trivial.
> Please decide if you want us to use your git repos as patch
> aggregators or as snapshot tarball sources.
Please differentiate between the model and the concrete implementation. The model just specifies the basic logical structure, e.g. naming etc. The infrastructure is a different issue. You'd most likely have your own repos, but I'd like to have your refs automatically pushed (within your namespace) into my aggregator repos.
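A hedged sketch of what "pushed within your namespace" could look like, using two throwaway local repos (the `refs/distros/gentoo/*` namespace and all names are invented for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the shared aggregator repo.
git init -q --bare aggregator.git

# A distro repo with one local commit.
git init -q distro && cd distro
git config user.email d@example.com
git config user.name distro
echo fix > fix.txt
git add fix.txt && git commit -qm 'distro fix'

# Push all local branches into the distro's own ref namespace
# on the aggregator, without touching anyone else's refs:
git remote add aggregator ../aggregator.git
git push -q aggregator 'refs/heads/*:refs/distros/gentoo/*'
git ls-remote aggregator    # shows refs/distros/gentoo/<branch>
```

Because each distro writes only under its own `refs/distros/<name>/` prefix, the aggregator collects everybody's branches without conflicts.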
> > First, I have to build that website and maintain it over a long time.
> > Then I'll have to do a lot of advertisement work to get people to
> > actually push their patches there.
>
> Why would they push their patches? You'd be an RSS reader. An
> aggregator. You wouldn't need anyone to push anything.
That assumes they all publish their patches via RSS feeds. And it would still only operate on single patches, not on changesets and branches with histories.
> > On the other hand, the git-based infrastructure is already there,
> > people can use it right now. And it gives me exactly what I need.
> > So I prefer spending my time advocating this one.
>
> Yes, in a sense you've already made the patch aggregation website.
> Except you use git to store whatever patches you get manually from
> the various sources.
No, I've made a branch aggregator. Git does not store patches, it stores histories of trees. That's completely different, see above.
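A quick throwaway-repo demo of that distinction (names invented): each commit records a full tree snapshot, and the patch form is only derived on demand when you ask for a diff:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo && cd demo
git config user.email d@example.com
git config user.name demo
echo one > a.txt
git add a.txt && git commit -qm 'first'
echo two >> a.txt && git commit -qam 'second'

# A commit object points at a complete tree, not at a delta:
git cat-file -p HEAD | head -1        # first line: "tree <sha1>"
git cat-file -p 'HEAD^{tree}'         # full listing of the snapshot

# The patch is computed only when requested:
git diff HEAD~1 HEAD -- a.txt
```

So an aggregator of git branches carries whole histories, which is strictly more information than a pile of patch files.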
> > It's not that simple. Many distros don't even do proper patches,
> > instead they wildly copy over or directly sed certain source files,
> > or even (like Debian) use their own broken tarballs.
> > (the worst srcpkg I've ever seen is Debian's mysql-5.0.32 ...)
>
> *Shrug* you aggregate whatever is there in patch form. Refinements
> can happen later.
I could only aggregate patches, not other changes (e.g. those done by those ugly sed hacks), and I'd also miss the right apply order. So I'd lose a lot of important information.
> The same problem is also present in your current approach, but
> that didn't stop you.
Right, that's why I'd like people to use a proper VCS from the start, and I'll step by step tweak certain packaging systems to create git commits (e.g. a tweaked portage could import epatches directly into git, and also commit between all commands that might change the tree, within the src_unpack stage).
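A hedged sketch of that idea with a throwaway repo (portage does not do this today; the loop below just illustrates turning each patch into one commit, preserving apply order in the history):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# A made-up epatch directory with one patch file.
mkdir patches
cat > patches/01-greeting.patch <<'EOF'
--- a/hello.txt
+++ b/hello.txt
@@ -1 +1 @@
-hello
+hello, world
EOF

# "Unpacked" sources become the initial commit ...
git init -q src && cd src
git config user.email p@example.com
git config user.name portage
echo hello > hello.txt
git add hello.txt && git commit -qm 'unpacked sources'

# ... and each patch is applied and committed in order.
for p in ../patches/*.patch; do
    git apply "$p"
    git commit -qam "epatch: $(basename "$p")"
done
git log --format='%s'
```

With that, the exact apply order and every intermediate tree state survive in history, instead of being lost in loose patch files.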
> On the other hand, if you propose benefits w/o a change of workflow,
> I'm sure many more people will be interested.
The major benefit is that changesets can be easily shared between various distros and upstreams, up to automatic notifications, etc.

cu
--
----------------------------------------------------------------------
 Enrico Weigelt, metux IT service
 phone: +49 36207 519931    email: weigelt@×××××.de
 mobile: +49 151 27565287   icq: 210169427   skype: nekrad666
----------------------------------------------------------------------
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
----------------------------------------------------------------------