List Archive: gentoo-soc
Headers:
To: gentoo-soc@g.o
From: Eric Thibodeau <kyron@...>
Subject: Re: Progress report - AutotuA (formerly "Automate it All")
Date: Tue, 17 Jun 2008 10:08:46 -0400
Dude... sounds to me like you need a PBS (Portable Batch System)...

Nirbheek Chauhan wrote:
> It's only been a little more than a week since I started working on
> the project (due to personal reasons), and my time-spent-to-work
> ratio is extremely bad, so I'm sorry, but the progress isn't as much
> as I had hoped.
>
> The idea has undergone significant changes in the time since, and
> thanks to Patrick's guidance (and constant cluebats), I now have a
> far clearer idea of how the whole thing will come together. I
> wonder whether I should describe the project blueprint that we've
> come up with, the path that led to it, or the code I have written. I
> suppose the progress of the code cannot be judged unless one knows
> the whole plan, and the path taken to come up with the plan is
> largely irrelevant :)
>
> The general idea has changed somewhat from the abstract:
>
> As before, there is a master server which acts as a storage area,
> manages all the slaves, and does various bookkeeping. This part will
> be written in Django.
>
> The concept of the slave has changed radically to allow for a less
> steep learning curve. The proposal described "jobs", which consist
> of executables stored on the master server that the slaves fetch
> and run. We thought of ways in which we could describe dependencies
> between jobs, and the most obvious answer to me was an XML format
> (much to the disgust of Patrick).
>
> However, there were numerous problems with such an approach (the
> least of which was the overhead involved in parsing XML, and the
> jing dependency it would add on the Django side). The most serious
> of these was that learning a new XML format and writing custom
> executables (scripts or otherwise) which communicate with the server
> via the Slave's bindings makes for an *extremely* steep learning
> curve, and would cause chaos. The project is useless if no one ends
> up using it, or if it gets too complicated to use.
>
> The solution came to me in the form of a "Doh!" moment as I was
> cycling back to my room. The answer was -- "jobuilds". Bash scripts
> are easily adaptable, easy to understand and use (for Gentoo devs),
> and their parsing is well-understood. For the second time in my life,
> I appreciated the ingenuity of the inventors of the ebuild format.
>
>
> Jobuilds:
> ----------
> A jobuild is the smallest possible "quantum" of work. A job consists
> of a root jobuild plus all the other jobuilds it depends on, taken
> together. The format of a jobuild is:
> http://pastebin.osuosl.org/8355 (a rough sketch also follows the
> list below)
>
> - The four phases are to be run (by default) in the chroot where the
> job will take place.
> - SRC_URI lists programs (test suites etc.) which are required by
> the jobuild (it does not include the deps, which will be pulled in
> by emerge inside the chroot).
> - PORTCONF_URI lists tarballs containing portage config files
> (/etc/portage/*, /etc/make.conf, etc.)
> - DEPEND are other jobuilds on which this jobuild _hard_ depends,
> i.e. they must be completed in the same chroot (example: Test Amarok
> depends on Build KDE, which depends on Build X)
> - SIDEPEND are SuperImpose Depends: all we need to know is that
> those jobuilds completed successfully *somewhere*, so that further
> distribution of work is possible (example: testing whether all the
> packages that import gnome2.eclass still work after some changes to
> it)
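>
> To make this concrete, a jobuild might look something like this (a
> sketch only: SRC_URI, PORTCONF_URI, DEPEND and SIDEPEND are the
> fields described above, everything else -- names, phase, paths --
> is illustrative; the real format is in the pastebin):
>
>     # nirbheek/foo-test/foo-test-1.0.jobuild (hypothetical)
>     SRC_URI="http://example.org/foo-testsuite-1.0.tar.bz2"
>     PORTCONF_URI="http://example.org/foo-portconf.tar.bz2"
>     DEPEND="nirbheek/build-foo"
>     SIDEPEND="nirbheek/gnome2-eclass-check"
>
>     src_test() {
>         # runs inside the prepared chroot by default
>         cd foo-testsuite-1.0
>         ./run-tests.sh
>     }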
>
> The SRC_URI files will be downloaded before entering the chroot,
> stored in a tarballs folder, and hardlinked (if on the same device)
> or bind-mounted inside the chroot.
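>
> Roughly like this (paths and variable names are illustrative):
>
>     src="${WORK}/tarballs/${file}"
>     dst="${CHROOT}/tmp/tarballs/${file}"
>     # hardlink if the tarballs folder and the chroot are on the
>     # same device, else fall back to a (slower) bind mount
>     ln "${src}" "${dst}" 2>/dev/null || \
>         mount --bind "${WORK}/tarballs" "${CHROOT}/tmp/tarballs"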
>
> To counter the problem of recursive QA checking, the jobuild format
> will be *extremely* simple. That means no EAPI, no eclasses, no
> SLOTs, minimal versioning (xxx.yyy), and no fancy depends (except
> perhaps ||). Built-in functions such as unpack() will of course be
> provided.
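>
> That keeps parsing trivial -- comparing two versions, for example,
> is just two integer comparisons (a sketch, assuming well-formed
> xxx.yyy versions):
>
>     ver_gt() {
>         local a=(${1/./ }) b=(${2/./ })
>         [[ ${a[0]} -gt ${b[0]} ||
>            ( ${a[0]} -eq ${b[0]} && ${a[1]} -gt ${b[1]} ) ]]
>     }
>     ver_gt 1.10 1.2 && echo "1.10 is newer"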
>
> The loss of utility from not having eclasses will be offset by the
> concept of "Template jobuilds" (similar in concept to how Django
> handles templates[1]). However, I am open to including eclasses in
> the design (who doesn't love them? :) if enough reasons can be
> given.
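>
> As a sketch of what I mean (nothing here is final, all names are
> made up): the template defines the skeleton and calls hooks, and a
> jobuild "fills in the blocks" by defining those hooks, much like a
> child template in Django:
>
>     # test-suite.template (hypothetical)
>     src_test() {
>         testsuite_setup
>         testsuite_run
>     }
>
>     # foo-test-1.0.jobuild (hypothetical)
>     TEMPLATE="test-suite"
>     testsuite_setup() { cd foo-testsuite-1.0; }
>     testsuite_run()   { ./run-tests.sh; }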
>
> NOTE: It will be highly recommended that the entire autotua work
> folder be on the same device. I've assumed this to be true to allow
> a number of optimisations, but I will keep (slower) fallbacks in
> place in case it is not.
>
>
> The Tree:
> -----------
> Obviously the jobuilds will be stored in a structured format similar
> to the portage tree :^)
> And following the tradition of being completely unimaginative, it
> shall be called the "Jobtage tree".
> The structure is as follows:
>
> ${user}/
>     ${user}.asc
>     ${jobuild_name}/
>         ${jobuild_name}-${ver}.jobuild
>         Manifest
>
> The tree will be stored in bzr, with an overlays/ directory in
> .bzrignore. Jobuilds will not be Manifested, and will only be signed
> with the maintainer's GPG key. SRC_URI and PORTCONF_URI files will
> be Manifested (probably in the same way as in portage).
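>
> A Manifest entry would then look something like this (modelled on
> portage's Manifest2 format, size and hashes elided):
>
>     DIST foo-testsuite-1.0.tar.bz2 <size> RMD160 <hash> SHA256 <hash>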
>
> To further offset the problem of QA in this tree (mentioned under
> "Jobuilds" above), when Jobs are created/committed/uploaded on the
> server (the details of that are in the next section), the whole
> depgraph is validated, details about that are stored as metadata,
> and the Job itself is attached to **that specific revision** of the
> Jobtage tree. This prevents breakage due to future changes made to
> the jobuilds it depends on. If the maintainer wishes to update the
> attached revision (say, for a bugfix in a jobuild it depends on), he
> can force a re-validation at any time before the Job is accepted by
> a Slave. Whenever a Slave accepts a Job, it syncs to the revision of
> the tree the Job is attached to.
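>
> In bzr terms the sync would be something like (illustrative):
>
>     # fetch the jobtage tree exactly as it was validated for this Job
>     bzr branch -r ${job_revision} ${JOBTAGE_URI} jobtage
>     # or, with an existing local tree:
>     cd jobtage && bzr revert -r ${job_revision}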
>
> The other solution to this problem would have been to trigger a
> reverse-depgraph validation whenever a commit was made to the tree.
> The problems with that approach are:
> - Load on the server increases exponentially with the number of
> jobuilds
> - It raises the question of what the next action should be -- revert
> the (potentially critical) commit, or mark (potentially hundreds of)
> jobuilds as broken?
> - It makes Jobs fragile -- a Job might be fine when you upload it,
> but horribly broken 4 hours later.
>
>
> Slaves:
> --------
> The slave pulls a list of Jobs that it can do from the master
> server. A Job consists of metadata about itself:
> http://pastebin.osuosl.org/8358 . The actual data is then gathered
> from the jobuild(s), the chroot is prepared, etc. etc., and work
> begins. The Slave reports back to the master server after every
> jobuild is completed, with data, and receives updates (if any) about
> the Job (updates might consist of changed depends due to SIDEPENDs).
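>
> Purely as an illustration (the real metadata format is in the
> pastebin above), the metadata a Slave pulls for a Job would be along
> the lines of:
>
>     job-id:      42
>     jobuild:     nirbheek/foo-test-1.0
>     jobtage-rev: <revision the Job was validated against>
>     stage:       <stage tarball to build the chroot from>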
>
> Obviously the Slave has to parse jobuilds, and so the concepts
> should be similar to Portage's. However, I am drawing inspiration
> from the pkgcore[2] codebase, simplifying its extremely versatile
> code to suit my needs (which is another reason for my slow progress
> -- it's not easy to understand a work of art ;)
>
>
> Actual Progress aka "No more hand-waving":
> ------------------------------------------------------
> Now follows my *real* progress w.r.t. the code.
>
> I'm currently working on the slave, concentrating on the parts that
> don't depend on parsing the jobuilds (I have a general idea of how
> that will be done, but haven't fleshed out the details). So far I've
> implemented an OO interface (in Python, of course) to a Job() object
> accessed via Jobs(), a Syncer() object (jobtage), a Fetchable object
> and a Fetcher (stage3 etc). Total code comes out to 167+70+38+30 =
> ~300 lines ;p
>
> This week I'll start on chroot preparation and iron out the kinks in
> that, followed by the Jobuild() object, the jobuild parser
> (jobuild.sh), and the bridge connecting them. The #pkgcore guys are
> really helpful and nice, so I'll have good help for this part :)
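>
> The bridge will work roughly the way portage's ebuild.sh does (a
> sketch only, none of this is fleshed out yet):
>
>     # jobuild.sh: source the jobuild, then either dump its metadata
>     # on stdout for the python side, or run the requested phase
>     source "${JOBUILD_FILE}" || exit 1
>     case "$1" in
>         depend) echo "DEPEND=${DEPEND}"
>                 echo "SIDEPEND=${SIDEPEND}" ;;
>         *)      "$1" ;;    # e.g. src_test
>     esac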
>
> Next week (end of the month) will (hopefully) see a working slave
> which accepts Jobs from some magical source and runs them.
>
> I'll begin work on the Master server the week after that,
> specifically the backend work and the details of the communication
> between the Master and Slaves. Frontend prettification will take
> place towards the end.
>
>
> 1. http://www.djangobook.com/en/1.0/chapter04/ -- not the exact
> format, only the idea of "Reverse Inheritance"
> 2. http://www.pkgcore.org/
>
> PS: Another reason progress is slow is that the Slave portion has
> become much more sophisticated than what I had originally intended.
> The original idea had (maintainer-made) executables doing all the
> work (causing a steep learning curve), with the Slave just being an
> API wrapper to talk to the master server. All of that work has now
> been shifted into the Slave and abstracted for the maintainer to use
> in a familiar way.
>

-- 
gentoo-soc@g.o mailing list

