We started off with a status report, as usual. We have progressed to version 0.2, which includes many bugfixes and new functionality. The new webgli FE is progressing nicely, and work is underway to bring the FEs closer to each other in functionality. A comparison chart can be found at http://dev.gentoo.org/~agaffney/gli/comparison.html
An attempt was made to come up with a name for the higher-level installer daemon. Proposed names were GLI Install Daemon, gli-server, and GL Management Daemon. Andrew described its ability to deploy mass installations:
20:09 <@agaffney> client CD/net boots, client starts, broadcasts for server on UDP port 8001, server responds, client connects to server via XMLRPC now that it knows the IP, registers itself, and waits
20:10 <@agaffney> the client polls the server every few seconds asking if it can start the install
20:11 <@agaffney> when the server says yes (triggered by the admin setting a flag in the web interface), the client downloads its client_profile and install_profile, and starts its install
20:11 <@agaffney> for each step completed, it calls a function via XMLRPC that updates its install status
20:11 <@agaffney> so the server knows where each client is
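The client-side flow described above can be sketched roughly as follows. This is a hedged illustration only: the UDP discovery port (8001) comes from the log, but the broadcast payload, the XML-RPC method names (`register`, `can_start`, `get_install_profile`, `update_status`), and the XML-RPC URL are all hypothetical stand-ins for whatever the real daemon protocol ends up being.

```python
# Sketch of the client discovery/registration/polling flow from the log.
# All XML-RPC method names and the broadcast payload are hypothetical.
import socket
import time
import xmlrpc.client

DISCOVERY_PORT = 8001  # server listens for client broadcasts here (per the log)

def discover_server(timeout=5.0):
    """Broadcast on UDP port 8001 and return the IP of the responding server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b"GLI_DISCOVER", ("<broadcast>", DISCOVERY_PORT))
    _, (server_ip, _) = sock.recvfrom(1024)
    return server_ip

def run_install(server_ip):
    """Register with the server, poll until cleared, then report each step."""
    server = xmlrpc.client.ServerProxy("http://%s:%d/" % (server_ip, DISCOVERY_PORT))
    client_id = server.register(socket.gethostname())
    # Poll every few seconds until the admin sets the "go" flag in the web UI
    while not server.can_start(client_id):
        time.sleep(5)
    profile = server.get_install_profile(client_id)
    for step in profile["steps"]:
        # ... perform the install step ...
        server.update_status(client_id, step)  # so the server knows where we are
```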
Issues were pointed out in the current design for mass installation due to portage, compiling, and customization, among other things. What is needed, alongside a normal full installation, is the ability to install to a chroot environment (i.e. no partitioning or bootloader), which then becomes a "stage4" that can be installed onto other machines much like an image. This led to a brief discussion of "roles" or "modes" for the installer to run under. Among the "roles" would be full install, stage4 install, and install to chroot. No decision was made on the best way to implement this.
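One possible way to model the "roles" idea is for each role to enable a subset of the install steps. Since the meeting left implementation undecided, everything here (the role names, the step list, and which steps each role skips) is purely illustrative:

```python
# Illustrative sketch of installer "roles"; nothing here is the actual GLI design.
from enum import Enum

class Role(Enum):
    FULL = "full"      # normal install: partition, install, bootloader
    CHROOT = "chroot"  # install into a directory; no partitioning or bootloader
    STAGE4 = "stage4"  # deploy a prebuilt stage4, image-style

ALL_STEPS = ["partition", "unpack_stage", "build_packages", "bootloader"]

# Steps each role skips (a guess at the semantics discussed, not a decision)
SKIPPED = {
    Role.FULL: set(),
    Role.CHROOT: {"partition", "bootloader"},
    Role.STAGE4: {"build_packages"},  # packages arrive prebuilt in the image
}

def steps_for(role):
    """Return the install steps a given role would actually run."""
    return [s for s in ALL_STEPS if s not in SKIPPED[role]]
```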
The discussion then turned towards the development of a new project that would integrate with the installer and cover enterprise system management in general. Examples of existing programs like SystemInstaller (which uses SystemImager to install) and m23 (http://m23.sourceforge.net) were discussed.
20:42 <@esammer> installation shouldn't be considered a one time thing. it bleeds into maintenance and configuration management.
20:44 <@codeman> once the profile can be defined (a normal GLI one) and then use that profile to install to a chroot, the user can then customize that install to their hearts content. This becomes the "lead" or "staging" machine of the group.
20:44 <@codeman> once that is done, then using a "stage4" role, the same profile can be used in conjunction with the "lead" machine to install to any number of machines
20:46 <@agaffney> so post-install, this machine would get an update and then send it out to the other members of the group via a binary.
There was some discussion of the feasibility of a rollback feature in this new utility, which brought up talk of the future features of portage 3.x:
20:48 <+antarus|osx> So Savior is all about user power, you want thing X, you just need to derive from Framework classes and implement it
20:50 <+antarus|osx> Remote Tree, Remote VDB, SQL cache backends, Portage Daemon
Discussion shifted back to the structure and scope of the new project, with wolf31o2 giving RedHat's Satellite program as an example.
21:08 <@codeman> well the highest level i think should just be linking the lower-level projects
21:08 <@blackace> GLMD in the server project, uses optional components GLR in the server project, and GLI in the releng project.
There was an idea to use the package list of each GLI profile as an overlay of sorts for a centralized portage.
21:19 <+wolf31o2> now... using portage, this would be simple... keep the packages on the "server" box... need to rollback, BINHOST="http://server/channel" emerge --oneshot -K =cat/pkg-version
After some more general discussion, the issue of identifying the individual machines was brought up. MAC address was suggested, along with a unique ID, SSH identification keys, and IP address.
21:33 <@blackace> the server isn't in control from the point of view of linux permissions...ie. the server doesn't just tell the client to do stuff...the client has to ask "is there anything you want me to do?"
Best line of the night goes to:
21:35 <+wolf31o2> it's all like... "yo server... what you got fo me?" and the server is like "I gotz DEEZ NUTZ!!!" and the client is like "step off, brother... I don't need your jive"
Lastly, Nagios integration was discussed, with the central server providing each machine's information to Nagios to aid in its configuration.
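Since the central server already knows each client's hostname and address, it could emit Nagios object definitions directly. A hypothetical sketch (the `generic-host` template exists in Nagios's sample configuration, but wiring this into the server is purely speculative):

```python
# Hypothetical: render a minimal Nagios host definition for one managed client.
def nagios_host_def(hostname, ip):
    """Return a Nagios 'define host' block for the given client."""
    return ("define host {\n"
            "    use        generic-host\n"  # template from Nagios's sample config
            "    host_name  %s\n"
            "    address    %s\n"
            "}\n" % (hostname, ip))
```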
We agreed to continue working on the design, come up with some more ideas and plans, and then hold a meeting on December 20th to collect them together.
Meeting ended around 11pm EST (timestamps are in CST).