Gentoo Archives: gentoo-cluster

From: Jimmy Rosen <listjiro@×××××.com>
To: Eric Thibodeau <kyron@××××××××.com>, gentoo-cluster@l.g.o
Subject: Re: [gentoo-cluster] Gentoo clustering "à la" www.clustermatic.org (PXE booted nodes)
Date: Fri, 25 Nov 2005 17:10:02
Message-Id: 200511251809.19032.listjiro@gmail.com
On Thursday 24 November 2005 14.20, you wrote:
> Hi Jimmy,
> As I stated in some other reply, any bit of information is
> definitely deemed useful.
>
> > No duplicate system trees and binaries, just tweak with
> > selective /etc/init.d scripts and such.
>
> Such as...

You just run different services on different machines. E.g. there is no
need to have nfsd running on a diskless node, or xdm/cups on a headless
node.
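For example (just a sketch, and the exact service names depend on what you
actually have installed), on the diskless node image you might trim the
default runlevel with something like:

    # on the diskless node image: drop services a diskless/headless node
    # doesn't need, keep the ones it does (names are only examples)
    rc-update del nfs default
    rc-update del xdm default
    rc-update del cupsd default
    rc-update add sshd default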

>
> > A few init.d scripts have to be slightly changed from the Gentoo
> > originals, along with /sbin/rc and /sbin/functions.sh.
> > The nodes can have different config files. But that is a very
> > minor overhead.
>
> Again, which files and what are the modifications?

/sbin/rc needs to be intercepted to allow for diskless mounting of the
root filesystem over NFS, before the rest of the system tries to wake up.
Then overload the config areas that need to be different on different
machines, like runlevels, fstab, etc.
/sbin/functions.sh just adds a function that checks whether it's a
diskless node or not.
E.g. intercept checkroot and checkfs on diskless nodes, where they
shouldn't happen. Don't try to raise eth0, which has already been brought
up by dhcp/the kernel, and make sure you don't take it down. lol.
Don't unmount the NFS root on diskless nodes when shutting down, right?
That kind of thing. I'll send you the files.
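To give you a rough idea before the real files arrive (this is a simplified
sketch, not the exact code I use), the kind of helper I mean in
/sbin/functions.sh, and how an init script can use it:

    # sketch: helper added to /sbin/functions.sh
    is_diskless() {
        # a root filesystem mounted over NFS (set up by the kernel via
        # nfsroot=...) is a good enough sign that this is a diskless node
        grep -q ' / nfs ' /proc/mounts
    }

    # sketch: used in scripts that must behave differently, e.g. checkroot
    if is_diskless ; then
        ewarn "Diskless node: skipping root filesystem check"
        return 0
    fi

The actual test can be anything you like (kernel command line, a flag file
on the NFS root, hostname pattern); the point is just to have one central
check that all the tweaked scripts share.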

>
> > I have posted on this topic before, but there hasn't been much
> > interest. If you want to try it then contact me and I'll see if I
> > can whip up a quick terse description. Then we can flesh it out
> > as we go. But I don't have time to actually write something nice
> > for quite a while, unfortunately.
> >
> > Harebrafolk
> > Jimmy
>
> Ah, see, that is where I come in. I _have_ to build this cluster
> and it's part of my mandate to document this, and the document is
> supposed to be published in one way or another. I definitely intend
> to have this built as a "Classic" Beowulf with PXE-booted nodes,
> each with local disks for local caching of large amounts of data to
> be processed.
>
> So, if you don't have time to describe the changes and all, just
> send me some or all of the files which required modifications; I'll
> attempt to figure it out with diff and document this.
>
> Thanks,
>
> Eric Thibodeau

Ok, sounds like you have your work cut out for a little while.
I'll contact you off list about how I can upload the stuff to you.
Also, take a look at:
http://www.gentoo.org/doc/en/diskless-howto.xml
http://forums.gentoo.org/viewtopic.php?t=54293
http://www.gentoo.org/doc/en/altinstall.xml
http://www.gentoo.org/proj/en/cluster/hpc-howto.xml
http://www.gentoo.org/doc/en/openmosix-howto.xml
http://www.gentoo.org/proj/en/cluster/
http://oss.linbit.com/csync2/
http://www.gentoo.org/proj/en/cluster/demo-lwe.xml
(I'm not sure what you want it for, so I can't tell if any of the
above is of interest)

But: to me a "classic" Beowulf is just a bunch of dedicated homogeneous
machines with mirrored system images and mpi/pvm and batch queuing.
Why go through the hassle of setting up a more complex cluster system
with PXE booting? The classic stuff can be solved with more or less just
an rsync script (something like the sketch below). If you already have
the dedicated homogeneous hardware, why not save a lot of time and do
what everyone else has been doing (tried and true, eh?) for many years,
and for which there are tons of available systems and good documentation?
I'm just curious. After all, I ended up rolling my own solution since
I couldn't find anything that suited my needs at the time.
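For what it's worth, the rsync approach is little more than this kind of
loop (hostnames, paths and excludes below are just placeholders):

    #!/bin/sh
    # sketch: push the master system image out to each compute node
    # (-x keeps rsync on the root filesystem, so /proc, /sys etc. are skipped)
    NODES="node01 node02 node03"
    for n in $NODES ; do
        rsync -aHx --delete \
            --exclude='/etc/fstab' \
            --exclude='/etc/conf.d/hostname' \
            / "root@${n}:/"
    done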

Harebrafolk
Jimmy
--
gentoo-cluster@g.o mailing list