public inbox for gentoo-catalyst@lists.gentoo.org
* [gentoo-catalyst] Re-organize the python structure
@ 2014-01-12  1:46 Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  1:46 UTC (permalink / raw)
  To: gentoo-catalyst

 [PATCH 1/5] Initial rearrangement of the python directories
 [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of...
 [PATCH 3/5] Rename the modules subpkg to targets, to better reflect...
 [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory
 [PATCH 5/5] setup.py: Add distutils-based packaging

This series of patches moves catalyst into a more conventional 
python code structure.  After this series, the -9999 ebuild 
will need to be swapped out for the 2.9999 live ebuild.
Each commit works individually without breakage, but each one 
builds on changes made in the previous commits.
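
The new layout (a bin/catalyst entry script plus a catalyst/ package) is the
conventional shape that distutils can package directly, which is what PATCH 5/5
adds.  As an illustrative sketch only (not the actual setup.py from the patch;
version and package names are taken from the diffstat, and distutils itself was
removed in Python 3.12):

```python
# Minimal distutils packaging sketch for the post-series layout.
# Hypothetical field values; the real setup.py is in PATCH 5/5.
from distutils.core import setup

setup(
    name="catalyst",
    version="2.9999",
    packages=["catalyst", "catalyst.arch", "catalyst.modules"],
    scripts=["bin/catalyst"],  # the new minimal start script
)
```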


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories
  2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
@ 2014-01-12  1:46 ` Brian Dolbec
  2014-01-12 20:25   ` Brian Dolbec
                     ` (2 more replies)
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of modules, into the catalyst namespace Brian Dolbec
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  1:46 UTC (permalink / raw)
  To: gentoo-catalyst; +Cc: Brian Dolbec

Add a new minimal start script, moving the original catalyst script to catalyst/main.py.
Add __init__.py files to the modules and arch sub-packages.
Skip __init__.py when loading the modules.
Update the module loading paths for the new locations.
Fix the catalyst_support import for its new location and specify the imported modules.
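
The loading scheme described above (scan a sub-package for plugin modules,
skip __init__.py, and collect each module's register() mapping, as the arch
plugins below define) can be sketched like this.  This is an illustrative
reimplementation, not the code from catalyst/main.py, and the load_plugins
name is invented:

```python
# Sketch: discover plugin modules in a package, skipping __init__.py,
# and merge the (subarch_map, machine_ids) tuples from register().
import importlib
import pkgutil

def load_plugins(package_name):
    """Import every module in `package_name` and collect register() output."""
    package = importlib.import_module(package_name)
    subarches = {}
    machines = set()
    # pkgutil.iter_modules never yields __init__, so it is skipped
    # implicitly; an os.listdir()-based loader must filter it out by name.
    for _finder, name, _ispkg in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(package_name + "." + name)
        if hasattr(module, "register"):
            arches, machine_ids = module.register()
            subarches.update(arches)
            machines.update(machine_ids)
    return subarches, machines
```

Each arch module stays self-contained: adding a new builder only requires
dropping a file with a register() function into the package.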
---
 arch/alpha.py                            |   75 --
 arch/amd64.py                            |   83 --
 arch/arm.py                              |  133 ---
 arch/hppa.py                             |   40 -
 arch/ia64.py                             |   16 -
 arch/mips.py                             |  464 --------
 arch/powerpc.py                          |  124 ---
 arch/s390.py                             |   33 -
 arch/sh.py                               |  116 --
 arch/sparc.py                            |   42 -
 arch/x86.py                              |  153 ---
 bin/catalyst                             |   46 +
 catalyst                                 |  419 -------
 catalyst/__init__.py                     |    0
 catalyst/arch/__init__.py                |    1 +
 catalyst/arch/alpha.py                   |   75 ++
 catalyst/arch/amd64.py                   |   83 ++
 catalyst/arch/arm.py                     |  133 +++
 catalyst/arch/hppa.py                    |   40 +
 catalyst/arch/ia64.py                    |   16 +
 catalyst/arch/mips.py                    |  464 ++++++++
 catalyst/arch/powerpc.py                 |  124 +++
 catalyst/arch/s390.py                    |   33 +
 catalyst/arch/sh.py                      |  116 ++
 catalyst/arch/sparc.py                   |   42 +
 catalyst/arch/x86.py                     |  153 +++
 catalyst/config.py                       |  122 +++
 catalyst/main.py                         |  428 ++++++++
 catalyst/modules/__init__.py             |    1 +
 catalyst/modules/builder.py              |   20 +
 catalyst/modules/catalyst_lock.py        |  468 ++++++++
 catalyst/modules/catalyst_support.py     |  718 ++++++++++++
 catalyst/modules/embedded_target.py      |   51 +
 catalyst/modules/generic_stage_target.py | 1741 ++++++++++++++++++++++++++++++
 catalyst/modules/generic_target.py       |   11 +
 catalyst/modules/grp_target.py           |  118 ++
 catalyst/modules/livecd_stage1_target.py |   75 ++
 catalyst/modules/livecd_stage2_target.py |  148 +++
 catalyst/modules/netboot2_target.py      |  166 +++
 catalyst/modules/netboot_target.py       |  128 +++
 catalyst/modules/snapshot_target.py      |   91 ++
 catalyst/modules/stage1_target.py        |   97 ++
 catalyst/modules/stage2_target.py        |   66 ++
 catalyst/modules/stage3_target.py        |   31 +
 catalyst/modules/stage4_target.py        |   43 +
 catalyst/modules/tinderbox_target.py     |   48 +
 catalyst/util.py                         |   14 +
 modules/__init__.py                      |    0
 modules/builder.py                       |   20 -
 modules/catalyst/__init__.py             |    0
 modules/catalyst/config.py               |  122 ---
 modules/catalyst/util.py                 |   14 -
 modules/catalyst_lock.py                 |  468 --------
 modules/catalyst_support.py              |  718 ------------
 modules/embedded_target.py               |   51 -
 modules/generic_stage_target.py          | 1740 -----------------------------
 modules/generic_target.py                |   11 -
 modules/grp_target.py                    |  118 --
 modules/livecd_stage1_target.py          |   75 --
 modules/livecd_stage2_target.py          |  148 ---
 modules/netboot2_target.py               |  166 ---
 modules/netboot_target.py                |  128 ---
 modules/snapshot_target.py               |   91 --
 modules/stage1_target.py                 |   97 --
 modules/stage2_target.py                 |   66 --
 modules/stage3_target.py                 |   31 -
 modules/stage4_target.py                 |   43 -
 modules/tinderbox_target.py              |   48 -
 68 files changed, 5911 insertions(+), 5853 deletions(-)
 delete mode 100644 arch/alpha.py
 delete mode 100644 arch/amd64.py
 delete mode 100644 arch/arm.py
 delete mode 100644 arch/hppa.py
 delete mode 100644 arch/ia64.py
 delete mode 100644 arch/mips.py
 delete mode 100644 arch/powerpc.py
 delete mode 100644 arch/s390.py
 delete mode 100644 arch/sh.py
 delete mode 100644 arch/sparc.py
 delete mode 100644 arch/x86.py
 create mode 100755 bin/catalyst
 delete mode 100755 catalyst
 create mode 100644 catalyst/__init__.py
 create mode 100644 catalyst/arch/__init__.py
 create mode 100644 catalyst/arch/alpha.py
 create mode 100644 catalyst/arch/amd64.py
 create mode 100644 catalyst/arch/arm.py
 create mode 100644 catalyst/arch/hppa.py
 create mode 100644 catalyst/arch/ia64.py
 create mode 100644 catalyst/arch/mips.py
 create mode 100644 catalyst/arch/powerpc.py
 create mode 100644 catalyst/arch/s390.py
 create mode 100644 catalyst/arch/sh.py
 create mode 100644 catalyst/arch/sparc.py
 create mode 100644 catalyst/arch/x86.py
 create mode 100644 catalyst/config.py
 create mode 100644 catalyst/main.py
 create mode 100644 catalyst/modules/__init__.py
 create mode 100644 catalyst/modules/builder.py
 create mode 100644 catalyst/modules/catalyst_lock.py
 create mode 100644 catalyst/modules/catalyst_support.py
 create mode 100644 catalyst/modules/embedded_target.py
 create mode 100644 catalyst/modules/generic_stage_target.py
 create mode 100644 catalyst/modules/generic_target.py
 create mode 100644 catalyst/modules/grp_target.py
 create mode 100644 catalyst/modules/livecd_stage1_target.py
 create mode 100644 catalyst/modules/livecd_stage2_target.py
 create mode 100644 catalyst/modules/netboot2_target.py
 create mode 100644 catalyst/modules/netboot_target.py
 create mode 100644 catalyst/modules/snapshot_target.py
 create mode 100644 catalyst/modules/stage1_target.py
 create mode 100644 catalyst/modules/stage2_target.py
 create mode 100644 catalyst/modules/stage3_target.py
 create mode 100644 catalyst/modules/stage4_target.py
 create mode 100644 catalyst/modules/tinderbox_target.py
 create mode 100644 catalyst/util.py
 delete mode 100644 modules/__init__.py
 delete mode 100644 modules/builder.py
 delete mode 100644 modules/catalyst/__init__.py
 delete mode 100644 modules/catalyst/config.py
 delete mode 100644 modules/catalyst/util.py
 delete mode 100644 modules/catalyst_lock.py
 delete mode 100644 modules/catalyst_support.py
 delete mode 100644 modules/embedded_target.py
 delete mode 100644 modules/generic_stage_target.py
 delete mode 100644 modules/generic_target.py
 delete mode 100644 modules/grp_target.py
 delete mode 100644 modules/livecd_stage1_target.py
 delete mode 100644 modules/livecd_stage2_target.py
 delete mode 100644 modules/netboot2_target.py
 delete mode 100644 modules/netboot_target.py
 delete mode 100644 modules/snapshot_target.py
 delete mode 100644 modules/stage1_target.py
 delete mode 100644 modules/stage2_target.py
 delete mode 100644 modules/stage3_target.py
 delete mode 100644 modules/stage4_target.py
 delete mode 100644 modules/tinderbox_target.py

diff --git a/arch/alpha.py b/arch/alpha.py
deleted file mode 100644
index f0fc95a..0000000
--- a/arch/alpha.py
+++ /dev/null
@@ -1,75 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_alpha(builder.generic):
-	"abstract base class for all alpha builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CFLAGS"]="-mieee -pipe"
-
-class arch_alpha(generic_alpha):
-	"builder class for generic alpha (ev4+)"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev4"
-		self.settings["CHOST"]="alpha-unknown-linux-gnu"
-
-class arch_ev4(generic_alpha):
-	"builder class for alpha ev4"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev4"
-		self.settings["CHOST"]="alphaev4-unknown-linux-gnu"
-
-class arch_ev45(generic_alpha):
-	"builder class for alpha ev45"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev45"
-		self.settings["CHOST"]="alphaev45-unknown-linux-gnu"
-
-class arch_ev5(generic_alpha):
-	"builder class for alpha ev5"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev5"
-		self.settings["CHOST"]="alphaev5-unknown-linux-gnu"
-
-class arch_ev56(generic_alpha):
-	"builder class for alpha ev56 (ev5 plus BWX)"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev56"
-		self.settings["CHOST"]="alphaev56-unknown-linux-gnu"
-
-class arch_pca56(generic_alpha):
-	"builder class for alpha pca56 (ev5 plus BWX & MAX)"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=pca56"
-		self.settings["CHOST"]="alphaev56-unknown-linux-gnu"
-
-class arch_ev6(generic_alpha):
-	"builder class for alpha ev6"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev6"
-		self.settings["CHOST"]="alphaev6-unknown-linux-gnu"
-		self.settings["HOSTUSE"]=["ev6"]
-
-class arch_ev67(generic_alpha):
-	"builder class for alpha ev67 (ev6 plus CIX)"
-	def __init__(self,myspec):
-		generic_alpha.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -O2 -mcpu=ev67"
-		self.settings["CHOST"]="alphaev67-unknown-linux-gnu"
-		self.settings["HOSTUSE"]=["ev6"]
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({ "alpha":arch_alpha, "ev4":arch_ev4, "ev45":arch_ev45,
-		"ev5":arch_ev5, "ev56":arch_ev56, "pca56":arch_pca56,
-		"ev6":arch_ev6, "ev67":arch_ev67 },
-	("alpha", ))
diff --git a/arch/amd64.py b/arch/amd64.py
deleted file mode 100644
index 262b55a..0000000
--- a/arch/amd64.py
+++ /dev/null
@@ -1,83 +0,0 @@
-
-import builder
-
-class generic_amd64(builder.generic):
-	"abstract base class for all amd64 builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class arch_amd64(generic_amd64):
-	"builder class for generic amd64 (Intel and AMD)"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
-
-class arch_nocona(generic_amd64):
-	"improved version of Intel Pentium 4 CPU with 64-bit extensions, MMX, SSE, SSE2 and SSE3 support"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=nocona -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
-
-# Requires gcc 4.3 to use this class
-class arch_core2(generic_amd64):
-	"Intel Core 2 CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3 support"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=core2 -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2","ssse3"]
-
-class arch_k8(generic_amd64):
-	"generic k8, opteron and athlon64 support"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=k8 -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2","3dnow"]
-
-class arch_k8_sse3(generic_amd64):
-	"improved versions of k8, opteron and athlon64 with SSE3 support"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=k8-sse3 -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2","3dnow"]
-
-class arch_amdfam10(generic_amd64):
-	"AMD Family 10h core based CPUs with x86-64 instruction set support"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=amdfam10 -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2","3dnow"]
-
-class arch_x32(generic_amd64):
-	"builder class for generic x32 (Intel and AMD)"
-	def __init__(self,myspec):
-		generic_amd64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="x86_64-pc-linux-gnux32"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
-
-def register():
-	"inform main catalyst program of the contents of this plugin"
-	return ({
-		"amd64"		: arch_amd64,
-		"k8"		: arch_k8,
-		"opteron"	: arch_k8,
-		"athlon64"	: arch_k8,
-		"athlonfx"	: arch_k8,
-		"nocona"	: arch_nocona,
-		"core2"		: arch_core2,
-		"k8-sse3"	: arch_k8_sse3,
-		"opteron-sse3"	: arch_k8_sse3,
-		"athlon64-sse3"	: arch_k8_sse3,
-		"amdfam10"	: arch_amdfam10,
-		"barcelona"	: arch_amdfam10,
-		"x32"		: arch_x32,
-	}, ("x86_64","amd64","nocona"))
diff --git a/arch/arm.py b/arch/arm.py
deleted file mode 100644
index 2de3942..0000000
--- a/arch/arm.py
+++ /dev/null
@@ -1,133 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_arm(builder.generic):
-	"Abstract base class for all arm (little endian) builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CFLAGS"]="-O2 -pipe"
-
-class generic_armeb(builder.generic):
-	"Abstract base class for all arm (big endian) builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CFLAGS"]="-O2 -pipe"
-
-class arch_arm(generic_arm):
-	"Builder class for arm (little endian) target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="arm-unknown-linux-gnu"
-
-class arch_armeb(generic_armeb):
-	"Builder class for arm (big endian) target"
-	def __init__(self,myspec):
-		generic_armeb.__init__(self,myspec)
-		self.settings["CHOST"]="armeb-unknown-linux-gnu"
-
-class arch_armv4l(generic_arm):
-	"Builder class for armv4l target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv4l-unknown-linux-gnu"
-		self.settings["CFLAGS"]+=" -march=armv4"
-
-class arch_armv4tl(generic_arm):
-	"Builder class for armv4tl target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv4tl-softfloat-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv4t"
-
-class arch_armv5tl(generic_arm):
-	"Builder class for armv5tl target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv5tl-softfloat-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv5t"
-
-class arch_armv5tel(generic_arm):
-	"Builder class for armv5tel target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv5tel-softfloat-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv5te"
-
-class arch_armv5tejl(generic_arm):
-	"Builder class for armv5tejl target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv5tejl-softfloat-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv5te"
-
-class arch_armv6j(generic_arm):
-	"Builder class for armv6j target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv6j-softfp-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv6j -mfpu=vfp -mfloat-abi=softfp"
-
-class arch_armv6z(generic_arm):
-	"Builder class for armv6z target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv6z-softfp-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv6z -mfpu=vfp -mfloat-abi=softfp"
-
-class arch_armv6zk(generic_arm):
-	"Builder class for armv6zk target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv6zk-softfp-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv6zk -mfpu=vfp -mfloat-abi=softfp"
-
-class arch_armv7a(generic_arm):
-	"Builder class for armv7a target"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv7a-softfp-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=softfp"
-
-class arch_armv6j_hardfp(generic_arm):
-	"Builder class for armv6j hardfloat target, needs >=gcc-4.5"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv6j-hardfloat-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv6j -mfpu=vfp -mfloat-abi=hard"
-
-class arch_armv7a_hardfp(generic_arm):
-	"Builder class for armv7a hardfloat target, needs >=gcc-4.5"
-	def __init__(self,myspec):
-		generic_arm.__init__(self,myspec)
-		self.settings["CHOST"]="armv7a-hardfloat-linux-gnueabi"
-		self.settings["CFLAGS"]+=" -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard"
-
-class arch_armv5teb(generic_armeb):
-	"Builder class for armv5teb (XScale) target"
-	def __init__(self,myspec):
-		generic_armeb.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -mcpu=xscale"
-		self.settings["CHOST"]="armv5teb-softfloat-linux-gnueabi"
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-		"arm"    : arch_arm,
-		"armv4l" : arch_armv4l,
-		"armv4tl": arch_armv4tl,
-		"armv5tl": arch_armv5tl,
-		"armv5tel": arch_armv5tel,
-		"armv5tejl": arch_armv5tejl,
-		"armv6j" : arch_armv6j,
-		"armv6z" : arch_armv6z,
-		"armv6zk" : arch_armv6zk,
-		"armv7a" : arch_armv7a,
-		"armv6j_hardfp" : arch_armv6j_hardfp,
-		"armv7a_hardfp" : arch_armv7a_hardfp,
-		"armeb"  : arch_armeb,
-		"armv5teb" : arch_armv5teb
-	}, ("arm", "armv4l", "armv4tl", "armv5tl", "armv5tel", "armv5tejl", "armv6l",
-"armv7l", "armeb", "armv5teb") )
diff --git a/arch/hppa.py b/arch/hppa.py
deleted file mode 100644
index f804398..0000000
--- a/arch/hppa.py
+++ /dev/null
@@ -1,40 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_hppa(builder.generic):
-	"Abstract base class for all hppa builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CXXFLAGS"]="-O2 -pipe"
-
-class arch_hppa(generic_hppa):
-	"Builder class for hppa systems"
-	def __init__(self,myspec):
-		generic_hppa.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -march=1.0"
-		self.settings["CHOST"]="hppa-unknown-linux-gnu"
-
-class arch_hppa1_1(generic_hppa):
-	"Builder class for hppa 1.1 systems"
-	def __init__(self,myspec):
-		generic_hppa.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -march=1.1"
-		self.settings["CHOST"]="hppa1.1-unknown-linux-gnu"
-
-class arch_hppa2_0(generic_hppa):
-	"Builder class for hppa 2.0 systems"
-	def __init__(self,myspec):
-		generic_hppa.__init__(self,myspec)
-		self.settings["CFLAGS"]+=" -march=2.0"
-		self.settings["CHOST"]="hppa2.0-unknown-linux-gnu"
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-			"hppa":		arch_hppa,
-			"hppa1.1":	arch_hppa1_1,
-			"hppa2.0":	arch_hppa2_0
-	}, ("parisc","parisc64","hppa","hppa64") )
diff --git a/arch/ia64.py b/arch/ia64.py
deleted file mode 100644
index 825af70..0000000
--- a/arch/ia64.py
+++ /dev/null
@@ -1,16 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class arch_ia64(builder.generic):
-	"builder class for ia64"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="ia64-unknown-linux-gnu"
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({ "ia64":arch_ia64 }, ("ia64", ))
diff --git a/arch/mips.py b/arch/mips.py
deleted file mode 100644
index b3730fa..0000000
--- a/arch/mips.py
+++ /dev/null
@@ -1,464 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_mips(builder.generic):
-	"Abstract base class for all mips builders [Big-endian]"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CHOST"]="mips-unknown-linux-gnu"
-
-class generic_mipsel(builder.generic):
-	"Abstract base class for all mipsel builders [Little-endian]"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CHOST"]="mipsel-unknown-linux-gnu"
-
-class generic_mips64(builder.generic):
-	"Abstract base class for all mips64 builders [Big-endian]"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CHOST"]="mips64-unknown-linux-gnu"
-
-class generic_mips64el(builder.generic):
-	"Abstract base class for all mips64el builders [Little-endian]"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-		self.settings["CHOST"]="mips64el-unknown-linux-gnu"
-
-class arch_mips1(generic_mips):
-	"Builder class for MIPS I [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips1 -mabi=32 -mplt -pipe"
-
-class arch_mips32(generic_mips):
-	"Builder class for MIPS 32 [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
-
-class arch_mips32_softfloat(generic_mips):
-	"Builder class for MIPS 32 [Big-endian softfloat]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
-		self.settings["CHOST"]="mips-softfloat-linux-gnu"
-
-class arch_mips32r2(generic_mips):
-	"Builder class for MIPS 32r2 [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
-
-class arch_mips32r2_softfloat(generic_mips):
-	"Builder class for MIPS 32r2 [Big-endian softfloat]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
-		self.settings["CHOST"]="mips-softfloat-linux-gnu"
-
-class arch_mips3(generic_mips):
-	"Builder class for MIPS III [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=32 -mplt -mfix-r4000 -mfix-r4400 -pipe"
-
-class arch_mips3_n32(generic_mips64):
-	"Builder class for MIPS III [Big-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=n32 -mplt -mfix-r4000 -mfix-r4400 -pipe"
-
-class arch_mips3_n64(generic_mips64):
-	"Builder class for MIPS III [Big-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=64 -mfix-r4000 -mfix-r4400 -pipe"
-
-class arch_mips3_multilib(generic_mips64):
-	"Builder class for MIPS III [Big-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mplt -mfix-r4000 -mfix-r4400 -pipe"
-
-class arch_mips4(generic_mips):
-	"Builder class for MIPS IV [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=32 -mplt -pipe"
-
-class arch_mips4_n32(generic_mips64):
-	"Builder class for MIPS IV [Big-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=n32 -mplt -pipe"
-
-class arch_mips4_n64(generic_mips64):
-	"Builder class for MIPS IV [Big-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=64 -pipe"
-
-class arch_mips4_multilib(generic_mips64):
-	"Builder class for MIPS IV [Big-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mplt -pipe"
-
-class arch_mips4_r10k(generic_mips):
-	"Builder class for MIPS IV R10k [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=r10k -mabi=32 -mplt -pipe"
-
-class arch_mips4_r10k_n32(generic_mips64):
-	"Builder class for MIPS IV R10k [Big-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=r10k -mabi=n32 -mplt -pipe"
-
-class arch_mips4_r10k_n64(generic_mips64):
-	"Builder class for MIPS IV R10k [Big-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=r10k -mabi=64 -pipe"
-
-class arch_mips4_r10k_multilib(generic_mips64):
-	"Builder class for MIPS IV R10k [Big-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=r10k -mplt -pipe"
-
-class arch_mips64(generic_mips):
-	"Builder class for MIPS 64 [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=32 -mplt -pipe"
-
-class arch_mips64_n32(generic_mips64):
-	"Builder class for MIPS 64 [Big-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=n32 -mplt -pipe"
-
-class arch_mips64_n64(generic_mips64):
-	"Builder class for MIPS 64 [Big-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=64 -pipe"
-
-class arch_mips64_multilib(generic_mips64):
-	"Builder class for MIPS 64 [Big-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mplt -pipe"
-
-class arch_mips64r2(generic_mips):
-	"Builder class for MIPS 64r2 [Big-endian]"
-	def __init__(self,myspec):
-		generic_mips.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=32 -mplt -pipe"
-
-class arch_mips64r2_n32(generic_mips64):
-	"Builder class for MIPS 64r2 [Big-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=n32 -mplt -pipe"
-
-class arch_mips64r2_n64(generic_mips64):
-	"Builder class for MIPS 64r2 [Big-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=64 -pipe"
-
-class arch_mips64r2_multilib(generic_mips64):
-	"Builder class for MIPS 64r2 [Big-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mplt -pipe"
-
-class arch_mipsel1(generic_mipsel):
-	"Builder class for MIPS I [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips1 -mabi=32 -mplt -pipe"
-
-class arch_mips32el(generic_mipsel):
-	"Builder class for MIPS 32 [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
-
-class arch_mips32el_softfloat(generic_mipsel):
-	"Builder class for MIPS 32 [Little-endian softfloat]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
-		self.settings["CHOST"]="mipsel-softfloat-linux-gnu"
-
-class arch_mips32r2el(generic_mipsel):
-	"Builder class for MIPS 32r2 [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
-
-class arch_mips32r2el_softfloat(generic_mipsel):
-	"Builder class for MIPS 32r2 [Little-endian softfloat]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
-		self.settings["CHOST"]="mipsel-softfloat-linux-gnu"
-
-class arch_mipsel3(generic_mipsel):
-	"Builder class for MIPS III [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_mipsel3_n32(generic_mips64el):
-	"Builder class for MIPS III [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=n32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_mipsel3_n64(generic_mips64el):
-	"Builder class for MIPS III [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=64 -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_mipsel3_multilib(generic_mips64el):
-	"Builder class for MIPS III [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips3 -mplt -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_loongson2e(generic_mipsel):
-	"Builder class for Loongson 2E [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2e -mabi=32 -mplt -pipe"
-
-class arch_loongson2e_n32(generic_mips64el):
-	"Builder class for Loongson 2E [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2e -mabi=n32 -mplt -pipe"
-
-class arch_loongson2e_n64(generic_mips64el):
-	"Builder class for Loongson 2E [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2e -mabi=64 -pipe"
-
-class arch_loongson2e_multilib(generic_mips64el):
-	"Builder class for Loongson 2E [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2e -mplt -pipe"
-
-class arch_loongson2f(generic_mipsel):
-	"Builder class for Loongson 2F [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2f -mabi=32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_loongson2f_n32(generic_mips64el):
-	"Builder class for Loongson 2F [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2f -mabi=n32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_loongson2f_n64(generic_mips64el):
-	"Builder class for Loongson 2F [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2f -mabi=64 -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_loongson2f_multilib(generic_mips64el):
-	"Builder class for Loongson 2F [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson2f -mplt -Wa,-mfix-loongson2f-nop -pipe"
-
-class arch_mipsel4(generic_mipsel):
-	"Builder class for MIPS IV [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=32 -mplt -pipe"
-
-class arch_mipsel4_n32(generic_mips64el):
-	"Builder class for MIPS IV [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=n32 -mplt -pipe"
-
-class arch_mipsel4_n64(generic_mips64el):
-	"Builder class for MIPS IV [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=64 -pipe"
-
-class arch_mipsel4_multilib(generic_mips64el):
-	"Builder class for MIPS IV [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips4 -mplt -pipe"
-
-class arch_mips64el(generic_mipsel):
-	"Builder class for MIPS 64 [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=32 -mplt -pipe"
-
-class arch_mips64el_n32(generic_mips64el):
-	"Builder class for MIPS 64 [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=n32 -mplt -pipe"
-
-class arch_mips64el_n64(generic_mips64el):
-	"Builder class for MIPS 64 [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=64 -pipe"
-
-class arch_mips64el_multilib(generic_mips64el):
-	"Builder class for MIPS 64 [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64 -mplt -pipe"
-
-class arch_mips64r2el(generic_mipsel):
-	"Builder class for MIPS 64r2 [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=32 -mplt -pipe"
-
-class arch_mips64r2el_n32(generic_mips64el):
-	"Builder class for MIPS 64r2 [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=n32 -mplt -pipe"
-
-class arch_mips64r2el_n64(generic_mips64el):
-	"Builder class for MIPS 64r2 [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=64 -pipe"
-
-class arch_mips64r2el_multilib(generic_mips64el):
-	"Builder class for MIPS 64r2 [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mplt -pipe"
-
-class arch_loongson3a(generic_mipsel):
-	"Builder class for Loongson 3A [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson3a -mabi=32 -mplt -pipe"
-
-class arch_loongson3a_n32(generic_mips64el):
-	"Builder class for Loongson 3A [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson3a -mabi=n32 -mplt -pipe"
-
-class arch_loongson3a_n64(generic_mips64el):
-	"Builder class for Loongson 3A [Little-endian N64]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson3a -mabi=64 -pipe"
-
-class arch_loongson3a_multilib(generic_mips64el):
-	"Builder class for Loongson 3A [Little-endian multilib]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=loongson3a -mplt -pipe"
-
-class arch_cobalt(generic_mipsel):
-	"Builder class for cobalt [Little-endian]"
-	def __init__(self,myspec):
-		generic_mipsel.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=r5000 -mabi=32 -mplt -pipe"
-		self.settings["HOSTUSE"]=["cobalt"]
-
-class arch_cobalt_n32(generic_mips64el):
-	"Builder class for cobalt [Little-endian N32]"
-	def __init__(self,myspec):
-		generic_mips64el.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=r5000 -mabi=n32 -mplt -pipe"
-		self.settings["HOSTUSE"]=["cobalt"]
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-			"cobalt"				: arch_cobalt,
-			"cobalt_n32"			: arch_cobalt_n32,
-			"mips"					: arch_mips1,
-			"mips1"					: arch_mips1,
-			"mips32"				: arch_mips32,
-			"mips32_softfloat"		: arch_mips32_softfloat,
-			"mips32r2"				: arch_mips32r2,
-			"mips32r2_softfloat"	: arch_mips32r2_softfloat,
-			"mips3"					: arch_mips3,
-			"mips3_n32"				: arch_mips3_n32,
-			"mips3_n64"				: arch_mips3_n64,
-			"mips3_multilib"		: arch_mips3_multilib,
-			"mips4"					: arch_mips4,
-			"mips4_n32"				: arch_mips4_n32,
-			"mips4_n64"				: arch_mips4_n64,
-			"mips4_multilib"		: arch_mips4_multilib,
-			"mips4_r10k"			: arch_mips4_r10k,
-			"mips4_r10k_n32"		: arch_mips4_r10k_n32,
-			"mips4_r10k_n64"		: arch_mips4_r10k_n64,
-			"mips4_r10k_multilib"	: arch_mips4_r10k_multilib,
-			"mips64"				: arch_mips64,
-			"mips64_n32"			: arch_mips64_n32,
-			"mips64_n64"			: arch_mips64_n64,
-			"mips64_multilib"		: arch_mips64_multilib,
-			"mips64r2"				: arch_mips64r2,
-			"mips64r2_n32"			: arch_mips64r2_n32,
-			"mips64r2_n64"			: arch_mips64r2_n64,
-			"mips64r2_multilib"		: arch_mips64r2_multilib,
-			"mipsel"				: arch_mipsel1,
-			"mipsel1"				: arch_mipsel1,
-			"mips32el"				: arch_mips32el,
-			"mips32el_softfloat"	: arch_mips32el_softfloat,
-			"mips32r2el"			: arch_mips32r2el,
-			"mips32r2el_softfloat"	: arch_mips32r2el_softfloat,
-			"mipsel3"				: arch_mipsel3,
-			"mipsel3_n32"			: arch_mipsel3_n32,
-			"mipsel3_n64"			: arch_mipsel3_n64,
-			"mipsel3_multilib"		: arch_mipsel3_multilib,
-			"mipsel4"				: arch_mipsel4,
-			"mipsel4_n32"			: arch_mipsel4_n32,
-			"mipsel4_n64"			: arch_mipsel4_n64,
-			"mipsel4_multilib"		: arch_mipsel4_multilib,
-			"mips64el"				: arch_mips64el,
-			"mips64el_n32"			: arch_mips64el_n32,
-			"mips64el_n64"			: arch_mips64el_n64,
-			"mips64el_multilib"		: arch_mips64el_multilib,
-			"mips64r2el"			: arch_mips64r2el,
-			"mips64r2el_n32"		: arch_mips64r2el_n32,
-			"mips64r2el_n64"		: arch_mips64r2el_n64,
-			"mips64r2el_multilib"	: arch_mips64r2el_multilib,
-			"loongson2e"			: arch_loongson2e,
-			"loongson2e_n32"		: arch_loongson2e_n32,
-			"loongson2e_n64"		: arch_loongson2e_n64,
-			"loongson2e_multilib"	: arch_loongson2e_multilib,
-			"loongson2f"			: arch_loongson2f,
-			"loongson2f_n32"		: arch_loongson2f_n32,
-			"loongson2f_n64"		: arch_loongson2f_n64,
-			"loongson2f_multilib"	: arch_loongson2f_multilib,
-			"loongson3a"			: arch_loongson3a,
-			"loongson3a_n32"		: arch_loongson3a_n32,
-			"loongson3a_n64"		: arch_loongson3a_n64,
-			"loongson3a_multilib"	: arch_loongson3a_multilib,
-	}, ("mips","mips64"))
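
[Annotation, not part of the patch.] Every arch plugin removed in this commit follows the same contract: module-level builder classes plus a `register()` function returning a dict of subarch names mapped to builder classes, and a tuple of machine types the plugin serves. A minimal self-contained sketch of that contract (the `generic_builder`/`arch_demo` names are illustrative stand-ins, not from catalyst itself):

```python
# Sketch of the catalyst arch-plugin registration contract.
# Class and subarch names below are hypothetical examples.

class generic_builder:
    """Stand-in for catalyst's builder.generic base class."""
    def __init__(self, myspec):
        self.settings = myspec

class arch_demo(generic_builder):
    "Builder class for a hypothetical subarch."
    def __init__(self, myspec):
        generic_builder.__init__(self, myspec)
        self.settings["CFLAGS"] = "-O2 -pipe"

def register():
    "Inform main catalyst program of the contents of this plugin."
    # (subarch-name -> builder-class mapping, supported machine tuple)
    return ({"demo": arch_demo}, ("demo",))

# How a loader consumes a plugin's register() output:
subarch_map, machines = register()
target = subarch_map["demo"]({})
```

The main program merges each plugin's mapping into one global targetmap, so looking up a subarch string yields the class whose `__init__` fills in `CFLAGS`, `CHOST`, and `HOSTUSE`.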
diff --git a/arch/powerpc.py b/arch/powerpc.py
deleted file mode 100644
index e9f611b..0000000
--- a/arch/powerpc.py
+++ /dev/null
@@ -1,124 +0,0 @@
-
-import os,builder
-from catalyst_support import *
-
-class generic_ppc(builder.generic):
-	"abstract base class for all 32-bit powerpc builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHOST"]="powerpc-unknown-linux-gnu"
-		if self.settings["buildarch"]=="ppc64":
-			if not os.path.exists("/bin/linux32") and not os.path.exists("/usr/bin/linux32"):
-				raise CatalystError,"required executable linux32 not found (\"emerge setarch\" to fix.)"
-			self.settings["CHROOT"]="linux32 chroot"
-			self.settings["crosscompile"] = False;
-		else:
-			self.settings["CHROOT"]="chroot"
-
-class generic_ppc64(builder.generic):
-	"abstract base class for all 64-bit powerpc builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class arch_ppc(generic_ppc):
-	"builder class for generic powerpc"
-	def __init__(self,myspec):
-		generic_ppc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=powerpc -mtune=powerpc -pipe"
-
-class arch_ppc64(generic_ppc64):
-	"builder class for generic ppc64"
-	def __init__(self,myspec):
-		generic_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="powerpc64-unknown-linux-gnu"
-
-class arch_970(arch_ppc64):
-	"builder class for 970 aka G5 under ppc64"
-	def __init__(self,myspec):
-		arch_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe -mcpu=970 -mtune=970"
-		self.settings["HOSTUSE"]=["altivec"]
-
-class arch_cell(arch_ppc64):
-	"builder class for cell under ppc64"
-	def __init__(self,myspec):
-		arch_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe -mcpu=cell -mtune=cell"
-		self.settings["HOSTUSE"]=["altivec","ibm"]
-
-class arch_g3(generic_ppc):
-	def __init__(self,myspec):
-		generic_ppc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=G3 -mtune=G3 -pipe"
-
-class arch_g4(generic_ppc):
-	def __init__(self,myspec):
-		generic_ppc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=G4 -mtune=G4 -maltivec -mabi=altivec -pipe"
-		self.settings["HOSTUSE"]=["altivec"]
-
-class arch_g5(generic_ppc):
-	def __init__(self,myspec):
-		generic_ppc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=G5 -mtune=G5 -maltivec -mabi=altivec -pipe"
-		self.settings["HOSTUSE"]=["altivec"]
-
-class arch_power(generic_ppc):
-	"builder class for generic power"
-	def __init__(self,myspec):
-		generic_ppc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=power -mtune=power -pipe"
-
-class arch_power_ppc(generic_ppc):
-	"builder class for generic powerpc/power"
-	def __init__(self,myspec):
-		generic_ppc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=common -mtune=common -pipe"
-
-class arch_power3(arch_ppc64):
-	"builder class for power3 under ppc64"
-	def __init__(self,myspec):
-		arch_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power3 -mtune=power3"
-		self.settings["HOSTUSE"]=["ibm"]
-
-class arch_power4(arch_ppc64):
-	"builder class for power4 under ppc64"
-	def __init__(self,myspec):
-		arch_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power4 -mtune=power4"
-		self.settings["HOSTUSE"]=["ibm"]
-
-class arch_power5(arch_ppc64):
-	"builder class for power5 under ppc64"
-	def __init__(self,myspec):
-		arch_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power5 -mtune=power5"
-		self.settings["HOSTUSE"]=["ibm"]
-
-class arch_power6(arch_ppc64):
-	"builder class for power6 under ppc64"
-	def __init__(self,myspec):
-		arch_ppc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power6 -mtune=power6"
-		self.settings["HOSTUSE"]=["altivec","ibm"]
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-		"970"		: arch_970,
-		"cell"		: arch_cell,
-		"g3"		: arch_g3,
-		"g4"		: arch_g4,
-		"g5"		: arch_g5,
-		"power"		: arch_power,
-		"power-ppc"	: arch_power_ppc,
-		"power3"	: arch_power3,
-		"power4"	: arch_power4,
-		"power5"	: arch_power5,
-		"power6"	: arch_power6,
-		"ppc"		: arch_ppc,
-		"ppc64"		: arch_ppc64
-	}, ("ppc","ppc64","powerpc","powerpc64"))
diff --git a/arch/s390.py b/arch/s390.py
deleted file mode 100644
index bf22f66..0000000
--- a/arch/s390.py
+++ /dev/null
@@ -1,33 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_s390(builder.generic):
-	"abstract base class for all s390 builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class generic_s390x(builder.generic):
-	"abstract base class for all s390x builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class arch_s390(generic_s390):
-	"builder class for generic s390"
-	def __init__(self,myspec):
-		generic_s390.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="s390-ibm-linux-gnu"
-
-class arch_s390x(generic_s390x):
-	"builder class for generic s390x"
-	def __init__(self,myspec):
-		generic_s390x.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="s390x-ibm-linux-gnu"
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({"s390":arch_s390,"s390x":arch_s390x}, ("s390", "s390x"))
diff --git a/arch/sh.py b/arch/sh.py
deleted file mode 100644
index 2fc9531..0000000
--- a/arch/sh.py
+++ /dev/null
@@ -1,116 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_sh(builder.generic):
-	"Abstract base class for all sh builders [Little-endian]"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class generic_sheb(builder.generic):
-	"Abstract base class for all sheb builders [Big-endian]"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class arch_sh(generic_sh):
-	"Builder class for SH [Little-endian]"
-	def __init__(self,myspec):
-		generic_sh.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="sh-unknown-linux-gnu"
-
-class arch_sh2(generic_sh):
-	"Builder class for SH-2 [Little-endian]"
-	def __init__(self,myspec):
-		generic_sh.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m2 -pipe"
-		self.settings["CHOST"]="sh2-unknown-linux-gnu"
-
-class arch_sh2a(generic_sh):
-	"Builder class for SH-2A [Little-endian]"
-	def __init__(self,myspec):
-		generic_sh.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m2a -pipe"
-		self.settings["CHOST"]="sh2a-unknown-linux-gnu"
-
-class arch_sh3(generic_sh):
-	"Builder class for SH-3 [Little-endian]"
-	def __init__(self,myspec):
-		generic_sh.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m3 -pipe"
-		self.settings["CHOST"]="sh3-unknown-linux-gnu"
-
-class arch_sh4(generic_sh):
-	"Builder class for SH-4 [Little-endian]"
-	def __init__(self,myspec):
-		generic_sh.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m4 -pipe"
-		self.settings["CHOST"]="sh4-unknown-linux-gnu"
-
-class arch_sh4a(generic_sh):
-	"Builder class for SH-4A [Little-endian]"
-	def __init__(self,myspec):
-		generic_sh.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m4a -pipe"
-		self.settings["CHOST"]="sh4a-unknown-linux-gnu"
-
-class arch_sheb(generic_sheb):
-	"Builder class for SH [Big-endian]"
-	def __init__(self,myspec):
-		generic_sheb.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="sheb-unknown-linux-gnu"
-
-class arch_sh2eb(generic_sheb):
-	"Builder class for SH-2 [Big-endian]"
-	def __init__(self,myspec):
-		generic_sheb.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m2 -pipe"
-		self.settings["CHOST"]="sh2eb-unknown-linux-gnu"
-
-class arch_sh2aeb(generic_sheb):
-	"Builder class for SH-2A [Big-endian]"
-	def __init__(self,myspec):
-		generic_sheb.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m2a -pipe"
-		self.settings["CHOST"]="sh2aeb-unknown-linux-gnu"
-
-class arch_sh3eb(generic_sheb):
-	"Builder class for SH-3 [Big-endian]"
-	def __init__(self,myspec):
-		generic_sheb.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m3 -pipe"
-		self.settings["CHOST"]="sh3eb-unknown-linux-gnu"
-
-class arch_sh4eb(generic_sheb):
-	"Builder class for SH-4 [Big-endian]"
-	def __init__(self,myspec):
-		generic_sheb.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m4 -pipe"
-		self.settings["CHOST"]="sh4eb-unknown-linux-gnu"
-
-class arch_sh4aeb(generic_sheb):
-	"Builder class for SH-4A [Big-endian]"
-	def __init__(self,myspec):
-		generic_sheb.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -m4a -pipe"
-		self.settings["CHOST"]="sh4aeb-unknown-linux-gnu"
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-			"sh"	:arch_sh,
-			"sh2"	:arch_sh2,
-			"sh2a"	:arch_sh2a,
-			"sh3"	:arch_sh3,
-			"sh4"	:arch_sh4,
-			"sh4a"	:arch_sh4a,
-			"sheb"	:arch_sheb,
-			"sh2eb" :arch_sh2eb,
-			"sh2aeb" :arch_sh2aeb,
-			"sh3eb"	:arch_sh3eb,
-			"sh4eb"	:arch_sh4eb,
-			"sh4aeb" :arch_sh4aeb
-	}, ("sh2","sh2a","sh3","sh4","sh4a","sh2eb","sh2aeb","sh3eb","sh4eb","sh4aeb"))
diff --git a/arch/sparc.py b/arch/sparc.py
deleted file mode 100644
index 5eb5344..0000000
--- a/arch/sparc.py
+++ /dev/null
@@ -1,42 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_sparc(builder.generic):
-	"abstract base class for all sparc builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		if self.settings["buildarch"]=="sparc64":
-			if not os.path.exists("/bin/linux32") and not os.path.exists("/usr/bin/linux32"):
-				raise CatalystError,"required executable linux32 not found (\"emerge setarch\" to fix.)"
-			self.settings["CHROOT"]="linux32 chroot"
-			self.settings["crosscompile"] = False;
-		else:
-			self.settings["CHROOT"]="chroot"
-
-class generic_sparc64(builder.generic):
-	"abstract base class for all sparc64 builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		self.settings["CHROOT"]="chroot"
-
-class arch_sparc(generic_sparc):
-	"builder class for generic sparc (sun4cdm)"
-	def __init__(self,myspec):
-		generic_sparc.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -pipe"
-		self.settings["CHOST"]="sparc-unknown-linux-gnu"
-
-class arch_sparc64(generic_sparc64):
-	"builder class for generic sparc64 (sun4u)"
-	def __init__(self,myspec):
-		generic_sparc64.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mcpu=ultrasparc -pipe"
-		self.settings["CHOST"]="sparc-unknown-linux-gnu"
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-		"sparc"		: arch_sparc,
-		"sparc64"	: arch_sparc64
-	}, ("sparc","sparc64", ))
diff --git a/arch/x86.py b/arch/x86.py
deleted file mode 100644
index 0391b79..0000000
--- a/arch/x86.py
+++ /dev/null
@@ -1,153 +0,0 @@
-
-import builder,os
-from catalyst_support import *
-
-class generic_x86(builder.generic):
-	"abstract base class for all x86 builders"
-	def __init__(self,myspec):
-		builder.generic.__init__(self,myspec)
-		if self.settings["buildarch"]=="amd64":
-			if not os.path.exists("/bin/linux32") and not os.path.exists("/usr/bin/linux32"):
-					raise CatalystError,"required executable linux32 not found (\"emerge setarch\" to fix.)"
-			self.settings["CHROOT"]="linux32 chroot"
-			self.settings["crosscompile"] = False;
-		else:
-			self.settings["CHROOT"]="chroot"
-
-class arch_x86(generic_x86):
-	"builder class for generic x86 (386+)"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -mtune=i686 -pipe"
-		self.settings["CHOST"]="i386-pc-linux-gnu"
-
-class arch_i386(generic_x86):
-	"Intel i386 CPU"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=i386 -pipe"
-		self.settings["CHOST"]="i386-pc-linux-gnu"
-
-class arch_i486(generic_x86):
-	"Intel i486 CPU"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=i486 -pipe"
-		self.settings["CHOST"]="i486-pc-linux-gnu"
-
-class arch_i586(generic_x86):
-	"Intel Pentium CPU"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=i586 -pipe"
-		self.settings["CHOST"]="i586-pc-linux-gnu"
-
-class arch_i686(generic_x86):
-	"Intel Pentium Pro CPU"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=i686 -pipe"
-		self.settings["CHOST"]="i686-pc-linux-gnu"
-
-class arch_pentium_mmx(generic_x86):
-	"Intel Pentium MMX CPU with MMX support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=pentium-mmx -pipe"
-		self.settings["HOSTUSE"]=["mmx"]
-
-class arch_pentium2(generic_x86):
-	"Intel Pentium 2 CPU with MMX support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=pentium2 -pipe"
-		self.settings["HOSTUSE"]=["mmx"]
-
-class arch_pentium3(generic_x86):
-	"Intel Pentium 3 CPU with MMX and SSE support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=pentium3 -pipe"
-		self.settings["HOSTUSE"]=["mmx","sse"]
-
-class arch_pentium4(generic_x86):
-	"Intel Pentium 4 CPU with MMX, SSE and SSE2 support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=pentium4 -pipe"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
-
-class arch_pentium_m(generic_x86):
-	"Intel Pentium M CPU with MMX, SSE and SSE2 support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=pentium-m -pipe"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
-
-class arch_prescott(generic_x86):
-	"improved version of Intel Pentium 4 CPU with MMX, SSE, SSE2 and SSE3 support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=prescott -pipe"
-		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
-		self.settings["CHOST"]="i686-pc-linux-gnu"
-
-class arch_k6(generic_x86):
-	"AMD K6 CPU with MMX support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=k6 -pipe"
-		self.settings["CHOST"]="i686-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx"]
-
-class arch_k6_2(generic_x86):
-	"AMD K6-2 CPU with MMX and 3dNOW! support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=k6-2 -pipe"
-		self.settings["CHOST"]="i686-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","3dnow"]
-
-class arch_athlon(generic_x86):
-	"AMD Athlon CPU with MMX, 3dNOW!, enhanced 3dNOW! and SSE prefetch support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=athlon -pipe"
-		self.settings["CHOST"]="i686-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","3dnow"]
-
-class arch_athlon_xp(generic_x86):
-	"improved AMD Athlon CPU with MMX, 3dNOW!, enhanced 3dNOW! and full SSE support"
-	def __init__(self,myspec):
-		generic_x86.__init__(self,myspec)
-		self.settings["CFLAGS"]="-O2 -march=athlon-xp -pipe"
-		self.settings["CHOST"]="i686-pc-linux-gnu"
-		self.settings["HOSTUSE"]=["mmx","3dnow","sse"]
-
-def register():
-	"Inform main catalyst program of the contents of this plugin."
-	return ({
-		"x86"			: arch_x86,
-		"i386"			: arch_i386,
-		"i486"			: arch_i486,
-		"i586"			: arch_i586,
-		"i686"			: arch_i686,
-		"pentium"		: arch_i586,
-		"pentium2"		: arch_pentium2,
-		"pentium3"		: arch_pentium3,
-		"pentium3m"		: arch_pentium3,
-		"pentium-m"		: arch_pentium_m,
-		"pentium4"		: arch_pentium4,
-		"pentium4m"		: arch_pentium4,
-		"pentiumpro"		: arch_i686,
-		"pentium-mmx"		: arch_pentium_mmx,
-		"prescott"		: arch_prescott,
-		"k6"			: arch_k6,
-		"k6-2"			: arch_k6_2,
-		"k6-3"			: arch_k6_2,
-		"athlon"		: arch_athlon,
-		"athlon-tbird"		: arch_athlon,
-		"athlon-4"		: arch_athlon_xp,
-		"athlon-xp"		: arch_athlon_xp,
-		"athlon-mp"		: arch_athlon_xp
-	}, ('i386', 'i486', 'i586', 'i686'))
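
[Annotation, not part of the patch.] The `generic_x86`, `generic_sparc`, and `generic_ppc` base classes in the hunks above share one idiom: when running on a 64-bit build host but targeting the 32-bit variant, they prefix `chroot` with `linux32` and disable cross-compiling, failing early if setarch is not installed. A stand-alone sketch of that check, under the assumption that the logic matches `generic_x86.__init__` above (`chroot_command` and the injectable `_exists` parameter are mine, added so the sketch is testable without touching the real filesystem):

```python
import os

class CatalystError(Exception):
    """Stand-in for catalyst's CatalystError."""

def chroot_command(buildarch, host64="amd64", _exists=os.path.exists):
    """Mirror generic_x86's chroot selection: on a 64-bit host,
    enter the chroot through linux32 so uname reports a 32-bit
    personality, and require the setarch tools to be present."""
    if buildarch == host64:
        if not _exists("/bin/linux32") and not _exists("/usr/bin/linux32"):
            raise CatalystError(
                'required executable linux32 not found ("emerge setarch" to fix.)')
        return "linux32 chroot"
    return "chroot"
```

On a native 32-bit host this yields plain `"chroot"`; on an amd64 host with setarch installed it yields `"linux32 chroot"`.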
diff --git a/bin/catalyst b/bin/catalyst
new file mode 100755
index 0000000..ace43fc
--- /dev/null
+++ b/bin/catalyst
@@ -0,0 +1,46 @@
+#!/usr/bin/python2 -OO
+
+# Maintained in full by:
+# Catalyst Team <catalyst@gentoo.org>
+# Release Engineering Team <releng@gentoo.org>
+# Andrew Gaffney <agaffney@gentoo.org>
+# Chris Gianelloni <wolf31o2@wolf31o2.org>
+# $Id$
+
+
+from __future__ import print_function
+
+import sys
+
+__maintainer__="Catalyst <catalyst@gentoo.org>"
+__version__="2.0.12.2"
+
+
+# This block ensures that ^C interrupts are handled quietly.
+try:
+	import signal
+
+	def exithandler(signum,frame):
+		signal.signal(signal.SIGINT, signal.SIG_IGN)
+		signal.signal(signal.SIGTERM, signal.SIG_IGN)
+		print()
+		sys.exit(1)
+
+	signal.signal(signal.SIGINT, exithandler)
+	signal.signal(signal.SIGTERM, exithandler)
+	signal.signal(signal.SIGPIPE, signal.SIG_DFL)
+
+except KeyboardInterrupt:
+	print()
+	sys.exit(1)
+
+
+from catalyst.main import main
+
+try:
+	main()
+except KeyboardInterrupt:
+	print("Aborted.")
+	sys.exit(130)
+sys.exit(0)
+
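[Annotation, not part of the patch.] Note the ordering in the new `bin/catalyst` above: the SIGINT/SIGTERM handlers are installed *before* `catalyst.main` is imported, so a ^C during the (comparatively slow) module import exits quietly instead of dumping a traceback. The same pattern in isolation, with the script's exit codes:

```python
from __future__ import print_function

import signal
import sys

def exithandler(signum, frame):
    # Ignore further signals while shutting down, print a bare
    # newline past any partial output, then exit quietly.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    signal.signal(signal.SIGTERM, signal.SIG_IGN)
    print()
    sys.exit(1)

def install_handlers():
    signal.signal(signal.SIGINT, exithandler)
    signal.signal(signal.SIGTERM, exithandler)
    # Restore default SIGPIPE so `catalyst ... | head` dies cleanly
    # instead of raising IOError mid-run (POSIX-only).
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
```

A KeyboardInterrupt that slips past the handler (e.g. inside `main()`) is caught separately and mapped to exit status 130, the conventional 128+SIGINT code.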
diff --git a/catalyst b/catalyst
deleted file mode 100755
index cb6c022..0000000
--- a/catalyst
+++ /dev/null
@@ -1,419 +0,0 @@
-#!/usr/bin/python2 -OO
-
-# Maintained in full by:
-# Catalyst Team <catalyst@gentoo.org>
-# Release Engineering Team <releng@gentoo.org>
-# Andrew Gaffney <agaffney@gentoo.org>
-# Chris Gianelloni <wolf31o2@wolf31o2.org>
-# $Id$
-
-import os
-import sys
-import imp
-import string
-import getopt
-import pdb
-import os.path
-
-import modules.catalyst.config
-import modules.catalyst.util
-
-__maintainer__="Catalyst <catalyst@gentoo.org>"
-__version__="2.0.15"
-
-conf_values={}
-
-def usage():
-	print """Usage catalyst [options] [-C variable=value...] [ -s identifier]
- -a --clear-autoresume  clear autoresume flags
- -c --config            use specified configuration file
- -C --cli               catalyst commandline (MUST BE LAST OPTION)
- -d --debug             enable debugging
- -f --file              read specfile
- -F --fetchonly         fetch files only
- -h --help              print this help message
- -p --purge             clear tmp dirs,package cache, autoresume flags
- -P --purgeonly         clear tmp dirs,package cache, autoresume flags and exit
- -T --purgetmponly      clear tmp dirs and autoresume flags and exit
- -s --snapshot          generate a release snapshot
- -V --version           display version information
- -v --verbose           verbose output
-
-Usage examples:
-
-Using the commandline option (-C, --cli) to build a Portage snapshot:
-catalyst -C target=snapshot version_stamp=my_date
-
-Using the snapshot option (-s, --snapshot) to build a release snapshot:
-catalyst -s 20071121"
-
-Using the specfile option (-f, --file) to build a stage target:
-catalyst -f stage1-specfile.spec
-"""
-
-
-def version():
-	print "Catalyst, version "+__version__
-	print "Copyright 2003-2008 Gentoo Foundation"
-	print "Copyright 2008-2012 various authors"
-	print "Distributed under the GNU General Public License version 2.1\n"
-
-def parse_config(myconfig):
-	# search a couple of different areas for the main config file
-	myconf={}
-	config_file=""
-
-	confdefaults = {
-		"distdir": "/usr/portage/distfiles",
-		"hash_function": "crc32",
-		"icecream": "/var/cache/icecream",
-		"local_overlay": "/usr/local/portage",
-		"options": "",
-		"packagedir": "/usr/portage/packages",
-		"portdir": "/usr/portage",
-		"repo_name": "portage",
-		"sharedir": "/usr/share/catalyst",
-		"snapshot_name": "portage-",
-		"snapshot_cache": "/var/tmp/catalyst/snapshot_cache",
-		"storedir": "/var/tmp/catalyst",
-		}
-
-	# first, try the one passed (presumably from the cmdline)
-	if myconfig:
-		if os.path.exists(myconfig):
-			print "Using command line specified Catalyst configuration file, "+myconfig
-			config_file=myconfig
-
-		else:
-			print "!!! catalyst: Could not use specified configuration file "+\
-				myconfig
-			sys.exit(1)
-
-	# next, try the default location
-	elif os.path.exists("/etc/catalyst/catalyst.conf"):
-		print "Using default Catalyst configuration file, /etc/catalyst/catalyst.conf"
-		config_file="/etc/catalyst/catalyst.conf"
-
-	# can't find a config file (we are screwed), so bail out
-	else:
-		print "!!! catalyst: Could not find a suitable configuration file"
-		sys.exit(1)
-
-	# now, try and parse the config file "config_file"
-	try:
-#		execfile(config_file, myconf, myconf)
-		myconfig = modules.catalyst.config.ConfigParser(config_file)
-		myconf.update(myconfig.get_values())
-
-	except:
-		print "!!! catalyst: Unable to parse configuration file, "+myconfig
-		sys.exit(1)
-
-	# now, load up the values into conf_values so that we can use them
-	for x in confdefaults.keys():
-		if x in myconf:
-			print "Setting",x,"to config file value \""+myconf[x]+"\""
-			conf_values[x]=myconf[x]
-		else:
-			print "Setting",x,"to default value \""+confdefaults[x]+"\""
-			conf_values[x]=confdefaults[x]
-
-	# parse out the rest of the options from the config file
-	if "autoresume" in string.split(conf_values["options"]):
-		print "Autoresuming support enabled."
-		conf_values["AUTORESUME"]="1"
-
-	if "bindist" in string.split(conf_values["options"]):
-		print "Binary redistribution enabled"
-		conf_values["BINDIST"]="1"
-	else:
-		print "Bindist is not enabled in catalyst.conf"
-		print "Binary redistribution of generated stages/isos may be prohibited by law."
-		print "Please see the use description for bindist on any package you are including."
-
-	if "ccache" in string.split(conf_values["options"]):
-		print "Compiler cache support enabled."
-		conf_values["CCACHE"]="1"
-
-	if "clear-autoresume" in string.split(conf_values["options"]):
-		print "Cleaning autoresume flags support enabled."
-		conf_values["CLEAR_AUTORESUME"]="1"
-
-	if "distcc" in string.split(conf_values["options"]):
-		print "Distcc support enabled."
-		conf_values["DISTCC"]="1"
-
-	if "icecream" in string.split(conf_values["options"]):
-		print "Icecream compiler cluster support enabled."
-		conf_values["ICECREAM"]="1"
-
-	if "kerncache" in string.split(conf_values["options"]):
-		print "Kernel cache support enabled."
-		conf_values["KERNCACHE"]="1"
-
-	if "pkgcache" in string.split(conf_values["options"]):
-		print "Package cache support enabled."
-		conf_values["PKGCACHE"]="1"
-
-	if "preserve_libs" in string.split(conf_values["options"]):
-		print "Preserving libs during unmerge."
-		conf_values["PRESERVE_LIBS"]="1"
-
-	if "purge" in string.split(conf_values["options"]):
-		print "Purge support enabled."
-		conf_values["PURGE"]="1"
-
-	if "seedcache" in string.split(conf_values["options"]):
-		print "Seed cache support enabled."
-		conf_values["SEEDCACHE"]="1"
-
-	if "snapcache" in string.split(conf_values["options"]):
-		print "Snapshot cache support enabled."
-		conf_values["SNAPCACHE"]="1"
-
-	if "digests" in myconf:
-		conf_values["digests"]=myconf["digests"]
-	if "contents" in myconf:
-		conf_values["contents"]=myconf["contents"]
-
-	if "envscript" in myconf:
-		print "Envscript support enabled."
-		conf_values["ENVSCRIPT"]=myconf["envscript"]
-
-	if "var_tmpfs_portage" in myconf:
-		conf_values["var_tmpfs_portage"]=myconf["var_tmpfs_portage"];
-
-	if "port_logdir" in myconf:
-		conf_values["port_logdir"]=myconf["port_logdir"];
-
-def import_modules():
-	# import catalyst's own modules (i.e. catalyst_support and the arch modules)
-	targetmap={}
-
-	try:
-		for x in required_build_targets:
-			try:
-				fh=open(conf_values["sharedir"]+"/modules/"+x+".py")
-				module=imp.load_module(x,fh,"modules/"+x+".py",(".py","r",imp.PY_SOURCE))
-				fh.close()
-
-			except IOError:
-				raise CatalystError,"Can't find "+x+".py plugin in "+\
-					conf_values["sharedir"]+"/modules/"
-
-		for x in valid_build_targets:
-			try:
-				fh=open(conf_values["sharedir"]+"/modules/"+x+".py")
-				module=imp.load_module(x,fh,"modules/"+x+".py",(".py","r",imp.PY_SOURCE))
-				module.register(targetmap)
-				fh.close()
-
-			except IOError:
-				raise CatalystError,"Can't find "+x+".py plugin in "+\
-					conf_values["sharedir"]+"/modules/"
-
-	except ImportError:
-		print "!!! catalyst: Python modules not found in "+\
-			conf_values["sharedir"]+"/modules; exiting."
-		sys.exit(1)
-
-	return targetmap
-
-def build_target(addlargs, targetmap):
-	try:
-		if addlargs["target"] not in targetmap:
-			raise CatalystError,"Target \""+addlargs["target"]+"\" not available."
-
-		mytarget=targetmap[addlargs["target"]](conf_values, addlargs)
-
-		mytarget.run()
-
-	except:
-		modules.catalyst.util.print_traceback()
-		print "!!! catalyst: Error encountered during run of target " + addlargs["target"]
-		sys.exit(1)
-
-if __name__ == "__main__":
-	targetmap={}
-
-	version()
-	if os.getuid() != 0:
-		# catalyst cannot be run as a normal user due to chroots, mounts, etc
-		print "!!! catalyst: This script requires root privileges to operate"
-		sys.exit(2)
-
-	# we need some options in order to work correctly
-	if len(sys.argv) < 2:
-		usage()
-		sys.exit(2)
-
-	# parse out the command line arguments
-	try:
-		opts,args = getopt.getopt(sys.argv[1:], "apPThvdc:C:f:FVs:", ["purge", "purgeonly", "purgetmponly", "help", "version", "debug",\
-			"clear-autoresume", "config=", "cli=", "file=", "fetch", "verbose","snapshot="])
-
-	except getopt.GetoptError:
-		usage()
-		sys.exit(2)
-
-	# defaults for commandline opts
-	debug=False
-	verbose=False
-	fetch=False
-	myconfig=""
-	myspecfile=""
-	mycmdline=[]
-	myopts=[]
-
-	# check preconditions
-	if len(opts) == 0:
-		print "!!! catalyst: please specify one of either -f or -C\n"
-		usage()
-		sys.exit(2)
-
-	run = False
-	for o, a in opts:
-		if o in ("-h", "--help"):
-			usage()
-			sys.exit(1)
-
-		if o in ("-V", "--version"):
-			print "Catalyst version "+__version__
-			sys.exit(1)
-
-		if o in ("-d", "--debug"):
-			conf_values["DEBUG"]="1"
-			conf_values["VERBOSE"]="1"
-
-		if o in ("-c", "--config"):
-			myconfig=a
-
-		if o in ("-C", "--cli"):
-			run = True
-			x=sys.argv.index(o)+1
-			while x < len(sys.argv):
-				mycmdline.append(sys.argv[x])
-				x=x+1
-
-		if o in ("-f", "--file"):
-			run = True
-			myspecfile=a
-
-		if o in ("-F", "--fetchonly"):
-			conf_values["FETCH"]="1"
-
-		if o in ("-v", "--verbose"):
-			conf_values["VERBOSE"]="1"
-
-		if o in ("-s", "--snapshot"):
-			if len(sys.argv) < 3:
-				print "!!! catalyst: missing snapshot identifier\n"
-				usage()
-				sys.exit(2)
-			else:
-				run = True
-				mycmdline.append("target=snapshot")
-				mycmdline.append("version_stamp="+a)
-
-		if o in ("-p", "--purge"):
-			conf_values["PURGE"] = "1"
-
-		if o in ("-P", "--purgeonly"):
-			conf_values["PURGEONLY"] = "1"
-
-		if o in ("-T", "--purgetmponly"):
-			conf_values["PURGETMPONLY"] = "1"
-
-		if o in ("-a", "--clear-autoresume"):
-			conf_values["CLEAR_AUTORESUME"] = "1"
-
-	if not run:
-		print "!!! catalyst: please specify one of either -f or -C\n"
-		usage()
-		sys.exit(2)
-
-	# import configuration file and import our main module using those settings
-	parse_config(myconfig)
-	sys.path.append(conf_values["sharedir"]+"/modules")
-	from catalyst_support import *
-
-	# Start checking that digests are valid now that the hash_map was imported
-	# from catalyst_support
-	if "digests" in conf_values:
-		for i in conf_values["digests"].split():
-			if i not in hash_map:
-				print
-				print i+" is not a valid digest entry"
-				print "Valid digest entries:"
-				print hash_map.keys()
-				print
-				print "Catalyst aborting...."
-				sys.exit(2)
-			if find_binary(hash_map[i][1]) == None:
-				print
-				print "digest="+i
-				print "\tThe "+hash_map[i][1]+\
-					" binary was not found. It needs to be in your system path"
-				print
-				print "Catalyst aborting...."
-				sys.exit(2)
-	if "hash_function" in conf_values:
-		if conf_values["hash_function"] not in hash_map:
-			print
-			print conf_values["hash_function"]+\
-				" is not a valid hash_function entry"
-			print "Valid hash_function entries:"
-			print hash_map.keys()
-			print
-			print "Catalyst aborting...."
-			sys.exit(2)
-		if find_binary(hash_map[conf_values["hash_function"]][1]) == None:
-			print
-			print "hash_function="+conf_values["hash_function"]
-			print "\tThe "+hash_map[conf_values["hash_function"]][1]+\
-				" binary was not found. It needs to be in your system path"
-			print
-			print "Catalyst aborting...."
-			sys.exit(2)
-
-	# import the rest of the catalyst modules
-	targetmap=import_modules()
-
-	addlargs={}
-
-	if myspecfile:
-		spec = modules.catalyst.config.SpecParser(myspecfile)
-		addlargs.update(spec.get_values())
-
-	if mycmdline:
-		try:
-			cmdline = modules.catalyst.config.ConfigParser()
-			cmdline.parse_lines(mycmdline)
-			addlargs.update(cmdline.get_values())
-		except CatalystError:
-			print "!!! catalyst: Could not parse commandline, exiting."
-			sys.exit(1)
-
-	if "target" not in addlargs:
-		raise CatalystError, "Required value \"target\" not specified."
-
-	# everything is setup, so the build is a go
-	try:
-		build_target(addlargs, targetmap)
-
-	except CatalystError:
-		print
-		print "Catalyst aborting...."
-		sys.exit(2)
-	except KeyboardInterrupt:
-		print "\nCatalyst build aborted due to user interrupt ( Ctrl-C )"
-		print
-		print "Catalyst aborting...."
-		sys.exit(2)
-	except LockInUse:
-		print "Catalyst aborting...."
-		sys.exit(2)
-	except:
-		print "Catalyst aborting...."
-		raise
-		sys.exit(2)
diff --git a/catalyst/__init__.py b/catalyst/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/catalyst/arch/__init__.py b/catalyst/arch/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/catalyst/arch/__init__.py
@@ -0,0 +1 @@
+
diff --git a/catalyst/arch/alpha.py b/catalyst/arch/alpha.py
new file mode 100644
index 0000000..f0fc95a
--- /dev/null
+++ b/catalyst/arch/alpha.py
@@ -0,0 +1,75 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_alpha(builder.generic):
+	"abstract base class for all alpha builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CFLAGS"]="-mieee -pipe"
+
+class arch_alpha(generic_alpha):
+	"builder class for generic alpha (ev4+)"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev4"
+		self.settings["CHOST"]="alpha-unknown-linux-gnu"
+
+class arch_ev4(generic_alpha):
+	"builder class for alpha ev4"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev4"
+		self.settings["CHOST"]="alphaev4-unknown-linux-gnu"
+
+class arch_ev45(generic_alpha):
+	"builder class for alpha ev45"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev45"
+		self.settings["CHOST"]="alphaev45-unknown-linux-gnu"
+
+class arch_ev5(generic_alpha):
+	"builder class for alpha ev5"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev5"
+		self.settings["CHOST"]="alphaev5-unknown-linux-gnu"
+
+class arch_ev56(generic_alpha):
+	"builder class for alpha ev56 (ev5 plus BWX)"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev56"
+		self.settings["CHOST"]="alphaev56-unknown-linux-gnu"
+
+class arch_pca56(generic_alpha):
+	"builder class for alpha pca56 (ev5 plus BWX & MAX)"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=pca56"
+		self.settings["CHOST"]="alphaev56-unknown-linux-gnu"
+
+class arch_ev6(generic_alpha):
+	"builder class for alpha ev6"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev6"
+		self.settings["CHOST"]="alphaev6-unknown-linux-gnu"
+		self.settings["HOSTUSE"]=["ev6"]
+
+class arch_ev67(generic_alpha):
+	"builder class for alpha ev67 (ev6 plus CIX)"
+	def __init__(self,myspec):
+		generic_alpha.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -O2 -mcpu=ev67"
+		self.settings["CHOST"]="alphaev67-unknown-linux-gnu"
+		self.settings["HOSTUSE"]=["ev6"]
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({ "alpha":arch_alpha, "ev4":arch_ev4, "ev45":arch_ev45,
+		"ev5":arch_ev5, "ev56":arch_ev56, "pca56":arch_pca56,
+		"ev6":arch_ev6, "ev67":arch_ev67 },
+	("alpha", ))
diff --git a/catalyst/arch/amd64.py b/catalyst/arch/amd64.py
new file mode 100644
index 0000000..262b55a
--- /dev/null
+++ b/catalyst/arch/amd64.py
@@ -0,0 +1,83 @@
+
+import builder
+
+class generic_amd64(builder.generic):
+	"abstract base class for all amd64 builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class arch_amd64(generic_amd64):
+	"builder class for generic amd64 (Intel and AMD)"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
+
+class arch_nocona(generic_amd64):
+	"improved version of Intel Pentium 4 CPU with 64-bit extensions, MMX, SSE, SSE2 and SSE3 support"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=nocona -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
+
+# Requires gcc 4.3 to use this class
+class arch_core2(generic_amd64):
+	"Intel Core 2 CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3 support"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=core2 -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2","ssse3"]
+
+class arch_k8(generic_amd64):
+	"generic k8, opteron and athlon64 support"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=k8 -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2","3dnow"]
+
+class arch_k8_sse3(generic_amd64):
+	"improved versions of k8, opteron and athlon64 with SSE3 support"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=k8-sse3 -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2","3dnow"]
+
+class arch_amdfam10(generic_amd64):
+	"AMD Family 10h core based CPUs with x86-64 instruction set support"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=amdfam10 -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2","3dnow"]
+
+class arch_x32(generic_amd64):
+	"builder class for generic x32 (Intel and AMD)"
+	def __init__(self,myspec):
+		generic_amd64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="x86_64-pc-linux-gnux32"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
+
+def register():
+	"inform main catalyst program of the contents of this plugin"
+	return ({
+		"amd64"		: arch_amd64,
+		"k8"		: arch_k8,
+		"opteron"	: arch_k8,
+		"athlon64"	: arch_k8,
+		"athlonfx"	: arch_k8,
+		"nocona"	: arch_nocona,
+		"core2"		: arch_core2,
+		"k8-sse3"	: arch_k8_sse3,
+		"opteron-sse3"	: arch_k8_sse3,
+		"athlon64-sse3"	: arch_k8_sse3,
+		"amdfam10"	: arch_amdfam10,
+		"barcelona"	: arch_amdfam10,
+		"x32"		: arch_x32,
+	}, ("x86_64","amd64","nocona"))
diff --git a/catalyst/arch/arm.py b/catalyst/arch/arm.py
new file mode 100644
index 0000000..2de3942
--- /dev/null
+++ b/catalyst/arch/arm.py
@@ -0,0 +1,133 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_arm(builder.generic):
+	"Abstract base class for all arm (little endian) builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CFLAGS"]="-O2 -pipe"
+
+class generic_armeb(builder.generic):
+	"Abstract base class for all arm (big endian) builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CFLAGS"]="-O2 -pipe"
+
+class arch_arm(generic_arm):
+	"Builder class for arm (little endian) target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="arm-unknown-linux-gnu"
+
+class arch_armeb(generic_armeb):
+	"Builder class for arm (big endian) target"
+	def __init__(self,myspec):
+		generic_armeb.__init__(self,myspec)
+		self.settings["CHOST"]="armeb-unknown-linux-gnu"
+
+class arch_armv4l(generic_arm):
+	"Builder class for armv4l target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv4l-unknown-linux-gnu"
+		self.settings["CFLAGS"]+=" -march=armv4"
+
+class arch_armv4tl(generic_arm):
+	"Builder class for armv4tl target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv4tl-softfloat-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv4t"
+
+class arch_armv5tl(generic_arm):
+	"Builder class for armv5tl target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv5tl-softfloat-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv5t"
+
+class arch_armv5tel(generic_arm):
+	"Builder class for armv5tel target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv5tel-softfloat-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv5te"
+
+class arch_armv5tejl(generic_arm):
+	"Builder class for armv5tejl target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv5tejl-softfloat-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv5te"
+
+class arch_armv6j(generic_arm):
+	"Builder class for armv6j target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv6j-softfp-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv6j -mfpu=vfp -mfloat-abi=softfp"
+
+class arch_armv6z(generic_arm):
+	"Builder class for armv6z target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv6z-softfp-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv6z -mfpu=vfp -mfloat-abi=softfp"
+
+class arch_armv6zk(generic_arm):
+	"Builder class for armv6zk target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv6zk-softfp-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv6zk -mfpu=vfp -mfloat-abi=softfp"
+
+class arch_armv7a(generic_arm):
+	"Builder class for armv7a target"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv7a-softfp-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=softfp"
+
+class arch_armv6j_hardfp(generic_arm):
+	"Builder class for armv6j hardfloat target, needs >=gcc-4.5"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv6j-hardfloat-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv6j -mfpu=vfp -mfloat-abi=hard"
+
+class arch_armv7a_hardfp(generic_arm):
+	"Builder class for armv7a hardfloat target, needs >=gcc-4.5"
+	def __init__(self,myspec):
+		generic_arm.__init__(self,myspec)
+		self.settings["CHOST"]="armv7a-hardfloat-linux-gnueabi"
+		self.settings["CFLAGS"]+=" -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard"
+
+class arch_armv5teb(generic_armeb):
+	"Builder class for armv5teb (XScale) target"
+	def __init__(self,myspec):
+		generic_armeb.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -mcpu=xscale"
+		self.settings["CHOST"]="armv5teb-softfloat-linux-gnueabi"
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+		"arm"    : arch_arm,
+		"armv4l" : arch_armv4l,
+		"armv4tl": arch_armv4tl,
+		"armv5tl": arch_armv5tl,
+		"armv5tel": arch_armv5tel,
+		"armv5tejl": arch_armv5tejl,
+		"armv6j" : arch_armv6j,
+		"armv6z" : arch_armv6z,
+		"armv6zk" : arch_armv6zk,
+		"armv7a" : arch_armv7a,
+		"armv6j_hardfp" : arch_armv6j_hardfp,
+		"armv7a_hardfp" : arch_armv7a_hardfp,
+		"armeb"  : arch_armeb,
+		"armv5teb" : arch_armv5teb
+	}, ("arm", "armv4l", "armv4tl", "armv5tl", "armv5tel", "armv5tejl", "armv6l",
+"armv7l", "armeb", "armv5teb") )
diff --git a/catalyst/arch/hppa.py b/catalyst/arch/hppa.py
new file mode 100644
index 0000000..f804398
--- /dev/null
+++ b/catalyst/arch/hppa.py
@@ -0,0 +1,40 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_hppa(builder.generic):
+	"Abstract base class for all hppa builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CXXFLAGS"]="-O2 -pipe"
+
+class arch_hppa(generic_hppa):
+	"Builder class for hppa systems"
+	def __init__(self,myspec):
+		generic_hppa.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -march=1.0"
+		self.settings["CHOST"]="hppa-unknown-linux-gnu"
+
+class arch_hppa1_1(generic_hppa):
+	"Builder class for hppa 1.1 systems"
+	def __init__(self,myspec):
+		generic_hppa.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -march=1.1"
+		self.settings["CHOST"]="hppa1.1-unknown-linux-gnu"
+
+class arch_hppa2_0(generic_hppa):
+	"Builder class for hppa 2.0 systems"
+	def __init__(self,myspec):
+		generic_hppa.__init__(self,myspec)
+		self.settings["CFLAGS"]+=" -march=2.0"
+		self.settings["CHOST"]="hppa2.0-unknown-linux-gnu"
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+			"hppa":		arch_hppa,
+			"hppa1.1":	arch_hppa1_1,
+			"hppa2.0":	arch_hppa2_0
+	}, ("parisc","parisc64","hppa","hppa64") )
diff --git a/catalyst/arch/ia64.py b/catalyst/arch/ia64.py
new file mode 100644
index 0000000..825af70
--- /dev/null
+++ b/catalyst/arch/ia64.py
@@ -0,0 +1,15 @@
+
+import builder,os
+from catalyst_support import *
+
+class arch_ia64(builder.generic):
+	"builder class for ia64"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="ia64-unknown-linux-gnu"
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({ "ia64":arch_ia64 }, ("ia64", ))
diff --git a/catalyst/arch/mips.py b/catalyst/arch/mips.py
new file mode 100644
index 0000000..b3730fa
--- /dev/null
+++ b/catalyst/arch/mips.py
@@ -0,0 +1,464 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_mips(builder.generic):
+	"Abstract base class for all mips builders [Big-endian]"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CHOST"]="mips-unknown-linux-gnu"
+
+class generic_mipsel(builder.generic):
+	"Abstract base class for all mipsel builders [Little-endian]"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CHOST"]="mipsel-unknown-linux-gnu"
+
+class generic_mips64(builder.generic):
+	"Abstract base class for all mips64 builders [Big-endian]"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CHOST"]="mips64-unknown-linux-gnu"
+
+class generic_mips64el(builder.generic):
+	"Abstract base class for all mips64el builders [Little-endian]"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+		self.settings["CHOST"]="mips64el-unknown-linux-gnu"
+
+class arch_mips1(generic_mips):
+	"Builder class for MIPS I [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips1 -mabi=32 -mplt -pipe"
+
+class arch_mips32(generic_mips):
+	"Builder class for MIPS 32 [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
+
+class arch_mips32_softfloat(generic_mips):
+	"Builder class for MIPS 32 [Big-endian softfloat]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
+		self.settings["CHOST"]="mips-softfloat-linux-gnu"
+
+class arch_mips32r2(generic_mips):
+	"Builder class for MIPS 32r2 [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
+
+class arch_mips32r2_softfloat(generic_mips):
+	"Builder class for MIPS 32r2 [Big-endian softfloat]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
+		self.settings["CHOST"]="mips-softfloat-linux-gnu"
+
+class arch_mips3(generic_mips):
+	"Builder class for MIPS III [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=32 -mplt -mfix-r4000 -mfix-r4400 -pipe"
+
+class arch_mips3_n32(generic_mips64):
+	"Builder class for MIPS III [Big-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=n32 -mplt -mfix-r4000 -mfix-r4400 -pipe"
+
+class arch_mips3_n64(generic_mips64):
+	"Builder class for MIPS III [Big-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=64 -mfix-r4000 -mfix-r4400 -pipe"
+
+class arch_mips3_multilib(generic_mips64):
+	"Builder class for MIPS III [Big-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mplt -mfix-r4000 -mfix-r4400 -pipe"
+
+class arch_mips4(generic_mips):
+	"Builder class for MIPS IV [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=32 -mplt -pipe"
+
+class arch_mips4_n32(generic_mips64):
+	"Builder class for MIPS IV [Big-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=n32 -mplt -pipe"
+
+class arch_mips4_n64(generic_mips64):
+	"Builder class for MIPS IV [Big-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=64 -pipe"
+
+class arch_mips4_multilib(generic_mips64):
+	"Builder class for MIPS IV [Big-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mplt -pipe"
+
+class arch_mips4_r10k(generic_mips):
+	"Builder class for MIPS IV R10k [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=r10k -mabi=32 -mplt -pipe"
+
+class arch_mips4_r10k_n32(generic_mips64):
+	"Builder class for MIPS IV R10k [Big-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=r10k -mabi=n32 -mplt -pipe"
+
+class arch_mips4_r10k_n64(generic_mips64):
+	"Builder class for MIPS IV R10k [Big-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=r10k -mabi=64 -pipe"
+
+class arch_mips4_r10k_multilib(generic_mips64):
+	"Builder class for MIPS IV R10k [Big-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=r10k -mplt -pipe"
+
+class arch_mips64(generic_mips):
+	"Builder class for MIPS 64 [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=32 -mplt -pipe"
+
+class arch_mips64_n32(generic_mips64):
+	"Builder class for MIPS 64 [Big-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=n32 -mplt -pipe"
+
+class arch_mips64_n64(generic_mips64):
+	"Builder class for MIPS 64 [Big-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=64 -pipe"
+
+class arch_mips64_multilib(generic_mips64):
+	"Builder class for MIPS 64 [Big-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mplt -pipe"
+
+class arch_mips64r2(generic_mips):
+	"Builder class for MIPS 64r2 [Big-endian]"
+	def __init__(self,myspec):
+		generic_mips.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=32 -mplt -pipe"
+
+class arch_mips64r2_n32(generic_mips64):
+	"Builder class for MIPS 64r2 [Big-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=n32 -mplt -pipe"
+
+class arch_mips64r2_n64(generic_mips64):
+	"Builder class for MIPS 64r2 [Big-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=64 -pipe"
+
+class arch_mips64r2_multilib(generic_mips64):
+	"Builder class for MIPS 64r2 [Big-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mplt -pipe"
+
+class arch_mipsel1(generic_mipsel):
+	"Builder class for MIPS I [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips1 -mabi=32 -mplt -pipe"
+
+class arch_mips32el(generic_mipsel):
+	"Builder class for MIPS 32 [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
+
+class arch_mips32el_softfloat(generic_mipsel):
+	"Builder class for MIPS 32 [Little-endian softfloat]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32 -mabi=32 -mplt -pipe"
+		self.settings["CHOST"]="mipsel-softfloat-linux-gnu"
+
+class arch_mips32r2el(generic_mipsel):
+	"Builder class for MIPS 32r2 [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
+
+class arch_mips32r2el_softfloat(generic_mipsel):
+	"Builder class for MIPS 32r2 [Little-endian softfloat]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips32r2 -mabi=32 -mplt -pipe"
+		self.settings["CHOST"]="mipsel-softfloat-linux-gnu"
+
+class arch_mipsel3(generic_mipsel):
+	"Builder class for MIPS III [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_mipsel3_n32(generic_mips64el):
+	"Builder class for MIPS III [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=n32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_mipsel3_n64(generic_mips64el):
+	"Builder class for MIPS III [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mabi=64 -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_mipsel3_multilib(generic_mips64el):
+	"Builder class for MIPS III [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips3 -mplt -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_loongson2e(generic_mipsel):
+	"Builder class for Loongson 2E [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2e -mabi=32 -mplt -pipe"
+
+class arch_loongson2e_n32(generic_mips64el):
+	"Builder class for Loongson 2E [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2e -mabi=n32 -mplt -pipe"
+
+class arch_loongson2e_n64(generic_mips64el):
+	"Builder class for Loongson 2E [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2e -mabi=64 -pipe"
+
+class arch_loongson2e_multilib(generic_mips64el):
+	"Builder class for Loongson 2E [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2e -mplt -pipe"
+
+class arch_loongson2f(generic_mipsel):
+	"Builder class for Loongson 2F [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2f -mabi=32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_loongson2f_n32(generic_mips64el):
+	"Builder class for Loongson 2F [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2f -mabi=n32 -mplt -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_loongson2f_n64(generic_mips64el):
+	"Builder class for Loongson 2F [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2f -mabi=64 -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_loongson2f_multilib(generic_mips64el):
+	"Builder class for Loongson 2F [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson2f -mplt -Wa,-mfix-loongson2f-nop -pipe"
+
+class arch_mipsel4(generic_mipsel):
+	"Builder class for MIPS IV [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=32 -mplt -pipe"
+
+class arch_mipsel4_n32(generic_mips64el):
+	"Builder class for MIPS IV [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=n32 -mplt -pipe"
+
+class arch_mipsel4_n64(generic_mips64el):
+	"Builder class for MIPS IV [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mabi=64 -pipe"
+
+class arch_mipsel4_multilib(generic_mips64el):
+	"Builder class for MIPS IV [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips4 -mplt -pipe"
+
+class arch_mips64el(generic_mipsel):
+	"Builder class for MIPS 64 [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=32 -mplt -pipe"
+
+class arch_mips64el_n32(generic_mips64el):
+	"Builder class for MIPS 64 [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=n32 -mplt -pipe"
+
+class arch_mips64el_n64(generic_mips64el):
+	"Builder class for MIPS 64 [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mabi=64 -pipe"
+
+class arch_mips64el_multilib(generic_mips64el):
+	"Builder class for MIPS 64 [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64 -mplt -pipe"
+
+class arch_mips64r2el(generic_mipsel):
+	"Builder class for MIPS 64r2 [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=32 -mplt -pipe"
+
+class arch_mips64r2el_n32(generic_mips64el):
+	"Builder class for MIPS 64r2 [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=n32 -mplt -pipe"
+
+class arch_mips64r2el_n64(generic_mips64el):
+	"Builder class for MIPS 64r2 [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mabi=64 -pipe"
+
+class arch_mips64r2el_multilib(generic_mips64el):
+	"Builder class for MIPS 64r2 [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=mips64r2 -mplt -pipe"
+
+class arch_loongson3a(generic_mipsel):
+	"Builder class for Loongson 3A [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson3a -mabi=32 -mplt -pipe"
+
+class arch_loongson3a_n32(generic_mips64el):
+	"Builder class for Loongson 3A [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson3a -mabi=n32 -mplt -pipe"
+
+class arch_loongson3a_n64(generic_mips64el):
+	"Builder class for Loongson 3A [Little-endian N64]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson3a -mabi=64 -pipe"
+
+class arch_loongson3a_multilib(generic_mips64el):
+	"Builder class for Loongson 3A [Little-endian multilib]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=loongson3a -mplt -pipe"
+
+class arch_cobalt(generic_mipsel):
+	"Builder class for cobalt [Little-endian]"
+	def __init__(self,myspec):
+		generic_mipsel.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=r5000 -mabi=32 -mplt -pipe"
+		self.settings["HOSTUSE"]=["cobalt"]
+
+class arch_cobalt_n32(generic_mips64el):
+	"Builder class for cobalt [Little-endian N32]"
+	def __init__(self,myspec):
+		generic_mips64el.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=r5000 -mabi=n32 -mplt -pipe"
+		self.settings["HOSTUSE"]=["cobalt"]
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+			"cobalt"				: arch_cobalt,
+			"cobalt_n32"			: arch_cobalt_n32,
+			"mips"					: arch_mips1,
+			"mips1"					: arch_mips1,
+			"mips32"				: arch_mips32,
+			"mips32_softfloat"		: arch_mips32_softfloat,
+			"mips32r2"				: arch_mips32r2,
+			"mips32r2_softfloat"	: arch_mips32r2_softfloat,
+			"mips3"					: arch_mips3,
+			"mips3_n32"				: arch_mips3_n32,
+			"mips3_n64"				: arch_mips3_n64,
+			"mips3_multilib"		: arch_mips3_multilib,
+			"mips4"					: arch_mips4,
+			"mips4_n32"				: arch_mips4_n32,
+			"mips4_n64"				: arch_mips4_n64,
+			"mips4_multilib"		: arch_mips4_multilib,
+			"mips4_r10k"			: arch_mips4_r10k,
+			"mips4_r10k_n32"		: arch_mips4_r10k_n32,
+			"mips4_r10k_n64"		: arch_mips4_r10k_n64,
+			"mips4_r10k_multilib"	: arch_mips4_r10k_multilib,
+			"mips64"				: arch_mips64,
+			"mips64_n32"			: arch_mips64_n32,
+			"mips64_n64"			: arch_mips64_n64,
+			"mips64_multilib"		: arch_mips64_multilib,
+			"mips64r2"				: arch_mips64r2,
+			"mips64r2_n32"			: arch_mips64r2_n32,
+			"mips64r2_n64"			: arch_mips64r2_n64,
+			"mips64r2_multilib"		: arch_mips64r2_multilib,
+			"mipsel"				: arch_mipsel1,
+			"mipsel1"				: arch_mipsel1,
+			"mips32el"				: arch_mips32el,
+			"mips32el_softfloat"	: arch_mips32el_softfloat,
+			"mips32r2el"			: arch_mips32r2el,
+			"mips32r2el_softfloat"	: arch_mips32r2el_softfloat,
+			"mipsel3"				: arch_mipsel3,
+			"mipsel3_n32"			: arch_mipsel3_n32,
+			"mipsel3_n64"			: arch_mipsel3_n64,
+			"mipsel3_multilib"		: arch_mipsel3_multilib,
+			"mipsel4"				: arch_mipsel4,
+			"mipsel4_n32"			: arch_mipsel4_n32,
+			"mipsel4_n64"			: arch_mipsel4_n64,
+			"mipsel4_multilib"		: arch_mipsel4_multilib,
+			"mips64el"				: arch_mips64el,
+			"mips64el_n32"			: arch_mips64el_n32,
+			"mips64el_n64"			: arch_mips64el_n64,
+			"mips64el_multilib"		: arch_mips64el_multilib,
+			"mips64r2el"			: arch_mips64r2el,
+			"mips64r2el_n32"		: arch_mips64r2el_n32,
+			"mips64r2el_n64"		: arch_mips64r2el_n64,
+			"mips64r2el_multilib"	: arch_mips64r2el_multilib,
+			"loongson2e"			: arch_loongson2e,
+			"loongson2e_n32"		: arch_loongson2e_n32,
+			"loongson2e_n64"		: arch_loongson2e_n64,
+			"loongson2e_multilib"	: arch_loongson2e_multilib,
+			"loongson2f"			: arch_loongson2f,
+			"loongson2f_n32"		: arch_loongson2f_n32,
+			"loongson2f_n64"		: arch_loongson2f_n64,
+			"loongson2f_multilib"	: arch_loongson2f_multilib,
+			"loongson3a"			: arch_loongson3a,
+			"loongson3a_n32"		: arch_loongson3a_n32,
+			"loongson3a_n64"		: arch_loongson3a_n64,
+			"loongson3a_multilib"	: arch_loongson3a_multilib,
+	}, ("mips","mips64"))
diff --git a/catalyst/arch/powerpc.py b/catalyst/arch/powerpc.py
new file mode 100644
index 0000000..e9f611b
--- /dev/null
+++ b/catalyst/arch/powerpc.py
@@ -0,0 +1,124 @@
+
+import os,builder
+from catalyst_support import *
+
+class generic_ppc(builder.generic):
+	"abstract base class for all 32-bit powerpc builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHOST"]="powerpc-unknown-linux-gnu"
+		if self.settings["buildarch"]=="ppc64":
+			if not os.path.exists("/bin/linux32") and not os.path.exists("/usr/bin/linux32"):
+				raise CatalystError,"required executable linux32 not found (\"emerge setarch\" to fix.)"
+			self.settings["CHROOT"]="linux32 chroot"
+			self.settings["crosscompile"] = False
+		else:
+			self.settings["CHROOT"]="chroot"
+
+class generic_ppc64(builder.generic):
+	"abstract base class for all 64-bit powerpc builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class arch_ppc(generic_ppc):
+	"builder class for generic powerpc"
+	def __init__(self,myspec):
+		generic_ppc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=powerpc -mtune=powerpc -pipe"
+
+class arch_ppc64(generic_ppc64):
+	"builder class for generic ppc64"
+	def __init__(self,myspec):
+		generic_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="powerpc64-unknown-linux-gnu"
+
+class arch_970(arch_ppc64):
+	"builder class for 970 aka G5 under ppc64"
+	def __init__(self,myspec):
+		arch_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe -mcpu=970 -mtune=970"
+		self.settings["HOSTUSE"]=["altivec"]
+
+class arch_cell(arch_ppc64):
+	"builder class for cell under ppc64"
+	def __init__(self,myspec):
+		arch_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe -mcpu=cell -mtune=cell"
+		self.settings["HOSTUSE"]=["altivec","ibm"]
+
+class arch_g3(generic_ppc):
+	def __init__(self,myspec):
+		generic_ppc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=G3 -mtune=G3 -pipe"
+
+class arch_g4(generic_ppc):
+	def __init__(self,myspec):
+		generic_ppc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=G4 -mtune=G4 -maltivec -mabi=altivec -pipe"
+		self.settings["HOSTUSE"]=["altivec"]
+
+class arch_g5(generic_ppc):
+	def __init__(self,myspec):
+		generic_ppc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=G5 -mtune=G5 -maltivec -mabi=altivec -pipe"
+		self.settings["HOSTUSE"]=["altivec"]
+
+class arch_power(generic_ppc):
+	"builder class for generic power"
+	def __init__(self,myspec):
+		generic_ppc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=power -mtune=power -pipe"
+
+class arch_power_ppc(generic_ppc):
+	"builder class for generic powerpc/power"
+	def __init__(self,myspec):
+		generic_ppc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=common -mtune=common -pipe"
+
+class arch_power3(arch_ppc64):
+	"builder class for power3 under ppc64"
+	def __init__(self,myspec):
+		arch_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power3 -mtune=power3"
+		self.settings["HOSTUSE"]=["ibm"]
+
+class arch_power4(arch_ppc64):
+	"builder class for power4 under ppc64"
+	def __init__(self,myspec):
+		arch_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power4 -mtune=power4"
+		self.settings["HOSTUSE"]=["ibm"]
+
+class arch_power5(arch_ppc64):
+	"builder class for power5 under ppc64"
+	def __init__(self,myspec):
+		arch_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power5 -mtune=power5"
+		self.settings["HOSTUSE"]=["ibm"]
+
+class arch_power6(arch_ppc64):
+	"builder class for power6 under ppc64"
+	def __init__(self,myspec):
+		arch_ppc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe -mcpu=power6 -mtune=power6"
+		self.settings["HOSTUSE"]=["altivec","ibm"]
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+		"970"		: arch_970,
+		"cell"		: arch_cell,
+		"g3"		: arch_g3,
+		"g4"		: arch_g4,
+		"g5"		: arch_g5,
+		"power"		: arch_power,
+		"power-ppc"	: arch_power_ppc,
+		"power3"	: arch_power3,
+		"power4"	: arch_power4,
+		"power5"	: arch_power5,
+		"power6"	: arch_power6,
+		"ppc"		: arch_ppc,
+		"ppc64"		: arch_ppc64
+	}, ("ppc","ppc64","powerpc","powerpc64"))
diff --git a/catalyst/arch/s390.py b/catalyst/arch/s390.py
new file mode 100644
index 0000000..bf22f66
--- /dev/null
+++ b/catalyst/arch/s390.py
@@ -0,0 +1,33 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_s390(builder.generic):
+	"abstract base class for all s390 builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class generic_s390x(builder.generic):
+	"abstract base class for all s390x builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class arch_s390(generic_s390):
+	"builder class for generic s390"
+	def __init__(self,myspec):
+		generic_s390.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="s390-ibm-linux-gnu"
+
+class arch_s390x(generic_s390x):
+	"builder class for generic s390x"
+	def __init__(self,myspec):
+		generic_s390x.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="s390x-ibm-linux-gnu"
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({"s390":arch_s390,"s390x":arch_s390x}, ("s390", "s390x"))
diff --git a/catalyst/arch/sh.py b/catalyst/arch/sh.py
new file mode 100644
index 0000000..2fc9531
--- /dev/null
+++ b/catalyst/arch/sh.py
@@ -0,0 +1,116 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_sh(builder.generic):
+	"Abstract base class for all sh builders [Little-endian]"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class generic_sheb(builder.generic):
+	"Abstract base class for all sheb builders [Big-endian]"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class arch_sh(generic_sh):
+	"Builder class for SH [Little-endian]"
+	def __init__(self,myspec):
+		generic_sh.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="sh-unknown-linux-gnu"
+
+class arch_sh2(generic_sh):
+	"Builder class for SH-2 [Little-endian]"
+	def __init__(self,myspec):
+		generic_sh.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m2 -pipe"
+		self.settings["CHOST"]="sh2-unknown-linux-gnu"
+
+class arch_sh2a(generic_sh):
+	"Builder class for SH-2A [Little-endian]"
+	def __init__(self,myspec):
+		generic_sh.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m2a -pipe"
+		self.settings["CHOST"]="sh2a-unknown-linux-gnu"
+
+class arch_sh3(generic_sh):
+	"Builder class for SH-3 [Little-endian]"
+	def __init__(self,myspec):
+		generic_sh.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m3 -pipe"
+		self.settings["CHOST"]="sh3-unknown-linux-gnu"
+
+class arch_sh4(generic_sh):
+	"Builder class for SH-4 [Little-endian]"
+	def __init__(self,myspec):
+		generic_sh.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m4 -pipe"
+		self.settings["CHOST"]="sh4-unknown-linux-gnu"
+
+class arch_sh4a(generic_sh):
+	"Builder class for SH-4A [Little-endian]"
+	def __init__(self,myspec):
+		generic_sh.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m4a -pipe"
+		self.settings["CHOST"]="sh4a-unknown-linux-gnu"
+
+class arch_sheb(generic_sheb):
+	"Builder class for SH [Big-endian]"
+	def __init__(self,myspec):
+		generic_sheb.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="sheb-unknown-linux-gnu"
+
+class arch_sh2eb(generic_sheb):
+	"Builder class for SH-2 [Big-endian]"
+	def __init__(self,myspec):
+		generic_sheb.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m2 -pipe"
+		self.settings["CHOST"]="sh2eb-unknown-linux-gnu"
+
+class arch_sh2aeb(generic_sheb):
+	"Builder class for SH-2A [Big-endian]"
+	def __init__(self,myspec):
+		generic_sheb.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m2a -pipe"
+		self.settings["CHOST"]="sh2aeb-unknown-linux-gnu"
+
+class arch_sh3eb(generic_sheb):
+	"Builder class for SH-3 [Big-endian]"
+	def __init__(self,myspec):
+		generic_sheb.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m3 -pipe"
+		self.settings["CHOST"]="sh3eb-unknown-linux-gnu"
+
+class arch_sh4eb(generic_sheb):
+	"Builder class for SH-4 [Big-endian]"
+	def __init__(self,myspec):
+		generic_sheb.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m4 -pipe"
+		self.settings["CHOST"]="sh4eb-unknown-linux-gnu"
+
+class arch_sh4aeb(generic_sheb):
+	"Builder class for SH-4A [Big-endian]"
+	def __init__(self,myspec):
+		generic_sheb.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -m4a -pipe"
+		self.settings["CHOST"]="sh4aeb-unknown-linux-gnu"
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+			"sh"	:arch_sh,
+			"sh2"	:arch_sh2,
+			"sh2a"	:arch_sh2a,
+			"sh3"	:arch_sh3,
+			"sh4"	:arch_sh4,
+			"sh4a"	:arch_sh4a,
+			"sheb"	:arch_sheb,
+			"sh2eb" :arch_sh2eb,
+			"sh2aeb" :arch_sh2aeb,
+			"sh3eb"	:arch_sh3eb,
+			"sh4eb"	:arch_sh4eb,
+			"sh4aeb" :arch_sh4aeb
+	}, ("sh2","sh2a","sh3","sh4","sh4a","sh2eb","sh2aeb","sh3eb","sh4eb","sh4aeb"))
diff --git a/catalyst/arch/sparc.py b/catalyst/arch/sparc.py
new file mode 100644
index 0000000..5eb5344
--- /dev/null
+++ b/catalyst/arch/sparc.py
@@ -0,0 +1,42 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_sparc(builder.generic):
+	"abstract base class for all sparc builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		if self.settings["buildarch"]=="sparc64":
+			if not os.path.exists("/bin/linux32") and not os.path.exists("/usr/bin/linux32"):
+				raise CatalystError,"required executable linux32 not found (\"emerge setarch\" to fix.)"
+			self.settings["CHROOT"]="linux32 chroot"
+			self.settings["crosscompile"] = False
+		else:
+			self.settings["CHROOT"]="chroot"
+
+class generic_sparc64(builder.generic):
+	"abstract base class for all sparc64 builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		self.settings["CHROOT"]="chroot"
+
+class arch_sparc(generic_sparc):
+	"builder class for generic sparc (sun4cdm)"
+	def __init__(self,myspec):
+		generic_sparc.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -pipe"
+		self.settings["CHOST"]="sparc-unknown-linux-gnu"
+
+class arch_sparc64(generic_sparc64):
+	"builder class for generic sparc64 (sun4u)"
+	def __init__(self,myspec):
+		generic_sparc64.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mcpu=ultrasparc -pipe"
+		self.settings["CHOST"]="sparc-unknown-linux-gnu"
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+		"sparc"		: arch_sparc,
+		"sparc64"	: arch_sparc64
+	}, ("sparc","sparc64", ))
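[Editor's note] generic_ppc, generic_sparc, and generic_x86 in this patch each duplicate the same linux32 detection logic for running a 32-bit build on a 64-bit host. A possible refactoring is sketched below; `setarch_chroot()` is a hypothetical helper, not part of catalyst's API or of this patch series:

```python
import os

# The 64-bit-host/32-bit-target check duplicated across generic_ppc,
# generic_sparc, and generic_x86 could be factored into one helper.
# setarch_chroot() is illustrative only.
def setarch_chroot(paths=("/bin/linux32", "/usr/bin/linux32")):
    """Return the chroot command prefix for building a 32-bit target
    on a 64-bit host, or raise if the setarch wrapper is missing."""
    for path in paths:
        if os.path.exists(path):
            return "linux32 chroot"
    raise RuntimeError(
        'required executable linux32 not found ("emerge setarch" to fix.)')

class generic_sparc(object):
    "abstract base class for all sparc builders (simplified sketch)"
    def __init__(self, myspec):
        self.settings = dict(myspec)
        if self.settings["buildarch"] == "sparc64":
            self.settings["CHROOT"] = setarch_chroot()
            self.settings["crosscompile"] = False
        else:
            self.settings["CHROOT"] = "chroot"

print(generic_sparc({"buildarch": "sparc"}).settings["CHROOT"])  # chroot
```

Centralizing the check would also make the error message and search paths consistent across all three arch modules.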
diff --git a/catalyst/arch/x86.py b/catalyst/arch/x86.py
new file mode 100644
index 0000000..0391b79
--- /dev/null
+++ b/catalyst/arch/x86.py
@@ -0,0 +1,153 @@
+
+import builder,os
+from catalyst_support import *
+
+class generic_x86(builder.generic):
+	"abstract base class for all x86 builders"
+	def __init__(self,myspec):
+		builder.generic.__init__(self,myspec)
+		if self.settings["buildarch"]=="amd64":
+			if not os.path.exists("/bin/linux32") and not os.path.exists("/usr/bin/linux32"):
+				raise CatalystError,"required executable linux32 not found (\"emerge setarch\" to fix.)"
+			self.settings["CHROOT"]="linux32 chroot"
+			self.settings["crosscompile"] = False
+		else:
+			self.settings["CHROOT"]="chroot"
+
+class arch_x86(generic_x86):
+	"builder class for generic x86 (386+)"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -mtune=i686 -pipe"
+		self.settings["CHOST"]="i386-pc-linux-gnu"
+
+class arch_i386(generic_x86):
+	"Intel i386 CPU"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=i386 -pipe"
+		self.settings["CHOST"]="i386-pc-linux-gnu"
+
+class arch_i486(generic_x86):
+	"Intel i486 CPU"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=i486 -pipe"
+		self.settings["CHOST"]="i486-pc-linux-gnu"
+
+class arch_i586(generic_x86):
+	"Intel Pentium CPU"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=i586 -pipe"
+		self.settings["CHOST"]="i586-pc-linux-gnu"
+
+class arch_i686(generic_x86):
+	"Intel Pentium Pro CPU"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=i686 -pipe"
+		self.settings["CHOST"]="i686-pc-linux-gnu"
+
+class arch_pentium_mmx(generic_x86):
+	"Intel Pentium MMX CPU with MMX support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=pentium-mmx -pipe"
+		self.settings["HOSTUSE"]=["mmx"]
+
+class arch_pentium2(generic_x86):
+	"Intel Pentium 2 CPU with MMX support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=pentium2 -pipe"
+		self.settings["HOSTUSE"]=["mmx"]
+
+class arch_pentium3(generic_x86):
+	"Intel Pentium 3 CPU with MMX and SSE support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=pentium3 -pipe"
+		self.settings["HOSTUSE"]=["mmx","sse"]
+
+class arch_pentium4(generic_x86):
+	"Intel Pentium 4 CPU with MMX, SSE and SSE2 support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=pentium4 -pipe"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
+
+class arch_pentium_m(generic_x86):
+	"Intel Pentium M CPU with MMX, SSE and SSE2 support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=pentium-m -pipe"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
+
+class arch_prescott(generic_x86):
+	"improved version of Intel Pentium 4 CPU with MMX, SSE, SSE2 and SSE3 support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=prescott -pipe"
+		self.settings["HOSTUSE"]=["mmx","sse","sse2"]
+		self.settings["CHOST"]="i686-pc-linux-gnu"
+
+class arch_k6(generic_x86):
+	"AMD K6 CPU with MMX support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=k6 -pipe"
+		self.settings["CHOST"]="i686-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx"]
+
+class arch_k6_2(generic_x86):
+	"AMD K6-2 CPU with MMX and 3dNOW! support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=k6-2 -pipe"
+		self.settings["CHOST"]="i686-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","3dnow"]
+
+class arch_athlon(generic_x86):
+	"AMD Athlon CPU with MMX, 3dNOW!, enhanced 3dNOW! and SSE prefetch support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=athlon -pipe"
+		self.settings["CHOST"]="i686-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","3dnow"]
+
+class arch_athlon_xp(generic_x86):
+	"improved AMD Athlon CPU with MMX, 3dNOW!, enhanced 3dNOW! and full SSE support"
+	def __init__(self,myspec):
+		generic_x86.__init__(self,myspec)
+		self.settings["CFLAGS"]="-O2 -march=athlon-xp -pipe"
+		self.settings["CHOST"]="i686-pc-linux-gnu"
+		self.settings["HOSTUSE"]=["mmx","3dnow","sse"]
+
+def register():
+	"Inform main catalyst program of the contents of this plugin."
+	return ({
+		"x86"			: arch_x86,
+		"i386"			: arch_i386,
+		"i486"			: arch_i486,
+		"i586"			: arch_i586,
+		"i686"			: arch_i686,
+		"pentium"		: arch_i586,
+		"pentium2"		: arch_pentium2,
+		"pentium3"		: arch_pentium3,
+		"pentium3m"		: arch_pentium3,
+		"pentium-m"		: arch_pentium_m,
+		"pentium4"		: arch_pentium4,
+		"pentium4m"		: arch_pentium4,
+		"pentiumpro"		: arch_i686,
+		"pentium-mmx"		: arch_pentium_mmx,
+		"prescott"		: arch_prescott,
+		"k6"			: arch_k6,
+		"k6-2"			: arch_k6_2,
+		"k6-3"			: arch_k6_2,
+		"athlon"		: arch_athlon,
+		"athlon-tbird"		: arch_athlon,
+		"athlon-4"		: arch_athlon_xp,
+		"athlon-xp"		: arch_athlon_xp,
+		"athlon-mp"		: arch_athlon_xp
+	}, ('i386', 'i486', 'i586', 'i686'))
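[Editor's note] The arch plugins above all follow the same convention: `register()` returns a mapping of subarch names to builder classes, plus a tuple of host machine types the plugin supports. A minimal sketch of how such a registry might be merged and dispatched follows; the helper names (`load_plugins`, `generic_builder`) are illustrative, not catalyst's actual API:

```python
# Minimal sketch of the plugin-registration pattern used by the arch
# modules above: each plugin's register() returns (subarch_map, machines).
class generic_builder(object):
    def __init__(self, myspec):
        self.settings = dict(myspec)

class arch_i686(generic_builder):
    "Intel Pentium Pro CPU"
    def __init__(self, myspec):
        generic_builder.__init__(self, myspec)
        self.settings["CFLAGS"] = "-O2 -march=i686 -pipe"

def register():
    "Inform main catalyst program of the contents of this plugin."
    return ({"i686": arch_i686}, ("i386", "i486", "i586", "i686"))

def load_plugins(plugins):
    # Merge every plugin's subarch map into one dispatch table, and map
    # each supported machine type back to the plugin's subarch map.
    subarchmap = {}
    machinemap = {}
    for plugin in plugins:
        archmap, machines = plugin()
        subarchmap.update(archmap)
        for machine in machines:
            machinemap[machine] = archmap
    return subarchmap, machinemap

subarchmap, machinemap = load_plugins([register])
builder = subarchmap["i686"]({"buildarch": "x86"})
print(builder.settings["CFLAGS"])  # -O2 -march=i686 -pipe
```

The main program only needs the merged `subarchmap` to instantiate the right builder class from a spec's subarch value.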
diff --git a/catalyst/config.py b/catalyst/config.py
new file mode 100644
index 0000000..726bf74
--- /dev/null
+++ b/catalyst/config.py
@@ -0,0 +1,122 @@
+import re
+from modules.catalyst_support import *
+
+class ParserBase:
+
+	filename = ""
+	lines = None
+	values = None
+	key_value_separator = "="
+	multiple_values = False
+	empty_values = True
+
+	def __getitem__(self, key):
+		return self.values[key]
+
+	def get_values(self):
+		return self.values
+
+	def dump(self):
+		dump = ""
+		for x in self.values.keys():
+			dump += x + " = " + repr(self.values[x]) + "\n"
+		return dump
+
+	def parse_file(self, filename):
+		try:
+			myf = open(filename, "r")
+		except:
+			raise CatalystError, "Could not open file " + filename
+		self.lines = myf.readlines()
+		myf.close()
+		self.filename = filename
+		self.parse()
+
+	def parse_lines(self, lines):
+		self.lines = lines
+		self.parse()
+
+	def parse(self):
+		values = {}
+		cur_array = []
+
+		trailing_comment=re.compile(r'\s*#.*$')
+		white_space=re.compile(r'\s+')
+
+		for x, myline in enumerate(self.lines):
+			myline = myline.strip()
+
+			# Force the line to be clean
+			# Remove Comments ( anything following # )
+			myline = trailing_comment.sub("", myline)
+
+			# Skip any blank lines
+			if not myline: continue
+
+			# Look for separator
+			msearch = myline.find(self.key_value_separator)
+
+			# If separator found assume its a new key
+			if msearch != -1:
+				# Split on the first occurrence of the separator creating two strings in the array mobjs
+				mobjs = myline.split(self.key_value_separator, 1)
+				mobjs[1] = mobjs[1].strip().strip('"')
+
+#				# Check that this key doesn't exist already in the spec
+#				if mobjs[0] in values:
+#					raise Exception("You have a duplicate key (" + mobjs[0] + ") in your spec. Please fix it")
+
+				# Start a new array using the first element of mobjs
+				cur_array = [mobjs[0]]
+				if mobjs[1]:
+					if self.multiple_values:
+						# split on white space creating additional array elements
+#						subarray = white_space.split(mobjs[1])
+						subarray = mobjs[1].split()
+						cur_array += subarray
+					else:
+						cur_array += [mobjs[1]]
+
+			# Else add on to the last key we were working on
+			else:
+				if self.multiple_values:
+#					mobjs = white_space.split(myline)
+#					cur_array += mobjs
+					cur_array += myline.split()
+				else:
+					raise CatalystError, "Syntax error on line " + str(x + 1) + ": " + myline
+
+			# XXX: Do we really still need this "single value is a string" behavior?
+			if len(cur_array) == 2:
+				values[cur_array[0]] = cur_array[1]
+			else:
+				values[cur_array[0]] = cur_array[1:]
+
+		if not self.empty_values:
+			for x in values.keys():
+				# Delete empty key pairs
+				if not values[x]:
+					print "\n\tWARNING: No value set for key " + x + "...deleting"
+					del values[x]
+
+		self.values = values
+
+class SpecParser(ParserBase):
+
+	key_value_separator = ':'
+	multiple_values = True
+	empty_values = False
+
+	def __init__(self, filename=""):
+		if filename:
+			self.parse_file(filename)
+
+class ConfigParser(ParserBase):
+
+	key_value_separator = '='
+	multiple_values = False
+	empty_values = True
+
+	def __init__(self, filename=""):
+		if filename:
+			self.parse_file(filename)
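[Editor's note] The parse loop above has some non-obvious semantics: a line containing the separator starts a new key; with `multiple_values` set, continuation lines extend the most recent key; and a key with exactly one value is stored as a bare string while anything else becomes a list. A simplified standalone re-implementation of the SpecParser behavior (':' separator, multiple values), for illustration only:

```python
import re

# Simplified sketch of ParserBase.parse() as configured by SpecParser:
# ':' separator, multiple_values=True, trailing comments stripped.
def parse_spec(lines, separator=":"):
    values = {}
    cur = []
    trailing_comment = re.compile(r"\s*#.*$")
    for line in lines:
        line = trailing_comment.sub("", line.strip())
        if not line:
            continue
        if separator in line:
            # Separator found: start a new key.
            key, rest = line.split(separator, 1)
            cur = [key.strip()]
            rest = rest.strip().strip('"')
            if rest:
                cur += rest.split()
        else:
            # Continuation line: extend the last key.
            cur += line.split()
        # A single value is stored as a string, several as a list.
        if len(cur) == 2:
            values[cur[0]] = cur[1]
        else:
            values[cur[0]] = cur[1:]
    return values

spec = [
    "version_stamp: 20140112",
    "cflags: -O2 -pipe   # toolchain flags",
    "use:",
    "    bindist",
    "    ipv6",
]
result = parse_spec(spec)
print(result["version_stamp"])  # 20140112
print(result["use"])            # ['bindist', 'ipv6']
```

Note that the string-vs-list switch happens on every line, so a key's type can change as continuation lines accumulate; only the value after the last relevant line matters.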
diff --git a/catalyst/main.py b/catalyst/main.py
new file mode 100644
index 0000000..aebb495
--- /dev/null
+++ b/catalyst/main.py
@@ -0,0 +1,428 @@
+#!/usr/bin/python2 -OO
+
+# Maintained in full by:
+# Catalyst Team <catalyst@gentoo.org>
+# Release Engineering Team <releng@gentoo.org>
+# Andrew Gaffney <agaffney@gentoo.org>
+# Chris Gianelloni <wolf31o2@wolf31o2.org>
+# $Id$
+
+import os
+import sys
+import imp
+import string
+import getopt
+import pdb
+import os.path
+
+__selfpath__ = os.path.abspath(os.path.dirname(__file__))
+
+sys.path.append(__selfpath__ + "/modules")
+
+import catalyst.config
+import catalyst.util
+from catalyst.modules.catalyst_support import (required_build_targets,
+	valid_build_targets, CatalystError, hash_map, find_binary, LockInUse)
+
+__maintainer__="Catalyst <catalyst@gentoo.org>"
+__version__="2.0.15"
+
+conf_values={}
+
+def usage():
+	print """Usage: catalyst [options] [-C variable=value...] [-s identifier]
+ -a --clear-autoresume  clear autoresume flags
+ -c --config            use specified configuration file
+ -C --cli               catalyst commandline (MUST BE LAST OPTION)
+ -d --debug             enable debugging
+ -f --file              read specfile
+ -F --fetchonly         fetch files only
+ -h --help              print this help message
+ -p --purge             clear tmp dirs, package cache, autoresume flags
+ -P --purgeonly         clear tmp dirs, package cache, autoresume flags and exit
+ -T --purgetmponly      clear tmp dirs and autoresume flags and exit
+ -s --snapshot          generate a release snapshot
+ -V --version           display version information
+ -v --verbose           verbose output
+
+Usage examples:
+
+Using the commandline option (-C, --cli) to build a Portage snapshot:
+catalyst -C target=snapshot version_stamp=my_date
+
+Using the snapshot option (-s, --snapshot) to build a release snapshot:
+catalyst -s 20071121
+
+Using the specfile option (-f, --file) to build a stage target:
+catalyst -f stage1-specfile.spec
+"""
+
+
+def version():
+	print "Catalyst, version "+__version__
+	print "Copyright 2003-2008 Gentoo Foundation"
+	print "Copyright 2008-2012 various authors"
+	print "Distributed under the GNU General Public License version 2\n"
+
+def parse_config(myconfig):
+	# search a couple of different areas for the main config file
+	myconf={}
+	config_file=""
+
+	confdefaults = {
+		"distdir": "/usr/portage/distfiles",
+		"hash_function": "crc32",
+		"icecream": "/var/cache/icecream",
+		"local_overlay": "/usr/local/portage",
+		"options": "",
+		"packagedir": "/usr/portage/packages",
+		"portdir": "/usr/portage",
+		"repo_name": "portage",
+		"sharedir": "/usr/share/catalyst",
+		"snapshot_name": "portage-",
+		"snapshot_cache": "/var/tmp/catalyst/snapshot_cache",
+		"storedir": "/var/tmp/catalyst",
+		}
+
+	# first, try the one passed (presumably from the cmdline)
+	if myconfig:
+		if os.path.exists(myconfig):
+			print "Using command line specified Catalyst configuration file, "+myconfig
+			config_file=myconfig
+
+		else:
+			print "!!! catalyst: Could not use specified configuration file "+\
+				myconfig
+			sys.exit(1)
+
+	# next, try the default location
+	elif os.path.exists("/etc/catalyst/catalyst.conf"):
+		print "Using default Catalyst configuration file, /etc/catalyst/catalyst.conf"
+		config_file="/etc/catalyst/catalyst.conf"
+
+	# can't find a config file (we are screwed), so bail out
+	else:
+		print "!!! catalyst: Could not find a suitable configuration file"
+		sys.exit(1)
+
+	# now, try and parse the config file "config_file"
+	try:
+#		execfile(config_file, myconf, myconf)
+		myconfig = catalyst.config.ConfigParser(config_file)
+		myconf.update(myconfig.get_values())
+
+	except:
+		print "!!! catalyst: Unable to parse configuration file, "+config_file
+		sys.exit(1)
+
+	# now, load up the values into conf_values so that we can use them
+	for x in confdefaults.keys():
+		if x in myconf:
+			print "Setting",x,"to config file value \""+myconf[x]+"\""
+			conf_values[x]=myconf[x]
+		else:
+			print "Setting",x,"to default value \""+confdefaults[x]+"\""
+			conf_values[x]=confdefaults[x]
+
+	# add our python base directory to use for loading target arch's
+	conf_values["PythonDir"] = __selfpath__
+
+	# parse out the rest of the options from the config file
+	if "autoresume" in string.split(conf_values["options"]):
+		print "Autoresuming support enabled."
+		conf_values["AUTORESUME"]="1"
+
+	if "bindist" in string.split(conf_values["options"]):
+		print "Binary redistribution enabled"
+		conf_values["BINDIST"]="1"
+	else:
+		print "Bindist is not enabled in catalyst.conf"
+		print "Binary redistribution of generated stages/isos may be prohibited by law."
+		print "Please see the use description for bindist on any package you are including."
+
+	if "ccache" in string.split(conf_values["options"]):
+		print "Compiler cache support enabled."
+		conf_values["CCACHE"]="1"
+
+	if "clear-autoresume" in string.split(conf_values["options"]):
+		print "Cleaning autoresume flags support enabled."
+		conf_values["CLEAR_AUTORESUME"]="1"
+
+	if "distcc" in string.split(conf_values["options"]):
+		print "Distcc support enabled."
+		conf_values["DISTCC"]="1"
+
+	if "icecream" in string.split(conf_values["options"]):
+		print "Icecream compiler cluster support enabled."
+		conf_values["ICECREAM"]="1"
+
+	if "kerncache" in string.split(conf_values["options"]):
+		print "Kernel cache support enabled."
+		conf_values["KERNCACHE"]="1"
+
+	if "pkgcache" in string.split(conf_values["options"]):
+		print "Package cache support enabled."
+		conf_values["PKGCACHE"]="1"
+
+	if "preserve_libs" in string.split(conf_values["options"]):
+		print "Preserving libs during unmerge."
+		conf_values["PRESERVE_LIBS"]="1"
+
+	if "purge" in string.split(conf_values["options"]):
+		print "Purge support enabled."
+		conf_values["PURGE"]="1"
+
+	if "seedcache" in string.split(conf_values["options"]):
+		print "Seed cache support enabled."
+		conf_values["SEEDCACHE"]="1"
+
+	if "snapcache" in string.split(conf_values["options"]):
+		print "Snapshot cache support enabled."
+		conf_values["SNAPCACHE"]="1"
+
+	if "digests" in myconf:
+		conf_values["digests"]=myconf["digests"]
+	if "contents" in myconf:
+		conf_values["contents"]=myconf["contents"]
+
+	if "envscript" in myconf:
+		print "Envscript support enabled."
+		conf_values["ENVSCRIPT"]=myconf["envscript"]
+
+	if "var_tmpfs_portage" in myconf:
+		conf_values["var_tmpfs_portage"]=myconf["var_tmpfs_portage"]
+
+	if "port_logdir" in myconf:
+		conf_values["port_logdir"]=myconf["port_logdir"]
+
+def import_modules():
+	# import catalyst's own modules (i.e. catalyst_support and the arch modules)
+	targetmap={}
+
+	try:
+		module_dir = __selfpath__ + "/modules/"
+		for x in required_build_targets:
+			try:
+				fh=open(module_dir + x + ".py")
+				module=imp.load_module(x, fh, "modules/" + x + ".py",
+					(".py", "r", imp.PY_SOURCE))
+				fh.close()
+
+			except IOError:
+				raise CatalystError, "Can't find " + x + ".py plugin in " + \
+					module_dir
+		for x in valid_build_targets:
+			try:
+				fh=open(module_dir + x + ".py")
+				module=imp.load_module(x, fh, "modules/" + x + ".py",
+					(".py", "r", imp.PY_SOURCE))
+				module.register(targetmap)
+				fh.close()
+
+			except IOError:
+				raise CatalystError,"Can't find " + x + ".py plugin in " + \
+					module_dir
+
+	except ImportError:
+		print "!!! catalyst: Python modules not found in "+\
+			module_dir + "; exiting."
+		sys.exit(1)
+
+	return targetmap
+
+def build_target(addlargs, targetmap):
+	try:
+		if addlargs["target"] not in targetmap:
+			raise CatalystError,"Target \""+addlargs["target"]+"\" not available."
+
+		mytarget=targetmap[addlargs["target"]](conf_values, addlargs)
+
+		mytarget.run()
+
+	except:
+		catalyst.util.print_traceback()
+		print "!!! catalyst: Error encountered during run of target " + addlargs["target"]
+		sys.exit(1)
+
+def main():
+	targetmap={}
+
+	version()
+	if os.getuid() != 0:
+		# catalyst cannot be run as a normal user due to chroots, mounts, etc
+		print "!!! catalyst: This script requires root privileges to operate"
+		sys.exit(2)
+
+	# we need some options in order to work correctly
+	if len(sys.argv) < 2:
+		usage()
+		sys.exit(2)
+
+	# parse out the command line arguments
+	try:
+		opts,args = getopt.getopt(sys.argv[1:], "apPThvdc:C:f:FVs:", ["purge", "purgeonly", "purgetmponly", "help", "version", "debug",\
+			"clear-autoresume", "config=", "cli=", "file=", "fetchonly", "verbose","snapshot="])
+
+	except getopt.GetoptError:
+		usage()
+		sys.exit(2)
+
+	# defaults for commandline opts
+	debug=False
+	verbose=False
+	fetch=False
+	myconfig=""
+	myspecfile=""
+	mycmdline=[]
+	myopts=[]
+
+	# check preconditions
+	if len(opts) == 0:
+		print "!!! catalyst: please specify one of either -f or -C\n"
+		usage()
+		sys.exit(2)
+
+	run = False
+	for o, a in opts:
+		if o in ("-h", "--help"):
+			usage()
+			sys.exit(1)
+
+		if o in ("-V", "--version"):
+			print "Catalyst version "+__version__
+			sys.exit(1)
+
+		if o in ("-d", "--debug"):
+			conf_values["DEBUG"]="1"
+			conf_values["VERBOSE"]="1"
+
+		if o in ("-c", "--config"):
+			myconfig=a
+
+		if o in ("-C", "--cli"):
+			run = True
+			x=sys.argv.index(o)+1
+			while x < len(sys.argv):
+				mycmdline.append(sys.argv[x])
+				x=x+1
+
+		if o in ("-f", "--file"):
+			run = True
+			myspecfile=a
+
+		if o in ("-F", "--fetchonly"):
+			conf_values["FETCH"]="1"
+
+		if o in ("-v", "--verbose"):
+			conf_values["VERBOSE"]="1"
+
+		if o in ("-s", "--snapshot"):
+			if len(sys.argv) < 3:
+				print "!!! catalyst: missing snapshot identifier\n"
+				usage()
+				sys.exit(2)
+			else:
+				run = True
+				mycmdline.append("target=snapshot")
+				mycmdline.append("version_stamp="+a)
+
+		if o in ("-p", "--purge"):
+			conf_values["PURGE"] = "1"
+
+		if o in ("-P", "--purgeonly"):
+			conf_values["PURGEONLY"] = "1"
+
+		if o in ("-T", "--purgetmponly"):
+			conf_values["PURGETMPONLY"] = "1"
+
+		if o in ("-a", "--clear-autoresume"):
+			conf_values["CLEAR_AUTORESUME"] = "1"
+
+	if not run:
+		print "!!! catalyst: please specify one of either -f or -C\n"
+		usage()
+		sys.exit(2)
+
+	# import configuration file and import our main module using those settings
+	parse_config(myconfig)
+
+	# Start checking that digests are valid now that the hash_map was imported
+	# from catalyst_support
+	if "digests" in conf_values:
+		for i in conf_values["digests"].split():
+			if i not in hash_map:
+				print
+				print i+" is not a valid digest entry"
+				print "Valid digest entries:"
+				print hash_map.keys()
+				print
+				print "Catalyst aborting...."
+				sys.exit(2)
+			if find_binary(hash_map[i][1]) == None:
+				print
+				print "digest="+i
+				print "\tThe "+hash_map[i][1]+\
+					" binary was not found. It needs to be in your system path"
+				print
+				print "Catalyst aborting...."
+				sys.exit(2)
+	if "hash_function" in conf_values:
+		if conf_values["hash_function"] not in hash_map:
+			print
+			print conf_values["hash_function"]+\
+				" is not a valid hash_function entry"
+			print "Valid hash_function entries:"
+			print hash_map.keys()
+			print
+			print "Catalyst aborting...."
+			sys.exit(2)
+		if find_binary(hash_map[conf_values["hash_function"]][1]) == None:
+			print
+			print "hash_function="+conf_values["hash_function"]
+			print "\tThe "+hash_map[conf_values["hash_function"]][1]+\
+				" binary was not found. It needs to be in your system path"
+			print
+			print "Catalyst aborting...."
+			sys.exit(2)
+
+	# import the rest of the catalyst modules
+	targetmap=import_modules()
+
+	addlargs={}
+
+	if myspecfile:
+		spec = catalyst.config.SpecParser(myspecfile)
+		addlargs.update(spec.get_values())
+
+	if mycmdline:
+		try:
+			cmdline = catalyst.config.ConfigParser()
+			cmdline.parse_lines(mycmdline)
+			addlargs.update(cmdline.get_values())
+		except CatalystError:
+			print "!!! catalyst: Could not parse commandline, exiting."
+			sys.exit(1)
+
+	if "target" not in addlargs:
+		raise CatalystError, "Required value \"target\" not specified."
+
+	# everything is setup, so the build is a go
+	try:
+		build_target(addlargs, targetmap)
+
+	except CatalystError:
+		print
+		print "Catalyst aborting...."
+		sys.exit(2)
+	except KeyboardInterrupt:
+		print "\nCatalyst build aborted due to user interrupt ( Ctrl-C )"
+		print
+		print "Catalyst aborting...."
+		sys.exit(2)
+	except LockInUse:
+		print "Catalyst aborting...."
+		sys.exit(2)
+	except:
+		print "Catalyst aborting...."
+		raise
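An aside for reviewers unfamiliar with the dispatch above: `import_modules()` relies on each target module exposing a `register(targetmap)` hook mapping a spec `target` name to a class taking `(conf_values, addlargs)`, and `build_target()` instantiates and runs it. A minimal, self-contained sketch of that pattern (the `DemoTarget` name and `"demo"` key are illustrative stand-ins, not part of catalyst):

```python
# Sketch of the register()/targetmap plugin pattern used by
# import_modules() and build_target(). DemoTarget is hypothetical.

class DemoTarget(object):
    def __init__(self, conf_values, addlargs):
        # targets receive global config plus spec/cmdline values
        self.settings = dict(conf_values)
        self.settings.update(addlargs)

    def run(self):
        return "built %s" % self.settings["target"]

def register(targetmap):
    # each target module maps its spec name(s) to its class
    targetmap["demo"] = DemoTarget

targetmap = {}
register(targetmap)

# dispatch the same way build_target() does:
addlargs = {"target": "demo"}
mytarget = targetmap[addlargs["target"]]({"storedir": "/var/tmp"}, addlargs)
result = mytarget.run()
```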
diff --git a/catalyst/modules/__init__.py b/catalyst/modules/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/catalyst/modules/__init__.py
@@ -0,0 +1 @@
+
diff --git a/catalyst/modules/builder.py b/catalyst/modules/builder.py
new file mode 100644
index 0000000..ad27d78
--- /dev/null
+++ b/catalyst/modules/builder.py
@@ -0,0 +1,20 @@
+
+class generic:
+	def __init__(self,myspec):
+		self.settings=myspec
+
+	def mount_safety_check(self):
+		"""
+		Make sure that no bind mounts exist in chrootdir (to use before
+		cleaning the directory, to make sure we don't wipe the contents of
+		a bind mount
+		"""
+		pass
+
+	def mount_all(self):
+		"""do all bind mounts"""
+		pass
+
+	def umount_all(self):
+		"""unmount all bind mounts"""
+		pass
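The `generic` class above is effectively an abstract base: concrete targets override the three mount hooks. A sketch of how a subclass might use them, assuming hypothetical names (`ChrootBuilder` and the `bind` settings key are illustrative, not catalyst API), with mount calls replaced by bookkeeping so it stays runnable:

```python
class Generic(object):
    # stand-in for catalyst/modules/builder.py's generic class
    def __init__(self, myspec):
        self.settings = myspec
    def mount_safety_check(self):
        pass
    def mount_all(self):
        pass
    def umount_all(self):
        pass

class ChrootBuilder(Generic):
    # hypothetical subclass that records bind mounts instead of mounting
    def __init__(self, myspec):
        Generic.__init__(self, myspec)
        self.active = []
    def mount_all(self):
        for src, dst in self.settings.get("bind", []):
            self.active.append((src, dst))  # real code would bind-mount here
    def umount_all(self):
        while self.active:
            self.active.pop()               # real code would umount here
    def mount_safety_check(self):
        # refuse to clean a chroot that still has live bind mounts
        if self.active:
            raise RuntimeError("bind mounts still active in chroot")

b = ChrootBuilder({"bind": [("/proc", "/mnt/chroot/proc")]})
b.mount_all()
mounted = len(b.active)
b.umount_all()
b.mount_safety_check()  # passes once everything is unmounted
```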
diff --git a/catalyst/modules/catalyst_lock.py b/catalyst/modules/catalyst_lock.py
new file mode 100644
index 0000000..5311cf8
--- /dev/null
+++ b/catalyst/modules/catalyst_lock.py
@@ -0,0 +1,468 @@
+#!/usr/bin/python
+import os
+import fcntl
+import errno
+import sys
+import string
+import time
+from catalyst_support import *
+
+def writemsg(mystr):
+	sys.stderr.write(mystr)
+	sys.stderr.flush()
+
+class LockDir:
+	locking_method=fcntl.flock
+	lock_dirs_in_use=[]
+	die_on_failed_lock=True
+	def __del__(self):
+		self.clean_my_hardlocks()
+		self.delete_lock_from_path_list()
+		if self.islocked():
+			self.fcntl_unlock()
+
+	def __init__(self,lockdir):
+		self.locked=False
+		self.myfd=None
+		self.set_gid(250)
+		self.locking_method=LockDir.locking_method
+		self.set_lockdir(lockdir)
+		self.set_lockfilename(".catalyst_lock")
+		self.set_lockfile()
+
+		if LockDir.lock_dirs_in_use.count(lockdir)>0:
+			raise Exception("This directory is already associated with a lock object")
+		else:
+			LockDir.lock_dirs_in_use.append(lockdir)
+
+		self.hardlock_paths={}
+
+	def delete_lock_from_path_list(self):
+		i=0
+		try:
+			if LockDir.lock_dirs_in_use:
+				for x in LockDir.lock_dirs_in_use:
+					if LockDir.lock_dirs_in_use[i] == self.lockdir:
+						del LockDir.lock_dirs_in_use[i]
+						break
+					i=i+1
+		except AttributeError:
+			pass
+
+	def islocked(self):
+		if self.locked:
+			return True
+		else:
+			return False
+
+	def set_gid(self,gid):
+		if not self.islocked():
+#			if "DEBUG" in self.settings:
+#				print "setting gid to", gid
+			self.gid=gid
+
+	def set_lockdir(self,lockdir):
+		if not os.path.exists(lockdir):
+			os.makedirs(lockdir)
+		if os.path.isdir(lockdir):
+			if not self.islocked():
+				if lockdir[-1] == "/":
+					lockdir=lockdir[:-1]
+				self.lockdir=normpath(lockdir)
+#				if "DEBUG" in self.settings:
+#					print "setting lockdir to", self.lockdir
+		else:
+			raise Exception("the lock object needs a path to a dir")
+
+	def set_lockfilename(self,lockfilename):
+		if not self.islocked():
+			self.lockfilename=lockfilename
+#			if "DEBUG" in self.settings:
+#				print "setting lockfilename to", self.lockfilename
+
+	def set_lockfile(self):
+		if not self.islocked():
+			self.lockfile=normpath(self.lockdir+'/'+self.lockfilename)
+#			if "DEBUG" in self.settings:
+#				print "setting lockfile to", self.lockfile
+
+	def read_lock(self):
+		if not self.locking_method == "HARDLOCK":
+			self.fcntl_lock("read")
+		else:
+			print "HARDLOCKING doesn't support shared-read locks"
+			print "using exclusive write locks"
+			self.hard_lock()
+
+	def write_lock(self):
+		if not self.locking_method == "HARDLOCK":
+			self.fcntl_lock("write")
+		else:
+			self.hard_lock()
+
+	def unlock(self):
+		if not self.locking_method == "HARDLOCK":
+			self.fcntl_unlock()
+		else:
+			self.hard_unlock()
+
+	def fcntl_lock(self,locktype):
+		if self.myfd==None:
+			if not os.path.exists(os.path.dirname(self.lockdir)):
+				raise DirectoryNotFound, os.path.dirname(self.lockdir)
+			if not os.path.exists(self.lockfile):
+				old_mask=os.umask(000)
+				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
+				try:
+					if os.stat(self.lockfile).st_gid != self.gid:
+						os.chown(self.lockfile,os.getuid(),self.gid)
+				except SystemExit, e:
+					raise
+				except OSError, e:
+					if e.errno == errno.ENOENT: # No such file or directory
+						return self.fcntl_lock(locktype)
+					else:
+						writemsg("Cannot chown a lockfile. This could cause inconvenience later.\n")
+
+				os.umask(old_mask)
+			else:
+				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
+
+		try:
+			if locktype == "read":
+				self.locking_method(self.myfd,fcntl.LOCK_SH|fcntl.LOCK_NB)
+			else:
+				self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
+		except IOError, e:
+			if "errno" not in dir(e):
+				raise
+			if e.errno == errno.EAGAIN:
+				if not LockDir.die_on_failed_lock:
+					# Resource temp unavailable; eg, someone beat us to the lock.
+					writemsg("waiting for lock on %s\n" % self.lockfile)
+
+					# Try for the exclusive or shared lock again.
+					if locktype == "read":
+						self.locking_method(self.myfd,fcntl.LOCK_SH)
+					else:
+						self.locking_method(self.myfd,fcntl.LOCK_EX)
+				else:
+					raise LockInUse,self.lockfile
+			elif e.errno == errno.ENOLCK:
+				pass
+			else:
+				raise
+		if not os.path.exists(self.lockfile):
+			os.close(self.myfd)
+			self.myfd=None
+			#writemsg("lockfile recurse\n")
+			self.fcntl_lock(locktype)
+		else:
+			self.locked=True
+			#writemsg("Lockfile obtained\n")
+
+	def fcntl_unlock(self):
+		import fcntl
+		unlinkfile = 1
+		if not os.path.exists(self.lockfile):
+			print "lockfile does not exist '%s'" % self.lockfile
+			if self.myfd != None:
+				try:
+					os.close(self.myfd)
+					self.myfd=None
+				except:
+					pass
+			return False
+
+		try:
+			if self.myfd == None:
+				self.myfd = os.open(self.lockfile, os.O_WRONLY,0660)
+				unlinkfile = 1
+			self.locking_method(self.myfd,fcntl.LOCK_UN)
+		except SystemExit, e:
+			raise
+		except Exception, e:
+			os.close(self.myfd)
+			self.myfd=None
+			raise IOError, "Failed to unlock file '%s'\n" % self.lockfile
+		try:
+			# This sleep call was added to allow other processes that are
+			# waiting for a lock to be able to grab it before it is deleted.
+			# lockfile() already accounts for this situation, however, and
+			# the sleep here adds more time than is saved overall, so am
+			# commenting until it is proved necessary.
+			#time.sleep(0.0001)
+			if unlinkfile:
+				InUse=False
+				try:
+					self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
+				except:
+					print "Read lock may be in effect. skipping lockfile delete..."
+					InUse=True
+				else:
+					# We won the lock, so there isn't competition for it.
+					# We can safely delete the file.
+					#writemsg("Got the lockfile...\n")
+					#writemsg("Unlinking...\n")
+					self.locking_method(self.myfd,fcntl.LOCK_UN)
+				if not InUse:
+					os.unlink(self.lockfile)
+					os.close(self.myfd)
+					self.myfd=None
+#					if "DEBUG" in self.settings:
+#						print "Unlinked lockfile..."
+		except SystemExit, e:
+			raise
+		except Exception, e:
+			# We really don't care... Someone else has the lock.
+			# So it is their problem now.
+			print "Failed to get lock... someone took it."
+			print str(e)
+
+			# Why test lockfilename?  Because we may have been handed an
+			# fd originally, and the caller might not like having their
+			# open fd closed automatically on them.
+			#if type(lockfilename) == types.StringType:
+			#        os.close(myfd)
+
+		if (self.myfd != None):
+			os.close(self.myfd)
+			self.myfd=None
+			self.locked=False
+			time.sleep(.0001)
+
+	def hard_lock(self,max_wait=14400):
+		"""Does the NFS, hardlink shuffle to ensure locking on the disk.
+		We create a PRIVATE lockfile, that is just a placeholder on the disk.
+		Then we HARDLINK the real lockfile to that private file.
+		If our file has 2 references, then we have the lock. :)
+		Otherwise we lather, rinse, and repeat.
+		We default to a 4 hour timeout.
+		"""
+
+		self.myhardlock = self.hardlock_name(self.lockdir)
+
+		start_time = time.time()
+		reported_waiting = False
+
+		while(time.time() < (start_time + max_wait)):
+			# We only need it to exist.
+			self.myfd = os.open(self.myhardlock, os.O_CREAT|os.O_RDWR,0660)
+			os.close(self.myfd)
+
+			self.add_hardlock_file_to_cleanup()
+			if not os.path.exists(self.myhardlock):
+				raise FileNotFound, "Created lockfile is missing: %(filename)s" % {"filename":self.myhardlock}
+			try:
+				res = os.link(self.myhardlock, self.lockfile)
+			except SystemExit, e:
+				raise
+			except Exception, e:
+#				if "DEBUG" in self.settings:
+#					print "lockfile(): Hardlink: Link failed."
+#					print "Exception: ",e
+				pass
+
+			if self.hardlink_is_mine(self.myhardlock, self.lockfile):
+				# We have the lock.
+				if reported_waiting:
+					print
+				return True
+
+			if reported_waiting:
+				writemsg(".")
+			else:
+				reported_waiting = True
+				print
+				print "Waiting on (hardlink) lockfile: (one '.' per 3 seconds)"
+				print "Lockfile: " + self.lockfile
+			time.sleep(3)
+
+		os.unlink(self.myhardlock)
+		return False
+
+	def hard_unlock(self):
+		try:
+			if os.path.exists(self.myhardlock):
+				os.unlink(self.myhardlock)
+			if os.path.exists(self.lockfile):
+				os.unlink(self.lockfile)
+		except SystemExit, e:
+			raise
+		except:
+			writemsg("Something strange happened to our hardlink locks.\n")
+
+	def add_hardlock_file_to_cleanup(self):
+		#mypath = self.normpath(path)
+		if os.path.isdir(self.lockdir) and os.path.isfile(self.myhardlock):
+			self.hardlock_paths[self.lockdir]=self.myhardlock
+
+	def remove_hardlock_file_from_cleanup(self):
+		if self.lockdir in self.hardlock_paths:
+			del self.hardlock_paths[self.lockdir]
+			print self.hardlock_paths
+
+	def hardlock_name(self, path):
+		mypath=path+"/.hardlock-"+os.uname()[1]+"-"+str(os.getpid())
+		newpath = os.path.normpath(mypath)
+		if len(newpath) > 1:
+			if newpath[1] == "/":
+				newpath = "/"+newpath.lstrip("/")
+		return newpath
+
+	def hardlink_is_mine(self,link,lock):
+		import stat
+		try:
+			myhls = os.stat(link)
+			mylfs = os.stat(lock)
+		except SystemExit, e:
+			raise
+		except:
+			myhls = None
+			mylfs = None
+
+		if myhls:
+			if myhls[stat.ST_NLINK] == 2:
+				return True
+		if myhls and mylfs:
+			if mylfs[stat.ST_INO] == myhls[stat.ST_INO]:
+				return True
+		return False
+
+	def hardlink_active(self, lock):
+		if not os.path.exists(lock):
+			return False
+		return True
+
+	def clean_my_hardlocks(self):
+		try:
+			for x in self.hardlock_paths.keys():
+				self.hardlock_cleanup(x)
+		except AttributeError:
+			pass
+
+	def hardlock_cleanup(self,path):
+		mypid  = str(os.getpid())
+		myhost = os.uname()[1]
+		mydl = os.listdir(path)
+		results = []
+		mycount = 0
+
+		mylist = {}
+		for x in mydl:
+			filepath=path+"/"+x
+			if os.path.isfile(filepath):
+				parts = filepath.split(".hardlock-")
+				if len(parts) == 2:
+					filename = parts[0]
+					hostpid  = parts[1].split("-")
+					host  = "-".join(hostpid[:-1])
+					pid   = hostpid[-1]
+					if filename not in mylist:
+						mylist[filename] = {}
+					if host not in mylist[filename]:
+						mylist[filename][host] = []
+					mylist[filename][host].append(pid)
+					mycount += 1
+
+
+		results.append("Found %(count)s locks" % {"count":mycount})
+		for x in mylist.keys():
+			if myhost in mylist[x]:
+				mylockname = self.hardlock_name(x)
+				if self.hardlink_is_mine(mylockname, self.lockfile) or \
+					not os.path.exists(self.lockfile):
+					for y in mylist[x].keys():
+						for z in mylist[x][y]:
+							filename = x+".hardlock-"+y+"-"+z
+							if filename == mylockname:
+								self.hard_unlock()
+								continue
+							try:
+								# We're sweeping through, unlinking everyone's locks.
+								os.unlink(filename)
+								results.append("Unlinked: " + filename)
+							except SystemExit, e:
+								raise
+							except Exception,e:
+								pass
+					try:
+						os.unlink(x)
+						results.append("Unlinked: " + x)
+						os.unlink(mylockname)
+						results.append("Unlinked: " + mylockname)
+					except SystemExit, e:
+						raise
+					except Exception,e:
+						pass
+				else:
+					try:
+						os.unlink(mylockname)
+						results.append("Unlinked: " + mylockname)
+					except SystemExit, e:
+						raise
+					except Exception,e:
+						pass
+		return results
+
+if __name__ == "__main__":
+
+	def lock_work():
+		print
+		for i in range(1,6):
+			print i,time.time()
+			time.sleep(1)
+		print
+	def normpath(mypath):
+		newpath = os.path.normpath(mypath)
+		if len(newpath) > 1:
+			if newpath[1] == "/":
+				newpath = "/"+newpath.lstrip("/")
+		return newpath
+
+	print "Lock 5 starting"
+	import time
+	Lock1=LockDir("/tmp/lock_path")
+	Lock1.write_lock()
+	print "Lock1 write lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+	Lock1.read_lock()
+	print "Lock1 read lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+	Lock1.read_lock()
+	print "Lock1 read lock"
+
+	Lock1.write_lock()
+	print "Lock1 write lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+	Lock1.read_lock()
+	print "Lock1 read lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+#Lock1.write_lock()
+#time.sleep(2)
+#Lock1.unlock()
+    ##Lock1.write_lock()
+    #time.sleep(2)
+    #Lock1.unlock()
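For reviewers unfamiliar with the locking machinery above: in its default mode `LockDir` wraps `fcntl.flock`, where a shared lock backs `read_lock()` and an exclusive lock backs `write_lock()`. A minimal sketch of those primitives, using a throwaway temp path rather than catalyst's real lock locations:

```python
import fcntl
import os
import tempfile

# Shared ("read") locks can coexist across processes; an exclusive
# ("write") lock cannot. LOCK_NB makes a contended request fail fast
# instead of blocking, mirroring LockDir.fcntl_lock().
lockdir = tempfile.mkdtemp()
lockfile = os.path.join(lockdir, ".catalyst_lock")

fd = os.open(lockfile, os.O_CREAT | os.O_RDWR, 0o660)
fcntl.flock(fd, fcntl.LOCK_SH | fcntl.LOCK_NB)  # read_lock() equivalent
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # convert: write_lock()
fcntl.flock(fd, fcntl.LOCK_UN)                  # unlock()
os.close(fd)
os.unlink(lockfile)
released = True  # reaching this line means every flock call succeeded
```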
diff --git a/catalyst/modules/catalyst_support.py b/catalyst/modules/catalyst_support.py
new file mode 100644
index 0000000..316dfa3
--- /dev/null
+++ b/catalyst/modules/catalyst_support.py
@@ -0,0 +1,718 @@
+
+import sys,string,os,types,re,signal,traceback,time
+#import md5,sha
+selinux_capable = False
+#userpriv_capable = (os.getuid() == 0)
+#fakeroot_capable = False
+BASH_BINARY             = "/bin/bash"
+
+try:
+        import resource
+        max_fd_limit=resource.getrlimit(resource.RLIMIT_NOFILE)
+except SystemExit, e:
+        raise
+except:
+        # hokay, no resource module.
+        max_fd_limit=256
+
+# pids this process knows of.
+spawned_pids = []
+
+try:
+        import urllib
+except SystemExit, e:
+        raise
+
+def cleanup(pids,block_exceptions=True):
+        """function to go through and reap the list of pids passed to it"""
+        global spawned_pids
+        if type(pids) == int:
+                pids = [pids]
+        for x in pids:
+                try:
+                        os.kill(x,signal.SIGTERM)
+                        if os.waitpid(x,os.WNOHANG)[0] == 0:
+                                # feisty bugger, still alive.
+                                os.kill(x,signal.SIGKILL)
+                                os.waitpid(x,0)
+
+                except OSError, oe:
+                        # errno 10 (ECHILD) and 3 (ESRCH) mean the process
+                        # is already gone, which is fine
+                        if oe.errno not in (10,3) and not block_exceptions:
+                                raise oe
+                except SystemExit:
+                        raise
+                except Exception:
+                        if not block_exceptions:
+                                raise
+                try:                    spawned_pids.remove(x)
+                except ValueError:      pass
+
+
+
+# a function to turn a string of non-printable characters into a string of
+# hex characters
+def hexify(str):
+	hexStr = string.hexdigits
+	r = ''
+	for ch in str:
+		i = ord(ch)
+		r = r + hexStr[(i >> 4) & 0xF] + hexStr[i & 0xF]
+	return r
+# hexify()
+
+def generate_contents(file,contents_function="auto",verbose=False):
+	try:
+		_ = contents_function
+		if _ == 'auto' and file.endswith('.iso'):
+			_ = 'isoinfo-l'
+		if (_ in ['tar-tv','auto']):
+			if file.endswith('.tgz') or file.endswith('.tar.gz'):
+				_ = 'tar-tvz'
+			elif file.endswith('.tbz2') or file.endswith('.tar.bz2'):
+				_ = 'tar-tvj'
+			elif file.endswith('.tar'):
+				_ = 'tar-tv'
+
+		if _ == 'auto':
+			warn('File %r has unknown type for automatic detection.' % (file, ))
+			return None
+		else:
+			contents_function = _
+			_ = contents_map[contents_function]
+			return _[0](file,_[1],verbose)
+	except:
+		raise CatalystError,\
+			"Error generating contents, is appropriate utility (%s) installed on your system?" \
+			% (contents_function, )
+
+def calc_contents(file,cmd,verbose):
+	args={ 'file': file }
+	cmd=cmd % dict(args)
+	a=os.popen(cmd)
+	mylines=a.readlines()
+	a.close()
+	result="".join(mylines)
+	if verbose:
+		print result
+	return result
+
+# This contents map must be defined after the function calc_contents
+# It is possible to call different functions from this but they must be defined
+# before contents_map
+# Key,function,cmd
+contents_map={
+	# 'find' is disabled because it requires the source path, which is not
+	# always available
+	#"find"		:[calc_contents,"find %(path)s"],
+	"tar-tv":[calc_contents,"tar tvf %(file)s"],
+	"tar-tvz":[calc_contents,"tar tvzf %(file)s"],
+	"tar-tvj":[calc_contents,"tar -I lbzip2 -tvf %(file)s"],
+	"isoinfo-l":[calc_contents,"isoinfo -l -i %(file)s"],
+	# isoinfo-f should be a last resort only
+	"isoinfo-f":[calc_contents,"isoinfo -f -i %(file)s"],
+}
+
+def generate_hash(file,hash_function="crc32",verbose=False):
+	try:
+		return hash_map[hash_function][0](file,hash_map[hash_function][1],hash_map[hash_function][2],\
+			hash_map[hash_function][3],verbose)
+	except:
+		raise CatalystError,"Error generating hash, is appropriate utility installed on your system?"
+
+def calc_hash(file,cmd,cmd_args,id_string="MD5",verbose=False):
+	a=os.popen(cmd+" "+cmd_args+" "+file)
+	mylines=a.readlines()
+	a.close()
+	mylines=mylines[0].split()
+	result=mylines[0]
+	if verbose:
+		print id_string+" (%s) = %s" % (file, result)
+	return result
+
+def calc_hash2(file,cmd,cmd_args,id_string="MD5",verbose=False):
+	a=os.popen(cmd+" "+cmd_args+" "+file)
+	header=a.readline()
+	mylines=a.readline().split()
+	hash=mylines[0]
+	short_file=os.path.split(mylines[1])[1]
+	a.close()
+	result=header+hash+"  "+short_file+"\n"
+	if verbose:
+		print header+" (%s) = %s" % (short_file, result)
+	return result
+
+# This hash map must be defined after the function calc_hash
+# It is possible to call different functions from this but they must be defined
+# before hash_map
+# Key,function,cmd,cmd_args,Print string
+hash_map={
+	 "adler32":[calc_hash2,"shash","-a ADLER32","ADLER32"],\
+	 "crc32":[calc_hash2,"shash","-a CRC32","CRC32"],\
+	 "crc32b":[calc_hash2,"shash","-a CRC32B","CRC32B"],\
+	 "gost":[calc_hash2,"shash","-a GOST","GOST"],\
+	 "haval128":[calc_hash2,"shash","-a HAVAL128","HAVAL128"],\
+	 "haval160":[calc_hash2,"shash","-a HAVAL160","HAVAL160"],\
+	 "haval192":[calc_hash2,"shash","-a HAVAL192","HAVAL192"],\
+	 "haval224":[calc_hash2,"shash","-a HAVAL224","HAVAL224"],\
+	 "haval256":[calc_hash2,"shash","-a HAVAL256","HAVAL256"],\
+	 "md2":[calc_hash2,"shash","-a MD2","MD2"],\
+	 "md4":[calc_hash2,"shash","-a MD4","MD4"],\
+	 "md5":[calc_hash2,"shash","-a MD5","MD5"],\
+	 "ripemd128":[calc_hash2,"shash","-a RIPEMD128","RIPEMD128"],\
+	 "ripemd160":[calc_hash2,"shash","-a RIPEMD160","RIPEMD160"],\
+	 "ripemd256":[calc_hash2,"shash","-a RIPEMD256","RIPEMD256"],\
+	 "ripemd320":[calc_hash2,"shash","-a RIPEMD320","RIPEMD320"],\
+	 "sha1":[calc_hash2,"shash","-a SHA1","SHA1"],\
+	 "sha224":[calc_hash2,"shash","-a SHA224","SHA224"],\
+	 "sha256":[calc_hash2,"shash","-a SHA256","SHA256"],\
+	 "sha384":[calc_hash2,"shash","-a SHA384","SHA384"],\
+	 "sha512":[calc_hash2,"shash","-a SHA512","SHA512"],\
+	 "snefru128":[calc_hash2,"shash","-a SNEFRU128","SNEFRU128"],\
+	 "snefru256":[calc_hash2,"shash","-a SNEFRU256","SNEFRU256"],\
+	 "tiger":[calc_hash2,"shash","-a TIGER","TIGER"],\
+	 "tiger128":[calc_hash2,"shash","-a TIGER128","TIGER128"],\
+	 "tiger160":[calc_hash2,"shash","-a TIGER160","TIGER160"],\
+	 "whirlpool":[calc_hash2,"shash","-a WHIRLPOOL","WHIRLPOOL"],\
+	 }
+
+def read_from_clst(file):
+	line = ''
+	myline = ''
+	try:
+		myf=open(file,"r")
+	except:
+		return -1
+		#raise CatalystError, "Could not open file "+file
+	for line in myf.readlines():
+		#line = string.replace(line, "\n", "") # drop newline
+		myline = myline + line
+	myf.close()
+	return myline
+# read_from_clst
+
+# these should never be touched
+required_build_targets=["generic_target","generic_stage_target"]
+
+# new build types should be added here
+valid_build_targets=["stage1_target","stage2_target","stage3_target","stage4_target","grp_target",
+			"livecd_stage1_target","livecd_stage2_target","embedded_target",
+			"tinderbox_target","snapshot_target","netboot_target","netboot2_target"]
+
+required_config_file_values=["storedir","sharedir","distdir","portdir"]
+valid_config_file_values=required_config_file_values[:]
+valid_config_file_values.append("PKGCACHE")
+valid_config_file_values.append("KERNCACHE")
+valid_config_file_values.append("CCACHE")
+valid_config_file_values.append("DISTCC")
+valid_config_file_values.append("ICECREAM")
+valid_config_file_values.append("ENVSCRIPT")
+valid_config_file_values.append("AUTORESUME")
+valid_config_file_values.append("FETCH")
+valid_config_file_values.append("CLEAR_AUTORESUME")
+valid_config_file_values.append("options")
+valid_config_file_values.append("DEBUG")
+valid_config_file_values.append("VERBOSE")
+valid_config_file_values.append("PURGE")
+valid_config_file_values.append("PURGEONLY")
+valid_config_file_values.append("SNAPCACHE")
+valid_config_file_values.append("snapshot_cache")
+valid_config_file_values.append("hash_function")
+valid_config_file_values.append("digests")
+valid_config_file_values.append("contents")
+valid_config_file_values.append("SEEDCACHE")
+
+verbosity=1
+
+def list_bashify(mylist):
+	if type(mylist)==types.StringType:
+		mypack=[mylist]
+	else:
+		mypack=mylist[:]
+	for x in range(0,len(mypack)):
+		# surround args with quotes for passing to bash,
+		# allows things like "<" to remain intact
+		mypack[x]="'"+mypack[x]+"'"
+	mypack=string.join(mypack)
+	return mypack
+
+def list_to_string(mylist):
+	if type(mylist)==types.StringType:
+		mypack=[mylist]
+	else:
+		mypack=mylist[:]
+	# join the args with spaces, without quoting (unlike list_bashify)
+	return string.join(mypack)
+
+class CatalystError(Exception):
+	def __init__(self, message):
+		if message:
+			(type,value)=sys.exc_info()[:2]
+			if value!=None:
+				print
+				traceback.print_exc(file=sys.stdout)
+			print
+			print "!!! catalyst: "+message
+			print
+
+class LockInUse(Exception):
+	def __init__(self, message):
+		if message:
+			#(type,value)=sys.exc_info()[:2]
+			#if value!=None:
+			    #print
+			    #kprint traceback.print_exc(file=sys.stdout)
+			print
+			print "!!! catalyst lock file in use: "+message
+			print
+
+def die(msg=None):
+	warn(msg)
+	sys.exit(1)
+
+def warn(msg):
+	print "!!! catalyst: "+msg
+
+def find_binary(myc):
+	"""look through the environment PATH for an executable file named whatever myc is"""
+        # this sucks. badly.
+        p=os.getenv("PATH")
+        if p == None:
+                return None
+        for x in p.split(":"):
+                #if it exists, and is executable
+                if os.path.exists("%s/%s" % (x,myc)) and os.stat("%s/%s" % (x,myc))[0] & 0x0248:
+                        return "%s/%s" % (x,myc)
+        return None
+
+def spawn_bash(mycommand,env={},debug=False,opt_name=None,**keywords):
+	"""spawn mycommand as an argument to bash"""
+	args=[BASH_BINARY]
+	if not opt_name:
+	    opt_name=mycommand.split()[0]
+	if "BASH_ENV" not in env:
+	    env["BASH_ENV"] = "/etc/spork/is/not/valid/profile.env"
+	if debug:
+	    args.append("-x")
+	args.append("-c")
+	args.append(mycommand)
+	return spawn(args,env=env,opt_name=opt_name,**keywords)
+
+#def spawn_get_output(mycommand,spawn_type=spawn,raw_exit_code=False,emulate_gso=True, \
+#        collect_fds=[1],fd_pipes=None,**keywords):
+
+def spawn_get_output(mycommand,raw_exit_code=False,emulate_gso=True, \
+        collect_fds=[1],fd_pipes=None,**keywords):
+        """call spawn, collecting the output to fd's specified in collect_fds list
+        emulate_gso is a compatibility hack to emulate commands.getstatusoutput's return, minus the
+        requirement it always be a bash call (spawn_type controls the actual spawn call), and minus the
+        'lets let log only stdin and let stderr slide by'.
+
+        emulate_gso was deprecated from the day it was added, so convert your code over.
+        spawn_type is the passed in function to call- typically spawn_bash, spawn, spawn_sandbox, or spawn_fakeroot"""
+        global selinux_capable
+        pr,pw=os.pipe()
+
+        #if type(spawn_type) not in [types.FunctionType, types.MethodType]:
+        #        s="spawn_type must be passed a function, not",type(spawn_type),spawn_type
+        #        raise Exception,s
+
+        if fd_pipes==None:
+                fd_pipes={}
+                fd_pipes[0] = 0
+
+        for x in collect_fds:
+                fd_pipes[x] = pw
+        keywords["returnpid"]=True
+
+        mypid=spawn_bash(mycommand,fd_pipes=fd_pipes,**keywords)
+        os.close(pw)
+        if type(mypid) != types.ListType:
+                os.close(pr)
+                return [mypid, "%s: No such file or directory" % mycommand.split()[0]]
+
+        fd=os.fdopen(pr,"r")
+        mydata=fd.readlines()
+        fd.close()
+        if emulate_gso:
+                mydata=string.join(mydata)
+                if len(mydata) and mydata[-1] == "\n":
+                        mydata=mydata[:-1]
+        retval=os.waitpid(mypid[0],0)[1]
+        cleanup(mypid)
+        if raw_exit_code:
+                return [retval,mydata]
+        retval=process_exit_code(retval)
+        return [retval, mydata]
+
+# base spawn function
+def spawn(mycommand,env={},raw_exit_code=False,opt_name=None,fd_pipes=None,returnpid=False,\
+	 uid=None,gid=None,groups=None,umask=None,logfile=None,path_lookup=True,\
+	 selinux_context=None, raise_signals=False, func_call=False):
+	"""base fork/execve function.
+	mycommand is the desired command- if you need a command to execute in a bash/sandbox/fakeroot
+	environment, use the appropriate spawn call.  This is a straight fork/exec code path.
+	Can either have a tuple, or a string passed in.  If uid/gid/groups/umask specified, it changes
+	the forked process to said value.  If path_lookup is on, a non-absolute command will be converted
+	to an absolute command, otherwise it returns None.
+
+	selinux_context is the desired context, dependent on selinux being available.
+	opt_name controls the name the process goes by.
+	fd_pipes controls which file descriptor numbers are left open in the forked process- it's a dict
+	mapping each desired fd number in the child to the current (raw) fd number it should refer to.
+
+	func_call is a boolean for specifying to execute a python function- use spawn_func instead.
+	raise_signals is questionable.  Basically throw an exception if signaled.  No exception is thrown
+	if raw_exit_code is on.
+
+	logfile overloads the specified fd's to write to a tee process which logs to logfile
+	returnpid returns the relevant pids (a list, including the logging process if logfile is on).
+
+	non-returnpid calls to spawn will block till the process has exited, returning the exitcode/signal
+	raw_exit_code controls whether the actual waitpid result is returned, or interpreted."""
+
+	myc=''
+	if not func_call:
+		if type(mycommand)==types.StringType:
+			mycommand=mycommand.split()
+		myc = mycommand[0]
+		if not os.access(myc, os.X_OK):
+			if not path_lookup:
+				return None
+			myc = find_binary(myc)
+			if myc == None:
+			    return None
+        mypid=[]
+	if logfile:
+		pr,pw=os.pipe()
+		mypid.extend(spawn(('tee','-i','-a',logfile),returnpid=True,fd_pipes={0:pr,1:1,2:2}))
+		retval=os.waitpid(mypid[-1],os.WNOHANG)[1]
+		if retval != 0:
+			# he's dead jim.
+			if raw_exit_code:
+				return retval
+			return process_exit_code(retval)
+
+		if fd_pipes == None:
+			fd_pipes={}
+			fd_pipes[0] = 0
+		fd_pipes[1]=pw
+		fd_pipes[2]=pw
+
+	if not opt_name:
+		opt_name = mycommand[0]
+	myargs=[opt_name]
+	myargs.extend(mycommand[1:])
+	global spawned_pids
+	mypid.append(os.fork())
+	if mypid[-1] != 0:
+		#log the bugger.
+		spawned_pids.extend(mypid)
+
+	if mypid[-1] == 0:
+		if func_call:
+			spawned_pids = []
+
+		# this may look ugly, but basically it moves file descriptors around to ensure no
+		# handles that are needed are accidentally closed during the final dup2 calls.
+		trg_fd=[]
+		if type(fd_pipes)==types.DictType:
+			src_fd=[]
+			k=fd_pipes.keys()
+			k.sort()
+
+			#build list of which fds will be where, and where they are at currently
+			for x in k:
+				trg_fd.append(x)
+				src_fd.append(fd_pipes[x])
+
+			# run through said list dup'ing descriptors so that they won't be waxed
+			# by other dup calls.
+			for x in range(0,len(trg_fd)):
+				if trg_fd[x] == src_fd[x]:
+					continue
+				if trg_fd[x] in src_fd[x+1:]:
+					new=os.dup2(trg_fd[x],max(src_fd) + 1)
+					os.close(trg_fd[x])
+					try:
+						while True:
+							src_fd[src_fd.index(trg_fd[x])]=new
+					except SystemExit, e:
+						raise
+					except:
+						pass
+
+			# transfer the fds to their final pre-exec position.
+			for x in range(0,len(trg_fd)):
+				if trg_fd[x] != src_fd[x]:
+					os.dup2(src_fd[x], trg_fd[x])
+		else:
+			trg_fd=[0,1,2]
+
+		# wax all open descriptors that weren't requested be left open.
+		for x in range(0,max_fd_limit):
+			if x not in trg_fd:
+				try:
+					os.close(x)
+                                except SystemExit, e:
+                                        raise
+                                except:
+                                        pass
+
+                # note this order must be preserved- can't change gid/groups if you change uid first.
+                if selinux_capable and selinux_context:
+                        import selinux
+                        selinux.setexec(selinux_context)
+                if gid:
+                        os.setgid(gid)
+                if groups:
+                        os.setgroups(groups)
+                if uid:
+                        os.setuid(uid)
+                if umask:
+                        os.umask(umask)
+                else:
+                        os.umask(022)
+
+                try:
+                        #print "execing", myc, myargs
+                        if func_call:
+                                # either use a passed in func for interpretting the results, or return if no exception.
+                                # note the passed in list, and dict are expanded.
+                                if len(mycommand) == 4:
+                                        os._exit(mycommand[3](mycommand[0](*mycommand[1],**mycommand[2])))
+                                try:
+                                        mycommand[0](*mycommand[1],**mycommand[2])
+                                except Exception,e:
+                                        print "caught exception",e," in forked func",mycommand[0]
+                                sys.exit(0)
+
+			#os.execvp(myc,myargs)
+                        os.execve(myc,myargs,env)
+                except SystemExit, e:
+                        raise
+                except Exception, e:
+                        if not func_call:
+                                raise Exception(str(e)+":\n   "+myc+" "+string.join(myargs))
+                        print "func call failed"
+
+                # If the execve fails, we need to report it, and exit
+                # *carefully* --- report error here
+                os._exit(1)
+                sys.exit(1)
+                return # should never get reached
+
+        # if we were logging, kill the pipes.
+        if logfile:
+                os.close(pr)
+                os.close(pw)
+
+        if returnpid:
+                return mypid
+
+        # loop through pids (typically one, unless logging), either waiting on their death, or waxing them
+        # if the main pid (mycommand) returned badly.
+        while len(mypid):
+                retval=os.waitpid(mypid[-1],0)[1]
+                if retval != 0:
+                        cleanup(mypid[0:-1],block_exceptions=False)
+                        # at this point we've killed all other kid pids generated via this call.
+                        # return now.
+                        if raw_exit_code:
+                                return retval
+                        return process_exit_code(retval,throw_signals=raise_signals)
+                else:
+                        mypid.pop(-1)
+        cleanup(mypid)
+        return 0
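The logfile branch of spawn() above starts a `tee` child and points the command's stdout/stderr at it. As a side note, the same "log and return the output" effect can be sketched with the modern subprocess module (simplified stand-in, not catalyst's code; it captures rather than truly tee-ing):

```python
import os
import subprocess
import tempfile

def run_logged(command, logfile):
    """Append the command's combined stdout/stderr to `logfile` and also
    return it to the caller -- a simplified, modern stand-in for spawn()'s
    tee-based logfile handling. Illustrative sketch only."""
    with open(logfile, "ab") as log:
        proc = subprocess.run(command, stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
        log.write(proc.stdout)
    return proc.returncode, proc.stdout

# quick demonstration against a scratch file
_fd, _path = tempfile.mkstemp()
os.close(_fd)
rc, out = run_logged(["echo", "hi"], _path)
logged = open(_path, "rb").read()
os.unlink(_path)
```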
+
+def cmd(mycmd,myexc="",env={}):
+	try:
+		sys.stdout.flush()
+		retval=spawn_bash(mycmd,env)
+		if retval != 0:
+			raise CatalystError,myexc
+	except:
+		raise
+
+def process_exit_code(retval,throw_signals=False):
+        """process a waitpid returned exit code, returning exit code if it exit'd, or the
+        signal if it died from signalling
+        if throw_signals is on, it raises a SystemExit if the process was signaled.
+        This is intended for usage with threads, although at the moment you can't signal individual
+        threads in python, only the master thread, so it's a questionable option."""
+        if (retval & 0xff)==0:
+                return retval >> 8 # return exit code
+        else:
+                if throw_signals:
+                        #use systemexit, since portage is stupid about exception catching.
+                        raise SystemExit()
+                return (retval & 0xff) << 8 # interrupted by signal
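The waitpid status decoding above can be restated as a pure function and exercised directly (sketch mirroring process_exit_code, not a replacement for it):

```python
def decode_wait_status(status):
    """Decode a raw waitpid() status the way process_exit_code() does:
    the exit code when the child exited normally, otherwise the signal
    number shifted into the high byte."""
    if (status & 0xff) == 0:
        return status >> 8          # normal exit: low byte clear, code in high byte
    return (status & 0xff) << 8     # killed by signal: signal number in low byte
```

So a child that exited with code 5 yields `decode_wait_status(5 << 8) == 5`, while one killed by SIGKILL (9) yields `9 << 8`.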
+
+def file_locate(settings,filelist,expand=1):
+	#if expand=1, non-absolute paths will be accepted and
+	# expanded to os.getcwd()+"/"+localpath if file exists
+	for myfile in filelist:
+		if myfile not in settings:
+			#filenames such as cdtar are optional, so we don't assume the variable is defined.
+			pass
+		else:
+			if len(settings[myfile])==0:
+				raise CatalystError, "File variable \""+myfile+"\" has a length of zero (not specified.)"
+			if settings[myfile][0]=="/":
+				if not os.path.exists(settings[myfile]):
+					raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]
+			elif expand and os.path.exists(os.getcwd()+"/"+settings[myfile]):
+				settings[myfile]=os.getcwd()+"/"+settings[myfile]
+			else:
+				raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]+" (2nd try)"
+"""
+Spec file format:
+
+The spec file format is a very simple and easy-to-use format for storing data. Here's an example
+file:
+
+item1: value1
+item2: foo bar oni
+item3:
+	meep
+	bark
+	gleep moop
+
+This file would be interpreted as defining three items: item1, item2 and item3. item1 would contain
+the string value "value1". item2 would contain an ordered list [ "foo", "bar", "oni" ]. item3
+would contain an ordered list as well: [ "meep", "bark", "gleep", "moop" ]. It's important to note
+that the order of multiple-value items is preserved, but the order in which the items themselves are
+defined is not preserved. In other words, "foo", "bar", "oni" ordering is preserved but "item1"
+"item2" "item3" ordering is not, as the item strings are stored in a dictionary (hash).
+"""
+
+def parse_makeconf(mylines):
+	mymakeconf={}
+	pos=0
+	pat=re.compile("([0-9a-zA-Z_]*)=(.*)")
+	while pos<len(mylines):
+		if len(mylines[pos])<=1:
+			#skip blanks
+			pos += 1
+			continue
+		if mylines[pos][0] in ["#"," ","\t"]:
+			#skip indented lines, comments
+			pos += 1
+			continue
+		else:
+			myline=mylines[pos]
+			mobj=pat.match(myline)
+			pos += 1
+			if mobj.group(2):
+			    clean_string = re.sub(r"\"",r"",mobj.group(2))
+			    mymakeconf[mobj.group(1)]=clean_string
+	return mymakeconf
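The fallback parser above skips blanks, comments, and indented lines, then strips double quotes from matched values. A standalone restatement of that logic (same regex, illustrative only):

```python
import re

def parse_simple_makeconf(lines):
    """VAR=value extraction as in parse_makeconf() above: comments, indented
    lines, and blanks are skipped; surrounding double quotes are removed."""
    pat = re.compile(r"([0-9a-zA-Z_]*)=(.*)")
    conf = {}
    for line in lines:
        if len(line) <= 1 or line[0] in ("#", " ", "\t"):
            continue
        mobj = pat.match(line)
        if mobj and mobj.group(2):
            conf[mobj.group(1)] = re.sub(r"\"", "", mobj.group(2))
    return conf
```

For instance, `'CFLAGS="-O2 -pipe"\n'` parses to `{"CFLAGS": "-O2 -pipe"}`.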
+
+def read_makeconf(mymakeconffile):
+	if os.path.exists(mymakeconffile):
+		try:
+			try:
+				import snakeoil.fileutils
+				return snakeoil.fileutils.read_bash_dict(mymakeconffile, sourcing_command="source")
+			except ImportError:
+				try:
+					import portage.util
+					return portage.util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
+				except:
+					try:
+						import portage_util
+						return portage_util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
+					except ImportError:
+						myf=open(mymakeconffile,"r")
+						mylines=myf.readlines()
+						myf.close()
+						return parse_makeconf(mylines)
+		except:
+			raise CatalystError, "Could not parse make.conf file "+mymakeconffile
+	else:
+		makeconf={}
+		return makeconf
+
+def msg(mymsg,verblevel=1):
+	if verbosity>=verblevel:
+		print mymsg
+
+def pathcompare(path1,path2):
+	# Change double slashes to slash
+	path1 = re.sub(r"//",r"/",path1)
+	path2 = re.sub(r"//",r"/",path2)
+	# Removing ending slash
+	path1 = re.sub("/$","",path1)
+	path2 = re.sub("/$","",path2)
+
+	if path1 == path2:
+		return 1
+	return 0
+
+def ismount(path):
+	"enhanced to handle bind mounts"
+	if os.path.ismount(path):
+		return 1
+	a=os.popen("mount")
+	mylines=a.readlines()
+	a.close()
+	for line in mylines:
+		mysplit=line.split()
+		if pathcompare(path,mysplit[2]):
+			return 1
+	return 0
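pathcompare() normalizes both arguments before comparing, which is what lets ismount() match a path against the mount table even when slashes differ. The normalization can be restated compactly (sketch; os.path.normpath is the usual modern tool):

```python
import re

def paths_equal(path1, path2):
    """pathcompare() above, restated: collapse doubled slashes and strip a
    single trailing slash from each path before comparing."""
    def clean(p):
        p = re.sub(r"//", "/", p)
        return re.sub(r"/$", "", p)
    return clean(path1) == clean(path2)
```

So `paths_equal("/usr//local/", "/usr/local")` holds, while `/usr/local` and `/usr/lib` differ.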
+
+def addl_arg_parse(myspec,addlargs,requiredspec,validspec):
+	"helper function to help targets parse additional arguments"
+	global valid_config_file_values
+
+	messages = []
+	for x in addlargs.keys():
+		if x not in validspec and x not in valid_config_file_values and x not in requiredspec:
+			messages.append("Argument \""+x+"\" not recognized.")
+		else:
+			myspec[x]=addlargs[x]
+
+	for x in requiredspec:
+		if x not in myspec:
+			messages.append("Required argument \""+x+"\" not specified.")
+
+	if messages:
+		raise CatalystError, '\n\tAlso: '.join(messages)
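The argument validation above can be restated without the module-global valid_config_file_values, returning the accepted values and error messages instead of raising (a generic restatement, not catalyst code):

```python
def validate_args(addlargs, requiredspec, validspec):
    """Shape of addl_arg_parse() above: accept known keys, collect messages
    for unknown keys and for missing required keys."""
    accepted, messages = {}, []
    for key, value in addlargs.items():
        if key in validspec or key in requiredspec:
            accepted[key] = value
        else:
            messages.append('Argument "%s" not recognized.' % key)
    for key in requiredspec:
        if key not in addlargs:
            messages.append('Required argument "%s" not specified.' % key)
    return accepted, messages
```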
+
+def touch(myfile):
+	try:
+		myf=open(myfile,"w")
+		myf.close()
+	except IOError:
+		raise CatalystError, "Could not touch "+myfile+"."
+
+def countdown(secs=5, doing="Starting"):
+        if secs:
+		print ">>> Waiting",secs,"seconds before starting..."
+		print ">>> (Control-C to abort)...\n"+doing+" in: ",
+		ticks=range(secs)
+		ticks.reverse()
+		for sec in ticks:
+			sys.stdout.write(str(sec+1)+" ")
+			sys.stdout.flush()
+			time.sleep(1)
+		print
+
+def normpath(mypath):
+	TrailingSlash=False
+	if mypath[-1] == "/":
+		TrailingSlash=True
+	newpath = os.path.normpath(mypath)
+	if len(newpath) > 1:
+		if newpath[:2] == "//":
+			newpath = newpath[1:]
+	if TrailingSlash:
+		newpath=newpath+'/'
+	return newpath
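normpath() above differs from os.path.normpath in two ways: a trailing slash on the input is preserved, and a leading "//" (which POSIX normpath keeps) is collapsed to a single slash. A self-contained restatement:

```python
import os.path

def normpath_keep_slash(mypath):
    """os.path.normpath, but preserve a trailing slash on the input and
    collapse a leading '//' -- the behavior of catalyst's normpath() above."""
    trailing = mypath.endswith("/")
    newpath = os.path.normpath(mypath)
    if len(newpath) > 1 and newpath.startswith("//"):
        newpath = newpath[1:]
    if trailing:
        newpath += "/"
    return newpath
```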
diff --git a/catalyst/modules/embedded_target.py b/catalyst/modules/embedded_target.py
new file mode 100644
index 0000000..f38ea00
--- /dev/null
+++ b/catalyst/modules/embedded_target.py
@@ -0,0 +1,51 @@
+"""
+Embedded target, similar to the stage2 target, builds upon a stage2 tarball.
+
+A stage2 tarball is unpacked, but instead
+of building a stage3, it emerges @system into another directory
+inside the stage2 system.  This way, we do not have to emerge GCC/portage
+into the staged system.
+It may sound complicated, but basically it runs
+ROOT=/tmp/submerge emerge --something foo bar .
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,imp,types,shutil
+from catalyst_support import *
+from generic_stage_target import *
+from stat import *
+
+class embedded_target(generic_stage_target):
+	"""
+	Builder class for embedded target
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=[]
+		self.valid_values.extend(["embedded/empty","embedded/rm","embedded/unmerge","embedded/fs-prepare","embedded/fs-finish","embedded/mergeroot","embedded/packages","embedded/fs-type","embedded/runscript","boot/kernel","embedded/linuxrc"])
+		self.valid_values.extend(["embedded/use"])
+		if "embedded/fs-type" in addlargs:
+			self.valid_values.append("embedded/fs-ops")
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		self.set_build_kernel_vars(addlargs)
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["dir_setup","unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir",\
+					"portage_overlay","bind","chroot_setup",\
+					"setup_environment","build_kernel","build_packages",\
+					"bootloader","root_overlay","fsscript","unmerge",\
+					"unbind","remove","empty","clean","capture","clear_autoresume"]
+
+	def set_stage_path(self):
+		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+"/tmp/mergeroot")
+		print "embedded stage path is "+self.settings["stage_path"]
+
+	def set_root_path(self):
+		self.settings["root_path"]=normpath("/tmp/mergeroot")
+		print "embedded root path is "+self.settings["root_path"]
+
+def register(foo):
+	foo.update({"embedded":embedded_target})
+	return foo
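Each target module ends with a register() hook like the one above: it adds the module's builder class to a shared dict, and the loader builds the full target map by chaining these calls across modules. The pattern in isolation (hypothetical stand-in class, mirroring the code above):

```python
class embedded_target:
    """Stand-in for the real builder class; register() only needs a value
    to map the target name to."""
    pass

def register(targets):
    # mutate and return the shared target map, as each target module does
    targets.update({"embedded": embedded_target})
    return targets
```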
diff --git a/catalyst/modules/generic_stage_target.py b/catalyst/modules/generic_stage_target.py
new file mode 100644
index 0000000..63d919d
--- /dev/null
+++ b/catalyst/modules/generic_stage_target.py
@@ -0,0 +1,1741 @@
+import os,string,imp,types,shutil
+from catalyst_support import *
+from generic_target import *
+from stat import *
+import catalyst_lock
+
+
+PORT_LOGDIR_CLEAN = \
+	'find "${PORT_LOGDIR}" -type f ! -name "summary.log*" -mtime +30 -delete'
+
+TARGET_MOUNTS_DEFAULTS = {
+	"ccache": "/var/tmp/ccache",
+	"dev": "/dev",
+	"devpts": "/dev/pts",
+	"distdir": "/usr/portage/distfiles",
+	"icecream": "/usr/lib/icecc/bin",
+	"kerncache": "/tmp/kerncache",
+	"packagedir": "/usr/portage/packages",
+	"portdir": "/usr/portage",
+	"port_tmpdir": "/var/tmp/portage",
+	"port_logdir": "/var/log/portage",
+	"proc": "/proc",
+	"shm": "/dev/shm",
+	}
+
+SOURCE_MOUNTS_DEFAULTS = {
+	"dev": "/dev",
+	"devpts": "/dev/pts",
+	"distdir": "/usr/portage/distfiles",
+	"portdir": "/usr/portage",
+	"port_tmpdir": "tmpfs",
+	"proc": "/proc",
+	"shm": "shmfs",
+	}
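The two dicts above pair up by key: for each entry in self.mounts, the SOURCE value is the host path to mount and the TARGET value is where it lands inside the chroot. A sketch of how that pairing could translate into bind-mount commands (the dict contents and command strings here are illustrative, not catalyst's actual invocation):

```python
# trimmed-down stand-ins for the defaults above
TARGET_MOUNTS = {"dev": "/dev", "distdir": "/usr/portage/distfiles"}
SOURCE_MOUNTS = {"dev": "/dev", "distdir": "/var/cache/distfiles"}

def bind_commands(chroot, mounts, source_map, target_map):
    """For each mount name, bind the host source path onto the corresponding
    path inside the chroot."""
    cmds = []
    for name in mounts:
        src = source_map[name]
        dst = chroot.rstrip("/") + target_map[name]
        cmds.append("mount --bind %s %s" % (src, dst))
    return cmds
```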
+
+
+class generic_stage_target(generic_target):
+	"""
+	This class does all of the chroot setup, copying of files, etc. It is
+	the driver class for pretty much everything that Catalyst does.
+	"""
+	def __init__(self,myspec,addlargs):
+		self.required_values.extend(["version_stamp","target","subarch",\
+			"rel_type","profile","snapshot","source_subpath"])
+
+		self.valid_values.extend(["version_stamp","target","subarch",\
+			"rel_type","profile","snapshot","source_subpath","portage_confdir",\
+			"cflags","cxxflags","ldflags","cbuild","hostuse","portage_overlay",\
+			"distcc_hosts","makeopts","pkgcache_path","kerncache_path"])
+
+		self.set_valid_build_kernel_vars(addlargs)
+		generic_target.__init__(self,myspec,addlargs)
+
+		"""
+		The semantics of subarchmap and machinemap changed a bit in 2.0.3 to
+		work better with vapier's CBUILD stuff. I've removed the "monolithic"
+		machinemap from this file and split up its contents amongst the
+		various arch/foo.py files.
+
+		When register() is called on each module in the arch/ dir, it now
+		returns a tuple instead of acting on the subarchmap dict that is
+		passed to it. The tuple contains the values that were previously
+		added to subarchmap as well as a new list of CHOSTs that go along
+		with that arch. This allows us to build machinemap on the fly based
+		on the keys in subarchmap and the values of the 2nd list returned
+		(tmpmachinemap).
+
+		Also, after talking with vapier, I have a slightly better idea of what
+		certain variables are used for and what they should be set to. Neither
+		'buildarch' nor 'hostarch' is used directly, so their value doesn't
+		really matter. They are just compared to determine if we are
+		cross-compiling. Because of this, they are just set to the name of the
+		module in arch/ that the subarch is part of to make things simpler.
+		The entire build process is still based off of 'subarch' like it was
+		previously. -agaffney
+		"""
+
+		self.archmap = {}
+		self.subarchmap = {}
+		machinemap = {}
+		arch_dir = self.settings["PythonDir"] + "/arch/"
+		for x in [x[:-3] for x in os.listdir(arch_dir) if x.endswith(".py")]:
+			if x == "__init__":
+				continue
+			try:
+				fh=open(arch_dir + x + ".py")
+				"""
+				This next line loads the plugin as a module and assigns it to
+				archmap[x]
+				"""
+				self.archmap[x]=imp.load_module(x,fh,"../arch/" + x + ".py",
+					(".py", "r", imp.PY_SOURCE))
+				"""
+				This next line registers all the subarches supported in the
+				plugin
+				"""
+				tmpsubarchmap, tmpmachinemap = self.archmap[x].register()
+				self.subarchmap.update(tmpsubarchmap)
+				for machine in tmpmachinemap:
+					machinemap[machine] = x
+				for subarch in tmpsubarchmap:
+					machinemap[subarch] = x
+				fh.close()
+			except IOError:
+				"""
+				This message should probably change a bit, since everything in
+				the dir should load just fine. If it doesn't, it's probably a
+				syntax error in the module
+				"""
+				msg("Can't find/load " + x + ".py plugin in " + arch_dir)
+
+		if "chost" in self.settings:
+			hostmachine = self.settings["chost"].split("-")[0]
+			if hostmachine not in machinemap:
+				raise CatalystError, "Unknown host machine type "+hostmachine
+			self.settings["hostarch"]=machinemap[hostmachine]
+		else:
+			hostmachine = self.settings["subarch"]
+			if hostmachine in machinemap:
+				hostmachine = machinemap[hostmachine]
+			self.settings["hostarch"]=hostmachine
+		if "cbuild" in self.settings:
+			buildmachine = self.settings["cbuild"].split("-")[0]
+		else:
+			buildmachine = os.uname()[4]
+		if buildmachine not in machinemap:
+			raise CatalystError, "Unknown build machine type "+buildmachine
+		self.settings["buildarch"]=machinemap[buildmachine]
+		self.settings["crosscompile"]=(self.settings["hostarch"]!=\
+			self.settings["buildarch"])
+
+		""" Call arch constructor, pass our settings """
+		try:
+			self.arch=self.subarchmap[self.settings["subarch"]](self.settings)
+		except KeyError:
+			print "Invalid subarch: "+self.settings["subarch"]
+			print "Choose one of the following:",
+			for x in self.subarchmap:
+				print x,
+			print
+			sys.exit(2)
+
+		print "Using target:",self.settings["target"]
+		""" Print a nice informational message """
+		if self.settings["buildarch"]==self.settings["hostarch"]:
+			print "Building natively for",self.settings["hostarch"]
+		elif self.settings["crosscompile"]:
+			print "Cross-compiling on",self.settings["buildarch"],\
+				"for different machine type",self.settings["hostarch"]
+		else:
+			print "Building on",self.settings["buildarch"],\
+				"for alternate personality type",self.settings["hostarch"]
+
+		""" This must be set first as other set_ options depend on this """
+		self.set_spec_prefix()
+
+		""" Define all of our core variables """
+		self.set_target_profile()
+		self.set_target_subpath()
+		self.set_source_subpath()
+
+		""" Set paths """
+		self.set_snapshot_path()
+		self.set_root_path()
+		self.set_source_path()
+		self.set_snapcache_path()
+		self.set_chroot_path()
+		self.set_autoresume_path()
+		self.set_dest_path()
+		self.set_stage_path()
+		self.set_target_path()
+
+		self.set_controller_file()
+		self.set_action_sequence()
+		self.set_use()
+		self.set_cleanables()
+		self.set_iso_volume_id()
+		self.set_build_kernel_vars()
+		self.set_fsscript()
+		self.set_install_mask()
+		self.set_rcadd()
+		self.set_rcdel()
+		self.set_cdtar()
+		self.set_fstype()
+		self.set_fsops()
+		self.set_iso()
+		self.set_packages()
+		self.set_rm()
+		self.set_linuxrc()
+		self.set_busybox_config()
+		self.set_overlay()
+		self.set_portage_overlay()
+		self.set_root_overlay()
+
+		"""
+		This next line checks to make sure that the specified variables exist
+		on disk.
+		"""
+		#pdb.set_trace()
+		file_locate(self.settings,["source_path","snapshot_path","distdir"],\
+			expand=0)
+		""" If we are using portage_confdir, check that as well. """
+		if "portage_confdir" in self.settings:
+			file_locate(self.settings,["portage_confdir"],expand=0)
+
+		""" Setup our mount points """
+		# initialize our target mounts.
+		self.target_mounts = TARGET_MOUNTS_DEFAULTS.copy()
+
+		self.mounts = ["proc", "dev", "portdir", "distdir", "port_tmpdir"]
+		# initialize our source mounts
+		self.mountmap = SOURCE_MOUNTS_DEFAULTS.copy()
+		# update them from settings
+		self.mountmap["distdir"] = self.settings["distdir"]
+		self.mountmap["portdir"] = normpath("/".join([
+			self.settings["snapshot_cache_path"],
+			self.settings["repo_name"],
+			]))
+		if "SNAPCACHE" not in self.settings:
+			self.mounts.remove("portdir")
+			#self.mountmap["portdir"] = None
+		if os.uname()[0] == "Linux":
+			self.mounts.append("devpts")
+			self.mounts.append("shm")
+
+		self.set_mounts()
+
+		"""
+		Configure any user specified options (either in catalyst.conf or on
+		the command line).
+		"""
+		if "PKGCACHE" in self.settings:
+			self.set_pkgcache_path()
+			print "Location of the package cache is "+\
+				self.settings["pkgcache_path"]
+			self.mounts.append("packagedir")
+			self.mountmap["packagedir"] = self.settings["pkgcache_path"]
+
+		if "KERNCACHE" in self.settings:
+			self.set_kerncache_path()
+			print "Location of the kerncache is "+\
+				self.settings["kerncache_path"]
+			self.mounts.append("kerncache")
+			self.mountmap["kerncache"] = self.settings["kerncache_path"]
+
+		if "CCACHE" in self.settings:
+			if "CCACHE_DIR" in os.environ:
+				ccdir=os.environ["CCACHE_DIR"]
+				del os.environ["CCACHE_DIR"]
+			else:
+				ccdir="/root/.ccache"
+			if not os.path.isdir(ccdir):
+				raise CatalystError,\
+					"Compiler cache support can't be enabled (can't find "+\
+					ccdir+")"
+			self.mounts.append("ccache")
+			self.mountmap["ccache"] = ccdir
+			""" for the chroot: """
+			self.env["CCACHE_DIR"] = self.target_mounts["ccache"]
+
+		if "ICECREAM" in self.settings:
+			self.mounts.append("icecream")
+			self.mountmap["icecream"] = self.settings["icecream"]
+			self.env["PATH"] = self.target_mounts["icecream"] + ":" + \
+				self.env["PATH"]
+
+		if "port_logdir" in self.settings:
+			self.mounts.append("port_logdir")
+			self.mountmap["port_logdir"] = self.settings["port_logdir"]
+			self.env["PORT_LOGDIR"] = self.settings["port_logdir"]
+			self.env["PORT_LOGDIR_CLEAN"] = PORT_LOGDIR_CLEAN
+
+	def override_cbuild(self):
+		if "CBUILD" in self.makeconf:
+			self.settings["CBUILD"]=self.makeconf["CBUILD"]
+
+	def override_chost(self):
+		if "CHOST" in self.makeconf:
+			self.settings["CHOST"]=self.makeconf["CHOST"]
+
+	def override_cflags(self):
+		if "CFLAGS" in self.makeconf:
+			self.settings["CFLAGS"]=self.makeconf["CFLAGS"]
+
+	def override_cxxflags(self):
+		if "CXXFLAGS" in self.makeconf:
+			self.settings["CXXFLAGS"]=self.makeconf["CXXFLAGS"]
+
+	def override_ldflags(self):
+		if "LDFLAGS" in self.makeconf:
+			self.settings["LDFLAGS"]=self.makeconf["LDFLAGS"]
+
+	def set_install_mask(self):
+		if "install_mask" in self.settings:
+			if type(self.settings["install_mask"])!=types.StringType:
+				self.settings["install_mask"]=\
+					string.join(self.settings["install_mask"])
+
+	def set_spec_prefix(self):
+		self.settings["spec_prefix"]=self.settings["target"]
+
+	def set_target_profile(self):
+		self.settings["target_profile"]=self.settings["profile"]
+
+	def set_target_subpath(self):
+		self.settings["target_subpath"]=self.settings["rel_type"]+"/"+\
+				self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
+				self.settings["version_stamp"]
+
+	def set_source_subpath(self):
+		if type(self.settings["source_subpath"])!=types.StringType:
+			raise CatalystError,\
+				"source_subpath should have been a string. Perhaps you have something wrong in your spec file?"
+
+	def set_pkgcache_path(self):
+		if "pkgcache_path" in self.settings:
+			if type(self.settings["pkgcache_path"])!=types.StringType:
+				self.settings["pkgcache_path"]=\
+					normpath(string.join(self.settings["pkgcache_path"]))
+		else:
+			self.settings["pkgcache_path"]=\
+				normpath(self.settings["storedir"]+"/packages/"+\
+				self.settings["target_subpath"]+"/")
+
+	def set_kerncache_path(self):
+		if "kerncache_path" in self.settings:
+			if type(self.settings["kerncache_path"])!=types.StringType:
+				self.settings["kerncache_path"]=\
+					normpath(string.join(self.settings["kerncache_path"]))
+		else:
+			self.settings["kerncache_path"]=normpath(self.settings["storedir"]+\
+				"/kerncache/"+self.settings["target_subpath"]+"/")
+
+	def set_target_path(self):
+		self.settings["target_path"] = normpath(self.settings["storedir"] +
+			"/builds/" + self.settings["target_subpath"].rstrip('/') +
+			".tar.bz2")
+		if "AUTORESUME" in self.settings\
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"setup_target_path"):
+			print \
+				"Resume point detected, skipping target path setup operation..."
+		else:
+			""" First clean up any existing target stuff """
+			# XXX WTF are we removing the old tarball before we start building the
+			# XXX new one? If the build fails, you don't want to be left with
+			# XXX nothing at all
+#			if os.path.isfile(self.settings["target_path"]):
+#				cmd("rm -f "+self.settings["target_path"],\
+#					"Could not remove existing file: "\
+#					+self.settings["target_path"],env=self.env)
+			touch(self.settings["autoresume_path"]+"setup_target_path")
+
+			if not os.path.exists(self.settings["storedir"]+"/builds/"):
+				os.makedirs(self.settings["storedir"]+"/builds/")
+
+	def set_fsscript(self):
+		if self.settings["spec_prefix"]+"/fsscript" in self.settings:
+			self.settings["fsscript"]=\
+				self.settings[self.settings["spec_prefix"]+"/fsscript"]
+			del self.settings[self.settings["spec_prefix"]+"/fsscript"]
+
+	def set_rcadd(self):
+		if self.settings["spec_prefix"]+"/rcadd" in self.settings:
+			self.settings["rcadd"]=\
+				self.settings[self.settings["spec_prefix"]+"/rcadd"]
+			del self.settings[self.settings["spec_prefix"]+"/rcadd"]
+
+	def set_rcdel(self):
+		if self.settings["spec_prefix"]+"/rcdel" in self.settings:
+			self.settings["rcdel"]=\
+				self.settings[self.settings["spec_prefix"]+"/rcdel"]
+			del self.settings[self.settings["spec_prefix"]+"/rcdel"]
+
+	def set_cdtar(self):
+		if self.settings["spec_prefix"]+"/cdtar" in self.settings:
+			self.settings["cdtar"]=\
+				normpath(self.settings[self.settings["spec_prefix"]+"/cdtar"])
+			del self.settings[self.settings["spec_prefix"]+"/cdtar"]
+
+	def set_iso(self):
+		if self.settings["spec_prefix"]+"/iso" in self.settings:
+			if self.settings[self.settings["spec_prefix"]+"/iso"].startswith('/'):
+				self.settings["iso"]=\
+					normpath(self.settings[self.settings["spec_prefix"]+"/iso"])
+			else:
+				# This automatically prepends the build dir to the ISO output path
+				# if it doesn't start with a /
+				self.settings["iso"] = normpath(self.settings["storedir"] + \
+					"/builds/" + self.settings["rel_type"] + "/" + \
+					self.settings[self.settings["spec_prefix"]+"/iso"])
+			del self.settings[self.settings["spec_prefix"]+"/iso"]
+
+	def set_fstype(self):
+		if self.settings["spec_prefix"]+"/fstype" in self.settings:
+			self.settings["fstype"]=\
+				self.settings[self.settings["spec_prefix"]+"/fstype"]
+			del self.settings[self.settings["spec_prefix"]+"/fstype"]
+
+		if "fstype" not in self.settings:
+			self.settings["fstype"]="normal"
+			for x in self.valid_values:
+				if x ==  self.settings["spec_prefix"]+"/fstype":
+					print "\n"+self.settings["spec_prefix"]+\
+						"/fstype is being set to the default of \"normal\"\n"
+
+	def set_fsops(self):
+		if "fstype" in self.settings:
+			self.valid_values.append("fsops")
+			if self.settings["spec_prefix"]+"/fsops" in self.settings:
+				self.settings["fsops"]=\
+					self.settings[self.settings["spec_prefix"]+"/fsops"]
+				del self.settings[self.settings["spec_prefix"]+"/fsops"]
+
+	def set_source_path(self):
+		if "SEEDCACHE" in self.settings\
+			and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+\
+				self.settings["source_subpath"]+"/")):
+			self.settings["source_path"]=normpath(self.settings["storedir"]+\
+				"/tmp/"+self.settings["source_subpath"]+"/")
+		else:
+			self.settings["source_path"] = normpath(self.settings["storedir"] +
+				"/builds/" + self.settings["source_subpath"].rstrip("/") +
+				".tar.bz2")
+			if os.path.isfile(self.settings["source_path"]):
+				# XXX: Is this even necessary if the previous check passes?
+				if os.path.exists(self.settings["source_path"]):
+					self.settings["source_path_hash"]=\
+						generate_hash(self.settings["source_path"],\
+						hash_function=self.settings["hash_function"],\
+						verbose=False)
+		print "Source path set to "+self.settings["source_path"]
+		if os.path.isdir(self.settings["source_path"]):
+			print "\tIf this is not desired, remove this directory or turn off"
+			print "\tseedcache in the options of catalyst.conf; the source path"
+			print "\twill then be "+\
+				normpath(self.settings["storedir"] + "/builds/" +
+					self.settings["source_subpath"].rstrip("/") + ".tar.bz2\n")
+
+	def set_dest_path(self):
+		if "root_path" in self.settings:
+			self.settings["destpath"]=normpath(self.settings["chroot_path"]+\
+				self.settings["root_path"])
+		else:
+			self.settings["destpath"]=normpath(self.settings["chroot_path"])
+
+	def set_cleanables(self):
+		self.settings["cleanables"]=["/etc/resolv.conf","/var/tmp/*","/tmp/*",\
+			"/root/*", self.settings["portdir"]]
+
+	def set_snapshot_path(self):
+		self.settings["snapshot_path"] = normpath(self.settings["storedir"] +
+			"/snapshots/" + self.settings["snapshot_name"] +
+			self.settings["snapshot"].rstrip("/") + ".tar.xz")
+
+		if os.path.exists(self.settings["snapshot_path"]):
+			self.settings["snapshot_path_hash"]=\
+				generate_hash(self.settings["snapshot_path"],\
+				hash_function=self.settings["hash_function"],verbose=False)
+		else:
+			self.settings["snapshot_path"]=normpath(self.settings["storedir"]+\
+				"/snapshots/" + self.settings["snapshot_name"] +
+				self.settings["snapshot"].rstrip("/") + ".tar.bz2")
+
+			if os.path.exists(self.settings["snapshot_path"]):
+				self.settings["snapshot_path_hash"]=\
+					generate_hash(self.settings["snapshot_path"],\
+					hash_function=self.settings["hash_function"],verbose=False)
+
+	def set_snapcache_path(self):
+		if "SNAPCACHE" in self.settings:
+			self.settings["snapshot_cache_path"]=\
+				normpath(self.settings["snapshot_cache"]+"/"+\
+				self.settings["snapshot"])
+			self.snapcache_lock=\
+				catalyst_lock.LockDir(self.settings["snapshot_cache_path"])
+			print "Caching snapshot to "+self.settings["snapshot_cache_path"]
+
+	def set_chroot_path(self):
+		"""
+		NOTE: the trailing slash has been removed
+		Things *could* break if you don't use a proper join()
+		"""
+		self.settings["chroot_path"]=normpath(self.settings["storedir"]+\
+			"/tmp/"+self.settings["target_subpath"])
+		self.chroot_lock=catalyst_lock.LockDir(self.settings["chroot_path"])
+
+	def set_autoresume_path(self):
+		self.settings["autoresume_path"]=normpath(self.settings["storedir"]+\
+			"/tmp/"+self.settings["rel_type"]+"/"+".autoresume-"+\
+			self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
+			self.settings["version_stamp"]+"/")
+		if "AUTORESUME" in self.settings:
+			print "The autoresume path is " + self.settings["autoresume_path"]
+		if not os.path.exists(self.settings["autoresume_path"]):
+			os.makedirs(self.settings["autoresume_path"],0755)
+
+	def set_controller_file(self):
+		self.settings["controller_file"]=normpath(self.settings["sharedir"]+\
+			"/targets/"+self.settings["target"]+"/"+self.settings["target"]+\
+			"-controller.sh")
+
+	def set_iso_volume_id(self):
+		if self.settings["spec_prefix"]+"/volid" in self.settings:
+			self.settings["iso_volume_id"]=\
+				self.settings[self.settings["spec_prefix"]+"/volid"]
+			if len(self.settings["iso_volume_id"])>32:
+				raise CatalystError,\
+					"ISO volume ID must not exceed 32 characters."
+		else:
+			self.settings["iso_volume_id"]="catalyst "+self.settings["snapshot"]
+
+	def set_action_sequence(self):
+		""" Default action sequence for run method """
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+				"setup_confdir","portage_overlay",\
+				"base_dirs","bind","chroot_setup","setup_environment",\
+				"run_local","preclean","unbind","clean"]
+#		if "TARBALL" in self.settings or \
+#			"FETCH" not in self.settings:
+		if "FETCH" not in self.settings:
+			self.settings["action_sequence"].append("capture")
+		self.settings["action_sequence"].append("clear_autoresume")
+
+	def set_use(self):
+		if self.settings["spec_prefix"]+"/use" in self.settings:
+			self.settings["use"]=\
+				self.settings[self.settings["spec_prefix"]+"/use"]
+			del self.settings[self.settings["spec_prefix"]+"/use"]
+		if "use" not in self.settings:
+			self.settings["use"]=""
+		if type(self.settings["use"])==types.StringType:
+			self.settings["use"]=self.settings["use"].split()
+
+		# Force bindist when options ask for it
+		if "BINDIST" in self.settings:
+			self.settings["use"].append("bindist")
+
+	def set_stage_path(self):
+		self.settings["stage_path"]=normpath(self.settings["chroot_path"])
+
+	def set_mounts(self):
+		pass
+
+	def set_packages(self):
+		pass
+
+	def set_rm(self):
+		if self.settings["spec_prefix"]+"/rm" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/rm"])==types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/rm"]=\
+					self.settings[self.settings["spec_prefix"]+"/rm"].split()
+
+	def set_linuxrc(self):
+		if self.settings["spec_prefix"]+"/linuxrc" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/linuxrc"])==types.StringType:
+				self.settings["linuxrc"]=\
+					self.settings[self.settings["spec_prefix"]+"/linuxrc"]
+				del self.settings[self.settings["spec_prefix"]+"/linuxrc"]
+
+	def set_busybox_config(self):
+		if self.settings["spec_prefix"]+"/busybox_config" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/busybox_config"])==types.StringType:
+				self.settings["busybox_config"]=\
+					self.settings[self.settings["spec_prefix"]+"/busybox_config"]
+				del self.settings[self.settings["spec_prefix"]+"/busybox_config"]
+
+	def set_portage_overlay(self):
+		if "portage_overlay" in self.settings:
+			if type(self.settings["portage_overlay"])==types.StringType:
+				self.settings["portage_overlay"]=\
+					self.settings["portage_overlay"].split()
+			print "portage_overlay directories are set to: \""+\
+				string.join(self.settings["portage_overlay"])+"\""
+
+	def set_overlay(self):
+		if self.settings["spec_prefix"]+"/overlay" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/overlay"])==types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/overlay"]=\
+					self.settings[self.settings["spec_prefix"]+\
+					"/overlay"].split()
+
+	def set_root_overlay(self):
+		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/root_overlay"])==types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/root_overlay"]=\
+					self.settings[self.settings["spec_prefix"]+\
+					"/root_overlay"].split()
+
+	def set_root_path(self):
+		""" ROOT= variable for emerges """
+		self.settings["root_path"]="/"
+
+	def set_valid_build_kernel_vars(self,addlargs):
+		if "boot/kernel" in addlargs:
+			if type(addlargs["boot/kernel"])==types.StringType:
+				loopy=[addlargs["boot/kernel"]]
+			else:
+				loopy=addlargs["boot/kernel"]
+
+			for x in loopy:
+				self.valid_values.append("boot/kernel/"+x+"/aliases")
+				self.valid_values.append("boot/kernel/"+x+"/config")
+				self.valid_values.append("boot/kernel/"+x+"/console")
+				self.valid_values.append("boot/kernel/"+x+"/extraversion")
+				self.valid_values.append("boot/kernel/"+x+"/gk_action")
+				self.valid_values.append("boot/kernel/"+x+"/gk_kernargs")
+				self.valid_values.append("boot/kernel/"+x+"/initramfs_overlay")
+				self.valid_values.append("boot/kernel/"+x+"/machine_type")
+				self.valid_values.append("boot/kernel/"+x+"/sources")
+				self.valid_values.append("boot/kernel/"+x+"/softlevel")
+				self.valid_values.append("boot/kernel/"+x+"/use")
+				self.valid_values.append("boot/kernel/"+x+"/packages")
+				if "boot/kernel/"+x+"/packages" in addlargs:
+					if type(addlargs["boot/kernel/"+x+\
+						"/packages"])==types.StringType:
+						addlargs["boot/kernel/"+x+"/packages"]=\
+							[addlargs["boot/kernel/"+x+"/packages"]]
+
+	def set_build_kernel_vars(self):
+		if self.settings["spec_prefix"]+"/gk_mainargs" in self.settings:
+			self.settings["gk_mainargs"]=\
+				self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
+			del self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
+
+	def kill_chroot_pids(self):
+		print "Checking for processes running in chroot and killing them."
+
+		"""
+		Force environment variables to be exported so script can see them
+		"""
+		self.setup_environment()
+
+		if os.path.exists(self.settings["sharedir"]+\
+			"/targets/support/kill-chroot-pids.sh"):
+			cmd("/bin/bash "+self.settings["sharedir"]+\
+				"/targets/support/kill-chroot-pids.sh",\
+				"kill-chroot-pids script failed.",env=self.env)
+
+	def mount_safety_check(self):
+		"""
+		Check and verify that none of our mount points are still mounted. We
+		don't want to clean up with things still mounted; this attempts an
+		auto-unbind and raises CatalystError if anything remains mounted.
+		"""
+
+		if not os.path.exists(self.settings["chroot_path"]):
+			return
+
+		print "self.mounts =", self.mounts
+		for x in self.mounts:
+			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
+			print "mount_safety_check() x =", x, target
+			if not os.path.exists(target):
+				continue
+
+			if ismount(target):
+				""" Something is still mounted """
+				try:
+					print target + " is still mounted; performing auto-bind-umount...",
+					""" Try to umount stuff ourselves """
+					self.unbind()
+					if ismount(target):
+						raise CatalystError, "Auto-unbind failed for " + target
+					else:
+						print "Auto-unbind successful..."
+				except CatalystError:
+					raise CatalystError, "Unable to auto-unbind " + target
+
+	def unpack(self):
+		unpack=True
+
+		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+\
+			"unpack")
+
+		if "SEEDCACHE" in self.settings:
+			if os.path.isdir(self.settings["source_path"]):
+				""" SEEDCACHE Is a directory, use rsync """
+				unpack_cmd="rsync -a --delete "+self.settings["source_path"]+\
+					" "+self.settings["chroot_path"]
+				display_msg="\nStarting rsync from "+\
+					self.settings["source_path"]+"\nto "+\
+					self.settings["chroot_path"]+\
+					" (This may take some time) ...\n"
+				error_msg="Rsync of "+self.settings["source_path"]+" to "+\
+					self.settings["chroot_path"]+" failed."
+			else:
+				""" SEEDCACHE is a not a directory, try untar'ing """
+				print "Referenced SEEDCACHE does not appear to be a directory, trying to untar..."
+				display_msg="\nStarting tar extract from "+\
+					self.settings["source_path"]+"\nto "+\
+					self.settings["chroot_path"]+\
+						" (This may take some time) ...\n"
+				if "bz2" == self.settings["source_path"][-3:]:
+					unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
+						self.settings["chroot_path"]
+				else:
+					unpack_cmd="tar xpf "+self.settings["source_path"]+" -C "+\
+						self.settings["chroot_path"]
+				error_msg="Tarball extraction of "+\
+					self.settings["source_path"]+" to "+\
+					self.settings["chroot_path"]+" failed."
+		else:
+			""" No SEEDCACHE, use tar """
+			display_msg="\nStarting tar extract from "+\
+				self.settings["source_path"]+"\nto "+\
+				self.settings["chroot_path"]+\
+				" (This may take some time) ...\n"
+			if "bz2" == self.settings["source_path"][-3:]:
+				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
+					self.settings["chroot_path"]
+			else:
+				unpack_cmd="tar xpf "+self.settings["source_path"]+" -C "+\
+					self.settings["chroot_path"]
+			error_msg="Tarball extraction of "+self.settings["source_path"]+\
+				" to "+self.settings["chroot_path"]+" failed."
+
+		if "AUTORESUME" in self.settings:
+			if os.path.isdir(self.settings["source_path"]) \
+				and os.path.exists(self.settings["autoresume_path"]+"unpack"):
+				""" Autoresume is valid, SEEDCACHE is valid """
+				unpack=False
+				invalid_snapshot=False
+
+			elif os.path.isfile(self.settings["source_path"]) \
+				and self.settings["source_path_hash"]==clst_unpack_hash:
+				""" Autoresume is valid, tarball is valid """
+				unpack=False
+				invalid_snapshot=True
+
+			elif os.path.isdir(self.settings["source_path"]) \
+				and not os.path.exists(self.settings["autoresume_path"]+\
+				"unpack"):
+				""" Autoresume is invalid, SEEDCACHE """
+				unpack=True
+				invalid_snapshot=False
+
+			elif os.path.isfile(self.settings["source_path"]) \
+				and self.settings["source_path_hash"]!=clst_unpack_hash:
+				""" Autoresume is invalid, tarball """
+				unpack=True
+				invalid_snapshot=True
+		else:
+			""" No autoresume, SEEDCACHE """
+			if "SEEDCACHE" in self.settings:
+				""" SEEDCACHE so let's run rsync and let it clean up """
+				if os.path.isdir(self.settings["source_path"]):
+					unpack=True
+					invalid_snapshot=False
+				elif os.path.isfile(self.settings["source_path"]):
+					""" Tarball so unpack and remove anything already there """
+					unpack=True
+					invalid_snapshot=True
+			else:
+				""" No autoresume, no SEEDCACHE """
+				""" Tarball so unpack and remove anything already there """
+				if os.path.isfile(self.settings["source_path"]):
+					unpack=True
+					invalid_snapshot=True
+				elif os.path.isdir(self.settings["source_path"]):
+					""" We should never reach this, so something is very wrong """
+					raise CatalystError,\
+						"source path is a dir but seedcache is not enabled"
+
+		if unpack:
+			self.mount_safety_check()
+
+			if invalid_snapshot:
+				if "AUTORESUME" in self.settings:
+					print "No Valid Resume point detected, cleaning up..."
+
+				self.clear_autoresume()
+				self.clear_chroot()
+
+			if not os.path.exists(self.settings["chroot_path"]):
+				os.makedirs(self.settings["chroot_path"])
+
+			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
+				os.makedirs(self.settings["chroot_path"]+"/tmp",1777)
+
+			if "PKGCACHE" in self.settings:
+				if not os.path.exists(self.settings["pkgcache_path"]):
+					os.makedirs(self.settings["pkgcache_path"],0755)
+
+			if "KERNCACHE" in self.settings:
+				if not os.path.exists(self.settings["kerncache_path"]):
+					os.makedirs(self.settings["kerncache_path"],0755)
+
+			print display_msg
+			cmd(unpack_cmd,error_msg,env=self.env)
+
+			if "source_path_hash" in self.settings:
+				myf=open(self.settings["autoresume_path"]+"unpack","w")
+				myf.write(self.settings["source_path_hash"])
+				myf.close()
+			else:
+				touch(self.settings["autoresume_path"]+"unpack")
+		else:
+			print "Resume point detected, skipping unpack operation..."
+
+	def unpack_snapshot(self):
+		unpack=True
+		snapshot_hash=read_from_clst(self.settings["autoresume_path"]+\
+			"unpack_portage")
+
+		if "SNAPCACHE" in self.settings:
+			snapshot_cache_hash=\
+				read_from_clst(self.settings["snapshot_cache_path"]+\
+				"catalyst-hash")
+			destdir=self.settings["snapshot_cache_path"]
+			if "bz2" == self.settings["snapshot_path"][-3:]:
+				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+destdir
+			else:
+				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+destdir
+			unpack_errmsg="Error unpacking snapshot"
+			cleanup_msg="Cleaning up invalid snapshot cache at \n\t"+\
+				self.settings["snapshot_cache_path"]+\
+				" (This can take a long time)..."
+			cleanup_errmsg="Error removing existing snapshot cache directory."
+			self.snapshot_lock_object=self.snapcache_lock
+
+			if self.settings["snapshot_path_hash"]==snapshot_cache_hash:
+				print "Valid snapshot cache, skipping unpack of portage tree..."
+				unpack=False
+		else:
+			destdir = normpath(self.settings["chroot_path"] + self.settings["portdir"])
+			cleanup_errmsg="Error removing existing snapshot directory."
+			cleanup_msg=\
+				"Cleaning up existing portage tree (This can take a long time)..."
+			if "bz2" == self.settings["snapshot_path"][-3:]:
+				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+\
+					self.settings["chroot_path"]+"/usr"
+			else:
+				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+\
+					self.settings["chroot_path"]+"/usr"
+			unpack_errmsg="Error unpacking snapshot"
+
+			if "AUTORESUME" in self.settings \
+				and os.path.exists(self.settings["chroot_path"]+\
+					self.settings["portdir"]) \
+				and os.path.exists(self.settings["autoresume_path"]\
+					+"unpack_portage") \
+				and self.settings["snapshot_path_hash"] == snapshot_hash:
+					print \
+						"Valid Resume point detected, skipping unpack of portage tree..."
+					unpack=False
+
+		if unpack:
+			if "SNAPCACHE" in self.settings:
+				self.snapshot_lock_object.write_lock()
+			if os.path.exists(destdir):
+				print cleanup_msg
+				cleanup_cmd="rm -rf "+destdir
+				cmd(cleanup_cmd,cleanup_errmsg,env=self.env)
+			if not os.path.exists(destdir):
+				os.makedirs(destdir,0755)
+
+			print "Unpacking portage tree (This can take a long time) ..."
+			cmd(unpack_cmd,unpack_errmsg,env=self.env)
+
+			if "SNAPCACHE" in self.settings:
+				myf=open(self.settings["snapshot_cache_path"]+"catalyst-hash","w")
+				myf.write(self.settings["snapshot_path_hash"])
+				myf.close()
+			else:
+				print "Setting snapshot autoresume point"
+				myf=open(self.settings["autoresume_path"]+"unpack_portage","w")
+				myf.write(self.settings["snapshot_path_hash"])
+				myf.close()
+
+			if "SNAPCACHE" in self.settings:
+				self.snapshot_lock_object.unlock()
+
+	def config_profile_link(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"config_profile_link"):
+			print \
+				"Resume point detected, skipping config_profile_link operation..."
+		else:
+			# TODO: zmedico and I discussed making this a directory and pushing
+			# in a parent file, as well as other user-specified configuration.
+			print "Configuring profile link..."
+			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.profile",\
+					"Error zapping profile link",env=self.env)
+			cmd("mkdir -p "+self.settings["chroot_path"]+"/etc/portage/")
+			cmd("ln -sf ../.." + self.settings["portdir"] + "/profiles/" + \
+				self.settings["target_profile"]+" "+\
+				self.settings["chroot_path"]+"/etc/portage/make.profile",\
+				"Error creating profile link",env=self.env)
+			touch(self.settings["autoresume_path"]+"config_profile_link")
+
+	def setup_confdir(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"setup_confdir"):
+			print "Resume point detected, skipping setup_confdir operation..."
+		else:
+			if "portage_confdir" in self.settings:
+				print "Configuring /etc/portage..."
+				cmd("rsync -a "+self.settings["portage_confdir"]+"/ "+\
+					self.settings["chroot_path"]+"/etc/portage/",\
+					"Error copying /etc/portage",env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_confdir")
+
+	def portage_overlay(self):
+		""" We copy the contents of our overlays to /usr/local/portage """
+		if "portage_overlay" in self.settings:
+			for x in self.settings["portage_overlay"]:
+				if os.path.exists(x):
+					print "Copying overlay dir " +x
+					cmd("mkdir -p "+self.settings["chroot_path"]+\
+						self.settings["local_overlay"],\
+						"Could not make portage_overlay dir",env=self.env)
+					cmd("cp -R "+x+"/* "+self.settings["chroot_path"]+\
+						self.settings["local_overlay"],\
+						"Could not copy portage_overlay",env=self.env)
+
+	def root_overlay(self):
+		""" Copy over the root_overlay """
+		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
+			for x in self.settings[self.settings["spec_prefix"]+\
+				"/root_overlay"]:
+				if os.path.exists(x):
+					print "Copying root_overlay: "+x
+					cmd("rsync -a "+x+"/ "+\
+						self.settings["chroot_path"],\
+						self.settings["spec_prefix"]+"/root_overlay: "+x+\
+						" copy failed.",env=self.env)
+
+	def base_dirs(self):
+		pass
+
+	def bind(self):
+		for x in self.mounts:
+			#print "bind(); x =", x
+			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
+			if not os.path.exists(target):
+				os.makedirs(target, 0755)
+
+			if not os.path.exists(self.mountmap[x]):
+				if self.mountmap[x] not in ["tmpfs", "shmfs"]:
+					os.makedirs(self.mountmap[x], 0755)
+
+			src=self.mountmap[x]
+			#print "bind(); src =", src
+			""" Ensure retval is defined even when no mount is attempted """
+			retval=0
+			if "SNAPCACHE" in self.settings and x == "portdir":
+				self.snapshot_lock_object.read_lock()
+			if os.uname()[0] == "FreeBSD":
+				if src == "/dev":
+					cmd = "mount -t devfs none " + target
+					retval=os.system(cmd)
+				else:
+					cmd = "mount_nullfs " + src + " " + target
+					retval=os.system(cmd)
+			else:
+				if src == "tmpfs":
+					if "var_tmpfs_portage" in self.settings:
+						cmd = "mount -t tmpfs -o size=" + \
+							self.settings["var_tmpfs_portage"] + "G " + \
+							src + " " + target
+						retval=os.system(cmd)
+				elif src == "shmfs":
+					cmd = "mount -t tmpfs -o noexec,nosuid,nodev shm " + target
+					retval=os.system(cmd)
+				else:
+					cmd = "mount --bind " + src + " " + target
+					#print "bind(); cmd =", cmd
+					retval=os.system(cmd)
+			if retval!=0:
+				self.unbind()
+				raise CatalystError,"Couldn't bind mount " + src
+
+	def unbind(self):
+		ouch=0
+		mypath=self.settings["chroot_path"]
+		myrevmounts=self.mounts[:]
+		myrevmounts.reverse()
+		""" Unmount in reverse order for nested bind-mounts """
+		for x in myrevmounts:
+			target = normpath(mypath + self.target_mounts[x])
+			if not os.path.exists(target):
+				continue
+
+			if not ismount(target):
+				continue
+
+			retval=os.system("umount " + target)
+
+			if retval!=0:
+				warn("First attempt to unmount: " + target + " failed.")
+				warn("Killing any pids still running in the chroot")
+
+				self.kill_chroot_pids()
+
+				retval2 = os.system("umount " + target)
+				if retval2!=0:
+					ouch=1
+					warn("Couldn't umount bind mount: " + target)
+
+			if "SNAPCACHE" in self.settings and x == "portdir":
+				try:
+					"""
+					It's possible the snapshot lock object isn't created yet.
+					This is because mount safety check calls unbind before the
+					target is fully initialized
+					"""
+					self.snapshot_lock_object.unlock()
+				except:
+					pass
+		if ouch:
+			"""
+			if any bind mounts really failed, then we need to raise
+			this to potentially prevent an upcoming bash stage cleanup script
+			from wiping our bind mounts.
+			"""
+			raise CatalystError,\
+				"Couldn't umount one or more bind-mounts; aborting for safety."
+
+	def chroot_setup(self):
+		self.makeconf=read_makeconf(self.settings["chroot_path"]+\
+			"/etc/portage/make.conf")
+		self.override_cbuild()
+		self.override_chost()
+		self.override_cflags()
+		self.override_cxxflags()
+		self.override_ldflags()
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"chroot_setup"):
+			print "Resume point detected, skipping chroot_setup operation..."
+		else:
+			print "Setting up chroot..."
+
+			#self.makeconf=read_makeconf(self.settings["chroot_path"]+"/etc/portage/make.conf")
+
+			cmd("cp /etc/resolv.conf "+self.settings["chroot_path"]+"/etc",\
+				"Could not copy resolv.conf into place.",env=self.env)
+
+			""" Copy over the envscript, if applicable """
+			if "ENVSCRIPT" in self.settings:
+				if not os.path.exists(self.settings["ENVSCRIPT"]):
+					raise CatalystError,\
+						"Can't find envscript "+self.settings["ENVSCRIPT"]
+
+				print "\nWarning!!!!"
+				print "\tOverriding certain env variables may cause catastrophic failure."
+				print "\tIf your build fails look here first as the possible problem."
+				print "\tCatalyst assumes you know what you are doing when setting"
+				print "\t\tthese variables."
+				print "\tCatalyst Maintainers use VERY minimal envscripts if used at all"
+				print "\tYou have been warned\n"
+
+				cmd("cp "+self.settings["ENVSCRIPT"]+" "+\
+					self.settings["chroot_path"]+"/tmp/envscript",\
+					"Could not copy envscript into place.",env=self.env)
+
+			"""
+			Copy over /etc/hosts from the host in case there are any
+			specialties in there
+			"""
+			if os.path.exists(self.settings["chroot_path"]+"/etc/hosts"):
+				cmd("mv "+self.settings["chroot_path"]+"/etc/hosts "+\
+					self.settings["chroot_path"]+"/etc/hosts.catalyst",\
+					"Could not backup /etc/hosts",env=self.env)
+				cmd("cp /etc/hosts "+self.settings["chroot_path"]+"/etc/hosts",\
+					"Could not copy /etc/hosts",env=self.env)
+
+			""" Modify and write out make.conf (for the chroot) """
+			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.conf",\
+				"Could not remove "+self.settings["chroot_path"]+\
+				"/etc/portage/make.conf",env=self.env)
+			myf=open(self.settings["chroot_path"]+"/etc/portage/make.conf","w")
+			myf.write("# These settings were set by the catalyst build script that automatically\n# built this stage.\n")
+			myf.write("# Please consult /usr/share/portage/config/make.conf.example for a more\n# detailed example.\n")
+			if "CFLAGS" in self.settings:
+				myf.write('CFLAGS="'+self.settings["CFLAGS"]+'"\n')
+			if "CXXFLAGS" in self.settings:
+				if self.settings["CXXFLAGS"]!=self.settings["CFLAGS"]:
+					myf.write('CXXFLAGS="'+self.settings["CXXFLAGS"]+'"\n')
+				else:
+					myf.write('CXXFLAGS="${CFLAGS}"\n')
+			else:
+				myf.write('CXXFLAGS="${CFLAGS}"\n')
+
+			if "LDFLAGS" in self.settings:
+				myf.write("# LDFLAGS is unsupported.  USE AT YOUR OWN RISK!\n")
+				myf.write('LDFLAGS="'+self.settings["LDFLAGS"]+'"\n')
+			if "CBUILD" in self.settings:
+				myf.write("# This should not be changed unless you know exactly what you are doing.  You\n# should probably be using a different stage, instead.\n")
+				myf.write('CBUILD="'+self.settings["CBUILD"]+'"\n')
+
+			myf.write("# WARNING: Changing your CHOST is not something that should be done lightly.\n# Please consult http://www.gentoo.org/doc/en/change-chost.xml before changing.\n")
+			myf.write('CHOST="'+self.settings["CHOST"]+'"\n')
+
+			""" Figure out what our USE vars are for building """
+			myusevars=[]
+			if "HOSTUSE" in self.settings:
+				myusevars.extend(self.settings["HOSTUSE"])
+
+			if "use" in self.settings:
+				myusevars.extend(self.settings["use"])
+
+			if myusevars:
+				myf.write("# These are the USE flags that were used in addition to what is provided by the\n# profile used for building.\n")
+				myusevars = sorted(set(myusevars))
+				myf.write('USE="'+string.join(myusevars)+'"\n')
+				if '-*' in myusevars:
+					print "\nWarning!!!  "
+					print "\tThe use of -* in "+self.settings["spec_prefix"]+\
+						"/use will cause portage to ignore"
+					print "\tpackage.use in the profile and portage_confdir. You've been warned!"
+
+			myf.write('PORTDIR="%s"\n' % self.settings['portdir'])
+			myf.write('DISTDIR="%s"\n' % self.settings['distdir'])
+			myf.write('PKGDIR="%s"\n' % self.settings['packagedir'])
+
+			""" Setup the portage overlay """
+			if "portage_overlay" in self.settings:
+				myf.write('PORTDIR_OVERLAY="/usr/local/portage"\n')
+
+			myf.close()
+			cmd("cp "+self.settings["chroot_path"]+"/etc/portage/make.conf "+\
+				self.settings["chroot_path"]+"/etc/portage/make.conf.catalyst",\
+				"Could not backup /etc/portage/make.conf",env=self.env)
+			touch(self.settings["autoresume_path"]+"chroot_setup")
+
+	def fsscript(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"fsscript"):
+			print "Resume point detected, skipping fsscript operation..."
+		else:
+			if "fsscript" in self.settings:
+				if os.path.exists(self.settings["controller_file"]):
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" fsscript","fsscript script failed.",env=self.env)
+					touch(self.settings["autoresume_path"]+"fsscript")
+
+	def rcupdate(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"rcupdate"):
+			print "Resume point detected, skipping rcupdate operation..."
+		else:
+			if os.path.exists(self.settings["controller_file"]):
+				cmd("/bin/bash "+self.settings["controller_file"]+" rc-update",\
+					"rc-update script failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"rcupdate")
+
+	def clean(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"clean"):
+			print "Resume point detected, skipping clean operation..."
+		else:
+			for x in self.settings["cleanables"]:
+				print "Cleaning chroot: "+x+"... "
+				cmd("rm -rf "+self.settings["destpath"]+x,"Couldn't clean "+\
+					x,env=self.env)
+
+		""" Put /etc/hosts back into place """
+		if os.path.exists(self.settings["chroot_path"]+"/etc/hosts.catalyst"):
+			cmd("mv -f "+self.settings["chroot_path"]+"/etc/hosts.catalyst "+\
+				self.settings["chroot_path"]+"/etc/hosts",\
+				"Could not replace /etc/hosts",env=self.env)
+
+		""" Remove our overlay """
+		if os.path.exists(self.settings["chroot_path"] + self.settings["local_overlay"]):
+			cmd("rm -rf " + self.settings["chroot_path"] + self.settings["local_overlay"],
+				"Could not remove " + self.settings["local_overlay"], env=self.env)
+			cmd("sed -i '/^PORTDIR_OVERLAY/d' "+self.settings["chroot_path"]+\
+				"/etc/portage/make.conf",\
+				"Could not remove PORTDIR_OVERLAY from make.conf",env=self.env)
+
+		""" Clean up old and obsoleted files in /etc """
+		if os.path.exists(self.settings["stage_path"]+"/etc"):
+			cmd("find "+self.settings["stage_path"]+\
+				"/etc -maxdepth 1 -name \"*-\" | xargs rm -f",\
+				"Could not remove stray files in /etc",env=self.env)
+
+		if os.path.exists(self.settings["controller_file"]):
+			cmd("/bin/bash "+self.settings["controller_file"]+" clean",\
+				"clean script failed.",env=self.env)
+			touch(self.settings["autoresume_path"]+"clean")
+
+	def empty(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"empty"):
+			print "Resume point detected, skipping empty operation..."
+		else:
+			if self.settings["spec_prefix"]+"/empty" in self.settings:
+				if type(self.settings[self.settings["spec_prefix"]+\
+					"/empty"])==types.StringType:
+					self.settings[self.settings["spec_prefix"]+"/empty"]=\
+						self.settings[self.settings["spec_prefix"]+\
+						"/empty"].split()
+				for x in self.settings[self.settings["spec_prefix"]+"/empty"]:
+					myemp=self.settings["destpath"]+x
+					if not os.path.isdir(myemp) or os.path.islink(myemp):
+						print x,"not a directory or does not exist, skipping 'empty' operation."
+						continue
+					print "Emptying directory",x
+					"""
+					stat the dir, delete the dir, recreate the dir and set
+					the proper perms and ownership
+					"""
+					mystat=os.stat(myemp)
+					shutil.rmtree(myemp)
+					os.makedirs(myemp,0755)
+					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+					os.chmod(myemp,mystat[ST_MODE])
+			touch(self.settings["autoresume_path"]+"empty")
+
+	def remove(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"remove"):
+			print "Resume point detected, skipping remove operation..."
+		else:
+			if self.settings["spec_prefix"]+"/rm" in self.settings:
+				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
+					"""
+					We're going to shell out for all these cleaning
+					operations, so we get easy glob handling.
+					"""
+					print "livecd: removing "+x
+					os.system("rm -rf "+self.settings["chroot_path"]+x)
+				try:
+					if os.path.exists(self.settings["controller_file"]):
+						cmd("/bin/bash "+self.settings["controller_file"]+\
+							" clean","Clean script failed.",env=self.env)
+						touch(self.settings["autoresume_path"]+"remove")
+				except:
+					self.unbind()
+					raise
+
+	def preclean(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"preclean"):
+			print "Resume point detected, skipping preclean operation..."
+		else:
+			try:
+				if os.path.exists(self.settings["controller_file"]):
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" preclean","preclean script failed.",env=self.env)
+					touch(self.settings["autoresume_path"]+"preclean")
+
+			except:
+				self.unbind()
+				raise CatalystError, "Build failed, could not execute preclean"
+
+	def capture(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"capture"):
+			print "Resume point detected, skipping capture operation..."
+		else:
+			""" Capture target in a tarball """
+			mypath=self.settings["target_path"].split("/")
+			""" Remove filename from path """
+			mypath=string.join(mypath[:-1],"/")
+
+			""" Now make sure path exists """
+			if not os.path.exists(mypath):
+				os.makedirs(mypath)
+
+			print "Creating stage tarball..."
+
+			cmd("tar -I lbzip2 -cpf "+self.settings["target_path"]+" -C "+\
+				self.settings["stage_path"]+" .",\
+				"Couldn't create stage tarball",env=self.env)
+
+			self.gen_contents_file(self.settings["target_path"])
+			self.gen_digest_file(self.settings["target_path"])
+
+			touch(self.settings["autoresume_path"]+"capture")
+
+	def run_local(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"run_local"):
+			print "Resume point detected, skipping run_local operation..."
+		else:
+			try:
+				if os.path.exists(self.settings["controller_file"]):
+					cmd("/bin/bash "+self.settings["controller_file"]+" run",\
+						"run script failed.",env=self.env)
+					touch(self.settings["autoresume_path"]+"run_local")
+
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"Stage build aborting due to error."
+
+	def setup_environment(self):
+		"""
+		Modify the current environment. This is an ugly hack that should be
+		fixed. We need this to use the os.system() call since we can't
+		specify our own environ
+		"""
+		for x in self.settings.keys():
+			""" Sanitize var names by doing "s|/-.|_|g" """
+			varname="clst_"+string.replace(x,"/","_")
+			varname=string.replace(varname,"-","_")
+			varname=string.replace(varname,".","_")
+			if type(self.settings[x])==types.StringType:
+				""" Prefix to prevent namespace clashes """
+				#os.environ[varname]=self.settings[x]
+				self.env[varname]=self.settings[x]
+			elif type(self.settings[x])==types.ListType:
+				#os.environ[varname]=string.join(self.settings[x])
+				self.env[varname]=string.join(self.settings[x])
+			elif type(self.settings[x])==types.BooleanType:
+				if self.settings[x]:
+					self.env[varname]="true"
+				else:
+					self.env[varname]="false"
+		if "makeopts" in self.settings:
+			self.env["MAKEOPTS"]=self.settings["makeopts"]
+
+	def run(self):
+		self.chroot_lock.write_lock()
+
+		""" Kill any pids in the chroot """
+		self.kill_chroot_pids()
+
+		""" Check for mounts right away and abort if we cannot unmount them """
+		self.mount_safety_check()
+
+		if "CLEAR_AUTORESUME" in self.settings:
+			self.clear_autoresume()
+
+		if "PURGETMPONLY" in self.settings:
+			self.purge()
+			return
+
+		if "PURGEONLY" in self.settings:
+			self.purge()
+			return
+
+		if "PURGE" in self.settings:
+			self.purge()
+
+		for x in self.settings["action_sequence"]:
+			print "--- Running action sequence: "+x
+			sys.stdout.flush()
+			try:
+				apply(getattr(self,x))
+			except:
+				self.mount_safety_check()
+				raise
+
+		self.chroot_lock.unlock()
+
+	def unmerge(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"unmerge"):
+			print "Resume point detected, skipping unmerge operation..."
+		else:
+			if self.settings["spec_prefix"]+"/unmerge" in self.settings:
+				if type(self.settings[self.settings["spec_prefix"]+\
+					"/unmerge"])==types.StringType:
+					self.settings[self.settings["spec_prefix"]+"/unmerge"]=\
+						[self.settings[self.settings["spec_prefix"]+"/unmerge"]]
+				myunmerge=\
+					self.settings[self.settings["spec_prefix"]+"/unmerge"][:]
+
+				for x in range(0,len(myunmerge)):
+					"""
+					Surround args with quotes for passing to bash, allows
+					things like "<" to remain intact
+					"""
+					myunmerge[x]="'"+myunmerge[x]+"'"
+				myunmerge=string.join(myunmerge)
+
+				""" Before cleaning, unmerge stuff """
+				try:
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" unmerge "+ myunmerge,"Unmerge script failed.",\
+						env=self.env)
+					print "unmerge shell script"
+				except CatalystError:
+					self.unbind()
+					raise
+				touch(self.settings["autoresume_path"]+"unmerge")
+
+	def target_setup(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"target_setup"):
+			print "Resume point detected, skipping target_setup operation..."
+		else:
+			print "Setting up filesystems per filesystem type"
+			cmd("/bin/bash "+self.settings["controller_file"]+\
+				" target_image_setup "+ self.settings["target_path"],\
+				"target_image_setup script failed.",env=self.env)
+			touch(self.settings["autoresume_path"]+"target_setup")
+
+	def setup_overlay(self):
+		if "AUTORESUME" in self.settings \
+		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
+			print "Resume point detected, skipping setup_overlay operation..."
+		else:
+			if self.settings["spec_prefix"]+"/overlay" in self.settings:
+				for x in self.settings[self.settings["spec_prefix"]+"/overlay"]:
+					if os.path.exists(x):
+						cmd("rsync -a "+x+"/ "+\
+							self.settings["target_path"],\
+							self.settings["spec_prefix"]+"/overlay: "+x+\
+							" copy failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_overlay")
+
+	def create_iso(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"create_iso"):
+			print "Resume point detected, skipping create_iso operation..."
+		else:
+			""" Create the ISO """
+			if "iso" in self.settings:
+				cmd("/bin/bash "+self.settings["controller_file"]+" iso "+\
+					self.settings["iso"],"ISO creation script failed.",\
+					env=self.env)
+				self.gen_contents_file(self.settings["iso"])
+				self.gen_digest_file(self.settings["iso"])
+				touch(self.settings["autoresume_path"]+"create_iso")
+			else:
+				print "WARNING: livecd/iso was not defined."
+				print "An ISO Image will not be created."
+
+	def build_packages(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"build_packages"):
+			print "Resume point detected, skipping build_packages operation..."
+		else:
+			if self.settings["spec_prefix"]+"/packages" in self.settings:
+				mypack=\
+					list_bashify(self.settings[self.settings["spec_prefix"]\
+					+"/packages"])
+				try:
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" build_packages "+mypack,\
+						"Error in attempt to build packages",env=self.env)
+					touch(self.settings["autoresume_path"]+"build_packages")
+				except CatalystError:
+					self.unbind()
+					raise CatalystError,self.settings["spec_prefix"]+\
+						" build aborting due to error."
+
+	def build_kernel(self):
+		"Build all configured kernels"
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"build_kernel"):
+			print "Resume point detected, skipping build_kernel operation..."
+		else:
+			if "boot/kernel" in self.settings:
+				try:
+					mynames=self.settings["boot/kernel"]
+					if type(mynames)==types.StringType:
+						mynames=[mynames]
+					"""
+					Execute the script that sets up the kernel build environment
+					"""
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" pre-kmerge ","Runscript pre-kmerge failed",\
+						env=self.env)
+					for kname in mynames:
+						self._build_kernel(kname=kname)
+					touch(self.settings["autoresume_path"]+"build_kernel")
+				except CatalystError:
+					self.unbind()
+					raise CatalystError,\
+						"build aborting due to kernel build error."
+
+	def _build_kernel(self, kname):
+		"Build a single configured kernel by name"
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]\
+				+"build_kernel_"+kname):
+			print "Resume point detected, skipping build_kernel for "+kname+" operation..."
+			return
+		self._copy_kernel_config(kname=kname)
+
+		"""
+		If we need to pass special options to the bootloader
+		for this kernel put them into the environment
+		"""
+		if "boot/kernel/"+kname+"/kernelopts" in self.settings:
+			myopts=self.settings["boot/kernel/"+kname+\
+				"/kernelopts"]
+
+			if type(myopts) != types.StringType:
+				myopts = string.join(myopts)
+			self.env[kname+"_kernelopts"]=myopts
+
+		if "boot/kernel/"+kname+"/extraversion" not in self.settings:
+			self.settings["boot/kernel/"+kname+\
+				"/extraversion"]=""
+
+		self.env["clst_kextraversion"]=\
+			self.settings["boot/kernel/"+kname+\
+			"/extraversion"]
+
+		self._copy_initramfs_overlay(kname=kname)
+
+		""" Execute the script that builds the kernel """
+		cmd("/bin/bash "+self.settings["controller_file"]+\
+			" kernel "+kname,\
+			"Runscript kernel build failed",env=self.env)
+
+		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
+			if os.path.exists(self.settings["chroot_path"]+\
+				"/tmp/initramfs_overlay/"):
+				print "Cleaning up temporary overlay dir"
+				cmd("rm -R "+self.settings["chroot_path"]+\
+					"/tmp/initramfs_overlay/",env=self.env)
+
+		touch(self.settings["autoresume_path"]+\
+			"build_kernel_"+kname)
+
+		"""
+		Execute the script that cleans up the kernel build
+		environment
+		"""
+		cmd("/bin/bash "+self.settings["controller_file"]+\
+			" post-kmerge ",
+			"Runscript post-kmerge failed",env=self.env)
+
+	def _copy_kernel_config(self, kname):
+		if "boot/kernel/"+kname+"/config" in self.settings:
+			if not os.path.exists(self.settings["boot/kernel/"+kname+"/config"]):
+				self.unbind()
+				raise CatalystError,\
+					"Can't find kernel config: "+\
+					self.settings["boot/kernel/"+kname+\
+					"/config"]
+
+			try:
+				cmd("cp "+self.settings["boot/kernel/"+kname+\
+					"/config"]+" "+\
+					self.settings["chroot_path"]+"/var/tmp/"+\
+					kname+".config",\
+					"Couldn't copy kernel config: "+\
+					self.settings["boot/kernel/"+kname+\
+					"/config"],env=self.env)
+
+			except CatalystError:
+				self.unbind()
+				raise
+
+	def _copy_initramfs_overlay(self, kname):
+		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
+			if os.path.exists(self.settings["boot/kernel/"+\
+				kname+"/initramfs_overlay"]):
+				print "Copying initramfs_overlay dir "+\
+					self.settings["boot/kernel/"+kname+\
+					"/initramfs_overlay"]
+
+				cmd("mkdir -p "+\
+					self.settings["chroot_path"]+\
+					"/tmp/initramfs_overlay/"+\
+					self.settings["boot/kernel/"+kname+\
+					"/initramfs_overlay"],env=self.env)
+
+				cmd("cp -R "+self.settings["boot/kernel/"+\
+					kname+"/initramfs_overlay"]+"/* "+\
+					self.settings["chroot_path"]+\
+					"/tmp/initramfs_overlay/"+\
+					self.settings["boot/kernel/"+kname+\
+					"/initramfs_overlay"],env=self.env)
+
+	def bootloader(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"bootloader"):
+			print "Resume point detected, skipping bootloader operation..."
+		else:
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+\
+					" bootloader " + self.settings["target_path"],\
+					"Bootloader script failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"bootloader")
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"Script aborting due to error."
+
+	def livecd_update(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"livecd_update"):
+			print "Resume point detected, skipping livecd_update operation..."
+		else:
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+\
+					" livecd-update","livecd-update failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"livecd_update")
+
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"build aborting due to livecd_update error."
+
+	def clear_chroot(self):
+		myemp=self.settings["chroot_path"]
+		if os.path.isdir(myemp):
+			print "Emptying directory",myemp
+			"""
+			stat the dir, delete the dir, recreate the dir and set
+			the proper perms and ownership
+			"""
+			mystat=os.stat(myemp)
+			#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+			""" There's no easy way to change flags recursively in python """
+			if os.uname()[0] == "FreeBSD":
+				os.system("chflags -R noschg "+myemp)
+			shutil.rmtree(myemp)
+			os.makedirs(myemp,0755)
+			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+			os.chmod(myemp,mystat[ST_MODE])
+
+	def clear_packages(self):
+		if "PKGCACHE" in self.settings:
+			print "purging the pkgcache ..."
+
+			myemp=self.settings["pkgcache_path"]
+			if os.path.isdir(myemp):
+				print "Emptying directory",myemp
+				"""
+				stat the dir, delete the dir, recreate the dir and set
+				the proper perms and ownership
+				"""
+				mystat=os.stat(myemp)
+				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+				shutil.rmtree(myemp)
+				os.makedirs(myemp,0755)
+				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+				os.chmod(myemp,mystat[ST_MODE])
+
+	def clear_kerncache(self):
+		if "KERNCACHE" in self.settings:
+			print "purging the kerncache ..."
+
+			myemp=self.settings["kerncache_path"]
+			if os.path.isdir(myemp):
+				print "Emptying directory",myemp
+				"""
+				stat the dir, delete the dir, recreate the dir and set
+				the proper perms and ownership
+				"""
+				mystat=os.stat(myemp)
+				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+				shutil.rmtree(myemp)
+				os.makedirs(myemp,0755)
+				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+				os.chmod(myemp,mystat[ST_MODE])
+
+	def clear_autoresume(self):
+		""" Clean resume points since they are no longer needed """
+		if "AUTORESUME" in self.settings:
+			print "Removing AutoResume Points: ..."
+		myemp=self.settings["autoresume_path"]
+		if os.path.isdir(myemp):
+			if "AUTORESUME" in self.settings:
+				print "Emptying directory",myemp
+			"""
+			stat the dir, delete the dir, recreate the dir and set
+			the proper perms and ownership
+			"""
+			mystat=os.stat(myemp)
+			if os.uname()[0] == "FreeBSD":
+				cmd("chflags -R noschg "+myemp,\
+					"Could not remove immutable flag for file "\
+					+myemp)
+			#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+			shutil.rmtree(myemp)
+			os.makedirs(myemp,0755)
+			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+			os.chmod(myemp,mystat[ST_MODE])
+
+	def gen_contents_file(self,file):
+		if os.path.exists(file+".CONTENTS"):
+			os.remove(file+".CONTENTS")
+		if "contents" in self.settings:
+			if os.path.exists(file):
+				myf=open(file+".CONTENTS","w")
+				keys={}
+				for i in self.settings["contents"].split():
+					keys[i]=1
+				array=keys.keys()
+				array.sort()
+				for j in array:
+					contents=generate_contents(file,contents_function=j,\
+						verbose="VERBOSE" in self.settings)
+					if contents:
+						myf.write(contents)
+				myf.close()
+
+	def gen_digest_file(self,file):
+		if os.path.exists(file+".DIGESTS"):
+			os.remove(file+".DIGESTS")
+		if "digests" in self.settings:
+			if os.path.exists(file):
+				myf=open(file+".DIGESTS","w")
+				keys={}
+				for i in self.settings["digests"].split():
+					keys[i]=1
+				array=keys.keys()
+				array.sort()
+				for f in [file, file+'.CONTENTS']:
+					if os.path.exists(f):
+						if "all" in array:
+							for k in hash_map.keys():
+								hash=generate_hash(f,hash_function=k,verbose=\
+									"VERBOSE" in self.settings)
+								myf.write(hash)
+						else:
+							for j in array:
+								hash=generate_hash(f,hash_function=j,verbose=\
+									"VERBOSE" in self.settings)
+								myf.write(hash)
+				myf.close()
+
+	def purge(self):
+		countdown(10,"Purging Caches ...")
+		if any(k in self.settings for k in ("PURGE","PURGEONLY","PURGETMPONLY")):
+			print "clearing autoresume ..."
+			self.clear_autoresume()
+
+			print "clearing chroot ..."
+			self.clear_chroot()
+
+			if "PURGETMPONLY" not in self.settings:
+				print "clearing package cache ..."
+				self.clear_packages()
+
+			print "clearing kerncache ..."
+			self.clear_kerncache()
+
+# vim: ts=4 sw=4 sta et sts=4 ai
diff --git a/catalyst/modules/generic_target.py b/catalyst/modules/generic_target.py
new file mode 100644
index 0000000..fe96bd7
--- /dev/null
+++ b/catalyst/modules/generic_target.py
@@ -0,0 +1,11 @@
+from catalyst_support import *
+
+class generic_target:
+	"""
+	The toplevel class for generic_stage_target. This is about as generic as we get.
+	"""
+	def __init__(self,myspec,addlargs):
+		addl_arg_parse(myspec,addlargs,self.required_values,self.valid_values)
+		self.settings=myspec
+		self.env={}
+		self.env["PATH"]="/bin:/sbin:/usr/bin:/usr/sbin"
diff --git a/catalyst/modules/grp_target.py b/catalyst/modules/grp_target.py
new file mode 100644
index 0000000..6941522
--- /dev/null
+++ b/catalyst/modules/grp_target.py
@@ -0,0 +1,118 @@
+"""
+Gentoo Reference Platform (GRP) target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,types,glob
+from catalyst_support import *
+from generic_stage_target import *
+
+class grp_target(generic_stage_target):
+	"""
+	The builder class for GRP (Gentoo Reference Platform) builds.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["version_stamp","target","subarch",\
+			"rel_type","profile","snapshot","source_subpath"]
+
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend(["grp/use"])
+		if "grp" not in addlargs:
+			raise CatalystError,"Required value \"grp\" not specified in spec."
+
+		self.required_values.extend(["grp"])
+		if type(addlargs["grp"])==types.StringType:
+			addlargs["grp"]=[addlargs["grp"]]
+
+		if "grp/use" in addlargs:
+			if type(addlargs["grp/use"])==types.StringType:
+				addlargs["grp/use"]=[addlargs["grp/use"]]
+
+		for x in addlargs["grp"]:
+			self.required_values.append("grp/"+x+"/packages")
+			self.required_values.append("grp/"+x+"/type")
+
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+			print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			#if os.path.isdir(self.settings["target_path"]):
+				#cmd("rm -rf "+self.settings["target_path"],
+				#"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
+			if not os.path.exists(self.settings["target_path"]):
+				os.makedirs(self.settings["target_path"])
+
+			touch(self.settings["autoresume_path"]+"setup_target_path")
+
+	def run_local(self):
+		for pkgset in self.settings["grp"]:
+			# example call: "grp.sh run pkgset cd1 xmms vim sys-apps/gleep"
+			mypackages=list_bashify(self.settings["grp/"+pkgset+"/packages"])
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+" run "+self.settings["grp/"+pkgset+"/type"]\
+					+" "+pkgset+" "+mypackages,env=self.env)
+
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"GRP build aborting due to error."
+
+	def set_use(self):
+		generic_stage_target.set_use(self)
+		if "BINDIST" in self.settings:
+			if "use" in self.settings:
+				self.settings["use"].append("bindist")
+			else:
+				self.settings["use"]=["bindist"]
+
+	def set_mounts(self):
+		self.mounts.append("/tmp/grp")
+		self.mountmap["/tmp/grp"]=self.settings["target_path"]
+
+	def generate_digests(self):
+		for pkgset in self.settings["grp"]:
+			if self.settings["grp/"+pkgset+"/type"] == "pkgset":
+				destdir=normpath(self.settings["target_path"]+"/"+pkgset+"/All")
+				print "Digesting files in the pkgset....."
+				digests=glob.glob(destdir+'/*.DIGESTS')
+				for i in digests:
+					if os.path.exists(i):
+						os.remove(i)
+
+				files=os.listdir(destdir)
+				#ignore files starting with '.' using list comprehension
+				files=[filename for filename in files if filename[0] != '.']
+				for i in files:
+					if os.path.isfile(normpath(destdir+"/"+i)):
+						self.gen_contents_file(normpath(destdir+"/"+i))
+						self.gen_digest_file(normpath(destdir+"/"+i))
+			else:
+				destdir=normpath(self.settings["target_path"]+"/"+pkgset)
+				print "Digesting files in the srcset....."
+
+				digests=glob.glob(destdir+'/*.DIGESTS')
+				for i in digests:
+					if os.path.exists(i):
+						os.remove(i)
+
+				files=os.listdir(destdir)
+				#ignore files starting with '.' using list comprehension
+				files=[filename for filename in files if filename[0] != '.']
+				for i in files:
+					if os.path.isfile(normpath(destdir+"/"+i)):
+						#self.gen_contents_file(normpath(destdir+"/"+i))
+						self.gen_digest_file(normpath(destdir+"/"+i))
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+			"config_profile_link","setup_confdir","portage_overlay",\
+			"bind","chroot_setup","setup_environment","run_local",\
+			"unbind","generate_digests","clear_autoresume"]
+
+def register(foo):
+	foo.update({"grp":grp_target})
+	return foo
diff --git a/catalyst/modules/livecd_stage1_target.py b/catalyst/modules/livecd_stage1_target.py
new file mode 100644
index 0000000..59de9bb
--- /dev/null
+++ b/catalyst/modules/livecd_stage1_target.py
@@ -0,0 +1,75 @@
+"""
+LiveCD stage1 target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst_support import *
+from generic_stage_target import *
+
+class livecd_stage1_target(generic_stage_target):
+	"""
+	Builder class for LiveCD stage1.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["livecd/packages"]
+		self.valid_values=self.required_values[:]
+
+		self.valid_values.extend(["livecd/use"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir","portage_overlay",\
+					"bind","chroot_setup","setup_environment","build_packages",\
+					"unbind", "clean","clear_autoresume"]
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"])
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+				print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			if os.path.exists(self.settings["target_path"]):
+				cmd("rm -rf "+self.settings["target_path"],\
+					"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_target_path")
+
+			if not os.path.exists(self.settings["target_path"]):
+				os.makedirs(self.settings["target_path"])
+
+	def set_spec_prefix(self):
+		self.settings["spec_prefix"]="livecd"
+
+	def set_use(self):
+		generic_stage_target.set_use(self)
+		if "use" in self.settings:
+			self.settings["use"].append("livecd")
+			if "BINDIST" in self.settings:
+				self.settings["use"].append("bindist")
+		else:
+			self.settings["use"]=["livecd"]
+			if "BINDIST" in self.settings:
+				self.settings["use"].append("bindist")
+
+	def set_packages(self):
+		generic_stage_target.set_packages(self)
+		if self.settings["spec_prefix"]+"/packages" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+"/packages"]) == types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/packages"] = \
+					self.settings[self.settings["spec_prefix"]+"/packages"].split()
+		self.settings[self.settings["spec_prefix"]+"/packages"].append("app-misc/livecd-tools")
+
+	def set_pkgcache_path(self):
+		if "pkgcache_path" in self.settings:
+			if type(self.settings["pkgcache_path"]) != types.StringType:
+				self.settings["pkgcache_path"]=normpath(string.join(self.settings["pkgcache_path"]))
+		else:
+			generic_stage_target.set_pkgcache_path(self)
+
+def register(foo):
+	foo.update({"livecd-stage1":livecd_stage1_target})
+	return foo
diff --git a/catalyst/modules/livecd_stage2_target.py b/catalyst/modules/livecd_stage2_target.py
new file mode 100644
index 0000000..c74c16d
--- /dev/null
+++ b/catalyst/modules/livecd_stage2_target.py
@@ -0,0 +1,148 @@
+"""
+LiveCD stage2 target, builds upon previous LiveCD stage1 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,types,stat,shutil
+from catalyst_support import *
+from generic_stage_target import *
+
+class livecd_stage2_target(generic_stage_target):
+	"""
+	Builder class for a LiveCD stage2 build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["boot/kernel"]
+
+		self.valid_values=[]
+
+		self.valid_values.extend(self.required_values)
+		self.valid_values.extend(["livecd/cdtar","livecd/empty","livecd/rm",\
+			"livecd/unmerge","livecd/iso","livecd/gk_mainargs","livecd/type",\
+			"livecd/readme","livecd/motd","livecd/overlay",\
+			"livecd/modblacklist","livecd/splash_theme","livecd/rcadd",\
+			"livecd/rcdel","livecd/fsscript","livecd/xinitrc",\
+			"livecd/root_overlay","livecd/users","portage_overlay",\
+			"livecd/fstype","livecd/fsops","livecd/linuxrc","livecd/bootargs",\
+			"gamecd/conf","livecd/xdm","livecd/xsession","livecd/volid"])
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		if "livecd/type" not in self.settings:
+			self.settings["livecd/type"] = "generic-livecd"
+
+		file_locate(self.settings, ["cdtar","controller_file"])
+
+	def set_source_path(self):
+		self.settings["source_path"] = normpath(self.settings["storedir"] +
+			"/builds/" + self.settings["source_subpath"].rstrip("/") +
+			".tar.bz2")
+		if os.path.isfile(self.settings["source_path"]):
+			self.settings["source_path_hash"]=generate_hash(self.settings["source_path"])
+		else:
+			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/")
+		if not os.path.exists(self.settings["source_path"]):
+			raise CatalystError,"Source Path: "+self.settings["source_path"]+" does not exist."
+
+	def set_spec_prefix(self):
+		self.settings["spec_prefix"]="livecd"
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+				print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			if os.path.isdir(self.settings["target_path"]):
+				cmd("rm -rf "+self.settings["target_path"],
+				"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_target_path")
+			if not os.path.exists(self.settings["target_path"]):
+				os.makedirs(self.settings["target_path"])
+
+	def run_local(self):
+		# what modules do we want to blacklist?
+		if "livecd/modblacklist" in self.settings:
+			try:
+				myf=open(self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf","a")
+			except:
+				self.unbind()
+				raise CatalystError,"Couldn't open "+self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf."
+
+			myf.write("\n#Added by Catalyst:")
+			# workaround until config.py is using configparser
+			if isinstance(self.settings["livecd/modblacklist"], str):
+				self.settings["livecd/modblacklist"] = self.settings["livecd/modblacklist"].split()
+			for x in self.settings["livecd/modblacklist"]:
+				myf.write("\nblacklist "+x)
+			myf.close()
+
+	def unpack(self):
+		unpack=True
+		display_msg=None
+
+		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+"unpack")
+
+		if os.path.isdir(self.settings["source_path"]):
+			unpack_cmd="rsync -a --delete "+self.settings["source_path"]+" "+self.settings["chroot_path"]
+			display_msg="\nStarting rsync from "+self.settings["source_path"]+"\nto "+\
+				self.settings["chroot_path"]+" (This may take some time) ...\n"
+			error_msg="Rsync of "+self.settings["source_path"]+" to "+self.settings["chroot_path"]+" failed."
+			invalid_snapshot=False
+
+		if "AUTORESUME" in self.settings:
+			if os.path.isdir(self.settings["source_path"]) and \
+				os.path.exists(self.settings["autoresume_path"]+"unpack"):
+				print "Resume point detected, skipping unpack operation..."
+				unpack=False
+			elif "source_path_hash" in self.settings:
+				if self.settings["source_path_hash"] != clst_unpack_hash:
+					invalid_snapshot=True
+
+		if unpack:
+			self.mount_safety_check()
+			if invalid_snapshot:
+				print "No Valid Resume point detected, cleaning up  ..."
+				#os.remove(self.settings["autoresume_path"]+"dir_setup")
+				self.clear_autoresume()
+				self.clear_chroot()
+				#self.dir_setup()
+
+			if not os.path.exists(self.settings["chroot_path"]):
+				os.makedirs(self.settings["chroot_path"])
+
+			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
+				os.makedirs(self.settings["chroot_path"]+"/tmp",01777)
+
+			if "PKGCACHE" in self.settings:
+				if not os.path.exists(self.settings["pkgcache_path"]):
+					os.makedirs(self.settings["pkgcache_path"],0755)
+
+			if not display_msg:
+				raise CatalystError,"Could not find appropriate source. Please check the 'source_subpath' setting in the spec file."
+
+			print display_msg
+			cmd(unpack_cmd,error_msg,env=self.env)
+
+			if "source_path_hash" in self.settings:
+				myf=open(self.settings["autoresume_path"]+"unpack","w")
+				myf.write(self.settings["source_path_hash"])
+				myf.close()
+			else:
+				touch(self.settings["autoresume_path"]+"unpack")
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+				"config_profile_link","setup_confdir","portage_overlay",\
+				"bind","chroot_setup","setup_environment","run_local",\
+				"build_kernel"]
+		if "FETCH" not in self.settings:
+			self.settings["action_sequence"] += ["bootloader","preclean",\
+				"livecd_update","root_overlay","fsscript","rcupdate","unmerge",\
+				"unbind","remove","empty","target_setup",\
+				"setup_overlay","create_iso"]
+		self.settings["action_sequence"].append("clear_autoresume")
+
+def register(foo):
+	foo.update({"livecd-stage2":livecd_stage2_target})
+	return foo
diff --git a/catalyst/modules/netboot2_target.py b/catalyst/modules/netboot2_target.py
new file mode 100644
index 0000000..1ab7e7d
--- /dev/null
+++ b/catalyst/modules/netboot2_target.py
@@ -0,0 +1,166 @@
+"""
+netboot target, version 2
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,types
+from catalyst_support import *
+from generic_stage_target import *
+
+class netboot2_target(generic_stage_target):
+	"""
+	Builder class for a netboot build, version 2
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[
+			"boot/kernel"
+		]
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend([
+			"netboot2/packages",
+			"netboot2/use",
+			"netboot2/extra_files",
+			"netboot2/overlay",
+			"netboot2/busybox_config",
+			"netboot2/root_overlay",
+			"netboot2/linuxrc"
+		])
+
+		try:
+			if "netboot2/packages" in addlargs:
+				if type(addlargs["netboot2/packages"]) == types.StringType:
+					loopy=[addlargs["netboot2/packages"]]
+				else:
+					loopy=addlargs["netboot2/packages"]
+
+				for x in loopy:
+					self.valid_values.append("netboot2/packages/"+x+"/files")
+		except:
+			raise CatalystError,"configuration error in netboot2/packages."
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		self.set_build_kernel_vars()
+		self.settings["merge_path"]=normpath("/tmp/image/")
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+\
+			self.settings["target_subpath"]+"/")
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+				print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			if os.path.isfile(self.settings["target_path"]):
+				cmd("rm -f "+self.settings["target_path"], \
+					"Could not remove existing file: "+self.settings["target_path"],env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_target_path")
+
+		if not os.path.exists(self.settings["storedir"]+"/builds/"):
+			os.makedirs(self.settings["storedir"]+"/builds/")
+
+	def copy_files_to_image(self):
+		# copies specific files from the buildroot to merge_path
+		myfiles=[]
+
+		# check for autoresume point
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"copy_files_to_image"):
+				print "Resume point detected, skipping copy_files_to_image operation..."
+		else:
+			if "netboot2/packages" in self.settings:
+				if type(self.settings["netboot2/packages"]) == types.StringType:
+					loopy=[self.settings["netboot2/packages"]]
+				else:
+					loopy=self.settings["netboot2/packages"]
+
+			for x in loopy:
+				if "netboot2/packages/"+x+"/files" in self.settings:
+					if type(self.settings["netboot2/packages/"+x+"/files"]) == types.ListType:
+						myfiles.extend(self.settings["netboot2/packages/"+x+"/files"])
+					else:
+						myfiles.append(self.settings["netboot2/packages/"+x+"/files"])
+
+			if "netboot2/extra_files" in self.settings:
+				if type(self.settings["netboot2/extra_files"]) == types.ListType:
+					myfiles.extend(self.settings["netboot2/extra_files"])
+				else:
+					myfiles.append(self.settings["netboot2/extra_files"])
+
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+\
+					" image " + list_bashify(myfiles),env=self.env)
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"Failed to copy files to image!"
+
+			touch(self.settings["autoresume_path"]+"copy_files_to_image")
+
+	def setup_overlay(self):
+		if "AUTORESUME" in self.settings \
+		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
+			print "Resume point detected, skipping setup_overlay operation..."
+		else:
+			if "netboot2/overlay" in self.settings:
+				for x in self.settings["netboot2/overlay"]:
+					if os.path.exists(x):
+						cmd("rsync -a "+x+"/ "+\
+							self.settings["chroot_path"] + self.settings["merge_path"], "netboot2/overlay: "+x+" copy failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_overlay")
+
+	def move_kernels(self):
+		# we're done, move the kernels to builds/*
+		# no auto resume here as we always want the
+		# freshest images moved
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+\
+				" final",env=self.env)
+			print ">>> Netboot Build Finished!"
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"Failed to move kernel images!"
+
+	def remove(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"remove"):
+			print "Resume point detected, skipping remove operation..."
+		else:
+			if self.settings["spec_prefix"]+"/rm" in self.settings:
+				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
+					# we're going to shell out for all these cleaning operations,
+					# so we get easy glob handling
+					print "netboot2: removing " + x
+					os.system("rm -rf " + self.settings["chroot_path"] + self.settings["merge_path"] + x)
+
+	def empty(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"empty"):
+			print "Resume point detected, skipping empty operation..."
+		else:
+			if "netboot2/empty" in self.settings:
+				if type(self.settings["netboot2/empty"])==types.StringType:
+					self.settings["netboot2/empty"]=self.settings["netboot2/empty"].split()
+				for x in self.settings["netboot2/empty"]:
+					myemp=self.settings["chroot_path"] + self.settings["merge_path"] + x
+					if not os.path.isdir(myemp):
+						print x,"not a directory or does not exist, skipping 'empty' operation."
+						continue
+					print "Emptying directory", x
+					# stat the dir, delete the dir, recreate the dir and set
+					# the proper perms and ownership
+					mystat=os.stat(myemp)
+					shutil.rmtree(myemp)
+					os.makedirs(myemp,0755)
+					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+					os.chmod(myemp,mystat[ST_MODE])
+		touch(self.settings["autoresume_path"]+"empty")
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot","config_profile_link",\
+			"setup_confdir","portage_overlay","bind","chroot_setup",\
+			"setup_environment","build_packages","root_overlay",\
+			"copy_files_to_image","setup_overlay","build_kernel","move_kernels",\
+			"remove","empty","unbind","clean","clear_autoresume"]
+
+def register(foo):
+	foo.update({"netboot2":netboot2_target})
+	return foo
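A note for reviewers: every target module above ends with the same register() hook — the loader imports the module and hands it a shared dict, which the module updates with its target name. A minimal Python 3 sketch of that plugin pattern (the builder class body here is a stand-in, not the real catalyst builder):

```python
# Sketch of the register() plugin pattern used by the target modules:
# each module mutates a shared mapping of target names to builder
# classes. Class body and settings keys are illustrative only.

class netboot2_target:
    """Stand-in for the real builder class."""
    def __init__(self, spec, addlargs):
        self.settings = dict(spec)

def register(foo):
    # Same shape as the modules in this patch: update the shared dict
    # in place and return it so the loader can chain calls.
    foo.update({"netboot2": netboot2_target})
    return foo

# What the module loader effectively does for each target module:
targets = {}
register(targets)
builder_cls = targets["netboot2"]
builder = builder_cls({"storedir": "/var/tmp/catalyst"}, {})
```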
diff --git a/catalyst/modules/netboot_target.py b/catalyst/modules/netboot_target.py
new file mode 100644
index 0000000..ff2c81f
--- /dev/null
+++ b/catalyst/modules/netboot_target.py
@@ -0,0 +1,128 @@
+"""
+netboot target, version 1
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,types
+from catalyst_support import *
+from generic_stage_target import *
+
+class netboot_target(generic_stage_target):
+	"""
+	Builder class for a netboot build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.valid_values = [
+			"netboot/kernel/sources",
+			"netboot/kernel/config",
+			"netboot/kernel/prebuilt",
+
+			"netboot/busybox_config",
+
+			"netboot/extra_files",
+			"netboot/packages"
+		]
+		self.required_values=[]
+
+		try:
+			if "netboot/packages" in addlargs:
+				if type(addlargs["netboot/packages"]) == types.StringType:
+					loopy=[addlargs["netboot/packages"]]
+				else:
+					loopy=addlargs["netboot/packages"]
+
+		#	for x in loopy:
+		#		self.required_values.append("netboot/packages/"+x+"/files")
+		except:
+			raise CatalystError,"configuration error in netboot/packages."
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		self.set_build_kernel_vars(addlargs)
+		if "netboot/busybox_config" in addlargs:
+			file_locate(self.settings, ["netboot/busybox_config"])
+
+		# Custom Kernel Tarball --- use that instead ...
+
+		# unless the user wants specific CFLAGS/CXXFLAGS, let's use -Os
+
+		for envvar in "CFLAGS", "CXXFLAGS":
+			if envvar not in os.environ and envvar not in addlargs:
+				self.settings[envvar] = "-Os -pipe"
+
+	def set_root_path(self):
+		# ROOT= variable for emerges
+		self.settings["root_path"]=normpath("/tmp/image")
+		print "netboot root path is "+self.settings["root_path"]
+
+#	def build_packages(self):
+#		# build packages
+#		if "netboot/packages" in self.settings:
+#			mypack=list_bashify(self.settings["netboot/packages"])
+#		try:
+#			cmd("/bin/bash "+self.settings["controller_file"]+" packages "+mypack,env=self.env)
+#		except CatalystError:
+#			self.unbind()
+#			raise CatalystError,"netboot build aborting due to error."
+
+	def build_busybox(self):
+		# build busybox
+		if "netboot/busybox_config" in self.settings:
+			mycmd = self.settings["netboot/busybox_config"]
+		else:
+			mycmd = ""
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+" busybox "+ mycmd,env=self.env)
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"netboot build aborting due to error."
+
+	def copy_files_to_image(self):
+		# create image
+		myfiles=[]
+		if "netboot/packages" in self.settings:
+			if type(self.settings["netboot/packages"]) == types.StringType:
+				loopy=[self.settings["netboot/packages"]]
+			else:
+				loopy=self.settings["netboot/packages"]
+
+			for x in loopy:
+				if "netboot/packages/"+x+"/files" in self.settings:
+					if type(self.settings["netboot/packages/"+x+"/files"]) == types.ListType:
+						myfiles.extend(self.settings["netboot/packages/"+x+"/files"])
+					else:
+						myfiles.append(self.settings["netboot/packages/"+x+"/files"])
+
+		if "netboot/extra_files" in self.settings:
+			if type(self.settings["netboot/extra_files"]) == types.ListType:
+				myfiles.extend(self.settings["netboot/extra_files"])
+			else:
+				myfiles.append(self.settings["netboot/extra_files"])
+
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+\
+				" image " + list_bashify(myfiles),env=self.env)
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"netboot build aborting due to error."
+
+	def create_netboot_files(self):
+		# finish it all up
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+" finish",env=self.env)
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"netboot build aborting due to error."
+
+		# end
+		print "netboot: build finished!"
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+			"config_profile_link","setup_confdir","bind","chroot_setup",\
+			"setup_environment","build_packages","build_busybox",\
+			"build_kernel","copy_files_to_image",\
+			"clean","create_netboot_files","unbind","clear_autoresume"]
+
+def register(foo):
+	foo.update({"netboot":netboot_target})
+	return foo
diff --git a/catalyst/modules/snapshot_target.py b/catalyst/modules/snapshot_target.py
new file mode 100644
index 0000000..ba1bab5
--- /dev/null
+++ b/catalyst/modules/snapshot_target.py
@@ -0,0 +1,91 @@
+"""
+Snapshot target
+"""
+
+import os
+from catalyst_support import *
+from generic_stage_target import *
+
+class snapshot_target(generic_stage_target):
+	"""
+	Builder class for snapshots.
+	"""
+	def __init__(self,myspec,addlargs):
+		self.required_values=["version_stamp","target"]
+		self.valid_values=["version_stamp","target"]
+
+		generic_stage_target.__init__(self,myspec,addlargs)
+		self.settings=myspec
+		self.settings["target_subpath"]="portage"
+		st=self.settings["storedir"]
+		self.settings["snapshot_path"] = normpath(st + "/snapshots/"
+			+ self.settings["snapshot_name"]
+			+ self.settings["version_stamp"] + ".tar.bz2")
+		self.settings["tmp_path"]=normpath(st+"/tmp/"+self.settings["target_subpath"])
+
+	def setup(self):
+		x=normpath(self.settings["storedir"]+"/snapshots")
+		if not os.path.exists(x):
+			os.makedirs(x)
+
+	def mount_safety_check(self):
+		pass
+
+	def run(self):
+		if "PURGEONLY" in self.settings:
+			self.purge()
+			return
+
+		if "PURGE" in self.settings:
+			self.purge()
+
+		self.setup()
+		print "Creating Portage tree snapshot "+self.settings["version_stamp"]+\
+			" from "+self.settings["portdir"]+"..."
+
+		mytmp=self.settings["tmp_path"]
+		if not os.path.exists(mytmp):
+			os.makedirs(mytmp)
+
+		cmd("rsync -a --delete --exclude /packages/ --exclude /distfiles/ " +
+			"--exclude /local/ --exclude CVS/ --exclude .svn --filter=H_**/files/digest-* " +
+			self.settings["portdir"] + "/ " + mytmp + "/%s/" % self.settings["repo_name"],
+			"Snapshot failure", env=self.env)
+
+		print "Compressing Portage snapshot tarball..."
+		cmd("tar -I lbzip2 -cf " + self.settings["snapshot_path"] + " -C " +
+			mytmp + " " + self.settings["repo_name"],
+			"Snapshot creation failure",env=self.env)
+
+		self.gen_contents_file(self.settings["snapshot_path"])
+		self.gen_digest_file(self.settings["snapshot_path"])
+
+		self.cleanup()
+		print "snapshot: complete!"
+
+	def kill_chroot_pids(self):
+		pass
+
+	def cleanup(self):
+		print "Cleaning up..."
+
+	def purge(self):
+		myemp=self.settings["tmp_path"]
+		if os.path.isdir(myemp):
+			print "Emptying directory",myemp
+			"""
+			stat the dir, delete the dir, recreate the dir and set
+			the proper perms and ownership
+			"""
+			mystat=os.stat(myemp)
+			""" There's no easy way to change flags recursively in python """
+			if os.uname()[0] == "FreeBSD":
+				os.system("chflags -R noschg "+myemp)
+			shutil.rmtree(myemp)
+			os.makedirs(myemp,0755)
+			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+			os.chmod(myemp,mystat[ST_MODE])
+
+def register(foo):
+	foo.update({"snapshot":snapshot_target})
+	return foo
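purge() above (like empty() in the netboot2 target) empties a directory by stat'ing it, deleting it, recreating it, and restoring the saved ownership and mode. A Python 3 sketch of that pattern, with an illustrative temp directory standing in for catalyst's tmp_path:

```python
import os
import shutil
import stat
import tempfile

def empty_dir(path):
    """Empty `path` by deleting and recreating it, preserving the
    original mode and ownership -- the same dance purge() does."""
    st = os.stat(path)                        # save perms/ownership first
    shutil.rmtree(path)
    os.makedirs(path, 0o755)
    os.chown(path, st.st_uid, st.st_gid)      # no-op for our own uid/gid
    os.chmod(path, stat.S_IMODE(st.st_mode))  # restore the original mode

# usage sketch: a temp dir with one file and non-default perms
work = tempfile.mkdtemp()
open(os.path.join(work, "junk"), "w").close()
os.chmod(work, 0o750)
empty_dir(work)
```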
diff --git a/catalyst/modules/stage1_target.py b/catalyst/modules/stage1_target.py
new file mode 100644
index 0000000..5f4ffa0
--- /dev/null
+++ b/catalyst/modules/stage1_target.py
@@ -0,0 +1,97 @@
+"""
+stage1 target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst_support import *
+from generic_stage_target import *
+
+class stage1_target(generic_stage_target):
+	"""
+	Builder class for a stage1 installation tarball build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=["chost"]
+		self.valid_values.extend(["update_seed","update_seed_command"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_stage_path(self):
+		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+self.settings["root_path"])
+		print "stage1 stage path is "+self.settings["stage_path"]
+
+	def set_root_path(self):
+		# sets the root path, relative to 'chroot_path', of the stage1 root
+		self.settings["root_path"]=normpath("/tmp/stage1root")
+		print "stage1 root path is "+self.settings["root_path"]
+
+	def set_cleanables(self):
+		generic_stage_target.set_cleanables(self)
+		self.settings["cleanables"].extend([\
+		"/usr/share/zoneinfo", "/etc/portage/package*"])
+
+	# XXX: How do these override_foo() functions differ from the ones in generic_stage_target and why aren't they in stage3_target?
+
+	def override_chost(self):
+		if "chost" in self.settings:
+			self.settings["CHOST"]=list_to_string(self.settings["chost"])
+
+	def override_cflags(self):
+		if "cflags" in self.settings:
+			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
+
+	def override_cxxflags(self):
+		if "cxxflags" in self.settings:
+			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
+
+	def override_ldflags(self):
+		if "ldflags" in self.settings:
+			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
+
+	def set_portage_overlay(self):
+		generic_stage_target.set_portage_overlay(self)
+		if "portage_overlay" in self.settings:
+			print "\nWARNING !!!!!"
+			print "\tUsing a portage overlay for earlier stages could cause build issues."
+			print "\tIf you break it, you buy it. Don't complain to us about it."
+			print "\tDon't say we did not warn you\n"
+
+	def base_dirs(self):
+		if os.uname()[0] == "FreeBSD":
+			# baselayout no longer creates the .keep files in proc and dev for FreeBSD as it
+			# would create them too late...we need them earlier before bind mounting filesystems
+			# since proc and dev are not writeable, so...create them here
+			if not os.path.exists(self.settings["stage_path"]+"/proc"):
+				os.makedirs(self.settings["stage_path"]+"/proc")
+			if not os.path.exists(self.settings["stage_path"]+"/dev"):
+				os.makedirs(self.settings["stage_path"]+"/dev")
+			if not os.path.isfile(self.settings["stage_path"]+"/proc/.keep"):
+				try:
+					proc_keepfile = open(self.settings["stage_path"]+"/proc/.keep","w")
+					proc_keepfile.write('')
+					proc_keepfile.close()
+				except IOError:
+					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/proc/.keep")
+			if not os.path.isfile(self.settings["stage_path"]+"/dev/.keep"):
+				try:
+					dev_keepfile = open(self.settings["stage_path"]+"/dev/.keep","w")
+					dev_keepfile.write('')
+					dev_keepfile.close()
+				except IOError:
+					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/dev/.keep")
+		else:
+			pass
+
+	def set_mounts(self):
+		# stage_path/proc probably doesn't exist yet, so create it
+		if not os.path.exists(self.settings["stage_path"]+"/proc"):
+			os.makedirs(self.settings["stage_path"]+"/proc")
+
+		# alter the mount mappings to bind mount proc onto it
+		self.mounts.append("stage1root/proc")
+		self.target_mounts["stage1root/proc"] = "/tmp/stage1root/proc"
+		self.mountmap["stage1root/proc"] = "/proc"
+
+def register(foo):
+	foo.update({"stage1":stage1_target})
+	return foo
diff --git a/catalyst/modules/stage2_target.py b/catalyst/modules/stage2_target.py
new file mode 100644
index 0000000..803ec59
--- /dev/null
+++ b/catalyst/modules/stage2_target.py
@@ -0,0 +1,66 @@
+"""
+stage2 target, builds upon previous stage1 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst_support import *
+from generic_stage_target import *
+
+class stage2_target(generic_stage_target):
+	"""
+	Builder class for a stage2 installation tarball build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=["chost"]
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_source_path(self):
+		if "SEEDCACHE" in self.settings and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")):
+			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")
+		else:
+			self.settings["source_path"] = normpath(self.settings["storedir"] +
+				"/builds/" + self.settings["source_subpath"].rstrip("/") +
+				".tar.bz2")
+			if os.path.isfile(self.settings["source_path"]):
+				if os.path.exists(self.settings["source_path"]):
+				# XXX: Is this even necessary if the previous check passes?
+					self.settings["source_path_hash"]=generate_hash(self.settings["source_path"],\
+						hash_function=self.settings["hash_function"],verbose=False)
+		print "Source path set to "+self.settings["source_path"]
+		if os.path.isdir(self.settings["source_path"]):
+			print "\tIf this is not desired, remove this directory or turn off seedcache in the options of catalyst.conf"
+			print "\tThe source path will then be " + \
+				normpath(self.settings["storedir"] + "/builds/" + \
+				self.settings["source_subpath"].rstrip("/") + ".tar.bz2\n")
+
+	# XXX: How do these override_foo() functions differ from the ones in
+	# generic_stage_target and why aren't they in stage3_target?
+
+	def override_chost(self):
+		if "chost" in self.settings:
+			self.settings["CHOST"]=list_to_string(self.settings["chost"])
+
+	def override_cflags(self):
+		if "cflags" in self.settings:
+			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
+
+	def override_cxxflags(self):
+		if "cxxflags" in self.settings:
+			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
+
+	def override_ldflags(self):
+		if "ldflags" in self.settings:
+			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
+
+	def set_portage_overlay(self):
+		generic_stage_target.set_portage_overlay(self)
+		if "portage_overlay" in self.settings:
+			print "\nWARNING !!!!!"
+			print "\tUsing a portage overlay for earlier stages could cause build issues."
+			print "\tIf you break it, you buy it. Don't complain to us about it."
+			print "\tDon't say we did not warn you\n"
+
+def register(foo):
+	foo.update({"stage2":stage2_target})
+	return foo
diff --git a/catalyst/modules/stage3_target.py b/catalyst/modules/stage3_target.py
new file mode 100644
index 0000000..4d3a008
--- /dev/null
+++ b/catalyst/modules/stage3_target.py
@@ -0,0 +1,31 @@
+"""
+stage3 target, builds upon previous stage2/stage3 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst_support import *
+from generic_stage_target import *
+
+class stage3_target(generic_stage_target):
+	"""
+	Builder class for a stage3 installation tarball build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=[]
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_portage_overlay(self):
+		generic_stage_target.set_portage_overlay(self)
+		if "portage_overlay" in self.settings:
+			print "\nWARNING !!!!!"
+			print "\tUsing an overlay for earlier stages could cause build issues."
+			print "\tIf you break it, you buy it. Don't complain to us about it."
+			print "\tDon't say we did not warn you\n"
+
+	def set_cleanables(self):
+		generic_stage_target.set_cleanables(self)
+
+def register(foo):
+	foo.update({"stage3":stage3_target})
+	return foo
diff --git a/catalyst/modules/stage4_target.py b/catalyst/modules/stage4_target.py
new file mode 100644
index 0000000..ce41b2d
--- /dev/null
+++ b/catalyst/modules/stage4_target.py
@@ -0,0 +1,43 @@
+"""
+stage4 target, builds upon previous stage3/stage4 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst_support import *
+from generic_stage_target import *
+
+class stage4_target(generic_stage_target):
+	"""
+	Builder class for stage4.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["stage4/packages"]
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend(["stage4/use","boot/kernel",\
+				"stage4/root_overlay","stage4/fsscript",\
+				"stage4/gk_mainargs","splash_theme",\
+				"portage_overlay","stage4/rcadd","stage4/rcdel",\
+				"stage4/linuxrc","stage4/unmerge","stage4/rm","stage4/empty"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_cleanables(self):
+		self.settings["cleanables"]=["/var/tmp/*","/tmp/*"]
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir","portage_overlay",\
+					"bind","chroot_setup","setup_environment","build_packages",\
+					"build_kernel","bootloader","root_overlay","fsscript",\
+					"preclean","rcupdate","unmerge","unbind","remove","empty",\
+					"clean"]
+
+#		if "TARBALL" in self.settings or \
+#			"FETCH" not in self.settings:
+		if "FETCH" not in self.settings:
+			self.settings["action_sequence"].append("capture")
+		self.settings["action_sequence"].append("clear_autoresume")
+
+def register(foo):
+	foo.update({"stage4":stage4_target})
+	return foo
+
diff --git a/catalyst/modules/tinderbox_target.py b/catalyst/modules/tinderbox_target.py
new file mode 100644
index 0000000..ca55610
--- /dev/null
+++ b/catalyst/modules/tinderbox_target.py
@@ -0,0 +1,48 @@
+"""
+Tinderbox target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst_support import *
+from generic_stage_target import *
+
+class tinderbox_target(generic_stage_target):
+	"""
+	Builder class for the tinderbox target
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["tinderbox/packages"]
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend(["tinderbox/use"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def run_local(self):
+		# tinderbox
+		# example call: "grp.sh run xmms vim sys-apps/gleep"
+		try:
+			if os.path.exists(self.settings["controller_file"]):
+				cmd("/bin/bash "+self.settings["controller_file"]+" run "+\
+					list_bashify(self.settings["tinderbox/packages"]),"run script failed.",env=self.env)
+
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"Tinderbox aborting due to error."
+
+	def set_cleanables(self):
+		self.settings['cleanables'] = [
+			'/etc/resolv.conf',
+			'/var/tmp/*',
+			'/root/*',
+			self.settings['portdir'],
+			]
+
+	def set_action_sequence(self):
+		#Default action sequence for run method
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+		              "config_profile_link","setup_confdir","bind","chroot_setup",\
+		              "setup_environment","run_local","preclean","unbind","clean",\
+		              "clear_autoresume"]
+
+def register(foo):
+	foo.update({"tinderbox":tinderbox_target})
+	return foo
diff --git a/catalyst/util.py b/catalyst/util.py
new file mode 100644
index 0000000..ff12086
--- /dev/null
+++ b/catalyst/util.py
@@ -0,0 +1,14 @@
+"""
+Collection of utility functions for catalyst
+"""
+
+import sys, traceback
+
+def capture_traceback():
+	etype, value, tb = sys.exc_info()
+	s = [x.strip() for x in traceback.format_exception(etype, value, tb)]
+	return s
+
+def print_traceback():
+	for x in capture_traceback():
+		print x
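capture_traceback() formats the in-flight exception into a list of stripped lines so callers can print or log them one at a time. The same helper works in Python 3 with the three-argument form of format_exception; a sketch:

```python
import sys
import traceback

def capture_traceback():
    # Same idea as catalyst/util.py: format the exception currently
    # being handled into a list of stripped lines.
    etype, value, tb = sys.exc_info()
    return [line.strip() for line in traceback.format_exception(etype, value, tb)]

# usage sketch: capture a deliberately raised exception
try:
    1 / 0
except ZeroDivisionError:
    lines = capture_traceback()
```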
diff --git a/modules/__init__.py b/modules/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/modules/builder.py b/modules/builder.py
deleted file mode 100644
index ad27d78..0000000
--- a/modules/builder.py
+++ /dev/null
@@ -1,20 +0,0 @@
-
-class generic:
-	def __init__(self,myspec):
-		self.settings=myspec
-
-	def mount_safety_check(self):
-		"""
-		Make sure that no bind mounts exist in chrootdir (to use before
-		cleaning the directory, to make sure we don't wipe the contents of
-		a bind mount
-		"""
-		pass
-
-	def mount_all(self):
-		"""do all bind mounts"""
-		pass
-
-	def umount_all(self):
-		"""unmount all bind mounts"""
-		pass
diff --git a/modules/catalyst/__init__.py b/modules/catalyst/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/modules/catalyst/config.py b/modules/catalyst/config.py
deleted file mode 100644
index 726bf74..0000000
--- a/modules/catalyst/config.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import re
-from modules.catalyst_support import *
-
-class ParserBase:
-
-	filename = ""
-	lines = None
-	values = None
-	key_value_separator = "="
-	multiple_values = False
-	empty_values = True
-
-	def __getitem__(self, key):
-		return self.values[key]
-
-	def get_values(self):
-		return self.values
-
-	def dump(self):
-		dump = ""
-		for x in self.values.keys():
-			dump += x + " = " + repr(self.values[x]) + "\n"
-		return dump
-
-	def parse_file(self, filename):
-		try:
-			myf = open(filename, "r")
-		except:
-			raise CatalystError, "Could not open file " + filename
-		self.lines = myf.readlines()
-		myf.close()
-		self.filename = filename
-		self.parse()
-
-	def parse_lines(self, lines):
-		self.lines = lines
-		self.parse()
-
-	def parse(self):
-		values = {}
-		cur_array = []
-
-		trailing_comment=re.compile('\s*#.*$')
-		white_space=re.compile('\s+')
-
-		for x, myline in enumerate(self.lines):
-			myline = myline.strip()
-
-			# Force the line to be clean
-			# Remove Comments ( anything following # )
-			myline = trailing_comment.sub("", myline)
-
-			# Skip any blank lines
-			if not myline: continue
-
-			# Look for separator
-			msearch = myline.find(self.key_value_separator)
-
-			# If separator found assume its a new key
-			if msearch != -1:
-				# Split on the first occurence of the separator creating two strings in the array mobjs
-				mobjs = myline.split(self.key_value_separator, 1)
-				mobjs[1] = mobjs[1].strip().strip('"')
-
-#				# Check that this key doesn't exist already in the spec
-#				if mobjs[0] in values:
-#					raise Exception("You have a duplicate key (" + mobjs[0] + ") in your spec. Please fix it")
-
-				# Start a new array using the first element of mobjs
-				cur_array = [mobjs[0]]
-				if mobjs[1]:
-					if self.multiple_values:
-						# split on white space creating additional array elements
-#						subarray = white_space.split(mobjs[1])
-						subarray = mobjs[1].split()
-						cur_array += subarray
-					else:
-						cur_array += [mobjs[1]]
-
-			# Else add on to the last key we were working on
-			else:
-				if self.multiple_values:
-#					mobjs = white_space.split(myline)
-#					cur_array += mobjs
-					cur_array += myline.split()
-				else:
-					raise CatalystError, "Syntax error: " + x
-
-			# XXX: Do we really still need this "single value is a string" behavior?
-			if len(cur_array) == 2:
-				values[cur_array[0]] = cur_array[1]
-			else:
-				values[cur_array[0]] = cur_array[1:]
-
-		if not self.empty_values:
-			for x in values.keys():
-				# Delete empty key pairs
-				if not values[x]:
-					print "\n\tWARNING: No value set for key " + x + "...deleting"
-					del values[x]
-
-		self.values = values
-
-class SpecParser(ParserBase):
-
-	key_value_separator = ':'
-	multiple_values = True
-	empty_values = False
-
-	def __init__(self, filename=""):
-		if filename:
-			self.parse_file(filename)
-
-class ConfigParser(ParserBase):
-
-	key_value_separator = '='
-	multiple_values = False
-	empty_values = True
-
-	def __init__(self, filename=""):
-		if filename:
-			self.parse_file(filename)
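For reviewers unfamiliar with this parser: SpecParser splits each line on the first ':', whitespace-splits the value (multiple_values is True), treats separator-less lines as continuations of the previous key, and collapses a single value to a bare string. A simplified Python 3 sketch of that behavior (it handles trailing comments and continuation lines, but none of the quoting corner cases):

```python
import re

def parse_spec_lines(lines, separator=":"):
    """Minimal re-creation of SpecParser's parse(): split on the first
    separator, whitespace-split the value, and store a single value as
    a bare string rather than a one-element list."""
    values = {}
    cur = []
    for raw in lines:
        line = re.sub(r"\s*#.*$", "", raw).strip()  # drop comments, blanks
        if not line:
            continue
        if separator in line:
            # new key: first field is the key, the rest is the value list
            key, _, rest = line.partition(separator)
            cur = [key.strip()] + rest.strip().strip('"').split()
        else:
            cur += line.split()  # continuation of the previous key
        # single value collapses to a string, like the original parser
        values[cur[0]] = cur[1] if len(cur) == 2 else cur[1:]
    return values

# usage sketch with illustrative spec keys
spec = parse_spec_lines([
    "target: stage3",
    "version_stamp: 2014.1  # trailing comment",
    "boot/kernel: gentoo",
    "  extra-sources",
])
```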
diff --git a/modules/catalyst/util.py b/modules/catalyst/util.py
deleted file mode 100644
index ff12086..0000000
--- a/modules/catalyst/util.py
+++ /dev/null
@@ -1,14 +0,0 @@
-"""
-Collection of utility functions for catalyst
-"""
-
-import sys, traceback
-
-def capture_traceback():
-	etype, value, tb = sys.exc_info()
-	s = [x.strip() for x in traceback.format_exception(etype, value, tb)]
-	return s
-
-def print_traceback():
-	for x in capture_traceback():
-		print x
diff --git a/modules/catalyst_lock.py b/modules/catalyst_lock.py
deleted file mode 100644
index 5311cf8..0000000
--- a/modules/catalyst_lock.py
+++ /dev/null
@@ -1,468 +0,0 @@
-#!/usr/bin/python
-import os
-import fcntl
-import errno
-import sys
-import string
-import time
-from catalyst_support import *
-
-def writemsg(mystr):
-	sys.stderr.write(mystr)
-	sys.stderr.flush()
-
-class LockDir:
-	locking_method=fcntl.flock
-	lock_dirs_in_use=[]
-	die_on_failed_lock=True
-	def __del__(self):
-		self.clean_my_hardlocks()
-		self.delete_lock_from_path_list()
-		if self.islocked():
-			self.fcntl_unlock()
-
-	def __init__(self,lockdir):
-		self.locked=False
-		self.myfd=None
-		self.set_gid(250)
-		self.locking_method=LockDir.locking_method
-		self.set_lockdir(lockdir)
-		self.set_lockfilename(".catalyst_lock")
-		self.set_lockfile()
-
-		if LockDir.lock_dirs_in_use.count(lockdir)>0:
-			raise "This directory already associated with a lock object"
-		else:
-			LockDir.lock_dirs_in_use.append(lockdir)
-
-		self.hardlock_paths={}
-
-	def delete_lock_from_path_list(self):
-		i=0
-		try:
-			if LockDir.lock_dirs_in_use:
-				for x in LockDir.lock_dirs_in_use:
-					if LockDir.lock_dirs_in_use[i] == self.lockdir:
-						del LockDir.lock_dirs_in_use[i]
-						break
-						i=i+1
-		except AttributeError:
-			pass
-
-	def islocked(self):
-		if self.locked:
-			return True
-		else:
-			return False
-
-	def set_gid(self,gid):
-		if not self.islocked():
-#			if "DEBUG" in self.settings:
-#				print "setting gid to", gid
-			self.gid=gid
-
-	def set_lockdir(self,lockdir):
-		if not os.path.exists(lockdir):
-			os.makedirs(lockdir)
-		if os.path.isdir(lockdir):
-			if not self.islocked():
-				if lockdir[-1] == "/":
-					lockdir=lockdir[:-1]
-				self.lockdir=normpath(lockdir)
-#				if "DEBUG" in self.settings:
-#					print "setting lockdir to", self.lockdir
-		else:
-			raise "the lock object needs a path to a dir"
-
-	def set_lockfilename(self,lockfilename):
-		if not self.islocked():
-			self.lockfilename=lockfilename
-#			if "DEBUG" in self.settings:
-#				print "setting lockfilename to", self.lockfilename
-
-	def set_lockfile(self):
-		if not self.islocked():
-			self.lockfile=normpath(self.lockdir+'/'+self.lockfilename)
-#			if "DEBUG" in self.settings:
-#				print "setting lockfile to", self.lockfile
-
-	def read_lock(self):
-		if not self.locking_method == "HARDLOCK":
-			self.fcntl_lock("read")
-		else:
-			print "HARDLOCKING doesnt support shared-read locks"
-			print "using exclusive write locks"
-			self.hard_lock()
-
-	def write_lock(self):
-		if not self.locking_method == "HARDLOCK":
-			self.fcntl_lock("write")
-		else:
-			self.hard_lock()
-
-	def unlock(self):
-		if not self.locking_method == "HARDLOCK":
-			self.fcntl_unlock()
-		else:
-			self.hard_unlock()
-
-	def fcntl_lock(self,locktype):
-		if self.myfd==None:
-			if not os.path.exists(os.path.dirname(self.lockdir)):
-				raise DirectoryNotFound, os.path.dirname(self.lockdir)
-			if not os.path.exists(self.lockfile):
-				old_mask=os.umask(000)
-				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
-				try:
-					if os.stat(self.lockfile).st_gid != self.gid:
-						os.chown(self.lockfile,os.getuid(),self.gid)
-				except SystemExit, e:
-					raise
-				except OSError, e:
-					if e[0] == 2: #XXX: No such file or directory
-						return self.fcntl_locking(locktype)
-					else:
-						writemsg("Cannot chown a lockfile. This could cause inconvenience later.\n")
-
-				os.umask(old_mask)
-			else:
-				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
-
-		try:
-			if locktype == "read":
-				self.locking_method(self.myfd,fcntl.LOCK_SH|fcntl.LOCK_NB)
-			else:
-				self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
-		except IOError, e:
-			if "errno" not in dir(e):
-				raise
-			if e.errno == errno.EAGAIN:
-				if not LockDir.die_on_failed_lock:
-					# Resource temp unavailable; eg, someone beat us to the lock.
-					writemsg("waiting for lock on %s\n" % self.lockfile)
-
-					# Try for the exclusive or shared lock again.
-					if locktype == "read":
-						self.locking_method(self.myfd,fcntl.LOCK_SH)
-					else:
-						self.locking_method(self.myfd,fcntl.LOCK_EX)
-				else:
-					raise LockInUse,self.lockfile
-			elif e.errno == errno.ENOLCK:
-				pass
-			else:
-				raise
-		if not os.path.exists(self.lockfile):
-			os.close(self.myfd)
-			self.myfd=None
-			#writemsg("lockfile recurse\n")
-			self.fcntl_lock(locktype)
-		else:
-			self.locked=True
-			#writemsg("Lockfile obtained\n")
-
-	def fcntl_unlock(self):
-		import fcntl
-		unlinkfile = 1
-		if not os.path.exists(self.lockfile):
-			print "lockfile does not exist '%s'" % self.lockfile
-			if (self.myfd != None):
-				try:
-					os.close(myfd)
-					self.myfd=None
-				except:
-					pass
-				return False
-
-			try:
-				if self.myfd == None:
-					self.myfd = os.open(self.lockfile, os.O_WRONLY,0660)
-					unlinkfile = 1
-					self.locking_method(self.myfd,fcntl.LOCK_UN)
-			except SystemExit, e:
-				raise
-			except Exception, e:
-				os.close(self.myfd)
-				self.myfd=None
-				raise IOError, "Failed to unlock file '%s'\n" % self.lockfile
-				try:
-					# This sleep call was added to allow other processes that are
-					# waiting for a lock to be able to grab it before it is deleted.
-					# lockfile() already accounts for this situation, however, and
-					# the sleep here adds more time than is saved overall, so am
-					# commenting until it is proved necessary.
-					#time.sleep(0.0001)
-					if unlinkfile:
-						InUse=False
-						try:
-							self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
-						except:
-							print "Read lock may be in effect. skipping lockfile delete..."
-							InUse=True
-							# We won the lock, so there isn't competition for it.
-							# We can safely delete the file.
-							#writemsg("Got the lockfile...\n")
-							#writemsg("Unlinking...\n")
-							self.locking_method(self.myfd,fcntl.LOCK_UN)
-					if not InUse:
-						os.unlink(self.lockfile)
-						os.close(self.myfd)
-						self.myfd=None
-#						if "DEBUG" in self.settings:
-#							print "Unlinked lockfile..."
-				except SystemExit, e:
-					raise
-				except Exception, e:
-					# We really don't care... Someone else has the lock.
-					# So it is their problem now.
-					print "Failed to get lock... someone took it."
-					print str(e)
-
-					# Why test lockfilename?  Because we may have been handed an
-					# fd originally, and the caller might not like having their
-					# open fd closed automatically on them.
-					#if type(lockfilename) == types.StringType:
-					#        os.close(myfd)
-
-		if (self.myfd != None):
-			os.close(self.myfd)
-			self.myfd=None
-			self.locked=False
-			time.sleep(.0001)
-
-	def hard_lock(self,max_wait=14400):
-		"""Does the NFS, hardlink shuffle to ensure locking on the disk.
-		We create a PRIVATE lockfile, that is just a placeholder on the disk.
-		Then we HARDLINK the real lockfile to that private file.
-		If our file has 2 references, then we have the lock. :)
-		Otherwise we lather, rinse, and repeat.
-		We default to a 4 hour timeout.
-		"""
-
-		self.myhardlock = self.hardlock_name(self.lockdir)
-
-		start_time = time.time()
-		reported_waiting = False
-
-		while(time.time() < (start_time + max_wait)):
-			# We only need it to exist.
-			self.myfd = os.open(self.myhardlock, os.O_CREAT|os.O_RDWR,0660)
-			os.close(self.myfd)
-
-			self.add_hardlock_file_to_cleanup()
-			if not os.path.exists(self.myhardlock):
-				raise FileNotFound, "Created lockfile is missing: %(filename)s" % {"filename":self.myhardlock}
-			try:
-				res = os.link(self.myhardlock, self.lockfile)
-			except SystemExit, e:
-				raise
-			except Exception, e:
-#				if "DEBUG" in self.settings:
-#					print "lockfile(): Hardlink: Link failed."
-#					print "Exception: ",e
-				pass
-
-			if self.hardlink_is_mine(self.myhardlock, self.lockfile):
-				# We have the lock.
-				if reported_waiting:
-					print
-				return True
-
-			if reported_waiting:
-				writemsg(".")
-			else:
-				reported_waiting = True
-				print
-				print "Waiting on (hardlink) lockfile: (one '.' per 3 seconds)"
-				print "Lockfile: " + self.lockfile
-			time.sleep(3)
-
-		os.unlink(self.myhardlock)
-		return False
-
-	def hard_unlock(self):
-		try:
-			if os.path.exists(self.myhardlock):
-				os.unlink(self.myhardlock)
-			if os.path.exists(self.lockfile):
-				os.unlink(self.lockfile)
-		except SystemExit, e:
-			raise
-		except:
-			writemsg("Something strange happened to our hardlink locks.\n")
-
-	def add_hardlock_file_to_cleanup(self):
-		#mypath = self.normpath(path)
-		if os.path.isdir(self.lockdir) and os.path.isfile(self.myhardlock):
-			self.hardlock_paths[self.lockdir]=self.myhardlock
-
-	def remove_hardlock_file_from_cleanup(self):
-		if self.lockdir in self.hardlock_paths:
-			del self.hardlock_paths[self.lockdir]
-			print self.hardlock_paths
-
-	def hardlock_name(self, path):
-		mypath=path+"/.hardlock-"+os.uname()[1]+"-"+str(os.getpid())
-		newpath = os.path.normpath(mypath)
-		if len(newpath) > 1:
-			if newpath[1] == "/":
-				newpath = "/"+newpath.lstrip("/")
-		return newpath
-
-	def hardlink_is_mine(self,link,lock):
-		import stat
-		try:
-			myhls = os.stat(link)
-			mylfs = os.stat(lock)
-		except SystemExit, e:
-			raise
-		except:
-			myhls = None
-			mylfs = None
-
-		if myhls:
-			if myhls[stat.ST_NLINK] == 2:
-				return True
-		if mylfs:
-			if mylfs[stat.ST_INO] == myhls[stat.ST_INO]:
-				return True
-		return False
-
-	def hardlink_active(lock):
-		if not os.path.exists(lock):
-			return False
-
-	def clean_my_hardlocks(self):
-		try:
-			for x in self.hardlock_paths.keys():
-				self.hardlock_cleanup(x)
-		except AttributeError:
-			pass
-
-	def hardlock_cleanup(self,path):
-		mypid  = str(os.getpid())
-		myhost = os.uname()[1]
-		mydl = os.listdir(path)
-		results = []
-		mycount = 0
-
-		mylist = {}
-		for x in mydl:
-			filepath=path+"/"+x
-			if os.path.isfile(filepath):
-				parts = filepath.split(".hardlock-")
-			if len(parts) == 2:
-				filename = parts[0]
-				hostpid  = parts[1].split("-")
-				host  = "-".join(hostpid[:-1])
-				pid   = hostpid[-1]
-			if filename not in mylist:
-				mylist[filename] = {}
-
-			if host not in mylist[filename]:
-				mylist[filename][host] = []
-				mylist[filename][host].append(pid)
-				mycount += 1
-			else:
-				mylist[filename][host].append(pid)
-				mycount += 1
-
-
-		results.append("Found %(count)s locks" % {"count":mycount})
-		for x in mylist.keys():
-			if myhost in mylist[x]:
-				mylockname = self.hardlock_name(x)
-				if self.hardlink_is_mine(mylockname, self.lockfile) or \
-					not os.path.exists(self.lockfile):
-					for y in mylist[x].keys():
-						for z in mylist[x][y]:
-							filename = x+".hardlock-"+y+"-"+z
-							if filename == mylockname:
-								self.hard_unlock()
-								continue
-							try:
-								# We're sweeping through, unlinking everyone's locks.
-								os.unlink(filename)
-								results.append("Unlinked: " + filename)
-							except SystemExit, e:
-								raise
-							except Exception,e:
-								pass
-					try:
-						os.unlink(x)
-						results.append("Unlinked: " + x)
-						os.unlink(mylockname)
-						results.append("Unlinked: " + mylockname)
-					except SystemExit, e:
-						raise
-					except Exception,e:
-						pass
-				else:
-					try:
-						os.unlink(mylockname)
-						results.append("Unlinked: " + mylockname)
-					except SystemExit, e:
-						raise
-					except Exception,e:
-						pass
-		return results
-
-if __name__ == "__main__":
-
-	def lock_work():
-		print
-		for i in range(1,6):
-			print i,time.time()
-			time.sleep(1)
-		print
-	def normpath(mypath):
-		newpath = os.path.normpath(mypath)
-		if len(newpath) > 1:
-			if newpath[1] == "/":
-				newpath = "/"+newpath.lstrip("/")
-		return newpath
-
-	print "Lock 5 starting"
-	import time
-	Lock1=LockDir("/tmp/lock_path")
-	Lock1.write_lock()
-	print "Lock1 write lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-	Lock1.read_lock()
-	print "Lock1 read lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-	Lock1.read_lock()
-	print "Lock1 read lock"
-
-	Lock1.write_lock()
-	print "Lock1 write lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-	Lock1.read_lock()
-	print "Lock1 read lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-#Lock1.write_lock()
-#time.sleep(2)
-#Lock1.unlock()
-    ##Lock1.write_lock()
-    #time.sleep(2)
-    #Lock1.unlock()
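For readers following the move: the fcntl path of the LockDir class deleted above boils down to POSIX advisory locking on a lockfile, with a non-blocking attempt first and a blocking wait as fallback. A minimal Python 3 sketch of that idea (illustrative names, not catalyst's actual API):

```python
# Hedged sketch, not catalyst code: an fcntl-based advisory file lock as
# a context manager, roughly what LockDir.fcntl_lock()/fcntl_unlock() do.
import errno
import fcntl
import os


class FileLock:
    """Advisory exclusive lock on a lockfile; waits until granted."""

    def __init__(self, path):
        self.path = path
        self.fd = None

    def __enter__(self):
        # O_CREAT mirrors the original: create the lockfile if missing.
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR, 0o660)
        try:
            # Try non-blocking first, as the original does ...
            fcntl.lockf(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError as e:
            if e.errno not in (errno.EAGAIN, errno.EACCES):
                raise
            # ... then fall back to a blocking wait for the lock.
            fcntl.lockf(self.fd, fcntl.LOCK_EX)
        return self

    def __exit__(self, *exc):
        # Release the lock and close the descriptor; the lockfile itself
        # is intentionally left in place (deleting it safely is the hard
        # part the original code wrestles with).
        fcntl.lockf(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
        self.fd = None
```

Typical use would be `with FileLock("/var/tmp/catalyst.lock"): ...`; the NFS hardlink shuffle in hard_lock() exists precisely because fcntl locks were historically unreliable over NFS.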
diff --git a/modules/catalyst_support.py b/modules/catalyst_support.py
deleted file mode 100644
index 316dfa3..0000000
--- a/modules/catalyst_support.py
+++ /dev/null
@@ -1,718 +0,0 @@
-
-import sys,string,os,types,re,signal,traceback,time
-#import md5,sha
-selinux_capable = False
-#userpriv_capable = (os.getuid() == 0)
-#fakeroot_capable = False
-BASH_BINARY             = "/bin/bash"
-
-try:
-        import resource
-        max_fd_limit=resource.getrlimit(RLIMIT_NOFILE)
-except SystemExit, e:
-        raise
-except:
-        # hokay, no resource module.
-        max_fd_limit=256
-
-# pids this process knows of.
-spawned_pids = []
-
-try:
-        import urllib
-except SystemExit, e:
-        raise
-
-def cleanup(pids,block_exceptions=True):
-        """function to go through and reap the list of pids passed to it"""
-        global spawned_pids
-        if type(pids) == int:
-                pids = [pids]
-        for x in pids:
-                try:
-                        os.kill(x,signal.SIGTERM)
-                        if os.waitpid(x,os.WNOHANG)[1] == 0:
-                                # feisty bugger, still alive.
-                                os.kill(x,signal.SIGKILL)
-                                os.waitpid(x,0)
-
-                except OSError, oe:
-                        if block_exceptions:
-                                pass
-                        if oe.errno not in (10,3):
-                                raise oe
-                except SystemExit:
-                        raise
-                except Exception:
-                        if block_exceptions:
-                                pass
-                try:                    spawned_pids.remove(x)
-                except IndexError:      pass
-
-
-
-# a function to turn a string of non-printable characters into a string of
-# hex characters
-def hexify(str):
-	hexStr = string.hexdigits
-	r = ''
-	for ch in str:
-		i = ord(ch)
-		r = r + hexStr[(i >> 4) & 0xF] + hexStr[i & 0xF]
-	return r
-# hexify()
-
-def generate_contents(file,contents_function="auto",verbose=False):
-	try:
-		_ = contents_function
-		if _ == 'auto' and file.endswith('.iso'):
-			_ = 'isoinfo-l'
-		if (_ in ['tar-tv','auto']):
-			if file.endswith('.tgz') or file.endswith('.tar.gz'):
-				_ = 'tar-tvz'
-			elif file.endswith('.tbz2') or file.endswith('.tar.bz2'):
-				_ = 'tar-tvj'
-			elif file.endswith('.tar'):
-				_ = 'tar-tv'
-
-		if _ == 'auto':
-			warn('File %r has unknown type for automatic detection.' % (file, ))
-			return None
-		else:
-			contents_function = _
-			_ = contents_map[contents_function]
-			return _[0](file,_[1],verbose)
-	except:
-		raise CatalystError,\
-			"Error generating contents, is appropriate utility (%s) installed on your system?" \
-			% (contents_function, )
-
-def calc_contents(file,cmd,verbose):
-	args={ 'file': file }
-	cmd=cmd % dict(args)
-	a=os.popen(cmd)
-	mylines=a.readlines()
-	a.close()
-	result="".join(mylines)
-	if verbose:
-		print result
-	return result
-
-# This contents map must be defined after the function calc_contents
-# It is possible to call different functions from this but they must be defined
-# before contents_map
-# Key,function,cmd
-contents_map={
-	# 'find' is disabled because it requires the source path, which is not
-	# always available
-	#"find"		:[calc_contents,"find %(path)s"],
-	"tar-tv":[calc_contents,"tar tvf %(file)s"],
-	"tar-tvz":[calc_contents,"tar tvzf %(file)s"],
-	"tar-tvj":[calc_contents,"tar -I lbzip2 -tvf %(file)s"],
-	"isoinfo-l":[calc_contents,"isoinfo -l -i %(file)s"],
-	# isoinfo-f should be a last resort only
-	"isoinfo-f":[calc_contents,"isoinfo -f -i %(file)s"],
-}
-
-def generate_hash(file,hash_function="crc32",verbose=False):
-	try:
-		return hash_map[hash_function][0](file,hash_map[hash_function][1],hash_map[hash_function][2],\
-			hash_map[hash_function][3],verbose)
-	except:
-		raise CatalystError,"Error generating hash, is appropriate utility installed on your system?"
-
-def calc_hash(file,cmd,cmd_args,id_string="MD5",verbose=False):
-	a=os.popen(cmd+" "+cmd_args+" "+file)
-	mylines=a.readlines()
-	a.close()
-	mylines=mylines[0].split()
-	result=mylines[0]
-	if verbose:
-		print id_string+" (%s) = %s" % (file, result)
-	return result
-
-def calc_hash2(file,cmd,cmd_args,id_string="MD5",verbose=False):
-	a=os.popen(cmd+" "+cmd_args+" "+file)
-	header=a.readline()
-	mylines=a.readline().split()
-	hash=mylines[0]
-	short_file=os.path.split(mylines[1])[1]
-	a.close()
-	result=header+hash+"  "+short_file+"\n"
-	if verbose:
-		print header+" (%s) = %s" % (short_file, result)
-	return result
-
-# This hash map must be defined after the function calc_hash
-# It is possible to call different functions from this but they must be defined
-# before hash_map
-# Key,function,cmd,cmd_args,Print string
-hash_map={
-	 "adler32":[calc_hash2,"shash","-a ADLER32","ADLER32"],\
-	 "crc32":[calc_hash2,"shash","-a CRC32","CRC32"],\
-	 "crc32b":[calc_hash2,"shash","-a CRC32B","CRC32B"],\
-	 "gost":[calc_hash2,"shash","-a GOST","GOST"],\
-	 "haval128":[calc_hash2,"shash","-a HAVAL128","HAVAL128"],\
-	 "haval160":[calc_hash2,"shash","-a HAVAL160","HAVAL160"],\
-	 "haval192":[calc_hash2,"shash","-a HAVAL192","HAVAL192"],\
-	 "haval224":[calc_hash2,"shash","-a HAVAL224","HAVAL224"],\
-	 "haval256":[calc_hash2,"shash","-a HAVAL256","HAVAL256"],\
-	 "md2":[calc_hash2,"shash","-a MD2","MD2"],\
-	 "md4":[calc_hash2,"shash","-a MD4","MD4"],\
-	 "md5":[calc_hash2,"shash","-a MD5","MD5"],\
-	 "ripemd128":[calc_hash2,"shash","-a RIPEMD128","RIPEMD128"],\
-	 "ripemd160":[calc_hash2,"shash","-a RIPEMD160","RIPEMD160"],\
-	 "ripemd256":[calc_hash2,"shash","-a RIPEMD256","RIPEMD256"],\
-	 "ripemd320":[calc_hash2,"shash","-a RIPEMD320","RIPEMD320"],\
-	 "sha1":[calc_hash2,"shash","-a SHA1","SHA1"],\
-	 "sha224":[calc_hash2,"shash","-a SHA224","SHA224"],\
-	 "sha256":[calc_hash2,"shash","-a SHA256","SHA256"],\
-	 "sha384":[calc_hash2,"shash","-a SHA384","SHA384"],\
-	 "sha512":[calc_hash2,"shash","-a SHA512","SHA512"],\
-	 "snefru128":[calc_hash2,"shash","-a SNEFRU128","SNEFRU128"],\
-	 "snefru256":[calc_hash2,"shash","-a SNEFRU256","SNEFRU256"],\
-	 "tiger":[calc_hash2,"shash","-a TIGER","TIGER"],\
-	 "tiger128":[calc_hash2,"shash","-a TIGER128","TIGER128"],\
-	 "tiger160":[calc_hash2,"shash","-a TIGER160","TIGER160"],\
-	 "whirlpool":[calc_hash2,"shash","-a WHIRLPOOL","WHIRLPOOL"],\
-	 }
-
-def read_from_clst(file):
-	line = ''
-	myline = ''
-	try:
-		myf=open(file,"r")
-	except:
-		return -1
-		#raise CatalystError, "Could not open file "+file
-	for line in myf.readlines():
-	    #line = string.replace(line, "\n", "") # drop newline
-	    myline = myline + line
-	myf.close()
-	return myline
-# read_from_clst
-
-# these should never be touched
-required_build_targets=["generic_target","generic_stage_target"]
-
-# new build types should be added here
-valid_build_targets=["stage1_target","stage2_target","stage3_target","stage4_target","grp_target",
-			"livecd_stage1_target","livecd_stage2_target","embedded_target",
-			"tinderbox_target","snapshot_target","netboot_target","netboot2_target"]
-
-required_config_file_values=["storedir","sharedir","distdir","portdir"]
-valid_config_file_values=required_config_file_values[:]
-valid_config_file_values.append("PKGCACHE")
-valid_config_file_values.append("KERNCACHE")
-valid_config_file_values.append("CCACHE")
-valid_config_file_values.append("DISTCC")
-valid_config_file_values.append("ICECREAM")
-valid_config_file_values.append("ENVSCRIPT")
-valid_config_file_values.append("AUTORESUME")
-valid_config_file_values.append("FETCH")
-valid_config_file_values.append("CLEAR_AUTORESUME")
-valid_config_file_values.append("options")
-valid_config_file_values.append("DEBUG")
-valid_config_file_values.append("VERBOSE")
-valid_config_file_values.append("PURGE")
-valid_config_file_values.append("PURGEONLY")
-valid_config_file_values.append("SNAPCACHE")
-valid_config_file_values.append("snapshot_cache")
-valid_config_file_values.append("hash_function")
-valid_config_file_values.append("digests")
-valid_config_file_values.append("contents")
-valid_config_file_values.append("SEEDCACHE")
-
-verbosity=1
-
-def list_bashify(mylist):
-	if type(mylist)==types.StringType:
-		mypack=[mylist]
-	else:
-		mypack=mylist[:]
-	for x in range(0,len(mypack)):
-		# surround args with quotes for passing to bash,
-		# allows things like "<" to remain intact
-		mypack[x]="'"+mypack[x]+"'"
-	mypack=string.join(mypack)
-	return mypack
-
-def list_to_string(mylist):
-	if type(mylist)==types.StringType:
-		mypack=[mylist]
-	else:
-		mypack=mylist[:]
-	for x in range(0,len(mypack)):
-		# surround args with quotes for passing to bash,
-		# allows things like "<" to remain intact
-		mypack[x]=mypack[x]
-	mypack=string.join(mypack)
-	return mypack
-
-class CatalystError(Exception):
-	def __init__(self, message):
-		if message:
-			(type,value)=sys.exc_info()[:2]
-			if value!=None:
-				print
-				print traceback.print_exc(file=sys.stdout)
-			print
-			print "!!! catalyst: "+message
-			print
-
-class LockInUse(Exception):
-	def __init__(self, message):
-		if message:
-			#(type,value)=sys.exc_info()[:2]
-			#if value!=None:
-			    #print
-			    #kprint traceback.print_exc(file=sys.stdout)
-			print
-			print "!!! catalyst lock file in use: "+message
-			print
-
-def die(msg=None):
-	warn(msg)
-	sys.exit(1)
-
-def warn(msg):
-	print "!!! catalyst: "+msg
-
-def find_binary(myc):
-	"""look through the environmental path for an executable file named whatever myc is"""
-        # this sucks. badly.
-        p=os.getenv("PATH")
-        if p == None:
-                return None
-        for x in p.split(":"):
-                #if it exists, and is executable
-                if os.path.exists("%s/%s" % (x,myc)) and os.stat("%s/%s" % (x,myc))[0] & 0x0248:
-                        return "%s/%s" % (x,myc)
-        return None
-
-def spawn_bash(mycommand,env={},debug=False,opt_name=None,**keywords):
-	"""spawn mycommand as an arguement to bash"""
-	args=[BASH_BINARY]
-	if not opt_name:
-	    opt_name=mycommand.split()[0]
-	if "BASH_ENV" not in env:
-	    env["BASH_ENV"] = "/etc/spork/is/not/valid/profile.env"
-	if debug:
-	    args.append("-x")
-	args.append("-c")
-	args.append(mycommand)
-	return spawn(args,env=env,opt_name=opt_name,**keywords)
-
-#def spawn_get_output(mycommand,spawn_type=spawn,raw_exit_code=False,emulate_gso=True, \
-#        collect_fds=[1],fd_pipes=None,**keywords):
-
-def spawn_get_output(mycommand,raw_exit_code=False,emulate_gso=True, \
-        collect_fds=[1],fd_pipes=None,**keywords):
-        """call spawn, collecting the output to fd's specified in collect_fds list
-        emulate_gso is a compatibility hack to emulate commands.getstatusoutput's return, minus the
-        requirement it always be a bash call (spawn_type controls the actual spawn call), and minus the
-        'lets let log only stdin and let stderr slide by'.
-
-        emulate_gso was deprecated from the day it was added, so convert your code over.
-        spawn_type is the passed in function to call- typically spawn_bash, spawn, spawn_sandbox, or spawn_fakeroot"""
-        global selinux_capable
-        pr,pw=os.pipe()
-
-        #if type(spawn_type) not in [types.FunctionType, types.MethodType]:
-        #        s="spawn_type must be passed a function, not",type(spawn_type),spawn_type
-        #        raise Exception,s
-
-        if fd_pipes==None:
-                fd_pipes={}
-                fd_pipes[0] = 0
-
-        for x in collect_fds:
-                fd_pipes[x] = pw
-        keywords["returnpid"]=True
-
-        mypid=spawn_bash(mycommand,fd_pipes=fd_pipes,**keywords)
-        os.close(pw)
-        if type(mypid) != types.ListType:
-                os.close(pr)
-                return [mypid, "%s: No such file or directory" % mycommand.split()[0]]
-
-        fd=os.fdopen(pr,"r")
-        mydata=fd.readlines()
-        fd.close()
-        if emulate_gso:
-                mydata=string.join(mydata)
-                if len(mydata) and mydata[-1] == "\n":
-                        mydata=mydata[:-1]
-        retval=os.waitpid(mypid[0],0)[1]
-        cleanup(mypid)
-        if raw_exit_code:
-                return [retval,mydata]
-        retval=process_exit_code(retval)
-        return [retval, mydata]
-
-# base spawn function
-def spawn(mycommand,env={},raw_exit_code=False,opt_name=None,fd_pipes=None,returnpid=False,\
-	 uid=None,gid=None,groups=None,umask=None,logfile=None,path_lookup=True,\
-	 selinux_context=None, raise_signals=False, func_call=False):
-	"""base fork/execve function.
-	mycommand is the desired command- if you need a command to execute in a bash/sandbox/fakeroot
-	environment, use the appropriate spawn call.  This is a straight fork/exec code path.
-	Can either have a tuple, or a string passed in.  If uid/gid/groups/umask specified, it changes
-	the forked process to said value.  If path_lookup is on, a non-absolute command will be converted
-	to an absolute command, otherwise it returns None.
-
-	selinux_context is the desired context, dependent on selinux being available.
-	opt_name controls the name the processor goes by.
-	fd_pipes controls which file descriptor numbers are left open in the forked process- it's a dict of
-	current fd's raw fd #, desired #.
-
-	func_call is a boolean for specifying to execute a python function- use spawn_func instead.
-	raise_signals is questionable.  Basically throw an exception if signal'd.  No exception is thrown
-	if raw_input is on.
-
-	logfile overloads the specified fd's to write to a tee process which logs to logfile
-	returnpid returns the relevant pids (a list, including the logging process if logfile is on).
-
-	non-returnpid calls to spawn will block till the process has exited, returning the exitcode/signal
-	raw_exit_code controls whether the actual waitpid result is returned, or interpreted."""
-
-	myc=''
-	if not func_call:
-		if type(mycommand)==types.StringType:
-			mycommand=mycommand.split()
-		myc = mycommand[0]
-		if not os.access(myc, os.X_OK):
-			if not path_lookup:
-				return None
-			myc = find_binary(myc)
-			if myc == None:
-			    return None
-        mypid=[]
-	if logfile:
-		pr,pw=os.pipe()
-		mypid.extend(spawn(('tee','-i','-a',logfile),returnpid=True,fd_pipes={0:pr,1:1,2:2}))
-		retval=os.waitpid(mypid[-1],os.WNOHANG)[1]
-		if retval != 0:
-			# he's dead jim.
-			if raw_exit_code:
-				return retval
-			return process_exit_code(retval)
-
-		if fd_pipes == None:
-			fd_pipes={}
-			fd_pipes[0] = 0
-		fd_pipes[1]=pw
-		fd_pipes[2]=pw
-
-	if not opt_name:
-		opt_name = mycommand[0]
-	myargs=[opt_name]
-	myargs.extend(mycommand[1:])
-	global spawned_pids
-	mypid.append(os.fork())
-	if mypid[-1] != 0:
-		#log the bugger.
-		spawned_pids.extend(mypid)
-
-	if mypid[-1] == 0:
-		if func_call:
-			spawned_pids = []
-
-		# this may look ugly, but basically it moves file descriptors around to ensure no
-		# handles that are needed are accidentally closed during the final dup2 calls.
-		trg_fd=[]
-		if type(fd_pipes)==types.DictType:
-			src_fd=[]
-			k=fd_pipes.keys()
-			k.sort()
-
-			#build list of which fds will be where, and where they are at currently
-			for x in k:
-				trg_fd.append(x)
-				src_fd.append(fd_pipes[x])
-
-			# run through said list dup'ing descriptors so that they won't be waxed
-			# by other dup calls.
-			for x in range(0,len(trg_fd)):
-				if trg_fd[x] == src_fd[x]:
-					continue
-				if trg_fd[x] in src_fd[x+1:]:
-					new=os.dup2(trg_fd[x],max(src_fd) + 1)
-					os.close(trg_fd[x])
-					try:
-						while True:
-							src_fd[s.index(trg_fd[x])]=new
-					except SystemExit, e:
-						raise
-					except:
-						pass
-
-			# transfer the fds to their final pre-exec position.
-			for x in range(0,len(trg_fd)):
-				if trg_fd[x] != src_fd[x]:
-					os.dup2(src_fd[x], trg_fd[x])
-		else:
-			trg_fd=[0,1,2]
-
-		# wax all open descriptors that weren't requested be left open.
-		for x in range(0,max_fd_limit):
-			if x not in trg_fd:
-				try:
-					os.close(x)
-                                except SystemExit, e:
-                                        raise
-                                except:
-                                        pass
-
-                # note this order must be preserved- can't change gid/groups if you change uid first.
-                if selinux_capable and selinux_context:
-                        import selinux
-                        selinux.setexec(selinux_context)
-                if gid:
-                        os.setgid(gid)
-                if groups:
-                        os.setgroups(groups)
-                if uid:
-                        os.setuid(uid)
-                if umask:
-                        os.umask(umask)
-                else:
-                        os.umask(022)
-
-                try:
-                        #print "execing", myc, myargs
-                        if func_call:
-                                # either use a passed in func for interpretting the results, or return if no exception.
-                                # note the passed in list, and dict are expanded.
-                                if len(mycommand) == 4:
-                                        os._exit(mycommand[3](mycommand[0](*mycommand[1],**mycommand[2])))
-                                try:
-                                        mycommand[0](*mycommand[1],**mycommand[2])
-                                except Exception,e:
-                                        print "caught exception",e," in forked func",mycommand[0]
-                                sys.exit(0)
-
-			#os.execvp(myc,myargs)
-                        os.execve(myc,myargs,env)
-                except SystemExit, e:
-                        raise
-                except Exception, e:
-                        if not func_call:
-                                raise str(e)+":\n   "+myc+" "+string.join(myargs)
-                        print "func call failed"
-
-                # If the execve fails, we need to report it, and exit
-                # *carefully* --- report error here
-                os._exit(1)
-                sys.exit(1)
-                return # should never get reached
-
-        # if we were logging, kill the pipes.
-        if logfile:
-                os.close(pr)
-                os.close(pw)
-
-        if returnpid:
-                return mypid
-
-        # loop through pids (typically one, unless logging), either waiting on their death, or waxing them
-        # if the main pid (mycommand) returned badly.
-        while len(mypid):
-                retval=os.waitpid(mypid[-1],0)[1]
-                if retval != 0:
-                        cleanup(mypid[0:-1],block_exceptions=False)
-                        # at this point we've killed all other kid pids generated via this call.
-                        # return now.
-                        if raw_exit_code:
-                                return retval
-                        return process_exit_code(retval,throw_signals=raise_signals)
-                else:
-                        mypid.pop(-1)
-        cleanup(mypid)
-        return 0
-
-def cmd(mycmd,myexc="",env={}):
-	try:
-		sys.stdout.flush()
-		retval=spawn_bash(mycmd,env)
-		if retval != 0:
-			raise CatalystError,myexc
-	except:
-		raise
-
-def process_exit_code(retval,throw_signals=False):
-        """process a waitpid returned exit code, returning exit code if it exit'd, or the
-        signal if it died from signalling
-        if throw_signals is on, it raises a SystemExit if the process was signaled.
-        This is intended for usage with threads, although at the moment you can't signal individual
-        threads in python, only the master thread, so it's a questionable option."""
-        if (retval & 0xff)==0:
-                return retval >> 8 # return exit code
-        else:
-                if throw_signals:
-                        #use systemexit, since portage is stupid about exception catching.
-                        raise SystemExit()
-                return (retval & 0xff) << 8 # interrupted by signal
-
-def file_locate(settings,filelist,expand=1):
-	#if expand=1, non-absolute paths will be accepted and
-	# expanded to os.getcwd()+"/"+localpath if file exists
-	for myfile in filelist:
-		if myfile not in settings:
-			#filenames such as cdtar are optional, so we don't assume the variable is defined.
-			pass
-		else:
-		    if len(settings[myfile])==0:
-			    raise CatalystError, "File variable \""+myfile+"\" has a length of zero (not specified.)"
-		    if settings[myfile][0]=="/":
-			    if not os.path.exists(settings[myfile]):
-				    raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]
-		    elif expand and os.path.exists(os.getcwd()+"/"+settings[myfile]):
-			    settings[myfile]=os.getcwd()+"/"+settings[myfile]
-		    else:
-			    raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]+" (2nd try)"
-"""
-Spec file format:
-
-The spec file format is a very simple and easy-to-use format for storing data. Here's an example
-file:
-
-item1: value1
-item2: foo bar oni
-item3:
-	meep
-	bark
-	gleep moop
-
-This file would be interpreted as defining three items: item1, item2 and item3. item1 would contain
-the string value "value1". Item2 would contain an ordered list [ "foo", "bar", "oni" ]. item3
-would contain an ordered list as well: [ "meep", "bark", "gleep", "moop" ]. It's important to note
-that the order of multiple-value items is preserved, but the order that the items themselves are
-defined are not preserved. In other words, "foo", "bar", "oni" ordering is preserved but "item1"
-"item2" "item3" ordering is not, as the item strings are stored in a dictionary (hash).
-"""
-
-def parse_makeconf(mylines):
-	mymakeconf={}
-	pos=0
-	pat=re.compile("([0-9a-zA-Z_]*)=(.*)")
-	while pos<len(mylines):
-		if len(mylines[pos])<=1:
-			#skip blanks
-			pos += 1
-			continue
-		if mylines[pos][0] in ["#"," ","\t"]:
-			#skip indented lines, comments
-			pos += 1
-			continue
-		else:
-			myline=mylines[pos]
-			mobj=pat.match(myline)
-			pos += 1
-			if mobj.group(2):
-			    clean_string = re.sub(r"\"",r"",mobj.group(2))
-			    mymakeconf[mobj.group(1)]=clean_string
-	return mymakeconf
-
-def read_makeconf(mymakeconffile):
-	if os.path.exists(mymakeconffile):
-		try:
-			try:
-				import snakeoil.fileutils
-				return snakeoil.fileutils.read_bash_dict(mymakeconffile, sourcing_command="source")
-			except ImportError:
-				try:
-					import portage.util
-					return portage.util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
-				except:
-					try:
-						import portage_util
-						return portage_util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
-					except ImportError:
-						myf=open(mymakeconffile,"r")
-						mylines=myf.readlines()
-						myf.close()
-						return parse_makeconf(mylines)
-		except:
-			raise CatalystError, "Could not parse make.conf file "+mymakeconffile
-	else:
-		makeconf={}
-		return makeconf
-
-def msg(mymsg,verblevel=1):
-	if verbosity>=verblevel:
-		print mymsg
-
-def pathcompare(path1,path2):
-	# Change double slashes to slash
-	path1 = re.sub(r"//",r"/",path1)
-	path2 = re.sub(r"//",r"/",path2)
-	# Removing ending slash
-	path1 = re.sub("/$","",path1)
-	path2 = re.sub("/$","",path2)
-
-	if path1 == path2:
-		return 1
-	return 0
-
-def ismount(path):
-	"enhanced to handle bind mounts"
-	if os.path.ismount(path):
-		return 1
-	a=os.popen("mount")
-	mylines=a.readlines()
-	a.close()
-	for line in mylines:
-		mysplit=line.split()
-		if pathcompare(path,mysplit[2]):
-			return 1
-	return 0
-
-def addl_arg_parse(myspec,addlargs,requiredspec,validspec):
-	"helper function to help targets parse additional arguments"
-	global valid_config_file_values
-
-	messages = []
-	for x in addlargs.keys():
-		if x not in validspec and x not in valid_config_file_values and x not in requiredspec:
-			messages.append("Argument \""+x+"\" not recognized.")
-		else:
-			myspec[x]=addlargs[x]
-
-	for x in requiredspec:
-		if x not in myspec:
-			messages.append("Required argument \""+x+"\" not specified.")
-
-	if messages:
-		raise CatalystError, '\n\tAlso: '.join(messages)
-
-def touch(myfile):
-	try:
-		myf=open(myfile,"w")
-		myf.close()
-	except IOError:
-		raise CatalystError, "Could not touch "+myfile+"."
-
-def countdown(secs=5, doing="Starting"):
-        if secs:
-		print ">>> Waiting",secs,"seconds before starting..."
-		print ">>> (Control-C to abort)...\n"+doing+" in: ",
-		ticks=range(secs)
-		ticks.reverse()
-		for sec in ticks:
-			sys.stdout.write(str(sec+1)+" ")
-			sys.stdout.flush()
-			time.sleep(1)
-		print
-
-def normpath(mypath):
-	TrailingSlash=False
-        if mypath[-1] == "/":
-	    TrailingSlash=True
-        newpath = os.path.normpath(mypath)
-        if len(newpath) > 1:
-                if newpath[:2] == "//":
-                        newpath = newpath[1:]
-	if TrailingSlash:
-	    newpath=newpath+'/'
-        return newpath
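[Review note, not part of the patch: the pathcompare()/ismount() helpers removed above normalize slashes by hand and shell out to `mount` to catch bind mounts. For readers following the move, a rough Python 3 equivalent — a sketch only, these names are hypothetical and this is not catalyst's actual API — would be:]

```python
import os

def path_equal(path1: str, path2: str) -> bool:
    """Collapse doubled slashes and drop a trailing slash before
    comparing, as the removed pathcompare() did."""
    def clean(p):
        while "//" in p:
            p = p.replace("//", "/")
        return p[:-1] if p.endswith("/") and len(p) > 1 else p
    return clean(path1) == clean(path2)

def is_mounted(path: str) -> bool:
    """Like os.path.ismount(), but also catches bind mounts by
    scanning /proc/mounts (Linux-only), mirroring the removed
    ismount() helper that parsed `mount` output instead."""
    if os.path.ismount(path):
        return True
    try:
        with open("/proc/mounts") as mounts:
            return any(len(fields) > 1 and path_equal(path, fields[1])
                       for fields in (line.split() for line in mounts))
    except OSError:
        return False
```

[Reading /proc/mounts avoids spawning a subprocess, but keeps the same limitation as the original: it only sees mounts visible in the current namespace.]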
diff --git a/modules/embedded_target.py b/modules/embedded_target.py
deleted file mode 100644
index f38ea00..0000000
--- a/modules/embedded_target.py
+++ /dev/null
@@ -1,51 +0,0 @@
-"""
-Enbedded target, similar to the stage2 target, builds upon a stage2 tarball.
-
-A stage2 tarball is unpacked, but instead
-of building a stage3, it emerges @system into another directory
-inside the stage2 system.  This way, we do not have to emerge GCC/portage
-into the staged system.
-It may sound complicated but basically it runs
-ROOT=/tmp/submerge emerge --something foo bar .
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-import os,string,imp,types,shutil
-from catalyst_support import *
-from generic_stage_target import *
-from stat import *
-
-class embedded_target(generic_stage_target):
-	"""
-	Builder class for embedded target
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=[]
-		self.valid_values.extend(["embedded/empty","embedded/rm","embedded/unmerge","embedded/fs-prepare","embedded/fs-finish","embedded/mergeroot","embedded/packages","embedded/fs-type","embedded/runscript","boot/kernel","embedded/linuxrc"])
-		self.valid_values.extend(["embedded/use"])
-		if "embedded/fs-type" in addlargs:
-			self.valid_values.append("embedded/fs-ops")
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		self.set_build_kernel_vars(addlargs)
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["dir_setup","unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir",\
-					"portage_overlay","bind","chroot_setup",\
-					"setup_environment","build_kernel","build_packages",\
-					"bootloader","root_overlay","fsscript","unmerge",\
-					"unbind","remove","empty","clean","capture","clear_autoresume"]
-
-	def set_stage_path(self):
-		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+"/tmp/mergeroot")
-		print "embedded stage path is "+self.settings["stage_path"]
-
-	def set_root_path(self):
-		self.settings["root_path"]=normpath("/tmp/mergeroot")
-		print "embedded root path is "+self.settings["root_path"]
-
-def register(foo):
-	foo.update({"embedded":embedded_target})
-	return foo
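[Review note, not part of the patch: every target module removed in this series ends with the same register() hook — the driver passes a dict through each module and collects builder classes keyed by target name. A minimal Python 3 sketch of that convention (stand-in classes, not the real catalyst base class):]

```python
class generic_stage_target:
    """Stand-in for catalyst's real driver base class."""
    def __init__(self, spec, addlargs):
        self.settings = dict(spec)
        self.settings.update(addlargs)

class embedded_target(generic_stage_target):
    pass

def register(targetmap):
    # Each plugin module adds its builder class under its target name,
    # exactly as the removed embedded_target.py did.
    targetmap.update({"embedded": embedded_target})
    return targetmap

# The driver threads one dict through every module's register():
targets = register({})
```

[Moving the modules into a real package, as this series does, opens the door to replacing this hand-rolled registry with ordinary imports later.]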
diff --git a/modules/generic_stage_target.py b/modules/generic_stage_target.py
deleted file mode 100644
index a5b52b0..0000000
--- a/modules/generic_stage_target.py
+++ /dev/null
@@ -1,1740 +0,0 @@
-import os,string,imp,types,shutil
-from catalyst_support import *
-from generic_target import *
-from stat import *
-import catalyst_lock
-
-
-PORT_LOGDIR_CLEAN = \
-	'find "${PORT_LOGDIR}" -type f ! -name "summary.log*" -mtime +30 -delete'
-
-TARGET_MOUNTS_DEFAULTS = {
-	"ccache": "/var/tmp/ccache",
-	"dev": "/dev",
-	"devpts": "/dev/pts",
-	"distdir": "/usr/portage/distfiles",
-	"icecream": "/usr/lib/icecc/bin",
-	"kerncache": "/tmp/kerncache",
-	"packagedir": "/usr/portage/packages",
-	"portdir": "/usr/portage",
-	"port_tmpdir": "/var/tmp/portage",
-	"port_logdir": "/var/log/portage",
-	"proc": "/proc",
-	"shm": "/dev/shm",
-	}
-
-SOURCE_MOUNTS_DEFAULTS = {
-	"dev": "/dev",
-	"devpts": "/dev/pts",
-	"distdir": "/usr/portage/distfiles",
-	"portdir": "/usr/portage",
-	"port_tmpdir": "tmpfs",
-	"proc": "/proc",
-	"shm": "shmfs",
-	}
-
-
-class generic_stage_target(generic_target):
-	"""
-	This class does all of the chroot setup, copying of files, etc. It is
-	the driver class for pretty much everything that Catalyst does.
-	"""
-	def __init__(self,myspec,addlargs):
-		self.required_values.extend(["version_stamp","target","subarch",\
-			"rel_type","profile","snapshot","source_subpath"])
-
-		self.valid_values.extend(["version_stamp","target","subarch",\
-			"rel_type","profile","snapshot","source_subpath","portage_confdir",\
-			"cflags","cxxflags","ldflags","cbuild","hostuse","portage_overlay",\
-			"distcc_hosts","makeopts","pkgcache_path","kerncache_path"])
-
-		self.set_valid_build_kernel_vars(addlargs)
-		generic_target.__init__(self,myspec,addlargs)
-
-		"""
-		The semantics of subarchmap and machinemap changed a bit in 2.0.3 to
-		work better with vapier's CBUILD stuff. I've removed the "monolithic"
-		machinemap from this file and split up its contents amongst the
-		various arch/foo.py files.
-
-		When register() is called on each module in the arch/ dir, it now
-		returns a tuple instead of acting on the subarchmap dict that is
-		passed to it. The tuple contains the values that were previously
-		added to subarchmap as well as a new list of CHOSTs that go along
-		with that arch. This allows us to build machinemap on the fly based
-		on the keys in subarchmap and the values of the 2nd list returned
-		(tmpmachinemap).
-
-		Also, after talking with vapier. I have a slightly better idea of what
-		certain variables are used for and what they should be set to. Neither
-		'buildarch' or 'hostarch' are used directly, so their value doesn't
-		really matter. They are just compared to determine if we are
-		cross-compiling. Because of this, they are just set to the name of the
-		module in arch/ that the subarch is part of to make things simpler.
-		The entire build process is still based off of 'subarch' like it was
-		previously. -agaffney
-		"""
-
-		self.archmap = {}
-		self.subarchmap = {}
-		machinemap = {}
-		for x in [x[:-3] for x in os.listdir(self.settings["sharedir"]+\
-			"/arch/") if x.endswith(".py")]:
-			try:
-				fh=open(self.settings["sharedir"]+"/arch/"+x+".py")
-				"""
-				This next line loads the plugin as a module and assigns it to
-				archmap[x]
-				"""
-				self.archmap[x]=imp.load_module(x,fh,"arch/"+x+\
-					".py",(".py","r",imp.PY_SOURCE))
-				"""
-				This next line registers all the subarches supported in the
-				plugin
-				"""
-				tmpsubarchmap, tmpmachinemap = self.archmap[x].register()
-				self.subarchmap.update(tmpsubarchmap)
-				for machine in tmpmachinemap:
-					machinemap[machine] = x
-				for subarch in tmpsubarchmap:
-					machinemap[subarch] = x
-				fh.close()
-			except IOError:
-				"""
-				This message should probably change a bit, since everything in
-				the dir should load just fine. If it doesn't, it's probably a
-				syntax error in the module
-				"""
-				msg("Can't find/load "+x+".py plugin in "+\
-					self.settings["sharedir"]+"/arch/")
-
-		if "chost" in self.settings:
-			hostmachine = self.settings["chost"].split("-")[0]
-			if hostmachine not in machinemap:
-				raise CatalystError, "Unknown host machine type "+hostmachine
-			self.settings["hostarch"]=machinemap[hostmachine]
-		else:
-			hostmachine = self.settings["subarch"]
-			if hostmachine in machinemap:
-				hostmachine = machinemap[hostmachine]
-			self.settings["hostarch"]=hostmachine
-		if "cbuild" in self.settings:
-			buildmachine = self.settings["cbuild"].split("-")[0]
-		else:
-			buildmachine = os.uname()[4]
-		if buildmachine not in machinemap:
-			raise CatalystError, "Unknown build machine type "+buildmachine
-		self.settings["buildarch"]=machinemap[buildmachine]
-		self.settings["crosscompile"]=(self.settings["hostarch"]!=\
-			self.settings["buildarch"])
-
-		""" Call arch constructor, pass our settings """
-		try:
-			self.arch=self.subarchmap[self.settings["subarch"]](self.settings)
-		except KeyError:
-			print "Invalid subarch: "+self.settings["subarch"]
-			print "Choose one of the following:",
-			for x in self.subarchmap:
-				print x,
-			print
-			sys.exit(2)
-
-		print "Using target:",self.settings["target"]
-		""" Print a nice informational message """
-		if self.settings["buildarch"]==self.settings["hostarch"]:
-			print "Building natively for",self.settings["hostarch"]
-		elif self.settings["crosscompile"]:
-			print "Cross-compiling on",self.settings["buildarch"],\
-				"for different machine type",self.settings["hostarch"]
-		else:
-			print "Building on",self.settings["buildarch"],\
-				"for alternate personality type",self.settings["hostarch"]
-
-		""" This must be set first as other set_ options depend on this """
-		self.set_spec_prefix()
-
-		""" Define all of our core variables """
-		self.set_target_profile()
-		self.set_target_subpath()
-		self.set_source_subpath()
-
-		""" Set paths """
-		self.set_snapshot_path()
-		self.set_root_path()
-		self.set_source_path()
-		self.set_snapcache_path()
-		self.set_chroot_path()
-		self.set_autoresume_path()
-		self.set_dest_path()
-		self.set_stage_path()
-		self.set_target_path()
-
-		self.set_controller_file()
-		self.set_action_sequence()
-		self.set_use()
-		self.set_cleanables()
-		self.set_iso_volume_id()
-		self.set_build_kernel_vars()
-		self.set_fsscript()
-		self.set_install_mask()
-		self.set_rcadd()
-		self.set_rcdel()
-		self.set_cdtar()
-		self.set_fstype()
-		self.set_fsops()
-		self.set_iso()
-		self.set_packages()
-		self.set_rm()
-		self.set_linuxrc()
-		self.set_busybox_config()
-		self.set_overlay()
-		self.set_portage_overlay()
-		self.set_root_overlay()
-
-		"""
-		This next line checks to make sure that the specified variables exist
-		on disk.
-		"""
-		#pdb.set_trace()
-		file_locate(self.settings,["source_path","snapshot_path","distdir"],\
-			expand=0)
-		""" If we are using portage_confdir, check that as well. """
-		if "portage_confdir" in self.settings:
-			file_locate(self.settings,["portage_confdir"],expand=0)
-
-		""" Setup our mount points """
-		# initialize our target mounts.
-		self.target_mounts = TARGET_MOUNTS_DEFAULTS.copy()
-
-		self.mounts = ["proc", "dev", "portdir", "distdir", "port_tmpdir"]
-		# initialize our source mounts
-		self.mountmap = SOURCE_MOUNTS_DEFAULTS.copy()
-		# update them from settings
-		self.mountmap["distdir"] = self.settings["distdir"]
-		self.mountmap["portdir"] = normpath("/".join([
-			self.settings["snapshot_cache_path"],
-			self.settings["repo_name"],
-			]))
-		if "SNAPCACHE" not in self.settings:
-			self.mounts.remove("portdir")
-			#self.mountmap["portdir"] = None
-		if os.uname()[0] == "Linux":
-			self.mounts.append("devpts")
-			self.mounts.append("shm")
-
-		self.set_mounts()
-
-		"""
-		Configure any user specified options (either in catalyst.conf or on
-		the command line).
-		"""
-		if "PKGCACHE" in self.settings:
-			self.set_pkgcache_path()
-			print "Location of the package cache is "+\
-				self.settings["pkgcache_path"]
-			self.mounts.append("packagedir")
-			self.mountmap["packagedir"] = self.settings["pkgcache_path"]
-
-		if "KERNCACHE" in self.settings:
-			self.set_kerncache_path()
-			print "Location of the kerncache is "+\
-				self.settings["kerncache_path"]
-			self.mounts.append("kerncache")
-			self.mountmap["kerncache"] = self.settings["kerncache_path"]
-
-		if "CCACHE" in self.settings:
-			if "CCACHE_DIR" in os.environ:
-				ccdir=os.environ["CCACHE_DIR"]
-				del os.environ["CCACHE_DIR"]
-			else:
-				ccdir="/root/.ccache"
-			if not os.path.isdir(ccdir):
-				raise CatalystError,\
-					"Compiler cache support can't be enabled (can't find "+\
-					ccdir+")"
-			self.mounts.append("ccache")
-			self.mountmap["ccache"] = ccdir
-			""" for the chroot: """
-			self.env["CCACHE_DIR"] = self.target_mounts["ccache"]
-
-		if "ICECREAM" in self.settings:
-			self.mounts.append("icecream")
-			self.mountmap["icecream"] = self.settings["icecream"]
-			self.env["PATH"] = self.target_mounts["icecream"] + ":" + \
-				self.env["PATH"]
-
-		if "port_logdir" in self.settings:
-			self.mounts.append("port_logdir")
-			self.mountmap["port_logdir"] = self.settings["port_logdir"]
-			self.env["PORT_LOGDIR"] = self.settings["port_logdir"]
-			self.env["PORT_LOGDIR_CLEAN"] = PORT_LOGDIR_CLEAN
-
-	def override_cbuild(self):
-		if "CBUILD" in self.makeconf:
-			self.settings["CBUILD"]=self.makeconf["CBUILD"]
-
-	def override_chost(self):
-		if "CHOST" in self.makeconf:
-			self.settings["CHOST"]=self.makeconf["CHOST"]
-
-	def override_cflags(self):
-		if "CFLAGS" in self.makeconf:
-			self.settings["CFLAGS"]=self.makeconf["CFLAGS"]
-
-	def override_cxxflags(self):
-		if "CXXFLAGS" in self.makeconf:
-			self.settings["CXXFLAGS"]=self.makeconf["CXXFLAGS"]
-
-	def override_ldflags(self):
-		if "LDFLAGS" in self.makeconf:
-			self.settings["LDFLAGS"]=self.makeconf["LDFLAGS"]
-
-	def set_install_mask(self):
-		if "install_mask" in self.settings:
-			if type(self.settings["install_mask"])!=types.StringType:
-				self.settings["install_mask"]=\
-					string.join(self.settings["install_mask"])
-
-	def set_spec_prefix(self):
-		self.settings["spec_prefix"]=self.settings["target"]
-
-	def set_target_profile(self):
-		self.settings["target_profile"]=self.settings["profile"]
-
-	def set_target_subpath(self):
-		self.settings["target_subpath"]=self.settings["rel_type"]+"/"+\
-				self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
-				self.settings["version_stamp"]
-
-	def set_source_subpath(self):
-		if type(self.settings["source_subpath"])!=types.StringType:
-			raise CatalystError,\
-				"source_subpath should have been a string. Perhaps you have something wrong in your spec file?"
-
-	def set_pkgcache_path(self):
-		if "pkgcache_path" in self.settings:
-			if type(self.settings["pkgcache_path"])!=types.StringType:
-				self.settings["pkgcache_path"]=\
-					normpath(string.join(self.settings["pkgcache_path"]))
-		else:
-			self.settings["pkgcache_path"]=\
-				normpath(self.settings["storedir"]+"/packages/"+\
-				self.settings["target_subpath"]+"/")
-
-	def set_kerncache_path(self):
-		if "kerncache_path" in self.settings:
-			if type(self.settings["kerncache_path"])!=types.StringType:
-				self.settings["kerncache_path"]=\
-					normpath(string.join(self.settings["kerncache_path"]))
-		else:
-			self.settings["kerncache_path"]=normpath(self.settings["storedir"]+\
-				"/kerncache/"+self.settings["target_subpath"]+"/")
-
-	def set_target_path(self):
-		self.settings["target_path"] = normpath(self.settings["storedir"] +
-			"/builds/" + self.settings["target_subpath"].rstrip('/') +
-			".tar.bz2")
-		if "AUTORESUME" in self.settings\
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"setup_target_path"):
-			print \
-				"Resume point detected, skipping target path setup operation..."
-		else:
-			""" First clean up any existing target stuff """
-			# XXX WTF are we removing the old tarball before we start building the
-			# XXX new one? If the build fails, you don't want to be left with
-			# XXX nothing at all
-#			if os.path.isfile(self.settings["target_path"]):
-#				cmd("rm -f "+self.settings["target_path"],\
-#					"Could not remove existing file: "\
-#					+self.settings["target_path"],env=self.env)
-			touch(self.settings["autoresume_path"]+"setup_target_path")
-
-			if not os.path.exists(self.settings["storedir"]+"/builds/"):
-				os.makedirs(self.settings["storedir"]+"/builds/")
-
-	def set_fsscript(self):
-		if self.settings["spec_prefix"]+"/fsscript" in self.settings:
-			self.settings["fsscript"]=\
-				self.settings[self.settings["spec_prefix"]+"/fsscript"]
-			del self.settings[self.settings["spec_prefix"]+"/fsscript"]
-
-	def set_rcadd(self):
-		if self.settings["spec_prefix"]+"/rcadd" in self.settings:
-			self.settings["rcadd"]=\
-				self.settings[self.settings["spec_prefix"]+"/rcadd"]
-			del self.settings[self.settings["spec_prefix"]+"/rcadd"]
-
-	def set_rcdel(self):
-		if self.settings["spec_prefix"]+"/rcdel" in self.settings:
-			self.settings["rcdel"]=\
-				self.settings[self.settings["spec_prefix"]+"/rcdel"]
-			del self.settings[self.settings["spec_prefix"]+"/rcdel"]
-
-	def set_cdtar(self):
-		if self.settings["spec_prefix"]+"/cdtar" in self.settings:
-			self.settings["cdtar"]=\
-				normpath(self.settings[self.settings["spec_prefix"]+"/cdtar"])
-			del self.settings[self.settings["spec_prefix"]+"/cdtar"]
-
-	def set_iso(self):
-		if self.settings["spec_prefix"]+"/iso" in self.settings:
-			if self.settings[self.settings["spec_prefix"]+"/iso"].startswith('/'):
-				self.settings["iso"]=\
-					normpath(self.settings[self.settings["spec_prefix"]+"/iso"])
-			else:
-				# This automatically prepends the build dir to the ISO output path
-				# if it doesn't start with a /
-				self.settings["iso"] = normpath(self.settings["storedir"] + \
-					"/builds/" + self.settings["rel_type"] + "/" + \
-					self.settings[self.settings["spec_prefix"]+"/iso"])
-			del self.settings[self.settings["spec_prefix"]+"/iso"]
-
-	def set_fstype(self):
-		if self.settings["spec_prefix"]+"/fstype" in self.settings:
-			self.settings["fstype"]=\
-				self.settings[self.settings["spec_prefix"]+"/fstype"]
-			del self.settings[self.settings["spec_prefix"]+"/fstype"]
-
-		if "fstype" not in self.settings:
-			self.settings["fstype"]="normal"
-			for x in self.valid_values:
-				if x ==  self.settings["spec_prefix"]+"/fstype":
-					print "\n"+self.settings["spec_prefix"]+\
-						"/fstype is being set to the default of \"normal\"\n"
-
-	def set_fsops(self):
-		if "fstype" in self.settings:
-			self.valid_values.append("fsops")
-			if self.settings["spec_prefix"]+"/fsops" in self.settings:
-				self.settings["fsops"]=\
-					self.settings[self.settings["spec_prefix"]+"/fsops"]
-				del self.settings[self.settings["spec_prefix"]+"/fsops"]
-
-	def set_source_path(self):
-		if "SEEDCACHE" in self.settings\
-			and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+\
-				self.settings["source_subpath"]+"/")):
-			self.settings["source_path"]=normpath(self.settings["storedir"]+\
-				"/tmp/"+self.settings["source_subpath"]+"/")
-		else:
-			self.settings["source_path"] = normpath(self.settings["storedir"] +
-				"/builds/" + self.settings["source_subpath"].rstrip("/") +
-				".tar.bz2")
-			if os.path.isfile(self.settings["source_path"]):
-				# XXX: Is this even necessary if the previous check passes?
-				if os.path.exists(self.settings["source_path"]):
-					self.settings["source_path_hash"]=\
-						generate_hash(self.settings["source_path"],\
-						hash_function=self.settings["hash_function"],\
-						verbose=False)
-		print "Source path set to "+self.settings["source_path"]
-		if os.path.isdir(self.settings["source_path"]):
-			print "\tIf this is not desired, remove this directory or turn off"
-			print "\tseedcache in the options of catalyst.conf the source path"
-			print "\twill then be "+\
-				normpath(self.settings["storedir"] + "/builds/" +
-					self.settings["source_subpath"].rstrip("/") + ".tar.bz2\n")
-
-	def set_dest_path(self):
-		if "root_path" in self.settings:
-			self.settings["destpath"]=normpath(self.settings["chroot_path"]+\
-				self.settings["root_path"])
-		else:
-			self.settings["destpath"]=normpath(self.settings["chroot_path"])
-
-	def set_cleanables(self):
-		self.settings["cleanables"]=["/etc/resolv.conf","/var/tmp/*","/tmp/*",\
-			"/root/*", self.settings["portdir"]]
-
-	def set_snapshot_path(self):
-		self.settings["snapshot_path"] = normpath(self.settings["storedir"] +
-			"/snapshots/" + self.settings["snapshot_name"] +
-			self.settings["snapshot"].rstrip("/") + ".tar.xz")
-
-		if os.path.exists(self.settings["snapshot_path"]):
-			self.settings["snapshot_path_hash"]=\
-				generate_hash(self.settings["snapshot_path"],\
-				hash_function=self.settings["hash_function"],verbose=False)
-		else:
-			self.settings["snapshot_path"]=normpath(self.settings["storedir"]+\
-				"/snapshots/" + self.settings["snapshot_name"] +
-				self.settings["snapshot"].rstrip("/") + ".tar.bz2")
-
-			if os.path.exists(self.settings["snapshot_path"]):
-				self.settings["snapshot_path_hash"]=\
-					generate_hash(self.settings["snapshot_path"],\
-					hash_function=self.settings["hash_function"],verbose=False)
-
-	def set_snapcache_path(self):
-		if "SNAPCACHE" in self.settings:
-			self.settings["snapshot_cache_path"]=\
-				normpath(self.settings["snapshot_cache"]+"/"+\
-				self.settings["snapshot"])
-			self.snapcache_lock=\
-				catalyst_lock.LockDir(self.settings["snapshot_cache_path"])
-			print "Caching snapshot to "+self.settings["snapshot_cache_path"]
-
-	def set_chroot_path(self):
-		"""
-		NOTE: the trailing slash has been removed
-		Things *could* break if you don't use a proper join()
-		"""
-		self.settings["chroot_path"]=normpath(self.settings["storedir"]+\
-			"/tmp/"+self.settings["target_subpath"])
-		self.chroot_lock=catalyst_lock.LockDir(self.settings["chroot_path"])
-
-	def set_autoresume_path(self):
-		self.settings["autoresume_path"]=normpath(self.settings["storedir"]+\
-			"/tmp/"+self.settings["rel_type"]+"/"+".autoresume-"+\
-			self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
-			self.settings["version_stamp"]+"/")
-		if "AUTORESUME" in self.settings:
-			print "The autoresume path is " + self.settings["autoresume_path"]
-		if not os.path.exists(self.settings["autoresume_path"]):
-			os.makedirs(self.settings["autoresume_path"],0755)
-
-	def set_controller_file(self):
-		self.settings["controller_file"]=normpath(self.settings["sharedir"]+\
-			"/targets/"+self.settings["target"]+"/"+self.settings["target"]+\
-			"-controller.sh")
-
-	def set_iso_volume_id(self):
-		if self.settings["spec_prefix"]+"/volid" in self.settings:
-			self.settings["iso_volume_id"]=\
-				self.settings[self.settings["spec_prefix"]+"/volid"]
-			if len(self.settings["iso_volume_id"])>32:
-				raise CatalystError,\
-					"ISO volume ID must not exceed 32 characters."
-		else:
-			self.settings["iso_volume_id"]="catalyst "+self.settings["snapshot"]
-
-	def set_action_sequence(self):
-		""" Default action sequence for run method """
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-				"setup_confdir","portage_overlay",\
-				"base_dirs","bind","chroot_setup","setup_environment",\
-				"run_local","preclean","unbind","clean"]
-#		if "TARBALL" in self.settings or \
-#			"FETCH" not in self.settings:
-		if "FETCH" not in self.settings:
-			self.settings["action_sequence"].append("capture")
-		self.settings["action_sequence"].append("clear_autoresume")
-
-	def set_use(self):
-		if self.settings["spec_prefix"]+"/use" in self.settings:
-			self.settings["use"]=\
-				self.settings[self.settings["spec_prefix"]+"/use"]
-			del self.settings[self.settings["spec_prefix"]+"/use"]
-		if "use" not in self.settings:
-			self.settings["use"]=""
-		if type(self.settings["use"])==types.StringType:
-			self.settings["use"]=self.settings["use"].split()
-
-		# Force bindist when options ask for it
-		if "BINDIST" in self.settings:
-			self.settings["use"].append("bindist")
-
-	def set_stage_path(self):
-		self.settings["stage_path"]=normpath(self.settings["chroot_path"])
-
-	def set_mounts(self):
-		pass
-
-	def set_packages(self):
-		pass
-
-	def set_rm(self):
-		if self.settings["spec_prefix"]+"/rm" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/rm"])==types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/rm"]=\
-					self.settings[self.settings["spec_prefix"]+"/rm"].split()
-
-	def set_linuxrc(self):
-		if self.settings["spec_prefix"]+"/linuxrc" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/linuxrc"])==types.StringType:
-				self.settings["linuxrc"]=\
-					self.settings[self.settings["spec_prefix"]+"/linuxrc"]
-				del self.settings[self.settings["spec_prefix"]+"/linuxrc"]
-
-	def set_busybox_config(self):
-		if self.settings["spec_prefix"]+"/busybox_config" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/busybox_config"])==types.StringType:
-				self.settings["busybox_config"]=\
-					self.settings[self.settings["spec_prefix"]+"/busybox_config"]
-				del self.settings[self.settings["spec_prefix"]+"/busybox_config"]
-
-	def set_portage_overlay(self):
-		if "portage_overlay" in self.settings:
-			if type(self.settings["portage_overlay"])==types.StringType:
-				self.settings["portage_overlay"]=\
-					self.settings["portage_overlay"].split()
-			print "portage_overlay directories are set to: \""+\
-				string.join(self.settings["portage_overlay"])+"\""
-
-	def set_overlay(self):
-		if self.settings["spec_prefix"]+"/overlay" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/overlay"])==types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/overlay"]=\
-					self.settings[self.settings["spec_prefix"]+\
-					"/overlay"].split()
-
-	def set_root_overlay(self):
-		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/root_overlay"])==types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/root_overlay"]=\
-					self.settings[self.settings["spec_prefix"]+\
-					"/root_overlay"].split()
-
-	def set_root_path(self):
-		""" ROOT= variable for emerges """
-		self.settings["root_path"]="/"
-
-	def set_valid_build_kernel_vars(self,addlargs):
-		if "boot/kernel" in addlargs:
-			if type(addlargs["boot/kernel"])==types.StringType:
-				loopy=[addlargs["boot/kernel"]]
-			else:
-				loopy=addlargs["boot/kernel"]
-
-			for x in loopy:
-				self.valid_values.append("boot/kernel/"+x+"/aliases")
-				self.valid_values.append("boot/kernel/"+x+"/config")
-				self.valid_values.append("boot/kernel/"+x+"/console")
-				self.valid_values.append("boot/kernel/"+x+"/extraversion")
-				self.valid_values.append("boot/kernel/"+x+"/gk_action")
-				self.valid_values.append("boot/kernel/"+x+"/gk_kernargs")
-				self.valid_values.append("boot/kernel/"+x+"/initramfs_overlay")
-				self.valid_values.append("boot/kernel/"+x+"/machine_type")
-				self.valid_values.append("boot/kernel/"+x+"/sources")
-				self.valid_values.append("boot/kernel/"+x+"/softlevel")
-				self.valid_values.append("boot/kernel/"+x+"/use")
-				self.valid_values.append("boot/kernel/"+x+"/packages")
-				if "boot/kernel/"+x+"/packages" in addlargs:
-					if type(addlargs["boot/kernel/"+x+\
-						"/packages"])==types.StringType:
-						addlargs["boot/kernel/"+x+"/packages"]=\
-							[addlargs["boot/kernel/"+x+"/packages"]]
-
-	def set_build_kernel_vars(self):
-		if self.settings["spec_prefix"]+"/gk_mainargs" in self.settings:
-			self.settings["gk_mainargs"]=\
-				self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
-			del self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
-
-	def kill_chroot_pids(self):
-		print "Checking for processes running in chroot and killing them."
-
-		"""
-		Force environment variables to be exported so script can see them
-		"""
-		self.setup_environment()
-
-		if os.path.exists(self.settings["sharedir"]+\
-			"/targets/support/kill-chroot-pids.sh"):
-			cmd("/bin/bash "+self.settings["sharedir"]+\
-				"/targets/support/kill-chroot-pids.sh",\
-				"kill-chroot-pids script failed.",env=self.env)
-
-	def mount_safety_check(self):
-		"""
-		Check and verify that none of our paths in mypath are mounted. We don't
-		want to clean up with things still mounted, and this allows us to check.
-		Returns 1 on ok, 0 on "something is still mounted" case.
-		"""
-
-		if not os.path.exists(self.settings["chroot_path"]):
-			return
-
-		print "self.mounts =", self.mounts
-		for x in self.mounts:
-			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
-			print "mount_safety_check() x =", x, target
-			if not os.path.exists(target):
-				continue
-
-			if ismount(target):
-				""" Something is still mounted "" """
-				try:
-					print target + " is still mounted; performing auto-bind-umount...",
-					""" Try to umount stuff ourselves """
-					self.unbind()
-					if ismount(target):
-						raise CatalystError, "Auto-unbind failed for " + target
-					else:
-						print "Auto-unbind successful..."
-				except CatalystError:
-					raise CatalystError, "Unable to auto-unbind " + target
-
-	def unpack(self):
-		unpack=True
-
-		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+\
-			"unpack")
-
-		if "SEEDCACHE" in self.settings:
-			if os.path.isdir(self.settings["source_path"]):
-				""" SEEDCACHE Is a directory, use rsync """
-				unpack_cmd="rsync -a --delete "+self.settings["source_path"]+\
-					" "+self.settings["chroot_path"]
-				display_msg="\nStarting rsync from "+\
-					self.settings["source_path"]+"\nto "+\
-					self.settings["chroot_path"]+\
-					" (This may take some time) ...\n"
-				error_msg="Rsync of "+self.settings["source_path"]+" to "+\
-					self.settings["chroot_path"]+" failed."
-			else:
-				""" SEEDCACHE is a not a directory, try untar'ing """
-				print "Referenced SEEDCACHE does not appear to be a directory, trying to untar..."
-				display_msg="\nStarting tar extract from "+\
-					self.settings["source_path"]+"\nto "+\
-					self.settings["chroot_path"]+\
-						" (This may take some time) ...\n"
-				if "bz2" == self.settings["chroot_path"][-3:]:
-					unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
-						self.settings["chroot_path"]
-				else:
-					unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
-						self.settings["chroot_path"]
-				error_msg="Tarball extraction of "+\
-					self.settings["source_path"]+" to "+\
-					self.settings["chroot_path"]+" failed."
-		else:
-			""" No SEEDCACHE, use tar """
-			display_msg="\nStarting tar extract from "+\
-				self.settings["source_path"]+"\nto "+\
-				self.settings["chroot_path"]+\
-				" (This may take some time) ...\n"
-			if "bz2" == self.settings["chroot_path"][-3:]:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
-					self.settings["chroot_path"]
-			else:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
-					self.settings["chroot_path"]
-			error_msg="Tarball extraction of "+self.settings["source_path"]+\
-				" to "+self.settings["chroot_path"]+" failed."
-
-		if "AUTORESUME" in self.settings:
-			if os.path.isdir(self.settings["source_path"]) \
-				and os.path.exists(self.settings["autoresume_path"]+"unpack"):
-				""" Autoresume is valid, SEEDCACHE is valid """
-				unpack=False
-				invalid_snapshot=False
-
-			elif os.path.isfile(self.settings["source_path"]) \
-				and self.settings["source_path_hash"]==clst_unpack_hash:
-				""" Autoresume is valid, tarball is valid """
-				unpack=False
-				invalid_snapshot=True
-
-			elif os.path.isdir(self.settings["source_path"]) \
-				and not os.path.exists(self.settings["autoresume_path"]+\
-				"unpack"):
-				""" Autoresume is invalid, SEEDCACHE """
-				unpack=True
-				invalid_snapshot=False
-
-			elif os.path.isfile(self.settings["source_path"]) \
-				and self.settings["source_path_hash"]!=clst_unpack_hash:
-				""" Autoresume is invalid, tarball """
-				unpack=True
-				invalid_snapshot=True
-		else:
-			""" No autoresume, SEEDCACHE """
-			if "SEEDCACHE" in self.settings:
-				""" SEEDCACHE so let's run rsync and let it clean up """
-				if os.path.isdir(self.settings["source_path"]):
-					unpack=True
-					invalid_snapshot=False
-				elif os.path.isfile(self.settings["source_path"]):
-					""" Tarball so unpack and remove anything already there """
-					unpack=True
-					invalid_snapshot=True
-				""" No autoresume, no SEEDCACHE """
-			else:
-				""" Tarball so unpack and remove anything already there """
-				if os.path.isfile(self.settings["source_path"]):
-					unpack=True
-					invalid_snapshot=True
-				elif os.path.isdir(self.settings["source_path"]):
-					""" We should never reach this, so something is very wrong """
-					raise CatalystError,\
-						"source path is a dir but seedcache is not enabled"
-
-		if unpack:
-			self.mount_safety_check()
-
-			if invalid_snapshot:
-				if "AUTORESUME" in self.settings:
-					print "No Valid Resume point detected, cleaning up..."
-
-				self.clear_autoresume()
-				self.clear_chroot()
-
-			if not os.path.exists(self.settings["chroot_path"]):
-				os.makedirs(self.settings["chroot_path"])
-
-			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
-				os.makedirs(self.settings["chroot_path"]+"/tmp",1777)
-
-			if "PKGCACHE" in self.settings:
-				if not os.path.exists(self.settings["pkgcache_path"]):
-					os.makedirs(self.settings["pkgcache_path"],0755)
-
-			if "KERNCACHE" in self.settings:
-				if not os.path.exists(self.settings["kerncache_path"]):
-					os.makedirs(self.settings["kerncache_path"],0755)
-
-			print display_msg
-			cmd(unpack_cmd,error_msg,env=self.env)
-
-			if "source_path_hash" in self.settings:
-				myf=open(self.settings["autoresume_path"]+"unpack","w")
-				myf.write(self.settings["source_path_hash"])
-				myf.close()
-			else:
-				touch(self.settings["autoresume_path"]+"unpack")
-		else:
-			print "Resume point detected, skipping unpack operation..."
-
-	def unpack_snapshot(self):
-		unpack=True
-		snapshot_hash=read_from_clst(self.settings["autoresume_path"]+\
-			"unpack_portage")
-
-		if "SNAPCACHE" in self.settings:
-			snapshot_cache_hash=\
-				read_from_clst(self.settings["snapshot_cache_path"]+\
-				"catalyst-hash")
-			destdir=self.settings["snapshot_cache_path"]
-			if "bz2" == self.settings["chroot_path"][-3:]:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+destdir
-			else:
-				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+destdir
-			unpack_errmsg="Error unpacking snapshot"
-			cleanup_msg="Cleaning up invalid snapshot cache at \n\t"+\
-				self.settings["snapshot_cache_path"]+\
-				" (This can take a long time)..."
-			cleanup_errmsg="Error removing existing snapshot cache directory."
-			self.snapshot_lock_object=self.snapcache_lock
-
-			if self.settings["snapshot_path_hash"]==snapshot_cache_hash:
-				print "Valid snapshot cache, skipping unpack of portage tree..."
-				unpack=False
-		else:
-			destdir = normpath(self.settings["chroot_path"] + self.settings["portdir"])
-			cleanup_errmsg="Error removing existing snapshot directory."
-			cleanup_msg=\
-				"Cleaning up existing portage tree (This can take a long time)..."
-			if "bz2" == self.settings["chroot_path"][-3:]:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+\
-					self.settings["chroot_path"]+"/usr"
-			else:
-				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+\
-					self.settings["chroot_path"]+"/usr"
-			unpack_errmsg="Error unpacking snapshot"
-
-			if "AUTORESUME" in self.settings \
-				and os.path.exists(self.settings["chroot_path"]+\
-					self.settings["portdir"]) \
-				and os.path.exists(self.settings["autoresume_path"]\
-					+"unpack_portage") \
-				and self.settings["snapshot_path_hash"] == snapshot_hash:
-					print \
-						"Valid Resume point detected, skipping unpack of portage tree..."
-					unpack=False
-
-		if unpack:
-			if "SNAPCACHE" in self.settings:
-				self.snapshot_lock_object.write_lock()
-			if os.path.exists(destdir):
-				print cleanup_msg
-				cleanup_cmd="rm -rf "+destdir
-				cmd(cleanup_cmd,cleanup_errmsg,env=self.env)
-			if not os.path.exists(destdir):
-				os.makedirs(destdir,0755)
-
-			print "Unpacking portage tree (This can take a long time) ..."
-			cmd(unpack_cmd,unpack_errmsg,env=self.env)
-
-			if "SNAPCACHE" in self.settings:
-				myf=open(self.settings["snapshot_cache_path"]+"catalyst-hash","w")
-				myf.write(self.settings["snapshot_path_hash"])
-				myf.close()
-			else:
-				print "Setting snapshot autoresume point"
-				myf=open(self.settings["autoresume_path"]+"unpack_portage","w")
-				myf.write(self.settings["snapshot_path_hash"])
-				myf.close()
-
-			if "SNAPCACHE" in self.settings:
-				self.snapshot_lock_object.unlock()
-
-	def config_profile_link(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"config_profile_link"):
-			print \
-				"Resume point detected, skipping config_profile_link operation..."
-		else:
-			# TODO: zmedico and I discussed making this a directory and pushing
-			# in a parent file, as well as other user-specified configuration.
-			print "Configuring profile link..."
-			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.profile",\
-					"Error zapping profile link",env=self.env)
-			cmd("mkdir -p "+self.settings["chroot_path"]+"/etc/portage/")
-			cmd("ln -sf ../.." + self.settings["portdir"] + "/profiles/" + \
-				self.settings["target_profile"]+" "+\
-				self.settings["chroot_path"]+"/etc/portage/make.profile",\
-				"Error creating profile link",env=self.env)
-			touch(self.settings["autoresume_path"]+"config_profile_link")
-
-	def setup_confdir(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"setup_confdir"):
-			print "Resume point detected, skipping setup_confdir operation..."
-		else:
-			if "portage_confdir" in self.settings:
-				print "Configuring /etc/portage..."
-				cmd("rsync -a "+self.settings["portage_confdir"]+"/ "+\
-					self.settings["chroot_path"]+"/etc/portage/",\
-					"Error copying /etc/portage",env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_confdir")
-
-	def portage_overlay(self):
-		""" We copy the contents of our overlays to /usr/local/portage """
-		if "portage_overlay" in self.settings:
-			for x in self.settings["portage_overlay"]:
-				if os.path.exists(x):
-					print "Copying overlay dir " +x
-					cmd("mkdir -p "+self.settings["chroot_path"]+\
-						self.settings["local_overlay"],\
-						"Could not make portage_overlay dir",env=self.env)
-					cmd("cp -R "+x+"/* "+self.settings["chroot_path"]+\
-						self.settings["local_overlay"],\
-						"Could not copy portage_overlay",env=self.env)
-
-	def root_overlay(self):
-		""" Copy over the root_overlay """
-		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
-			for x in self.settings[self.settings["spec_prefix"]+\
-				"/root_overlay"]:
-				if os.path.exists(x):
-					print "Copying root_overlay: "+x
-					cmd("rsync -a "+x+"/ "+\
-						self.settings["chroot_path"],\
-						self.settings["spec_prefix"]+"/root_overlay: "+x+\
-						" copy failed.",env=self.env)
-
-	def base_dirs(self):
-		pass
-
-	def bind(self):
-		for x in self.mounts:
-			#print "bind(); x =", x
-			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
-			if not os.path.exists(target):
-				os.makedirs(target, 0755)
-
-			if not os.path.exists(self.mountmap[x]):
-				if self.mountmap[x] not in ["tmpfs", "shmfs"]:
-					os.makedirs(self.mountmap[x], 0755)
-
-			src=self.mountmap[x]
-			#print "bind(); src =", src
-			if "SNAPCACHE" in self.settings and x == "portdir":
-				self.snapshot_lock_object.read_lock()
-			if os.uname()[0] == "FreeBSD":
-				if src == "/dev":
-					cmd = "mount -t devfs none " + target
-					retval=os.system(cmd)
-				else:
-					cmd = "mount_nullfs " + src + " " + target
-					retval=os.system(cmd)
-			else:
-				if src == "tmpfs":
-					if "var_tmpfs_portage" in self.settings:
-						cmd = "mount -t tmpfs -o size=" + \
-							self.settings["var_tmpfs_portage"] + "G " + \
-							src + " " + target
-						retval=os.system(cmd)
-				elif src == "shmfs":
-					cmd = "mount -t tmpfs -o noexec,nosuid,nodev shm " + target
-					retval=os.system(cmd)
-				else:
-					cmd = "mount --bind " + src + " " + target
-					#print "bind(); cmd =", cmd
-					retval=os.system(cmd)
-			if retval!=0:
-				self.unbind()
-				raise CatalystError,"Couldn't bind mount " + src
-
-	def unbind(self):
-		ouch=0
-		mypath=self.settings["chroot_path"]
-		myrevmounts=self.mounts[:]
-		myrevmounts.reverse()
-		""" Unmount in reverse order for nested bind-mounts """
-		for x in myrevmounts:
-			target = normpath(mypath + self.target_mounts[x])
-			if not os.path.exists(target):
-				continue
-
-			if not ismount(target):
-				continue
-
-			retval=os.system("umount " + target)
-
-			if retval!=0:
-				warn("First attempt to unmount: " + target + " failed.")
-				warn("Killing any pids still running in the chroot")
-
-				self.kill_chroot_pids()
-
-				retval2 = os.system("umount " + target)
-				if retval2!=0:
-					ouch=1
-					warn("Couldn't umount bind mount: " + target)
-
-			if "SNAPCACHE" in self.settings and x == "/usr/portage":
-				try:
-					"""
-					It's possible the snapshot lock object isn't created yet.
-					This is because mount safety check calls unbind before the
-					target is fully initialized
-					"""
-					self.snapshot_lock_object.unlock()
-				except:
-					pass
-		if ouch:
-			"""
-			if any bind mounts really failed, then we need to raise
-			this to potentially prevent an upcoming bash stage cleanup script
-			from wiping our bind mounts.
-			"""
-			raise CatalystError,\
-				"Couldn't umount one or more bind-mounts; aborting for safety."
-
-	def chroot_setup(self):
-		self.makeconf=read_makeconf(self.settings["chroot_path"]+\
-			"/etc/portage/make.conf")
-		self.override_cbuild()
-		self.override_chost()
-		self.override_cflags()
-		self.override_cxxflags()
-		self.override_ldflags()
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"chroot_setup"):
-			print "Resume point detected, skipping chroot_setup operation..."
-		else:
-			print "Setting up chroot..."
-
-			#self.makeconf=read_makeconf(self.settings["chroot_path"]+"/etc/portage/make.conf")
-
-			cmd("cp /etc/resolv.conf "+self.settings["chroot_path"]+"/etc",\
-				"Could not copy resolv.conf into place.",env=self.env)
-
-			""" Copy over the envscript, if applicable """
-			if "ENVSCRIPT" in self.settings:
-				if not os.path.exists(self.settings["ENVSCRIPT"]):
-					raise CatalystError,\
-						"Can't find envscript "+self.settings["ENVSCRIPT"]
-
-				print "\nWarning!!!!"
-				print "\tOverriding certain env variables may cause catastrophic failure."
-				print "\tIf your build fails look here first as the possible problem."
-				print "\tCatalyst assumes you know what you are doing when setting"
-				print "\t\tthese variables."
-				print "\tCatalyst Maintainers use VERY minimal envscripts if used at all"
-				print "\tYou have been warned\n"
-
-				cmd("cp "+self.settings["ENVSCRIPT"]+" "+\
-					self.settings["chroot_path"]+"/tmp/envscript",\
-					"Could not copy envscript into place.",env=self.env)
-
-			"""
-			Copy over /etc/hosts from the host in case there are any
-			specialties in there
-			"""
-			if os.path.exists(self.settings["chroot_path"]+"/etc/hosts"):
-				cmd("mv "+self.settings["chroot_path"]+"/etc/hosts "+\
-					self.settings["chroot_path"]+"/etc/hosts.catalyst",\
-					"Could not backup /etc/hosts",env=self.env)
-				cmd("cp /etc/hosts "+self.settings["chroot_path"]+"/etc/hosts",\
-					"Could not copy /etc/hosts",env=self.env)
-
-			""" Modify and write out make.conf (for the chroot) """
-			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.conf",\
-				"Could not remove "+self.settings["chroot_path"]+\
-				"/etc/portage/make.conf",env=self.env)
-			myf=open(self.settings["chroot_path"]+"/etc/portage/make.conf","w")
-			myf.write("# These settings were set by the catalyst build script that automatically\n# built this stage.\n")
-			myf.write("# Please consult /usr/share/portage/config/make.conf.example for a more\n# detailed example.\n")
-			if "CFLAGS" in self.settings:
-				myf.write('CFLAGS="'+self.settings["CFLAGS"]+'"\n')
-			if "CXXFLAGS" in self.settings:
-				if self.settings["CXXFLAGS"]!=self.settings["CFLAGS"]:
-					myf.write('CXXFLAGS="'+self.settings["CXXFLAGS"]+'"\n')
-				else:
-					myf.write('CXXFLAGS="${CFLAGS}"\n')
-			else:
-				myf.write('CXXFLAGS="${CFLAGS}"\n')
-
-			if "LDFLAGS" in self.settings:
-				myf.write("# LDFLAGS is unsupported.  USE AT YOUR OWN RISK!\n")
-				myf.write('LDFLAGS="'+self.settings["LDFLAGS"]+'"\n')
-			if "CBUILD" in self.settings:
-				myf.write("# This should not be changed unless you know exactly what you are doing.  You\n# should probably be using a different stage, instead.\n")
-				myf.write('CBUILD="'+self.settings["CBUILD"]+'"\n')
-
-			myf.write("# WARNING: Changing your CHOST is not something that should be done lightly.\n# Please consult http://www.gentoo.org/doc/en/change-chost.xml before changing.\n")
-			myf.write('CHOST="'+self.settings["CHOST"]+'"\n')
-
-			""" Figure out what our USE vars are for building """
-			myusevars=[]
-			if "HOSTUSE" in self.settings:
-				myusevars.extend(self.settings["HOSTUSE"])
-
-			if "use" in self.settings:
-				myusevars.extend(self.settings["use"])
-
-			if myusevars:
-				myf.write("# These are the USE flags that were used in addition to what is provided by the\n# profile used for building.\n")
-				myusevars = sorted(set(myusevars))
-				myf.write('USE="'+string.join(myusevars)+'"\n')
-				if '-*' in myusevars:
-					print "\nWarning!!!  "
-					print "\tThe use of -* in "+self.settings["spec_prefix"]+\
-						"/use will cause portage to ignore"
-					print "\tpackage.use in the profile and portage_confdir. You've been warned!"
-
-			myf.write('PORTDIR="%s"\n' % self.settings['portdir'])
-			myf.write('DISTDIR="%s"\n' % self.settings['distdir'])
-			myf.write('PKGDIR="%s"\n' % self.settings['packagedir'])
-
-			""" Setup the portage overlay """
-			if "portage_overlay" in self.settings:
-				myf.write('PORTDIR_OVERLAY="/usr/local/portage"\n')
-
-			myf.close()
-			cmd("cp "+self.settings["chroot_path"]+"/etc/portage/make.conf "+\
-				self.settings["chroot_path"]+"/etc/portage/make.conf.catalyst",\
-				"Could not backup /etc/portage/make.conf",env=self.env)
-			touch(self.settings["autoresume_path"]+"chroot_setup")
-
-	def fsscript(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"fsscript"):
-			print "Resume point detected, skipping fsscript operation..."
-		else:
-			if "fsscript" in self.settings:
-				if os.path.exists(self.settings["controller_file"]):
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" fsscript","fsscript script failed.",env=self.env)
-					touch(self.settings["autoresume_path"]+"fsscript")
-
-	def rcupdate(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"rcupdate"):
-			print "Resume point detected, skipping rcupdate operation..."
-		else:
-			if os.path.exists(self.settings["controller_file"]):
-				cmd("/bin/bash "+self.settings["controller_file"]+" rc-update",\
-					"rc-update script failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"rcupdate")
-
-	def clean(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"clean"):
-			print "Resume point detected, skipping clean operation..."
-		else:
-			for x in self.settings["cleanables"]:
-				print "Cleaning chroot: "+x+"... "
-				cmd("rm -rf "+self.settings["destpath"]+x,"Couldn't clean "+\
-					x,env=self.env)
-
-		""" Put /etc/hosts back into place """
-		if os.path.exists(self.settings["chroot_path"]+"/etc/hosts.catalyst"):
-			cmd("mv -f "+self.settings["chroot_path"]+"/etc/hosts.catalyst "+\
-				self.settings["chroot_path"]+"/etc/hosts",\
-				"Could not replace /etc/hosts",env=self.env)
-
-		""" Remove our overlay """
-		if os.path.exists(self.settings["chroot_path"] + self.settings["local_overlay"]):
-			cmd("rm -rf " + self.settings["chroot_path"] + self.settings["local_overlay"],
-				"Could not remove " + self.settings["local_overlay"], env=self.env)
-			cmd("sed -i '/^PORTDIR_OVERLAY/d' "+self.settings["chroot_path"]+\
-				"/etc/portage/make.conf",\
-				"Could not remove PORTDIR_OVERLAY from make.conf",env=self.env)
-
-		""" Clean up old and obsoleted files in /etc """
-		if os.path.exists(self.settings["stage_path"]+"/etc"):
-			cmd("find "+self.settings["stage_path"]+\
-				"/etc -maxdepth 1 -name \"*-\" | xargs rm -f",\
-				"Could not remove stray files in /etc",env=self.env)
-
-		if os.path.exists(self.settings["controller_file"]):
-			cmd("/bin/bash "+self.settings["controller_file"]+" clean",\
-				"clean script failed.",env=self.env)
-			touch(self.settings["autoresume_path"]+"clean")
-
-	def empty(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"empty"):
-			print "Resume point detected, skipping empty operation..."
-		else:
-			if self.settings["spec_prefix"]+"/empty" in self.settings:
-				if type(self.settings[self.settings["spec_prefix"]+\
-					"/empty"])==types.StringType:
-					self.settings[self.settings["spec_prefix"]+"/empty"]=\
-						self.settings[self.settings["spec_prefix"]+\
-						"/empty"].split()
-				for x in self.settings[self.settings["spec_prefix"]+"/empty"]:
-					myemp=self.settings["destpath"]+x
-					if not os.path.isdir(myemp) or os.path.islink(myemp):
-						print x,"not a directory or does not exist, skipping 'empty' operation."
-						continue
-					print "Emptying directory",x
-					"""
-					stat the dir, delete the dir, recreate the dir and set
-					the proper perms and ownership
-					"""
-					mystat=os.stat(myemp)
-					shutil.rmtree(myemp)
-					os.makedirs(myemp,0755)
-					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-					os.chmod(myemp,mystat[ST_MODE])
-			touch(self.settings["autoresume_path"]+"empty")
-
-	def remove(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"remove"):
-			print "Resume point detected, skipping remove operation..."
-		else:
-			if self.settings["spec_prefix"]+"/rm" in self.settings:
-				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
-					"""
-					We're going to shell out for all these cleaning
-					operations, so we get easy glob handling.
-					"""
-					print "livecd: removing "+x
-					os.system("rm -rf "+self.settings["chroot_path"]+x)
-				try:
-					if os.path.exists(self.settings["controller_file"]):
-						cmd("/bin/bash "+self.settings["controller_file"]+\
-							" clean","Clean  failed.",env=self.env)
-						touch(self.settings["autoresume_path"]+"remove")
-				except:
-					self.unbind()
-					raise
-
-	def preclean(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"preclean"):
-			print "Resume point detected, skipping preclean operation..."
-		else:
-			try:
-				if os.path.exists(self.settings["controller_file"]):
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" preclean","preclean script failed.",env=self.env)
-					touch(self.settings["autoresume_path"]+"preclean")
-
-			except:
-				self.unbind()
-				raise CatalystError, "Build failed, could not execute preclean"
-
-	def capture(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"capture"):
-			print "Resume point detected, skipping capture operation..."
-		else:
-			""" Capture target in a tarball """
-			mypath=self.settings["target_path"].split("/")
-			""" Remove filename from path """
-			mypath=string.join(mypath[:-1],"/")
-
-			""" Now make sure path exists """
-			if not os.path.exists(mypath):
-				os.makedirs(mypath)
-
-			print "Creating stage tarball..."
-
-			cmd("tar -I lbzip2 -cpf "+self.settings["target_path"]+" -C "+\
-				self.settings["stage_path"]+" .",\
-				"Couldn't create stage tarball",env=self.env)
-
-			self.gen_contents_file(self.settings["target_path"])
-			self.gen_digest_file(self.settings["target_path"])
-
-			touch(self.settings["autoresume_path"]+"capture")
-
-	def run_local(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"run_local"):
-			print "Resume point detected, skipping run_local operation..."
-		else:
-			try:
-				if os.path.exists(self.settings["controller_file"]):
-					cmd("/bin/bash "+self.settings["controller_file"]+" run",\
-						"run script failed.",env=self.env)
-					touch(self.settings["autoresume_path"]+"run_local")
-
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"Stage build aborting due to error."
-
-	def setup_environment(self):
-		"""
-		Modify the current environment. This is an ugly hack that should be
-		fixed. We need this to use the os.system() call since we can't
-		specify our own environ
-		"""
-		for x in self.settings.keys():
-			""" Sanitize var names by doing "s|/-.|_|g" """
-			varname="clst_"+string.replace(x,"/","_")
-			varname=string.replace(varname,"-","_")
-			varname=string.replace(varname,".","_")
-			if type(self.settings[x])==types.StringType:
-				""" Prefix to prevent namespace clashes """
-				#os.environ[varname]=self.settings[x]
-				self.env[varname]=self.settings[x]
-			elif type(self.settings[x])==types.ListType:
-				#os.environ[varname]=string.join(self.settings[x])
-				self.env[varname]=string.join(self.settings[x])
-			elif type(self.settings[x])==types.BooleanType:
-				if self.settings[x]:
-					self.env[varname]="true"
-				else:
-					self.env[varname]="false"
-		if "makeopts" in self.settings:
-			self.env["MAKEOPTS"]=self.settings["makeopts"]
-
-	def run(self):
-		self.chroot_lock.write_lock()
-
-		""" Kill any pids in the chroot "" """
-		self.kill_chroot_pids()
-
-		""" Check for mounts right away and abort if we cannot unmount them """
-		self.mount_safety_check()
-
-		if "CLEAR_AUTORESUME" in self.settings:
-			self.clear_autoresume()
-
-		if "PURGETMPONLY" in self.settings:
-			self.purge()
-			return
-
-		if "PURGEONLY" in self.settings:
-			self.purge()
-			return
-
-		if "PURGE" in self.settings:
-			self.purge()
-
-		for x in self.settings["action_sequence"]:
-			print "--- Running action sequence: "+x
-			sys.stdout.flush()
-			try:
-				apply(getattr(self,x))
-			except:
-				self.mount_safety_check()
-				raise
-
-		self.chroot_lock.unlock()
-
-	def unmerge(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"unmerge"):
-			print "Resume point detected, skipping unmerge operation..."
-		else:
-			if self.settings["spec_prefix"]+"/unmerge" in self.settings:
-				if type(self.settings[self.settings["spec_prefix"]+\
-					"/unmerge"])==types.StringType:
-					self.settings[self.settings["spec_prefix"]+"/unmerge"]=\
-						[self.settings[self.settings["spec_prefix"]+"/unmerge"]]
-				myunmerge=\
-					self.settings[self.settings["spec_prefix"]+"/unmerge"][:]
-
-				for x in range(0,len(myunmerge)):
-					"""
-					Surround args with quotes for passing to bash, allows
-					things like "<" to remain intact
-					"""
-					myunmerge[x]="'"+myunmerge[x]+"'"
-				myunmerge=string.join(myunmerge)
-
-				""" Before cleaning, unmerge stuff """
-				try:
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" unmerge "+ myunmerge,"Unmerge script failed.",\
-						env=self.env)
-					print "unmerge shell script"
-				except CatalystError:
-					self.unbind()
-					raise
-				touch(self.settings["autoresume_path"]+"unmerge")
-
-	def target_setup(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"target_setup"):
-			print "Resume point detected, skipping target_setup operation..."
-		else:
-			print "Setting up filesystems per filesystem type"
-			cmd("/bin/bash "+self.settings["controller_file"]+\
-				" target_image_setup "+ self.settings["target_path"],\
-				"target_image_setup script failed.",env=self.env)
-			touch(self.settings["autoresume_path"]+"target_setup")
-
-	def setup_overlay(self):
-		if "AUTORESUME" in self.settings \
-		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
-			print "Resume point detected, skipping setup_overlay operation..."
-		else:
-			if self.settings["spec_prefix"]+"/overlay" in self.settings:
-				for x in self.settings[self.settings["spec_prefix"]+"/overlay"]:
-					if os.path.exists(x):
-						cmd("rsync -a "+x+"/ "+\
-							self.settings["target_path"],\
-							self.settings["spec_prefix"]+"overlay: "+x+\
-							" copy failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_overlay")
-
-	def create_iso(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"create_iso"):
-			print "Resume point detected, skipping create_iso operation..."
-		else:
-			""" Create the ISO """
-			if "iso" in self.settings:
-				cmd("/bin/bash "+self.settings["controller_file"]+" iso "+\
-					self.settings["iso"],"ISO creation script failed.",\
-					env=self.env)
-				self.gen_contents_file(self.settings["iso"])
-				self.gen_digest_file(self.settings["iso"])
-				touch(self.settings["autoresume_path"]+"create_iso")
-			else:
-				print "WARNING: livecd/iso was not defined."
-				print "An ISO Image will not be created."
-
-	def build_packages(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"build_packages"):
-			print "Resume point detected, skipping build_packages operation..."
-		else:
-			if self.settings["spec_prefix"]+"/packages" in self.settings:
-				if "AUTORESUME" in self.settings \
-					and os.path.exists(self.settings["autoresume_path"]+\
-						"build_packages"):
-					print "Resume point detected, skipping build_packages operation..."
-				else:
-					mypack=\
-						list_bashify(self.settings[self.settings["spec_prefix"]\
-						+"/packages"])
-					try:
-						cmd("/bin/bash "+self.settings["controller_file"]+\
-							" build_packages "+mypack,\
-							"Error in attempt to build packages",env=self.env)
-						touch(self.settings["autoresume_path"]+"build_packages")
-					except CatalystError:
-						self.unbind()
-						raise CatalystError,self.settings["spec_prefix"]+\
-							"build aborting due to error."
-
-	def build_kernel(self):
-		"Build all configured kernels"
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"build_kernel"):
-			print "Resume point detected, skipping build_kernel operation..."
-		else:
-			if "boot/kernel" in self.settings:
-				try:
-					mynames=self.settings["boot/kernel"]
-					if type(mynames)==types.StringType:
-						mynames=[mynames]
-					"""
-					Execute the script that sets up the kernel build environment
-					"""
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" pre-kmerge ","Runscript pre-kmerge failed",\
-						env=self.env)
-					for kname in mynames:
-						self._build_kernel(kname=kname)
-					touch(self.settings["autoresume_path"]+"build_kernel")
-				except CatalystError:
-					self.unbind()
-					raise CatalystError,\
-						"build aborting due to kernel build error."
-
-	def _build_kernel(self, kname):
-		"Build a single configured kernel by name"
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]\
-				+"build_kernel_"+kname):
-			print "Resume point detected, skipping build_kernel for "+kname+" operation..."
-			return
-		self._copy_kernel_config(kname=kname)
-
-		"""
-		If we need to pass special options to the bootloader
-		for this kernel put them into the environment
-		"""
-		if "boot/kernel/"+kname+"/kernelopts" in self.settings:
-			myopts=self.settings["boot/kernel/"+kname+\
-				"/kernelopts"]
-
-			if type(myopts) != types.StringType:
-				myopts = string.join(myopts)
-				self.env[kname+"_kernelopts"]=myopts
-
-			else:
-				self.env[kname+"_kernelopts"]=""
-
-		if "boot/kernel/"+kname+"/extraversion" not in self.settings:
-			self.settings["boot/kernel/"+kname+\
-				"/extraversion"]=""
-
-		self.env["clst_kextraversion"]=\
-			self.settings["boot/kernel/"+kname+\
-			"/extraversion"]
-
-		self._copy_initramfs_overlay(kname=kname)
-
-		""" Execute the script that builds the kernel """
-		cmd("/bin/bash "+self.settings["controller_file"]+\
-			" kernel "+kname,\
-			"Runscript kernel build failed",env=self.env)
-
-		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
-			if os.path.exists(self.settings["chroot_path"]+\
-				"/tmp/initramfs_overlay/"):
-				print "Cleaning up temporary overlay dir"
-				cmd("rm -R "+self.settings["chroot_path"]+\
-					"/tmp/initramfs_overlay/",env=self.env)
-
-		touch(self.settings["autoresume_path"]+\
-			"build_kernel_"+kname)
-
-		"""
-		Execute the script that cleans up the kernel build
-		environment
-		"""
-		cmd("/bin/bash "+self.settings["controller_file"]+\
-			" post-kmerge ",
-			"Runscript post-kmerge failed",env=self.env)
-
-	def _copy_kernel_config(self, kname):
-		if "boot/kernel/"+kname+"/config" in self.settings:
-			if not os.path.exists(self.settings["boot/kernel/"+kname+"/config"]):
-				self.unbind()
-				raise CatalystError,\
-					"Can't find kernel config: "+\
-					self.settings["boot/kernel/"+kname+\
-					"/config"]
-
-			try:
-				cmd("cp "+self.settings["boot/kernel/"+kname+\
-					"/config"]+" "+\
-					self.settings["chroot_path"]+"/var/tmp/"+\
-					kname+".config",\
-					"Couldn't copy kernel config: "+\
-					self.settings["boot/kernel/"+kname+\
-					"/config"],env=self.env)
-
-			except CatalystError:
-				self.unbind()
-
-	def _copy_initramfs_overlay(self, kname):
-		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
-			if os.path.exists(self.settings["boot/kernel/"+\
-				kname+"/initramfs_overlay"]):
-				print "Copying initramfs_overlay dir "+\
-					self.settings["boot/kernel/"+kname+\
-					"/initramfs_overlay"]
-
-				cmd("mkdir -p "+\
-					self.settings["chroot_path"]+\
-					"/tmp/initramfs_overlay/"+\
-					self.settings["boot/kernel/"+kname+\
-					"/initramfs_overlay"],env=self.env)
-
-				cmd("cp -R "+self.settings["boot/kernel/"+\
-					kname+"/initramfs_overlay"]+"/* "+\
-					self.settings["chroot_path"]+\
-					"/tmp/initramfs_overlay/"+\
-					self.settings["boot/kernel/"+kname+\
-					"/initramfs_overlay"],env=self.env)
-
-	def bootloader(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"bootloader"):
-			print "Resume point detected, skipping bootloader operation..."
-		else:
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+\
-					" bootloader " + self.settings["target_path"],\
-					"Bootloader script failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"bootloader")
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"Script aborting due to error."
-
-	def livecd_update(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"livecd_update"):
-			print "Resume point detected, skipping build_packages operation..."
-		else:
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+\
-					" livecd-update","livecd-update failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"livecd_update")
-
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"build aborting due to livecd_update error."
-
-	def clear_chroot(self):
-		myemp=self.settings["chroot_path"]
-		if os.path.isdir(myemp):
-			print "Emptying directory",myemp
-			"""
-			stat the dir, delete the dir, recreate the dir and set
-			the proper perms and ownership
-			"""
-			mystat=os.stat(myemp)
-			#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-			""" There's no easy way to change flags recursively in python """
-			if os.uname()[0] == "FreeBSD":
-				os.system("chflags -R noschg "+myemp)
-			shutil.rmtree(myemp)
-			os.makedirs(myemp,0755)
-			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-			os.chmod(myemp,mystat[ST_MODE])
-
-	def clear_packages(self):
-		if "PKGCACHE" in self.settings:
-			print "purging the pkgcache ..."
-
-			myemp=self.settings["pkgcache_path"]
-			if os.path.isdir(myemp):
-				print "Emptying directory",myemp
-				"""
-				stat the dir, delete the dir, recreate the dir and set
-				the proper perms and ownership
-				"""
-				mystat=os.stat(myemp)
-				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-				shutil.rmtree(myemp)
-				os.makedirs(myemp,0755)
-				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-				os.chmod(myemp,mystat[ST_MODE])
-
-	def clear_kerncache(self):
-		if "KERNCACHE" in self.settings:
-			print "purging the kerncache ..."
-
-			myemp=self.settings["kerncache_path"]
-			if os.path.isdir(myemp):
-				print "Emptying directory",myemp
-				"""
-				stat the dir, delete the dir, recreate the dir and set
-				the proper perms and ownership
-				"""
-				mystat=os.stat(myemp)
-				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-				shutil.rmtree(myemp)
-				os.makedirs(myemp,0755)
-				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-				os.chmod(myemp,mystat[ST_MODE])
-
-	def clear_autoresume(self):
-		""" Clean resume points since they are no longer needed """
-		if "AUTORESUME" in self.settings:
-			print "Removing AutoResume Points: ..."
-		myemp=self.settings["autoresume_path"]
-		if os.path.isdir(myemp):
-				if "AUTORESUME" in self.settings:
-					print "Emptying directory",myemp
-				"""
-				stat the dir, delete the dir, recreate the dir and set
-				the proper perms and ownership
-				"""
-				mystat=os.stat(myemp)
-				if os.uname()[0] == "FreeBSD":
-					cmd("chflags -R noschg "+myemp,\
-						"Could not remove immutable flag for file "\
-						+myemp)
-				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-				shutil.rmtree(myemp)
-				os.makedirs(myemp,0755)
-				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-				os.chmod(myemp,mystat[ST_MODE])
-
-	def gen_contents_file(self,file):
-		if os.path.exists(file+".CONTENTS"):
-			os.remove(file+".CONTENTS")
-		if "contents" in self.settings:
-			if os.path.exists(file):
-				myf=open(file+".CONTENTS","w")
-				keys={}
-				for i in self.settings["contents"].split():
-					keys[i]=1
-					array=keys.keys()
-					array.sort()
-				for j in array:
-					contents=generate_contents(file,contents_function=j,\
-						verbose="VERBOSE" in self.settings)
-					if contents:
-						myf.write(contents)
-				myf.close()
-
-	def gen_digest_file(self,file):
-		if os.path.exists(file+".DIGESTS"):
-			os.remove(file+".DIGESTS")
-		if "digests" in self.settings:
-			if os.path.exists(file):
-				myf=open(file+".DIGESTS","w")
-				keys={}
-				for i in self.settings["digests"].split():
-					keys[i]=1
-					array=keys.keys()
-					array.sort()
-				for f in [file, file+'.CONTENTS']:
-					if os.path.exists(f):
-						if "all" in array:
-							for k in hash_map.keys():
-								hash=generate_hash(f,hash_function=k,verbose=\
-									"VERBOSE" in self.settings)
-								myf.write(hash)
-						else:
-							for j in array:
-								hash=generate_hash(f,hash_function=j,verbose=\
-									"VERBOSE" in self.settings)
-								myf.write(hash)
-				myf.close()
-
-	def purge(self):
-		countdown(10,"Purging Caches ...")
-		if any(k in self.settings for k in ("PURGE","PURGEONLY","PURGETMPONLY")):
-			print "clearing autoresume ..."
-			self.clear_autoresume()
-
-			print "clearing chroot ..."
-			self.clear_chroot()
-
-			if "PURGETMPONLY" not in self.settings:
-				print "clearing package cache ..."
-				self.clear_packages()
-
-			print "clearing kerncache ..."
-			self.clear_kerncache()
-
-# vim: ts=4 sw=4 sta et sts=4 ai
diff --git a/modules/generic_target.py b/modules/generic_target.py
deleted file mode 100644
index fe96bd7..0000000
--- a/modules/generic_target.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from catalyst_support import *
-
-class generic_target:
-	"""
-	The toplevel class for generic_stage_target. This is about as generic as we get.
-	"""
-	def __init__(self,myspec,addlargs):
-		addl_arg_parse(myspec,addlargs,self.required_values,self.valid_values)
-		self.settings=myspec
-		self.env={}
-		self.env["PATH"]="/bin:/sbin:/usr/bin:/usr/sbin"
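generic_target's constructor delegates spec validation to addl_arg_parse() from catalyst_support, which is not shown in this hunk. A minimal sketch of the contract it appears to enforce, given the required_values/valid_values lists each target defines; `validate_spec` is a hypothetical name, not catalyst's API:

```python
def validate_spec(myspec, addlargs, required_values, valid_values):
    """Reject unknown spec keys, demand required ones, then merge
    the additional arguments into the spec dict."""
    for key in addlargs:
        if key not in valid_values:
            raise ValueError("unknown spec key: %s" % key)
    missing = [key for key in required_values if key not in addlargs]
    if missing:
        raise ValueError("missing required keys: %s" % ", ".join(missing))
    myspec.update(addlargs)
    return myspec
```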
diff --git a/modules/grp_target.py b/modules/grp_target.py
deleted file mode 100644
index 6941522..0000000
--- a/modules/grp_target.py
+++ /dev/null
@@ -1,118 +0,0 @@
-"""
-Gentoo Reference Platform (GRP) target
-"""
-# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-
-import os,types,glob
-from catalyst_support import *
-from generic_stage_target import *
-
-class grp_target(generic_stage_target):
-	"""
-	The builder class for GRP (Gentoo Reference Platform) builds.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["version_stamp","target","subarch",\
-			"rel_type","profile","snapshot","source_subpath"]
-
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend(["grp/use"])
-		if "grp" not in addlargs:
-			raise CatalystError,"Required value \"grp\" not specified in spec."
-
-		self.required_values.extend(["grp"])
-		if type(addlargs["grp"])==types.StringType:
-			addlargs["grp"]=[addlargs["grp"]]
-
-		if "grp/use" in addlargs:
-			if type(addlargs["grp/use"])==types.StringType:
-				addlargs["grp/use"]=[addlargs["grp/use"]]
-
-		for x in addlargs["grp"]:
-			self.required_values.append("grp/"+x+"/packages")
-			self.required_values.append("grp/"+x+"/type")
-
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-			print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			#if os.path.isdir(self.settings["target_path"]):
-				#cmd("rm -rf "+self.settings["target_path"],
-				#"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
-			if not os.path.exists(self.settings["target_path"]):
-				os.makedirs(self.settings["target_path"])
-
-			touch(self.settings["autoresume_path"]+"setup_target_path")
-
-	def run_local(self):
-		for pkgset in self.settings["grp"]:
-			# example call: "grp.sh run pkgset cd1 xmms vim sys-apps/gleep"
-			mypackages=list_bashify(self.settings["grp/"+pkgset+"/packages"])
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+" run "+self.settings["grp/"+pkgset+"/type"]\
-					+" "+pkgset+" "+mypackages,env=self.env)
-
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"GRP build aborting due to error."
-
-	def set_use(self):
-		generic_stage_target.set_use(self)
-		if "BINDIST" in self.settings:
-			if "use" in self.settings:
-				self.settings["use"].append("bindist")
-			else:
-				self.settings["use"]=["bindist"]
-
-	def set_mounts(self):
-	    self.mounts.append("/tmp/grp")
-            self.mountmap["/tmp/grp"]=self.settings["target_path"]
-
-	def generate_digests(self):
-		for pkgset in self.settings["grp"]:
-			if self.settings["grp/"+pkgset+"/type"] == "pkgset":
-				destdir=normpath(self.settings["target_path"]+"/"+pkgset+"/All")
-				print "Digesting files in the pkgset....."
-				digests=glob.glob(destdir+'/*.DIGESTS')
-				for i in digests:
-					if os.path.exists(i):
-						os.remove(i)
-
-				files=os.listdir(destdir)
-				#ignore files starting with '.' using list comprehension
-				files=[filename for filename in files if filename[0] != '.']
-				for i in files:
-					if os.path.isfile(normpath(destdir+"/"+i)):
-						self.gen_contents_file(normpath(destdir+"/"+i))
-						self.gen_digest_file(normpath(destdir+"/"+i))
-			else:
-				destdir=normpath(self.settings["target_path"]+"/"+pkgset)
-				print "Digesting files in the srcset....."
-
-				digests=glob.glob(destdir+'/*.DIGESTS')
-				for i in digests:
-					if os.path.exists(i):
-						os.remove(i)
-
-				files=os.listdir(destdir)
-				#ignore files starting with '.' using list comprehension
-				files=[filename for filename in files if filename[0] != '.']
-				for i in files:
-					if os.path.isfile(normpath(destdir+"/"+i)):
-						#self.gen_contents_file(normpath(destdir+"/"+i))
-						self.gen_digest_file(normpath(destdir+"/"+i))
-
-	def set_action_sequence(self):
-	    self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir","portage_overlay","bind","chroot_setup",\
-					"setup_environment","run_local","unbind",\
-					"generate_digests","clear_autoresume"]
-
-def register(foo):
-	foo.update({"grp":grp_target})
-	return foo
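Side note: grp_target, like most targets in this series, coerces spec values that may arrive as a bare string into a list — "grp" and "grp/use" are wrapped as-is, while values such as netboot2/empty are whitespace-split. Both conventions fit in one helper; `listify` is a hypothetical name used for illustration only:

```python
def listify(value, split=True):
    """Coerce a spec value that may be a plain string into a list.
    split=True mirrors the whitespace-splitting call sites; split=False
    mirrors the wrap-in-a-list call sites."""
    if isinstance(value, str):
        return value.split() if split else [value]
    return list(value)
```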
diff --git a/modules/livecd_stage1_target.py b/modules/livecd_stage1_target.py
deleted file mode 100644
index 59de9bb..0000000
--- a/modules/livecd_stage1_target.py
+++ /dev/null
@@ -1,75 +0,0 @@
-"""
-LiveCD stage1 target
-"""
-# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-
-from catalyst_support import *
-from generic_stage_target import *
-
-class livecd_stage1_target(generic_stage_target):
-	"""
-	Builder class for LiveCD stage1.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["livecd/packages"]
-		self.valid_values=self.required_values[:]
-
-		self.valid_values.extend(["livecd/use"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir","portage_overlay",\
-					"bind","chroot_setup","setup_environment","build_packages",\
-					"unbind", "clean","clear_autoresume"]
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"])
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			if os.path.exists(self.settings["target_path"]):
-				cmd("rm -rf "+self.settings["target_path"],\
-					"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_target_path")
-
-			if not os.path.exists(self.settings["target_path"]):
-				os.makedirs(self.settings["target_path"])
-
-	def set_target_path(self):
-		pass
-
-	def set_spec_prefix(self):
-	                self.settings["spec_prefix"]="livecd"
-
-	def set_use(self):
-		generic_stage_target.set_use(self)
-		if "use" in self.settings:
-			self.settings["use"].append("livecd")
-			if "BINDIST" in self.settings:
-				self.settings["use"].append("bindist")
-		else:
-			self.settings["use"]=["livecd"]
-			if "BINDIST" in self.settings:
-				self.settings["use"].append("bindist")
-
-	def set_packages(self):
-		generic_stage_target.set_packages(self)
-		if self.settings["spec_prefix"]+"/packages" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+"/packages"]) == types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/packages"] = \
-					self.settings[self.settings["spec_prefix"]+"/packages"].split()
-		self.settings[self.settings["spec_prefix"]+"/packages"].append("app-misc/livecd-tools")
-
-	def set_pkgcache_path(self):
-		if "pkgcache_path" in self.settings:
-			if type(self.settings["pkgcache_path"]) != types.StringType:
-				self.settings["pkgcache_path"]=normpath(string.join(self.settings["pkgcache_path"]))
-		else:
-			generic_stage_target.set_pkgcache_path(self)
-
-def register(foo):
-	foo.update({"livecd-stage1":livecd_stage1_target})
-	return foo
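The set_use() override above duplicates the bindist branch for the has-use and no-use cases. A set-based sketch collapses the branches to the same result; `livecd_use_flags` is a hypothetical name, not part of the patch:

```python
def livecd_use_flags(base_use, bindist=False):
    """Always enable "livecd", add "bindist" when BINDIST is set,
    and keep the flag list free of duplicates."""
    flags = set(base_use)
    flags.add("livecd")
    if bindist:
        flags.add("bindist")
    return sorted(flags)
```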
diff --git a/modules/livecd_stage2_target.py b/modules/livecd_stage2_target.py
deleted file mode 100644
index c74c16d..0000000
--- a/modules/livecd_stage2_target.py
+++ /dev/null
@@ -1,148 +0,0 @@
-"""
-LiveCD stage2 target, builds upon previous LiveCD stage1 tarball
-"""
-# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-
-import os,string,types,stat,shutil
-from catalyst_support import *
-from generic_stage_target import *
-
-class livecd_stage2_target(generic_stage_target):
-	"""
-	Builder class for a LiveCD stage2 build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["boot/kernel"]
-
-		self.valid_values=[]
-
-		self.valid_values.extend(self.required_values)
-		self.valid_values.extend(["livecd/cdtar","livecd/empty","livecd/rm",\
-			"livecd/unmerge","livecd/iso","livecd/gk_mainargs","livecd/type",\
-			"livecd/readme","livecd/motd","livecd/overlay",\
-			"livecd/modblacklist","livecd/splash_theme","livecd/rcadd",\
-			"livecd/rcdel","livecd/fsscript","livecd/xinitrc",\
-			"livecd/root_overlay","livecd/users","portage_overlay",\
-			"livecd/fstype","livecd/fsops","livecd/linuxrc","livecd/bootargs",\
-			"gamecd/conf","livecd/xdm","livecd/xsession","livecd/volid"])
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		if "livecd/type" not in self.settings:
-			self.settings["livecd/type"] = "generic-livecd"
-
-		file_locate(self.settings, ["cdtar","controller_file"])
-
-	def set_source_path(self):
-		self.settings["source_path"] = normpath(self.settings["storedir"] +
-			"/builds/" + self.settings["source_subpath"].rstrip("/") +
-			".tar.bz2")
-		if os.path.isfile(self.settings["source_path"]):
-			self.settings["source_path_hash"]=generate_hash(self.settings["source_path"])
-		else:
-			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/")
-		if not os.path.exists(self.settings["source_path"]):
-			raise CatalystError,"Source Path: "+self.settings["source_path"]+" does not exist."
-
-	def set_spec_prefix(self):
-	    self.settings["spec_prefix"]="livecd"
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			if os.path.isdir(self.settings["target_path"]):
-				cmd("rm -rf "+self.settings["target_path"],
-				"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_target_path")
-			if not os.path.exists(self.settings["target_path"]):
-				os.makedirs(self.settings["target_path"])
-
-	def run_local(self):
-		# what modules do we want to blacklist?
-		if "livecd/modblacklist" in self.settings:
-			try:
-				myf=open(self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf","a")
-			except:
-				self.unbind()
-				raise CatalystError,"Couldn't open "+self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf."
-
-			myf.write("\n#Added by Catalyst:")
-			# workaround until config.py is using configparser
-			if isinstance(self.settings["livecd/modblacklist"], str):
-				self.settings["livecd/modblacklist"] = self.settings["livecd/modblacklist"].split()
-			for x in self.settings["livecd/modblacklist"]:
-				myf.write("\nblacklist "+x)
-			myf.close()
-
-	def unpack(self):
-		unpack=True
-		display_msg=None
-
-		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+"unpack")
-
-		if os.path.isdir(self.settings["source_path"]):
-			unpack_cmd="rsync -a --delete "+self.settings["source_path"]+" "+self.settings["chroot_path"]
-			display_msg="\nStarting rsync from "+self.settings["source_path"]+"\nto "+\
-				self.settings["chroot_path"]+" (This may take some time) ...\n"
-			error_msg="Rsync of "+self.settings["source_path"]+" to "+self.settings["chroot_path"]+" failed."
-			invalid_snapshot=False
-
-		if "AUTORESUME" in self.settings:
-			if os.path.isdir(self.settings["source_path"]) and \
-				os.path.exists(self.settings["autoresume_path"]+"unpack"):
-				print "Resume point detected, skipping unpack operation..."
-				unpack=False
-			elif "source_path_hash" in self.settings:
-				if self.settings["source_path_hash"] != clst_unpack_hash:
-					invalid_snapshot=True
-
-		if unpack:
-			self.mount_safety_check()
-			if invalid_snapshot:
-				print "No Valid Resume point detected, cleaning up  ..."
-				#os.remove(self.settings["autoresume_path"]+"dir_setup")
-				self.clear_autoresume()
-				self.clear_chroot()
-				#self.dir_setup()
-
-			if not os.path.exists(self.settings["chroot_path"]):
-				os.makedirs(self.settings["chroot_path"])
-
-			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
-				os.makedirs(self.settings["chroot_path"]+"/tmp",1777)
-
-			if "PKGCACHE" in self.settings:
-				if not os.path.exists(self.settings["pkgcache_path"]):
-					os.makedirs(self.settings["pkgcache_path"],0755)
-
-			if not display_msg:
-				raise CatalystError,"Could not find appropriate source. Please check the 'source_subpath' setting in the spec file."
-
-			print display_msg
-			cmd(unpack_cmd,error_msg,env=self.env)
-
-			if "source_path_hash" in self.settings:
-				myf=open(self.settings["autoresume_path"]+"unpack","w")
-				myf.write(self.settings["source_path_hash"])
-				myf.close()
-			else:
-				touch(self.settings["autoresume_path"]+"unpack")
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-				"config_profile_link","setup_confdir","portage_overlay",\
-				"bind","chroot_setup","setup_environment","run_local",\
-				"build_kernel"]
-		if "FETCH" not in self.settings:
-			self.settings["action_sequence"] += ["bootloader","preclean",\
-				"livecd_update","root_overlay","fsscript","rcupdate","unmerge",\
-				"unbind","remove","empty","target_setup",\
-				"setup_overlay","create_iso"]
-		self.settings["action_sequence"].append("clear_autoresume")
-
-def register(foo):
-	foo.update({"livecd-stage2":livecd_stage2_target})
-	return foo
diff --git a/modules/netboot2_target.py b/modules/netboot2_target.py
deleted file mode 100644
index 1ab7e7d..0000000
--- a/modules/netboot2_target.py
+++ /dev/null
@@ -1,166 +0,0 @@
-"""
-netboot target, version 2
-"""
-# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-
-import os,string,types
-from catalyst_support import *
-from generic_stage_target import *
-
-class netboot2_target(generic_stage_target):
-	"""
-	Builder class for a netboot build, version 2
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[
-			"boot/kernel"
-		]
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend([
-			"netboot2/packages",
-			"netboot2/use",
-			"netboot2/extra_files",
-			"netboot2/overlay",
-			"netboot2/busybox_config",
-			"netboot2/root_overlay",
-			"netboot2/linuxrc"
-		])
-
-		try:
-			if "netboot2/packages" in addlargs:
-				if type(addlargs["netboot2/packages"]) == types.StringType:
-					loopy=[addlargs["netboot2/packages"]]
-				else:
-					loopy=addlargs["netboot2/packages"]
-
-				for x in loopy:
-					self.valid_values.append("netboot2/packages/"+x+"/files")
-		except:
-			raise CatalystError,"configuration error in netboot2/packages."
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		self.set_build_kernel_vars()
-		self.settings["merge_path"]=normpath("/tmp/image/")
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+\
-			self.settings["target_subpath"]+"/")
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			if os.path.isfile(self.settings["target_path"]):
-				cmd("rm -f "+self.settings["target_path"], \
-					"Could not remove existing file: "+self.settings["target_path"],env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_target_path")
-
-		if not os.path.exists(self.settings["storedir"]+"/builds/"):
-			os.makedirs(self.settings["storedir"]+"/builds/")
-
-	def copy_files_to_image(self):
-		# copies specific files from the buildroot to merge_path
-		myfiles=[]
-
-		# check for autoresume point
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"copy_files_to_image"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			if "netboot2/packages" in self.settings:
-				if type(self.settings["netboot2/packages"]) == types.StringType:
-					loopy=[self.settings["netboot2/packages"]]
-				else:
-					loopy=self.settings["netboot2/packages"]
-
-			for x in loopy:
-				if "netboot2/packages/"+x+"/files" in self.settings:
-				    if type(self.settings["netboot2/packages/"+x+"/files"]) == types.ListType:
-					    myfiles.extend(self.settings["netboot2/packages/"+x+"/files"])
-				    else:
-					    myfiles.append(self.settings["netboot2/packages/"+x+"/files"])
-
-			if "netboot2/extra_files" in self.settings:
-				if type(self.settings["netboot2/extra_files"]) == types.ListType:
-					myfiles.extend(self.settings["netboot2/extra_files"])
-				else:
-					myfiles.append(self.settings["netboot2/extra_files"])
-
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+\
-					" image " + list_bashify(myfiles),env=self.env)
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"Failed to copy files to image!"
-
-			touch(self.settings["autoresume_path"]+"copy_files_to_image")
-
-	def setup_overlay(self):
-		if "AUTORESUME" in self.settings \
-		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
-			print "Resume point detected, skipping setup_overlay operation..."
-		else:
-			if "netboot2/overlay" in self.settings:
-				for x in self.settings["netboot2/overlay"]:
-					if os.path.exists(x):
-						cmd("rsync -a "+x+"/ "+\
-							self.settings["chroot_path"] + self.settings["merge_path"], "netboot2/overlay: "+x+" copy failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_overlay")
-
-	def move_kernels(self):
-		# we're done, move the kernels to builds/*
-		# no auto resume here as we always want the
-		# freshest images moved
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+\
-				" final",env=self.env)
-			print ">>> Netboot Build Finished!"
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"Failed to move kernel images!"
-
-	def remove(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"remove"):
-			print "Resume point detected, skipping remove operation..."
-		else:
-			if self.settings["spec_prefix"]+"/rm" in self.settings:
-				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
-					# we're going to shell out for all these cleaning operations,
-					# so we get easy glob handling
-					print "netboot2: removing " + x
-					os.system("rm -rf " + self.settings["chroot_path"] + self.settings["merge_path"] + x)
-
-	def empty(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"empty"):
-			print "Resume point detected, skipping empty operation..."
-		else:
-			if "netboot2/empty" in self.settings:
-				if type(self.settings["netboot2/empty"])==types.StringType:
-					self.settings["netboot2/empty"]=self.settings["netboot2/empty"].split()
-				for x in self.settings["netboot2/empty"]:
-					myemp=self.settings["chroot_path"] + self.settings["merge_path"] + x
-					if not os.path.isdir(myemp):
-						print x,"not a directory or does not exist, skipping 'empty' operation."
-						continue
-					print "Emptying directory", x
-					# stat the dir, delete the dir, recreate the dir and set
-					# the proper perms and ownership
-					mystat=os.stat(myemp)
-					shutil.rmtree(myemp)
-					os.makedirs(myemp,0755)
-					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-					os.chmod(myemp,mystat[ST_MODE])
-		touch(self.settings["autoresume_path"]+"empty")
-
-	def set_action_sequence(self):
-	    self.settings["action_sequence"]=["unpack","unpack_snapshot","config_profile_link",
-	    				"setup_confdir","portage_overlay","bind","chroot_setup",\
-					"setup_environment","build_packages","root_overlay",\
-					"copy_files_to_image","setup_overlay","build_kernel","move_kernels",\
-					"remove","empty","unbind","clean","clear_autoresume"]
-
-def register(foo):
-	foo.update({"netboot2":netboot2_target})
-	return foo
diff --git a/modules/netboot_target.py b/modules/netboot_target.py
deleted file mode 100644
index ff2c81f..0000000
--- a/modules/netboot_target.py
+++ /dev/null
@@ -1,128 +0,0 @@
-"""
-netboot target, version 1
-"""
-# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-
-import os,string,types
-from catalyst_support import *
-from generic_stage_target import *
-
-class netboot_target(generic_stage_target):
-	"""
-	Builder class for a netboot build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.valid_values = [
-			"netboot/kernel/sources",
-			"netboot/kernel/config",
-			"netboot/kernel/prebuilt",
-
-			"netboot/busybox_config",
-
-			"netboot/extra_files",
-			"netboot/packages"
-		]
-		self.required_values=[]
-
-		try:
-			if "netboot/packages" in addlargs:
-				if type(addlargs["netboot/packages"]) == types.StringType:
-					loopy=[addlargs["netboot/packages"]]
-				else:
-					loopy=addlargs["netboot/packages"]
-
-		#	for x in loopy:
-		#		self.required_values.append("netboot/packages/"+x+"/files")
-		except:
-			raise CatalystError,"configuration error in netboot/packages."
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		self.set_build_kernel_vars(addlargs)
-		if "netboot/busybox_config" in addlargs:
-			file_locate(self.settings, ["netboot/busybox_config"])
-
-		# Custom Kernel Tarball --- use that instead ...
-
-		# unless the user wants specific CFLAGS/CXXFLAGS, let's use -Os
-
-		for envvar in "CFLAGS", "CXXFLAGS":
-			if envvar not in os.environ and envvar not in addlargs:
-				self.settings[envvar] = "-Os -pipe"
-
-	def set_root_path(self):
-		# ROOT= variable for emerges
-		self.settings["root_path"]=normpath("/tmp/image")
-		print "netboot root path is "+self.settings["root_path"]
-
-#	def build_packages(self):
-#		# build packages
-#		if "netboot/packages" in self.settings:
-#			mypack=list_bashify(self.settings["netboot/packages"])
-#		try:
-#			cmd("/bin/bash "+self.settings["controller_file"]+" packages "+mypack,env=self.env)
-#		except CatalystError:
-#			self.unbind()
-#			raise CatalystError,"netboot build aborting due to error."
-
-	def build_busybox(self):
-		# build busybox
-		if "netboot/busybox_config" in self.settings:
-			mycmd = self.settings["netboot/busybox_config"]
-		else:
-			mycmd = ""
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+" busybox "+ mycmd,env=self.env)
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"netboot build aborting due to error."
-
-	def copy_files_to_image(self):
-		# create image
-		myfiles=[]
-		if "netboot/packages" in self.settings:
-			if type(self.settings["netboot/packages"]) == types.StringType:
-				loopy=[self.settings["netboot/packages"]]
-			else:
-				loopy=self.settings["netboot/packages"]
-
-		for x in loopy:
-			if "netboot/packages/"+x+"/files" in self.settings:
-			    if type(self.settings["netboot/packages/"+x+"/files"]) == types.ListType:
-				    myfiles.extend(self.settings["netboot/packages/"+x+"/files"])
-			    else:
-				    myfiles.append(self.settings["netboot/packages/"+x+"/files"])
-
-		if "netboot/extra_files" in self.settings:
-			if type(self.settings["netboot/extra_files"]) == types.ListType:
-				myfiles.extend(self.settings["netboot/extra_files"])
-			else:
-				myfiles.append(self.settings["netboot/extra_files"])
-
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+\
-				" image " + list_bashify(myfiles),env=self.env)
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"netboot build aborting due to error."
-
-	def create_netboot_files(self):
-		# finish it all up
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+" finish",env=self.env)
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"netboot build aborting due to error."
-
-		# end
-		print "netboot: build finished !"
-
-	def set_action_sequence(self):
-	    self.settings["action_sequence"]=["unpack","unpack_snapshot",
-	    				"config_profile_link","setup_confdir","bind","chroot_setup",\
-						"setup_environment","build_packages","build_busybox",\
-						"build_kernel","copy_files_to_image",\
-						"clean","create_netboot_files","unbind","clear_autoresume"]
-
-def register(foo):
-	foo.update({"netboot":netboot_target})
-	return foo
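netboot_target above defaults CFLAGS/CXXFLAGS to "-Os -pipe" only when the user set neither the environment variable nor the spec key. A sketch of that precedence rule; `default_netboot_flags` is a hypothetical name:

```python
import os

def default_netboot_flags(settings, addlargs):
    """An environment variable or a spec key wins; otherwise fall
    back to size-optimized flags, as the deleted constructor does."""
    for envvar in ("CFLAGS", "CXXFLAGS"):
        if envvar not in os.environ and envvar not in addlargs:
            settings[envvar] = "-Os -pipe"
    return settings
```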
diff --git a/modules/snapshot_target.py b/modules/snapshot_target.py
deleted file mode 100644
index ba1bab5..0000000
--- a/modules/snapshot_target.py
+++ /dev/null
@@ -1,91 +0,0 @@
-"""
-Snapshot target
-"""
-
-import os
-from catalyst_support import *
-from generic_stage_target import *
-
-class snapshot_target(generic_stage_target):
-	"""
-	Builder class for snapshots.
-	"""
-	def __init__(self,myspec,addlargs):
-		self.required_values=["version_stamp","target"]
-		self.valid_values=["version_stamp","target"]
-
-		generic_target.__init__(self,myspec,addlargs)
-		self.settings=myspec
-		self.settings["target_subpath"]="portage"
-		st=self.settings["storedir"]
-		self.settings["snapshot_path"] = normpath(st + "/snapshots/"
-			+ self.settings["snapshot_name"]
-			+ self.settings["version_stamp"] + ".tar.bz2")
-		self.settings["tmp_path"]=normpath(st+"/tmp/"+self.settings["target_subpath"])
-
-	def setup(self):
-		x=normpath(self.settings["storedir"]+"/snapshots")
-		if not os.path.exists(x):
-			os.makedirs(x)
-
-	def mount_safety_check(self):
-		pass
-
-	def run(self):
-		if "PURGEONLY" in self.settings:
-			self.purge()
-			return
-
-		if "PURGE" in self.settings:
-			self.purge()
-
-		self.setup()
-		print "Creating Portage tree snapshot "+self.settings["version_stamp"]+\
-			" from "+self.settings["portdir"]+"..."
-
-		mytmp=self.settings["tmp_path"]
-		if not os.path.exists(mytmp):
-			os.makedirs(mytmp)
-
-		cmd("rsync -a --delete --exclude /packages/ --exclude /distfiles/ " +
-			"--exclude /local/ --exclude CVS/ --exclude .svn --filter=H_**/files/digest-* " +
-			self.settings["portdir"] + "/ " + mytmp + "/%s/" % self.settings["repo_name"],
-			"Snapshot failure", env=self.env)
-
-		print "Compressing Portage snapshot tarball..."
-		cmd("tar -I lbzip2 -cf " + self.settings["snapshot_path"] + " -C " +
-			mytmp + " " + self.settings["repo_name"],
-			"Snapshot creation failure",env=self.env)
-
-		self.gen_contents_file(self.settings["snapshot_path"])
-		self.gen_digest_file(self.settings["snapshot_path"])
-
-		self.cleanup()
-		print "snapshot: complete!"
-
-	def kill_chroot_pids(self):
-		pass
-
-	def cleanup(self):
-		print "Cleaning up..."
-
-	def purge(self):
-		myemp=self.settings["tmp_path"]
-		if os.path.isdir(myemp):
-			print "Emptying directory",myemp
-			"""
-			stat the dir, delete the dir, recreate the dir and set
-			the proper perms and ownership
-			"""
-			mystat=os.stat(myemp)
-			""" There's no easy way to change flags recursively in python """
-			if os.uname()[0] == "FreeBSD":
-				os.system("chflags -R noschg "+myemp)
-			shutil.rmtree(myemp)
-			os.makedirs(myemp,0755)
-			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-			os.chmod(myemp,mystat[ST_MODE])
-
-def register(foo):
-	foo.update({"snapshot":snapshot_target})
-	return foo
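Worth noting for reviewers: snapshot_target composes the tarball path by concatenating snapshot_name and version_stamp directly, with no separator inserted between them. A sketch of the same composition; `snapshot_path` is a hypothetical name:

```python
import os

def snapshot_path(storedir, snapshot_name, version_stamp):
    """Mirror the deleted constructor: storedir/snapshots/ plus the
    bare concatenation of snapshot_name and version_stamp."""
    return os.path.normpath(os.path.join(
        storedir, "snapshots",
        snapshot_name + version_stamp + ".tar.bz2"))
```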
diff --git a/modules/stage1_target.py b/modules/stage1_target.py
deleted file mode 100644
index 5f4ffa0..0000000
--- a/modules/stage1_target.py
+++ /dev/null
@@ -1,97 +0,0 @@
-"""
-stage1 target
-"""
-# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-
-from catalyst_support import *
-from generic_stage_target import *
-
-class stage1_target(generic_stage_target):
-	"""
-	Builder class for a stage1 installation tarball build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=["chost"]
-		self.valid_values.extend(["update_seed","update_seed_command"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_stage_path(self):
-		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+self.settings["root_path"])
-		print "stage1 stage path is "+self.settings["stage_path"]
-
-	def set_root_path(self):
-		# sets the root path, relative to 'chroot_path', of the stage1 root
-		self.settings["root_path"]=normpath("/tmp/stage1root")
-		print "stage1 root path is "+self.settings["root_path"]
-
-	def set_cleanables(self):
-		generic_stage_target.set_cleanables(self)
-		self.settings["cleanables"].extend([\
-		"/usr/share/zoneinfo", "/etc/portage/package*"])
-
-	# XXX: How do these override_foo() functions differ from the ones in generic_stage_target and why aren't they in stage3_target?
-
-	def override_chost(self):
-		if "chost" in self.settings:
-			self.settings["CHOST"]=list_to_string(self.settings["chost"])
-
-	def override_cflags(self):
-		if "cflags" in self.settings:
-			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
-
-	def override_cxxflags(self):
-		if "cxxflags" in self.settings:
-			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
-
-	def override_ldflags(self):
-		if "ldflags" in self.settings:
-			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
-
-	def set_portage_overlay(self):
-		generic_stage_target.set_portage_overlay(self)
-		if "portage_overlay" in self.settings:
-			print "\nWARNING !!!!!"
-			print "\tUsing an portage overlay for earlier stages could cause build issues."
-			print "\tIf you break it, you buy it. Don't complain to us about it."
-			print "\tDont say we did not warn you\n"
-
-	def base_dirs(self):
-		if os.uname()[0] == "FreeBSD":
-			# baselayout no longer creates the .keep files in proc and dev for FreeBSD as it
-			# would create them too late...we need them earlier before bind mounting filesystems
-			# since proc and dev are not writeable, so...create them here
-			if not os.path.exists(self.settings["stage_path"]+"/proc"):
-				os.makedirs(self.settings["stage_path"]+"/proc")
-			if not os.path.exists(self.settings["stage_path"]+"/dev"):
-				os.makedirs(self.settings["stage_path"]+"/dev")
-			if not os.path.isfile(self.settings["stage_path"]+"/proc/.keep"):
-				try:
-					proc_keepfile = open(self.settings["stage_path"]+"/proc/.keep","w")
-					proc_keepfile.write('')
-					proc_keepfile.close()
-				except IOError:
-					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/dev/.keep")
-			if not os.path.isfile(self.settings["stage_path"]+"/dev/.keep"):
-				try:
-					dev_keepfile = open(self.settings["stage_path"]+"/dev/.keep","w")
-					dev_keepfile.write('')
-					dev_keepfile.close()
-				except IOError:
-					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/dev/.keep")
-		else:
-			pass
-
-	def set_mounts(self):
-		# stage_path/proc probably doesn't exist yet, so create it
-		if not os.path.exists(self.settings["stage_path"]+"/proc"):
-			os.makedirs(self.settings["stage_path"]+"/proc")
-
-		# alter the mount mappings to bind mount proc onto it
-		self.mounts.append("stage1root/proc")
-		self.target_mounts["stage1root/proc"] = "/tmp/stage1root/proc"
-		self.mountmap["stage1root/proc"] = "/proc"
-
-def register(foo):
-	foo.update({"stage1":stage1_target})
-	return foo
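The `set_mounts()` method in the stage1 diff above works against three parallel structures kept by `generic_stage_target`: a list of logical mount names, a map of in-chroot target paths, and a map of host-side sources. A minimal sketch of that triple (variable names taken from the code above; the surrounding class machinery is omitted):

```python
# Sketch of the mount-mapping triple that stage1's set_mounts() extends:
# `mounts` lists logical names, `target_mounts` maps each name to its
# path inside the chroot, and `mountmap` maps it to the host source.
mounts = ["proc"]
target_mounts = {"proc": "/proc"}
mountmap = {"proc": "/proc"}

# stage1 bind-mounts the host /proc a second time under the stage1 root:
mounts.append("stage1root/proc")
target_mounts["stage1root/proc"] = "/tmp/stage1root/proc"
mountmap["stage1root/proc"] = "/proc"

for name in mounts:
    print(mountmap[name], "->", target_mounts[name])
```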
diff --git a/modules/stage2_target.py b/modules/stage2_target.py
deleted file mode 100644
index 803ec59..0000000
--- a/modules/stage2_target.py
+++ /dev/null
@@ -1,66 +0,0 @@
-"""
-stage2 target, builds upon previous stage1 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst_support import *
-from generic_stage_target import *
-
-class stage2_target(generic_stage_target):
-	"""
-	Builder class for a stage2 installation tarball build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=["chost"]
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_source_path(self):
-		if "SEEDCACHE" in self.settings and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")):
-			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")
-		else:
-			self.settings["source_path"] = normpath(self.settings["storedir"] +
-				"/builds/" + self.settings["source_subpath"].rstrip("/") +
-				".tar.bz2")
-			if os.path.isfile(self.settings["source_path"]):
-				if os.path.exists(self.settings["source_path"]):
-				# XXX: Is this even necessary if the previous check passes?
-					self.settings["source_path_hash"]=generate_hash(self.settings["source_path"],\
-						hash_function=self.settings["hash_function"],verbose=False)
-		print "Source path set to "+self.settings["source_path"]
-		if os.path.isdir(self.settings["source_path"]):
-			print "\tIf this is not desired, remove this directory or turn of seedcache in the options of catalyst.conf"
-			print "\tthe source path will then be " + \
-				normpath(self.settings["storedir"] + "/builds/" + \
-				self.settings["source_subpath"].restrip("/") + ".tar.bz2\n")
-
-	# XXX: How do these override_foo() functions differ from the ones in
-	# generic_stage_target and why aren't they in stage3_target?
-
-	def override_chost(self):
-		if "chost" in self.settings:
-			self.settings["CHOST"]=list_to_string(self.settings["chost"])
-
-	def override_cflags(self):
-		if "cflags" in self.settings:
-			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
-
-	def override_cxxflags(self):
-		if "cxxflags" in self.settings:
-			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
-
-	def override_ldflags(self):
-		if "ldflags" in self.settings:
-			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
-
-	def set_portage_overlay(self):
-			generic_stage_target.set_portage_overlay(self)
-			if "portage_overlay" in self.settings:
-				print "\nWARNING !!!!!"
-				print "\tUsing an portage overlay for earlier stages could cause build issues."
-				print "\tIf you break it, you buy it. Don't complain to us about it."
-				print "\tDont say we did not warn you\n"
-
-def register(foo):
-	foo.update({"stage2":stage2_target})
-	return foo
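The `set_source_path()` method in the stage2 diff above picks its seed in two steps: prefer the unpacked stage1 root left behind by the seed cache, otherwise fall back to the stage1 tarball. A self-contained sketch of that selection (function name and example paths are illustrative, not catalyst's API):

```python
import os.path

def choose_source_path(storedir, source_subpath, seedcache=True):
    """Sketch of stage2's set_source_path() logic: prefer the unpacked
    stage1 root from the seed cache when it exists on disk, otherwise
    fall back to the stage1 tarball under builds/."""
    seed_root = os.path.normpath(
        storedir + "/tmp/" + source_subpath + "/tmp/stage1root/")
    if seedcache and os.path.isdir(seed_root):
        return seed_root
    return os.path.normpath(
        storedir + "/builds/" + source_subpath.rstrip("/") + ".tar.bz2")

print(choose_source_path("/var/tmp/catalyst", "default/stage1-amd64", seedcache=False))
```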
diff --git a/modules/stage3_target.py b/modules/stage3_target.py
deleted file mode 100644
index 4d3a008..0000000
--- a/modules/stage3_target.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
-stage3 target, builds upon previous stage2/stage3 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst_support import *
-from generic_stage_target import *
-
-class stage3_target(generic_stage_target):
-	"""
-	Builder class for a stage3 installation tarball build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=[]
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_portage_overlay(self):
-		generic_stage_target.set_portage_overlay(self)
-		if "portage_overlay" in self.settings:
-			print "\nWARNING !!!!!"
-			print "\tUsing an overlay for earlier stages could cause build issues."
-			print "\tIf you break it, you buy it. Don't complain to us about it."
-			print "\tDont say we did not warn you\n"
-
-	def set_cleanables(self):
-		generic_stage_target.set_cleanables(self)
-
-def register(foo):
-	foo.update({"stage3":stage3_target})
-	return foo
diff --git a/modules/stage4_target.py b/modules/stage4_target.py
deleted file mode 100644
index ce41b2d..0000000
--- a/modules/stage4_target.py
+++ /dev/null
@@ -1,43 +0,0 @@
-"""
-stage4 target, builds upon previous stage3/stage4 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst_support import *
-from generic_stage_target import *
-
-class stage4_target(generic_stage_target):
-	"""
-	Builder class for stage4.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["stage4/packages"]
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend(["stage4/use","boot/kernel",\
-				"stage4/root_overlay","stage4/fsscript",\
-				"stage4/gk_mainargs","splash_theme",\
-				"portage_overlay","stage4/rcadd","stage4/rcdel",\
-				"stage4/linuxrc","stage4/unmerge","stage4/rm","stage4/empty"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_cleanables(self):
-		self.settings["cleanables"]=["/var/tmp/*","/tmp/*"]
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir","portage_overlay",\
-					"bind","chroot_setup","setup_environment","build_packages",\
-					"build_kernel","bootloader","root_overlay","fsscript",\
-					"preclean","rcupdate","unmerge","unbind","remove","empty",\
-					"clean"]
-
-#		if "TARBALL" in self.settings or \
-#			"FETCH" not in self.settings:
-		if "FETCH" not in self.settings:
-			self.settings["action_sequence"].append("capture")
-		self.settings["action_sequence"].append("clear_autoresume")
-
-def register(foo):
-	foo.update({"stage4":stage4_target})
-	return foo
-
diff --git a/modules/tinderbox_target.py b/modules/tinderbox_target.py
deleted file mode 100644
index ca55610..0000000
--- a/modules/tinderbox_target.py
+++ /dev/null
@@ -1,48 +0,0 @@
-"""
-Tinderbox target
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst_support import *
-from generic_stage_target import *
-
-class tinderbox_target(generic_stage_target):
-	"""
-	Builder class for the tinderbox target
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["tinderbox/packages"]
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend(["tinderbox/use"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def run_local(self):
-		# tinderbox
-		# example call: "grp.sh run xmms vim sys-apps/gleep"
-		try:
-			if os.path.exists(self.settings["controller_file"]):
-			    cmd("/bin/bash "+self.settings["controller_file"]+" run "+\
-				list_bashify(self.settings["tinderbox/packages"]),"run script failed.",env=self.env)
-
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"Tinderbox aborting due to error."
-
-	def set_cleanables(self):
-		self.settings['cleanables'] = [
-			'/etc/resolv.conf',
-			'/var/tmp/*',
-			'/root/*',
-			self.settings['portdir'],
-			]
-
-	def set_action_sequence(self):
-		#Default action sequence for run method
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-		              "config_profile_link","setup_confdir","bind","chroot_setup",\
-		              "setup_environment","run_local","preclean","unbind","clean",\
-		              "clear_autoresume"]
-
-def register(foo):
-	foo.update({"tinderbox":tinderbox_target})
-	return foo
-- 
1.8.3.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread
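Each target module in the patch above ends with a `register(foo)` hook that mutates a shared name-to-class mapping, which is how catalyst's `import_modules()` discovers targets. A minimal sketch of that plugin-registration scheme (class bodies reduced to stubs; only the pattern is shown):

```python
# Sketch of the register() plugin pattern used by the target modules
# above: each module exposes register(foo), which adds its target
# class to a shared name -> class mapping and returns the mapping.

class generic_stage_target:
    def __init__(self, spec, addlargs):
        self.settings = dict(spec)
        self.settings.update(addlargs)

class snapshot_target(generic_stage_target):
    pass

def register(foo):
    # Mirrors the modules above: mutate and return the target map.
    foo.update({"snapshot": snapshot_target})
    return foo

targetmap = {}
register(targetmap)          # the loader calls each module's register()
print(sorted(targetmap))     # the accumulated target names
```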

* [gentoo-catalyst] [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of modules, into the catalyst namespace.
  2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
@ 2014-01-12  1:46 ` Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 3/5] Rename the modules subpkg to targets, to better reflect what it contains Brian Dolbec
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  1:46 UTC (permalink / raw
  To: gentoo-catalyst; +Cc: Brian Dolbec

---
 catalyst/arch/alpha.py                   |   6 +-
 catalyst/arch/amd64.py                   |   2 +-
 catalyst/arch/arm.py                     |   6 +-
 catalyst/arch/hppa.py                    |   6 +-
 catalyst/arch/ia64.py                    |   6 +-
 catalyst/arch/mips.py                    |   6 +-
 catalyst/arch/powerpc.py                 |   6 +-
 catalyst/arch/s390.py                    |   6 +-
 catalyst/arch/sh.py                      |   6 +-
 catalyst/arch/sparc.py                   |   6 +-
 catalyst/arch/x86.py                     |   6 +-
 catalyst/builder.py                      |  20 +
 catalyst/config.py                       |   3 +-
 catalyst/lock.py                         | 468 ++++++++++++++++++++
 catalyst/main.py                         |   7 +-
 catalyst/modules/builder.py              |  20 -
 catalyst/modules/catalyst_lock.py        | 468 --------------------
 catalyst/modules/catalyst_support.py     | 718 -------------------------------
 catalyst/modules/embedded_target.py      |   2 +-
 catalyst/modules/generic_stage_target.py |   8 +-
 catalyst/modules/generic_target.py       |   2 +-
 catalyst/modules/grp_target.py           |   2 +-
 catalyst/modules/livecd_stage1_target.py |   2 +-
 catalyst/modules/livecd_stage2_target.py |   2 +-
 catalyst/modules/netboot2_target.py      |   2 +-
 catalyst/modules/netboot_target.py       |   2 +-
 catalyst/modules/snapshot_target.py      |   2 +-
 catalyst/modules/stage1_target.py        |   2 +-
 catalyst/modules/stage2_target.py        |   2 +-
 catalyst/modules/stage3_target.py        |   2 +-
 catalyst/modules/stage4_target.py        |   2 +-
 catalyst/modules/tinderbox_target.py     |   2 +-
 catalyst/support.py                      | 718 +++++++++++++++++++++++++++++++
 33 files changed, 1270 insertions(+), 1248 deletions(-)
 create mode 100644 catalyst/builder.py
 create mode 100644 catalyst/lock.py
 delete mode 100644 catalyst/modules/builder.py
 delete mode 100644 catalyst/modules/catalyst_lock.py
 delete mode 100644 catalyst/modules/catalyst_support.py
 create mode 100644 catalyst/support.py

diff --git a/catalyst/arch/alpha.py b/catalyst/arch/alpha.py
index f0fc95a..7248020 100644
--- a/catalyst/arch/alpha.py
+++ b/catalyst/arch/alpha.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_alpha(builder.generic):
 	"abstract base class for all alpha builders"
diff --git a/catalyst/arch/amd64.py b/catalyst/arch/amd64.py
index 262b55a..13e7563 100644
--- a/catalyst/arch/amd64.py
+++ b/catalyst/arch/amd64.py
@@ -1,5 +1,5 @@
 
-import builder
+from catalyst import builder
 
 class generic_amd64(builder.generic):
 	"abstract base class for all amd64 builders"
diff --git a/catalyst/arch/arm.py b/catalyst/arch/arm.py
index 2de3942..8f207ff 100644
--- a/catalyst/arch/arm.py
+++ b/catalyst/arch/arm.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_arm(builder.generic):
 	"Abstract base class for all arm (little endian) builders"
diff --git a/catalyst/arch/hppa.py b/catalyst/arch/hppa.py
index f804398..3aac9b6 100644
--- a/catalyst/arch/hppa.py
+++ b/catalyst/arch/hppa.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_hppa(builder.generic):
 	"Abstract base class for all hppa builders"
diff --git a/catalyst/arch/ia64.py b/catalyst/arch/ia64.py
index 825af70..4003085 100644
--- a/catalyst/arch/ia64.py
+++ b/catalyst/arch/ia64.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class arch_ia64(builder.generic):
 	"builder class for ia64"
diff --git a/catalyst/arch/mips.py b/catalyst/arch/mips.py
index b3730fa..7cce392 100644
--- a/catalyst/arch/mips.py
+++ b/catalyst/arch/mips.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_mips(builder.generic):
 	"Abstract base class for all mips builders [Big-endian]"
diff --git a/catalyst/arch/powerpc.py b/catalyst/arch/powerpc.py
index e9f611b..6cec580 100644
--- a/catalyst/arch/powerpc.py
+++ b/catalyst/arch/powerpc.py
@@ -1,6 +1,8 @@
 
-import os,builder
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_ppc(builder.generic):
 	"abstract base class for all 32-bit powerpc builders"
diff --git a/catalyst/arch/s390.py b/catalyst/arch/s390.py
index bf22f66..c49e0b7 100644
--- a/catalyst/arch/s390.py
+++ b/catalyst/arch/s390.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_s390(builder.generic):
 	"abstract base class for all s390 builders"
diff --git a/catalyst/arch/sh.py b/catalyst/arch/sh.py
index 2fc9531..1fa1b0b 100644
--- a/catalyst/arch/sh.py
+++ b/catalyst/arch/sh.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_sh(builder.generic):
 	"Abstract base class for all sh builders [Little-endian]"
diff --git a/catalyst/arch/sparc.py b/catalyst/arch/sparc.py
index 5eb5344..2889528 100644
--- a/catalyst/arch/sparc.py
+++ b/catalyst/arch/sparc.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_sparc(builder.generic):
 	"abstract base class for all sparc builders"
diff --git a/catalyst/arch/x86.py b/catalyst/arch/x86.py
index 0391b79..c8d1911 100644
--- a/catalyst/arch/x86.py
+++ b/catalyst/arch/x86.py
@@ -1,6 +1,8 @@
 
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
 
 class generic_x86(builder.generic):
 	"abstract base class for all x86 builders"
diff --git a/catalyst/builder.py b/catalyst/builder.py
new file mode 100644
index 0000000..ad27d78
--- /dev/null
+++ b/catalyst/builder.py
@@ -0,0 +1,20 @@
+
+class generic:
+	def __init__(self,myspec):
+		self.settings=myspec
+
+	def mount_safety_check(self):
+		"""
+		Make sure that no bind mounts exist in chrootdir (to use before
+		cleaning the directory, to make sure we don't wipe the contents of
+		a bind mount
+		"""
+		pass
+
+	def mount_all(self):
+		"""do all bind mounts"""
+		pass
+
+	def umount_all(self):
+		"""unmount all bind mounts"""
+		pass
diff --git a/catalyst/config.py b/catalyst/config.py
index 726bf74..460bbd5 100644
--- a/catalyst/config.py
+++ b/catalyst/config.py
@@ -1,5 +1,6 @@
+
 import re
-from modules.catalyst_support import *
+from catalyst.support import *
 
 class ParserBase:
 
diff --git a/catalyst/lock.py b/catalyst/lock.py
new file mode 100644
index 0000000..2d10d2f
--- /dev/null
+++ b/catalyst/lock.py
@@ -0,0 +1,468 @@
+#!/usr/bin/python
+import os
+import fcntl
+import errno
+import sys
+import string
+import time
+from catalyst.support import *
+
+def writemsg(mystr):
+	sys.stderr.write(mystr)
+	sys.stderr.flush()
+
+class LockDir:
+	locking_method=fcntl.flock
+	lock_dirs_in_use=[]
+	die_on_failed_lock=True
+	def __del__(self):
+		self.clean_my_hardlocks()
+		self.delete_lock_from_path_list()
+		if self.islocked():
+			self.fcntl_unlock()
+
+	def __init__(self,lockdir):
+		self.locked=False
+		self.myfd=None
+		self.set_gid(250)
+		self.locking_method=LockDir.locking_method
+		self.set_lockdir(lockdir)
+		self.set_lockfilename(".catalyst_lock")
+		self.set_lockfile()
+
+		if LockDir.lock_dirs_in_use.count(lockdir)>0:
+			raise "This directory already associated with a lock object"
+		else:
+			LockDir.lock_dirs_in_use.append(lockdir)
+
+		self.hardlock_paths={}
+
+	def delete_lock_from_path_list(self):
+		i=0
+		try:
+			if LockDir.lock_dirs_in_use:
+				for x in LockDir.lock_dirs_in_use:
+					if LockDir.lock_dirs_in_use[i] == self.lockdir:
+						del LockDir.lock_dirs_in_use[i]
+						break
+						i=i+1
+		except AttributeError:
+			pass
+
+	def islocked(self):
+		if self.locked:
+			return True
+		else:
+			return False
+
+	def set_gid(self,gid):
+		if not self.islocked():
+#			if "DEBUG" in self.settings:
+#				print "setting gid to", gid
+			self.gid=gid
+
+	def set_lockdir(self,lockdir):
+		if not os.path.exists(lockdir):
+			os.makedirs(lockdir)
+		if os.path.isdir(lockdir):
+			if not self.islocked():
+				if lockdir[-1] == "/":
+					lockdir=lockdir[:-1]
+				self.lockdir=normpath(lockdir)
+#				if "DEBUG" in self.settings:
+#					print "setting lockdir to", self.lockdir
+		else:
+			raise "the lock object needs a path to a dir"
+
+	def set_lockfilename(self,lockfilename):
+		if not self.islocked():
+			self.lockfilename=lockfilename
+#			if "DEBUG" in self.settings:
+#				print "setting lockfilename to", self.lockfilename
+
+	def set_lockfile(self):
+		if not self.islocked():
+			self.lockfile=normpath(self.lockdir+'/'+self.lockfilename)
+#			if "DEBUG" in self.settings:
+#				print "setting lockfile to", self.lockfile
+
+	def read_lock(self):
+		if not self.locking_method == "HARDLOCK":
+			self.fcntl_lock("read")
+		else:
+			print "HARDLOCKING doesnt support shared-read locks"
+			print "using exclusive write locks"
+			self.hard_lock()
+
+	def write_lock(self):
+		if not self.locking_method == "HARDLOCK":
+			self.fcntl_lock("write")
+		else:
+			self.hard_lock()
+
+	def unlock(self):
+		if not self.locking_method == "HARDLOCK":
+			self.fcntl_unlock()
+		else:
+			self.hard_unlock()
+
+	def fcntl_lock(self,locktype):
+		if self.myfd==None:
+			if not os.path.exists(os.path.dirname(self.lockdir)):
+				raise DirectoryNotFound, os.path.dirname(self.lockdir)
+			if not os.path.exists(self.lockfile):
+				old_mask=os.umask(000)
+				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
+				try:
+					if os.stat(self.lockfile).st_gid != self.gid:
+						os.chown(self.lockfile,os.getuid(),self.gid)
+				except SystemExit, e:
+					raise
+				except OSError, e:
+					if e[0] == 2: #XXX: No such file or directory
+						return self.fcntl_locking(locktype)
+					else:
+						writemsg("Cannot chown a lockfile. This could cause inconvenience later.\n")
+
+				os.umask(old_mask)
+			else:
+				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
+
+		try:
+			if locktype == "read":
+				self.locking_method(self.myfd,fcntl.LOCK_SH|fcntl.LOCK_NB)
+			else:
+				self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
+		except IOError, e:
+			if "errno" not in dir(e):
+				raise
+			if e.errno == errno.EAGAIN:
+				if not LockDir.die_on_failed_lock:
+					# Resource temp unavailable; eg, someone beat us to the lock.
+					writemsg("waiting for lock on %s\n" % self.lockfile)
+
+					# Try for the exclusive or shared lock again.
+					if locktype == "read":
+						self.locking_method(self.myfd,fcntl.LOCK_SH)
+					else:
+						self.locking_method(self.myfd,fcntl.LOCK_EX)
+				else:
+					raise LockInUse,self.lockfile
+			elif e.errno == errno.ENOLCK:
+				pass
+			else:
+				raise
+		if not os.path.exists(self.lockfile):
+			os.close(self.myfd)
+			self.myfd=None
+			#writemsg("lockfile recurse\n")
+			self.fcntl_lock(locktype)
+		else:
+			self.locked=True
+			#writemsg("Lockfile obtained\n")
+
+	def fcntl_unlock(self):
+		import fcntl
+		unlinkfile = 1
+		if not os.path.exists(self.lockfile):
+			print "lockfile does not exist '%s'" % self.lockfile
+			if (self.myfd != None):
+				try:
+					os.close(myfd)
+					self.myfd=None
+				except:
+					pass
+				return False
+
+			try:
+				if self.myfd == None:
+					self.myfd = os.open(self.lockfile, os.O_WRONLY,0660)
+					unlinkfile = 1
+					self.locking_method(self.myfd,fcntl.LOCK_UN)
+			except SystemExit, e:
+				raise
+			except Exception, e:
+				os.close(self.myfd)
+				self.myfd=None
+				raise IOError, "Failed to unlock file '%s'\n" % self.lockfile
+				try:
+					# This sleep call was added to allow other processes that are
+					# waiting for a lock to be able to grab it before it is deleted.
+					# lockfile() already accounts for this situation, however, and
+					# the sleep here adds more time than is saved overall, so am
+					# commenting until it is proved necessary.
+					#time.sleep(0.0001)
+					if unlinkfile:
+						InUse=False
+						try:
+							self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
+						except:
+							print "Read lock may be in effect. skipping lockfile delete..."
+							InUse=True
+							# We won the lock, so there isn't competition for it.
+							# We can safely delete the file.
+							#writemsg("Got the lockfile...\n")
+							#writemsg("Unlinking...\n")
+							self.locking_method(self.myfd,fcntl.LOCK_UN)
+					if not InUse:
+						os.unlink(self.lockfile)
+						os.close(self.myfd)
+						self.myfd=None
+#						if "DEBUG" in self.settings:
+#							print "Unlinked lockfile..."
+				except SystemExit, e:
+					raise
+				except Exception, e:
+					# We really don't care... Someone else has the lock.
+					# So it is their problem now.
+					print "Failed to get lock... someone took it."
+					print str(e)
+
+					# Why test lockfilename?  Because we may have been handed an
+					# fd originally, and the caller might not like having their
+					# open fd closed automatically on them.
+					#if type(lockfilename) == types.StringType:
+					#        os.close(myfd)
+
+		if (self.myfd != None):
+			os.close(self.myfd)
+			self.myfd=None
+			self.locked=False
+			time.sleep(.0001)
+
+	def hard_lock(self,max_wait=14400):
+		"""Does the NFS, hardlink shuffle to ensure locking on the disk.
+		We create a PRIVATE lockfile, that is just a placeholder on the disk.
+		Then we HARDLINK the real lockfile to that private file.
+		If our file can 2 references, then we have the lock. :)
+		Otherwise we lather, rise, and repeat.
+		We default to a 4 hour timeout.
+		"""
+
+		self.myhardlock = self.hardlock_name(self.lockdir)
+
+		start_time = time.time()
+		reported_waiting = False
+
+		while(time.time() < (start_time + max_wait)):
+			# We only need it to exist.
+			self.myfd = os.open(self.myhardlock, os.O_CREAT|os.O_RDWR,0660)
+			os.close(self.myfd)
+
+			self.add_hardlock_file_to_cleanup()
+			if not os.path.exists(self.myhardlock):
+				raise FileNotFound, "Created lockfile is missing: %(filename)s" % {"filename":self.myhardlock}
+			try:
+				res = os.link(self.myhardlock, self.lockfile)
+			except SystemExit, e:
+				raise
+			except Exception, e:
+#				if "DEBUG" in self.settings:
+#					print "lockfile(): Hardlink: Link failed."
+#					print "Exception: ",e
+				pass
+
+			if self.hardlink_is_mine(self.myhardlock, self.lockfile):
+				# We have the lock.
+				if reported_waiting:
+					print
+				return True
+
+			if reported_waiting:
+				writemsg(".")
+			else:
+				reported_waiting = True
+				print
+				print "Waiting on (hardlink) lockfile: (one '.' per 3 seconds)"
+				print "Lockfile: " + self.lockfile
+			time.sleep(3)
+
+		os.unlink(self.myhardlock)
+		return False
+
+	def hard_unlock(self):
+		try:
+			if os.path.exists(self.myhardlock):
+				os.unlink(self.myhardlock)
+			if os.path.exists(self.lockfile):
+				os.unlink(self.lockfile)
+		except SystemExit, e:
+			raise
+		except:
+			writemsg("Something strange happened to our hardlink locks.\n")
+
+	def add_hardlock_file_to_cleanup(self):
+		#mypath = self.normpath(path)
+		if os.path.isdir(self.lockdir) and os.path.isfile(self.myhardlock):
+			self.hardlock_paths[self.lockdir]=self.myhardlock
+
+	def remove_hardlock_file_from_cleanup(self):
+		if self.lockdir in self.hardlock_paths:
+			del self.hardlock_paths[self.lockdir]
+			print self.hardlock_paths
+
+	def hardlock_name(self, path):
+		mypath=path+"/.hardlock-"+os.uname()[1]+"-"+str(os.getpid())
+		newpath = os.path.normpath(mypath)
+		if len(newpath) > 1:
+			if newpath[1] == "/":
+				newpath = "/"+newpath.lstrip("/")
+		return newpath
+
+	def hardlink_is_mine(self,link,lock):
+		import stat
+		try:
+			myhls = os.stat(link)
+			mylfs = os.stat(lock)
+		except SystemExit, e:
+			raise
+		except:
+			myhls = None
+			mylfs = None
+
+		if myhls:
+			if myhls[stat.ST_NLINK] == 2:
+				return True
+		if mylfs:
+			if mylfs[stat.ST_INO] == myhls[stat.ST_INO]:
+				return True
+		return False
+
+	def hardlink_active(lock):
+		if not os.path.exists(lock):
+			return False
+
+	def clean_my_hardlocks(self):
+		try:
+			for x in self.hardlock_paths.keys():
+				self.hardlock_cleanup(x)
+		except AttributeError:
+			pass
+
+	def hardlock_cleanup(self,path):
+		mypid  = str(os.getpid())
+		myhost = os.uname()[1]
+		mydl = os.listdir(path)
+		results = []
+		mycount = 0
+
+		mylist = {}
+		for x in mydl:
+			filepath=path+"/"+x
+			if os.path.isfile(filepath):
+				parts = filepath.split(".hardlock-")
+			if len(parts) == 2:
+				filename = parts[0]
+				hostpid  = parts[1].split("-")
+				host  = "-".join(hostpid[:-1])
+				pid   = hostpid[-1]
+			if filename not in mylist:
+				mylist[filename] = {}
+
+			if host not in mylist[filename]:
+				mylist[filename][host] = []
+				mylist[filename][host].append(pid)
+				mycount += 1
+			else:
+				mylist[filename][host].append(pid)
+				mycount += 1
+
+
+		results.append("Found %(count)s locks" % {"count":mycount})
+		for x in mylist.keys():
+			if myhost in mylist[x]:
+				mylockname = self.hardlock_name(x)
+				if self.hardlink_is_mine(mylockname, self.lockfile) or \
+					not os.path.exists(self.lockfile):
+					for y in mylist[x].keys():
+						for z in mylist[x][y]:
+							filename = x+".hardlock-"+y+"-"+z
+							if filename == mylockname:
+								self.hard_unlock()
+								continue
+							try:
+								# We're sweeping through, unlinking everyone's locks.
+								os.unlink(filename)
+								results.append("Unlinked: " + filename)
+							except SystemExit, e:
+								raise
+							except Exception,e:
+								pass
+					try:
+						os.unlink(x)
+						results.append("Unlinked: " + x)
+						os.unlink(mylockname)
+						results.append("Unlinked: " + mylockname)
+					except SystemExit, e:
+						raise
+					except Exception,e:
+						pass
+				else:
+					try:
+						os.unlink(mylockname)
+						results.append("Unlinked: " + mylockname)
+					except SystemExit, e:
+						raise
+					except Exception,e:
+						pass
+		return results
+
+if __name__ == "__main__":
+
+	def lock_work():
+		print
+		for i in range(1,6):
+			print i,time.time()
+			time.sleep(1)
+		print
+	def normpath(mypath):
+		newpath = os.path.normpath(mypath)
+		if len(newpath) > 1:
+			if newpath[1] == "/":
+				newpath = "/"+newpath.lstrip("/")
+		return newpath
+
+	print "Lock 5 starting"
+	import time
+	Lock1=LockDir("/tmp/lock_path")
+	Lock1.write_lock()
+	print "Lock1 write lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+	Lock1.read_lock()
+	print "Lock1 read lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+	Lock1.read_lock()
+	print "Lock1 read lock"
+
+	Lock1.write_lock()
+	print "Lock1 write lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+	Lock1.read_lock()
+	print "Lock1 read lock"
+
+	lock_work()
+
+	Lock1.unlock()
+	print "Lock1 unlock"
+
+#Lock1.write_lock()
+#time.sleep(2)
+#Lock1.unlock()
+    ##Lock1.write_lock()
+    #time.sleep(2)
+    #Lock1.unlock()
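The core of `fcntl_lock()` in the new `catalyst/lock.py` above is a non-blocking `fcntl.flock()` attempt that, on `EAGAIN`, either raises or retries with a blocking lock depending on `die_on_failed_lock`. A self-contained sketch of that fcntl technique (function names and the demo path are illustrative, not the LockDir API):

```python
import errno
import fcntl
import os

def take_lock(lockfile, shared=False, blocking=False):
    """Sketch of LockDir's fcntl core: open the lockfile and try a
    non-blocking flock first; on EAGAIN (someone beat us to the lock)
    either raise or fall back to a blocking flock."""
    fd = os.open(lockfile, os.O_CREAT | os.O_RDWR, 0o660)
    op = fcntl.LOCK_SH if shared else fcntl.LOCK_EX
    try:
        fcntl.flock(fd, op | fcntl.LOCK_NB)
    except OSError as e:
        if e.errno != errno.EAGAIN or not blocking:
            os.close(fd)
            raise
        fcntl.flock(fd, op)  # wait for the holder to release it
    return fd

def release_lock(fd):
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)

fd = take_lock("/tmp/.catalyst_lock_demo")
release_lock(fd)
print("lock cycle ok")
```

Because `flock` locks belong to the open file description, a second `take_lock()` on the same path fails with `EAGAIN` while the first is held, which is exactly the contention case the code above reports with "waiting for lock on %s".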
diff --git a/catalyst/main.py b/catalyst/main.py
index aebb495..7b66dab 100644
--- a/catalyst/main.py
+++ b/catalyst/main.py
@@ -21,7 +21,7 @@ sys.path.append(__selfpath__ + "/modules")
 
 import catalyst.config
 import catalyst.util
-from catalyst.modules.catalyst_support import (required_build_targets,
+from catalyst.support import (required_build_targets,
 	valid_build_targets, CatalystError, hash_map, find_binary, LockInUse)
 
 __maintainer__="Catalyst <catalyst@gentoo.org>"
@@ -196,7 +196,8 @@ def parse_config(myconfig):
 		conf_values["port_logdir"]=myconf["port_logdir"];
 
 def import_modules():
-	# import catalyst's own modules (i.e. catalyst_support and the arch modules)
+	# import catalyst's own modules
+	# (i.e. stage and the arch modules)
 	targetmap={}
 
 	try:
@@ -347,7 +348,7 @@ def main():
 	parse_config(myconfig)
 
 	# Start checking that digests are valid now that the hash_map was imported
-	# from catalyst_support
+	# from catalyst.support
 	if "digests" in conf_values:
 		for i in conf_values["digests"].split():
 			if i not in hash_map:
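The commit message notes that the loader now skips `__init__.py` when scanning the modules directory. A hypothetical, simplified form of that discovery step (directory and module names here are made up for illustration; the real import_modules() also imports each file):

```python
import os
import tempfile

# Fake package directory standing in for catalyst's targets/arch dir.
pkg = tempfile.mkdtemp()
for name in ("__init__.py", "stage1_target.py", "snapshot_target.py"):
    open(os.path.join(pkg, name), "w").close()

# List every .py file except the package marker itself.
targets = sorted(
    f[:-3] for f in os.listdir(pkg)
    if f.endswith(".py") and f != "__init__.py"
)
print(targets)
```

Without the `__init__.py` exclusion, the loader would try to treat the package marker as a build target module.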
diff --git a/catalyst/modules/builder.py b/catalyst/modules/builder.py
deleted file mode 100644
index ad27d78..0000000
--- a/catalyst/modules/builder.py
+++ /dev/null
@@ -1,20 +0,0 @@
-
-class generic:
-	def __init__(self,myspec):
-		self.settings=myspec
-
-	def mount_safety_check(self):
-		"""
-		Make sure that no bind mounts exist in chrootdir (to use before
-		cleaning the directory, to make sure we don't wipe the contents of
-		a bind mount
-		"""
-		pass
-
-	def mount_all(self):
-		"""do all bind mounts"""
-		pass
-
-	def umount_all(self):
-		"""unmount all bind mounts"""
-		pass
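The builder.py removed above defines only a thin base class; concrete targets inherit from it and rely on the mount hooks. A sketch of that relationship (class and key names mirror the deleted file; `stage_target` here is hypothetical):

```python
# Minimal reconstruction of the deleted generic builder base class.
class generic:
    def __init__(self, myspec):
        self.settings = myspec

    def mount_safety_check(self):
        """Ensure no bind mounts remain before cleaning the chroot dir."""
        pass

    def mount_all(self):
        """Do all bind mounts (no-op in this sketch)."""
        pass

    def umount_all(self):
        """Unmount all bind mounts (no-op in this sketch)."""
        pass

# Hypothetical target showing how subclasses consume the spec dict.
class stage_target(generic):
    def run(self):
        self.mount_all()
        return self.settings["target"]

result = stage_target({"target": "stage3"}).run()
print(result)
```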
diff --git a/catalyst/modules/catalyst_lock.py b/catalyst/modules/catalyst_lock.py
deleted file mode 100644
index 5311cf8..0000000
--- a/catalyst/modules/catalyst_lock.py
+++ /dev/null
@@ -1,468 +0,0 @@
-#!/usr/bin/python
-import os
-import fcntl
-import errno
-import sys
-import string
-import time
-from catalyst_support import *
-
-def writemsg(mystr):
-	sys.stderr.write(mystr)
-	sys.stderr.flush()
-
-class LockDir:
-	locking_method=fcntl.flock
-	lock_dirs_in_use=[]
-	die_on_failed_lock=True
-	def __del__(self):
-		self.clean_my_hardlocks()
-		self.delete_lock_from_path_list()
-		if self.islocked():
-			self.fcntl_unlock()
-
-	def __init__(self,lockdir):
-		self.locked=False
-		self.myfd=None
-		self.set_gid(250)
-		self.locking_method=LockDir.locking_method
-		self.set_lockdir(lockdir)
-		self.set_lockfilename(".catalyst_lock")
-		self.set_lockfile()
-
-		if LockDir.lock_dirs_in_use.count(lockdir)>0:
-			raise "This directory already associated with a lock object"
-		else:
-			LockDir.lock_dirs_in_use.append(lockdir)
-
-		self.hardlock_paths={}
-
-	def delete_lock_from_path_list(self):
-		i=0
-		try:
-			if LockDir.lock_dirs_in_use:
-				for x in LockDir.lock_dirs_in_use:
-					if LockDir.lock_dirs_in_use[i] == self.lockdir:
-						del LockDir.lock_dirs_in_use[i]
-						break
-						i=i+1
-		except AttributeError:
-			pass
-
-	def islocked(self):
-		if self.locked:
-			return True
-		else:
-			return False
-
-	def set_gid(self,gid):
-		if not self.islocked():
-#			if "DEBUG" in self.settings:
-#				print "setting gid to", gid
-			self.gid=gid
-
-	def set_lockdir(self,lockdir):
-		if not os.path.exists(lockdir):
-			os.makedirs(lockdir)
-		if os.path.isdir(lockdir):
-			if not self.islocked():
-				if lockdir[-1] == "/":
-					lockdir=lockdir[:-1]
-				self.lockdir=normpath(lockdir)
-#				if "DEBUG" in self.settings:
-#					print "setting lockdir to", self.lockdir
-		else:
-			raise "the lock object needs a path to a dir"
-
-	def set_lockfilename(self,lockfilename):
-		if not self.islocked():
-			self.lockfilename=lockfilename
-#			if "DEBUG" in self.settings:
-#				print "setting lockfilename to", self.lockfilename
-
-	def set_lockfile(self):
-		if not self.islocked():
-			self.lockfile=normpath(self.lockdir+'/'+self.lockfilename)
-#			if "DEBUG" in self.settings:
-#				print "setting lockfile to", self.lockfile
-
-	def read_lock(self):
-		if not self.locking_method == "HARDLOCK":
-			self.fcntl_lock("read")
-		else:
-			print "HARDLOCKING doesnt support shared-read locks"
-			print "using exclusive write locks"
-			self.hard_lock()
-
-	def write_lock(self):
-		if not self.locking_method == "HARDLOCK":
-			self.fcntl_lock("write")
-		else:
-			self.hard_lock()
-
-	def unlock(self):
-		if not self.locking_method == "HARDLOCK":
-			self.fcntl_unlock()
-		else:
-			self.hard_unlock()
-
-	def fcntl_lock(self,locktype):
-		if self.myfd==None:
-			if not os.path.exists(os.path.dirname(self.lockdir)):
-				raise DirectoryNotFound, os.path.dirname(self.lockdir)
-			if not os.path.exists(self.lockfile):
-				old_mask=os.umask(000)
-				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
-				try:
-					if os.stat(self.lockfile).st_gid != self.gid:
-						os.chown(self.lockfile,os.getuid(),self.gid)
-				except SystemExit, e:
-					raise
-				except OSError, e:
-					if e[0] == 2: #XXX: No such file or directory
-						return self.fcntl_locking(locktype)
-					else:
-						writemsg("Cannot chown a lockfile. This could cause inconvenience later.\n")
-
-				os.umask(old_mask)
-			else:
-				self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
-
-		try:
-			if locktype == "read":
-				self.locking_method(self.myfd,fcntl.LOCK_SH|fcntl.LOCK_NB)
-			else:
-				self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
-		except IOError, e:
-			if "errno" not in dir(e):
-				raise
-			if e.errno == errno.EAGAIN:
-				if not LockDir.die_on_failed_lock:
-					# Resource temp unavailable; eg, someone beat us to the lock.
-					writemsg("waiting for lock on %s\n" % self.lockfile)
-
-					# Try for the exclusive or shared lock again.
-					if locktype == "read":
-						self.locking_method(self.myfd,fcntl.LOCK_SH)
-					else:
-						self.locking_method(self.myfd,fcntl.LOCK_EX)
-				else:
-					raise LockInUse,self.lockfile
-			elif e.errno == errno.ENOLCK:
-				pass
-			else:
-				raise
-		if not os.path.exists(self.lockfile):
-			os.close(self.myfd)
-			self.myfd=None
-			#writemsg("lockfile recurse\n")
-			self.fcntl_lock(locktype)
-		else:
-			self.locked=True
-			#writemsg("Lockfile obtained\n")
-
-	def fcntl_unlock(self):
-		import fcntl
-		unlinkfile = 1
-		if not os.path.exists(self.lockfile):
-			print "lockfile does not exist '%s'" % self.lockfile
-			if (self.myfd != None):
-				try:
-					os.close(myfd)
-					self.myfd=None
-				except:
-					pass
-				return False
-
-			try:
-				if self.myfd == None:
-					self.myfd = os.open(self.lockfile, os.O_WRONLY,0660)
-					unlinkfile = 1
-					self.locking_method(self.myfd,fcntl.LOCK_UN)
-			except SystemExit, e:
-				raise
-			except Exception, e:
-				os.close(self.myfd)
-				self.myfd=None
-				raise IOError, "Failed to unlock file '%s'\n" % self.lockfile
-				try:
-					# This sleep call was added to allow other processes that are
-					# waiting for a lock to be able to grab it before it is deleted.
-					# lockfile() already accounts for this situation, however, and
-					# the sleep here adds more time than is saved overall, so am
-					# commenting until it is proved necessary.
-					#time.sleep(0.0001)
-					if unlinkfile:
-						InUse=False
-						try:
-							self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
-						except:
-							print "Read lock may be in effect. skipping lockfile delete..."
-							InUse=True
-							# We won the lock, so there isn't competition for it.
-							# We can safely delete the file.
-							#writemsg("Got the lockfile...\n")
-							#writemsg("Unlinking...\n")
-							self.locking_method(self.myfd,fcntl.LOCK_UN)
-					if not InUse:
-						os.unlink(self.lockfile)
-						os.close(self.myfd)
-						self.myfd=None
-#						if "DEBUG" in self.settings:
-#							print "Unlinked lockfile..."
-				except SystemExit, e:
-					raise
-				except Exception, e:
-					# We really don't care... Someone else has the lock.
-					# So it is their problem now.
-					print "Failed to get lock... someone took it."
-					print str(e)
-
-					# Why test lockfilename?  Because we may have been handed an
-					# fd originally, and the caller might not like having their
-					# open fd closed automatically on them.
-					#if type(lockfilename) == types.StringType:
-					#        os.close(myfd)
-
-		if (self.myfd != None):
-			os.close(self.myfd)
-			self.myfd=None
-			self.locked=False
-			time.sleep(.0001)
-
-	def hard_lock(self,max_wait=14400):
-		"""Does the NFS, hardlink shuffle to ensure locking on the disk.
-		We create a PRIVATE lockfile, that is just a placeholder on the disk.
-		Then we HARDLINK the real lockfile to that private file.
-		If our file can 2 references, then we have the lock. :)
-		Otherwise we lather, rise, and repeat.
-		We default to a 4 hour timeout.
-		"""
-
-		self.myhardlock = self.hardlock_name(self.lockdir)
-
-		start_time = time.time()
-		reported_waiting = False
-
-		while(time.time() < (start_time + max_wait)):
-			# We only need it to exist.
-			self.myfd = os.open(self.myhardlock, os.O_CREAT|os.O_RDWR,0660)
-			os.close(self.myfd)
-
-			self.add_hardlock_file_to_cleanup()
-			if not os.path.exists(self.myhardlock):
-				raise FileNotFound, "Created lockfile is missing: %(filename)s" % {"filename":self.myhardlock}
-			try:
-				res = os.link(self.myhardlock, self.lockfile)
-			except SystemExit, e:
-				raise
-			except Exception, e:
-#				if "DEBUG" in self.settings:
-#					print "lockfile(): Hardlink: Link failed."
-#					print "Exception: ",e
-				pass
-
-			if self.hardlink_is_mine(self.myhardlock, self.lockfile):
-				# We have the lock.
-				if reported_waiting:
-					print
-				return True
-
-			if reported_waiting:
-				writemsg(".")
-			else:
-				reported_waiting = True
-				print
-				print "Waiting on (hardlink) lockfile: (one '.' per 3 seconds)"
-				print "Lockfile: " + self.lockfile
-			time.sleep(3)
-
-		os.unlink(self.myhardlock)
-		return False
-
-	def hard_unlock(self):
-		try:
-			if os.path.exists(self.myhardlock):
-				os.unlink(self.myhardlock)
-			if os.path.exists(self.lockfile):
-				os.unlink(self.lockfile)
-		except SystemExit, e:
-			raise
-		except:
-			writemsg("Something strange happened to our hardlink locks.\n")
-
-	def add_hardlock_file_to_cleanup(self):
-		#mypath = self.normpath(path)
-		if os.path.isdir(self.lockdir) and os.path.isfile(self.myhardlock):
-			self.hardlock_paths[self.lockdir]=self.myhardlock
-
-	def remove_hardlock_file_from_cleanup(self):
-		if self.lockdir in self.hardlock_paths:
-			del self.hardlock_paths[self.lockdir]
-			print self.hardlock_paths
-
-	def hardlock_name(self, path):
-		mypath=path+"/.hardlock-"+os.uname()[1]+"-"+str(os.getpid())
-		newpath = os.path.normpath(mypath)
-		if len(newpath) > 1:
-			if newpath[1] == "/":
-				newpath = "/"+newpath.lstrip("/")
-		return newpath
-
-	def hardlink_is_mine(self,link,lock):
-		import stat
-		try:
-			myhls = os.stat(link)
-			mylfs = os.stat(lock)
-		except SystemExit, e:
-			raise
-		except:
-			myhls = None
-			mylfs = None
-
-		if myhls:
-			if myhls[stat.ST_NLINK] == 2:
-				return True
-		if mylfs:
-			if mylfs[stat.ST_INO] == myhls[stat.ST_INO]:
-				return True
-		return False
-
-	def hardlink_active(lock):
-		if not os.path.exists(lock):
-			return False
-
-	def clean_my_hardlocks(self):
-		try:
-			for x in self.hardlock_paths.keys():
-				self.hardlock_cleanup(x)
-		except AttributeError:
-			pass
-
-	def hardlock_cleanup(self,path):
-		mypid  = str(os.getpid())
-		myhost = os.uname()[1]
-		mydl = os.listdir(path)
-		results = []
-		mycount = 0
-
-		mylist = {}
-		for x in mydl:
-			filepath=path+"/"+x
-			if os.path.isfile(filepath):
-				parts = filepath.split(".hardlock-")
-			if len(parts) == 2:
-				filename = parts[0]
-				hostpid  = parts[1].split("-")
-				host  = "-".join(hostpid[:-1])
-				pid   = hostpid[-1]
-			if filename not in mylist:
-				mylist[filename] = {}
-
-			if host not in mylist[filename]:
-				mylist[filename][host] = []
-				mylist[filename][host].append(pid)
-				mycount += 1
-			else:
-				mylist[filename][host].append(pid)
-				mycount += 1
-
-
-		results.append("Found %(count)s locks" % {"count":mycount})
-		for x in mylist.keys():
-			if myhost in mylist[x]:
-				mylockname = self.hardlock_name(x)
-				if self.hardlink_is_mine(mylockname, self.lockfile) or \
-					not os.path.exists(self.lockfile):
-					for y in mylist[x].keys():
-						for z in mylist[x][y]:
-							filename = x+".hardlock-"+y+"-"+z
-							if filename == mylockname:
-								self.hard_unlock()
-								continue
-							try:
-								# We're sweeping through, unlinking everyone's locks.
-								os.unlink(filename)
-								results.append("Unlinked: " + filename)
-							except SystemExit, e:
-								raise
-							except Exception,e:
-								pass
-					try:
-						os.unlink(x)
-						results.append("Unlinked: " + x)
-						os.unlink(mylockname)
-						results.append("Unlinked: " + mylockname)
-					except SystemExit, e:
-						raise
-					except Exception,e:
-						pass
-				else:
-					try:
-						os.unlink(mylockname)
-						results.append("Unlinked: " + mylockname)
-					except SystemExit, e:
-						raise
-					except Exception,e:
-						pass
-		return results
-
-if __name__ == "__main__":
-
-	def lock_work():
-		print
-		for i in range(1,6):
-			print i,time.time()
-			time.sleep(1)
-		print
-	def normpath(mypath):
-		newpath = os.path.normpath(mypath)
-		if len(newpath) > 1:
-			if newpath[1] == "/":
-				newpath = "/"+newpath.lstrip("/")
-		return newpath
-
-	print "Lock 5 starting"
-	import time
-	Lock1=LockDir("/tmp/lock_path")
-	Lock1.write_lock()
-	print "Lock1 write lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-	Lock1.read_lock()
-	print "Lock1 read lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-	Lock1.read_lock()
-	print "Lock1 read lock"
-
-	Lock1.write_lock()
-	print "Lock1 write lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-	Lock1.read_lock()
-	print "Lock1 read lock"
-
-	lock_work()
-
-	Lock1.unlock()
-	print "Lock1 unlock"
-
-#Lock1.write_lock()
-#time.sleep(2)
-#Lock1.unlock()
-    ##Lock1.write_lock()
-    #time.sleep(2)
-    #Lock1.unlock()
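The catalyst_support module deleted in the next hunk drives both contents and hash generation through lookup tables that map a key to a function plus a command template. The dispatch pattern is roughly this (a reduced sketch; the real `calc_contents` pipes the formatted command through `os.popen`):

```python
# Simplified stand-in for catalyst_support's contents_map dispatch.
def format_cmd(cmd_template, file):
    # Real code runs this command and captures its output.
    return cmd_template % {"file": file}

contents_map = {
    "tar-tv":  (format_cmd, "tar tvf %(file)s"),
    "tar-tvz": (format_cmd, "tar tvzf %(file)s"),
}

func, tmpl = contents_map["tar-tvz"]
cmd = func(tmpl, "stage3.tar.gz")
print(cmd)
```

hash_map follows the same shape with extra fields for the hash utility's arguments and display string.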
diff --git a/catalyst/modules/catalyst_support.py b/catalyst/modules/catalyst_support.py
deleted file mode 100644
index 316dfa3..0000000
--- a/catalyst/modules/catalyst_support.py
+++ /dev/null
@@ -1,718 +0,0 @@
-
-import sys,string,os,types,re,signal,traceback,time
-#import md5,sha
-selinux_capable = False
-#userpriv_capable = (os.getuid() == 0)
-#fakeroot_capable = False
-BASH_BINARY             = "/bin/bash"
-
-try:
-        import resource
-        max_fd_limit=resource.getrlimit(RLIMIT_NOFILE)
-except SystemExit, e:
-        raise
-except:
-        # hokay, no resource module.
-        max_fd_limit=256
-
-# pids this process knows of.
-spawned_pids = []
-
-try:
-        import urllib
-except SystemExit, e:
-        raise
-
-def cleanup(pids,block_exceptions=True):
-        """function to go through and reap the list of pids passed to it"""
-        global spawned_pids
-        if type(pids) == int:
-                pids = [pids]
-        for x in pids:
-                try:
-                        os.kill(x,signal.SIGTERM)
-                        if os.waitpid(x,os.WNOHANG)[1] == 0:
-                                # feisty bugger, still alive.
-                                os.kill(x,signal.SIGKILL)
-                                os.waitpid(x,0)
-
-                except OSError, oe:
-                        if block_exceptions:
-                                pass
-                        if oe.errno not in (10,3):
-                                raise oe
-                except SystemExit:
-                        raise
-                except Exception:
-                        if block_exceptions:
-                                pass
-                try:                    spawned_pids.remove(x)
-                except IndexError:      pass
-
-
-
-# a function to turn a string of non-printable characters into a string of
-# hex characters
-def hexify(str):
-	hexStr = string.hexdigits
-	r = ''
-	for ch in str:
-		i = ord(ch)
-		r = r + hexStr[(i >> 4) & 0xF] + hexStr[i & 0xF]
-	return r
-# hexify()
-
-def generate_contents(file,contents_function="auto",verbose=False):
-	try:
-		_ = contents_function
-		if _ == 'auto' and file.endswith('.iso'):
-			_ = 'isoinfo-l'
-		if (_ in ['tar-tv','auto']):
-			if file.endswith('.tgz') or file.endswith('.tar.gz'):
-				_ = 'tar-tvz'
-			elif file.endswith('.tbz2') or file.endswith('.tar.bz2'):
-				_ = 'tar-tvj'
-			elif file.endswith('.tar'):
-				_ = 'tar-tv'
-
-		if _ == 'auto':
-			warn('File %r has unknown type for automatic detection.' % (file, ))
-			return None
-		else:
-			contents_function = _
-			_ = contents_map[contents_function]
-			return _[0](file,_[1],verbose)
-	except:
-		raise CatalystError,\
-			"Error generating contents, is appropriate utility (%s) installed on your system?" \
-			% (contents_function, )
-
-def calc_contents(file,cmd,verbose):
-	args={ 'file': file }
-	cmd=cmd % dict(args)
-	a=os.popen(cmd)
-	mylines=a.readlines()
-	a.close()
-	result="".join(mylines)
-	if verbose:
-		print result
-	return result
-
-# This has map must be defined after the function calc_content
-# It is possible to call different functions from this but they must be defined
-# before hash_map
-# Key,function,cmd
-contents_map={
-	# 'find' is disabled because it requires the source path, which is not
-	# always available
-	#"find"		:[calc_contents,"find %(path)s"],
-	"tar-tv":[calc_contents,"tar tvf %(file)s"],
-	"tar-tvz":[calc_contents,"tar tvzf %(file)s"],
-	"tar-tvj":[calc_contents,"tar -I lbzip2 -tvf %(file)s"],
-	"isoinfo-l":[calc_contents,"isoinfo -l -i %(file)s"],
-	# isoinfo-f should be a last resort only
-	"isoinfo-f":[calc_contents,"isoinfo -f -i %(file)s"],
-}
-
-def generate_hash(file,hash_function="crc32",verbose=False):
-	try:
-		return hash_map[hash_function][0](file,hash_map[hash_function][1],hash_map[hash_function][2],\
-			hash_map[hash_function][3],verbose)
-	except:
-		raise CatalystError,"Error generating hash, is appropriate utility installed on your system?"
-
-def calc_hash(file,cmd,cmd_args,id_string="MD5",verbose=False):
-	a=os.popen(cmd+" "+cmd_args+" "+file)
-	mylines=a.readlines()
-	a.close()
-	mylines=mylines[0].split()
-	result=mylines[0]
-	if verbose:
-		print id_string+" (%s) = %s" % (file, result)
-	return result
-
-def calc_hash2(file,cmd,cmd_args,id_string="MD5",verbose=False):
-	a=os.popen(cmd+" "+cmd_args+" "+file)
-	header=a.readline()
-	mylines=a.readline().split()
-	hash=mylines[0]
-	short_file=os.path.split(mylines[1])[1]
-	a.close()
-	result=header+hash+"  "+short_file+"\n"
-	if verbose:
-		print header+" (%s) = %s" % (short_file, result)
-	return result
-
-# This has map must be defined after the function calc_hash
-# It is possible to call different functions from this but they must be defined
-# before hash_map
-# Key,function,cmd,cmd_args,Print string
-hash_map={
-	 "adler32":[calc_hash2,"shash","-a ADLER32","ADLER32"],\
-	 "crc32":[calc_hash2,"shash","-a CRC32","CRC32"],\
-	 "crc32b":[calc_hash2,"shash","-a CRC32B","CRC32B"],\
-	 "gost":[calc_hash2,"shash","-a GOST","GOST"],\
-	 "haval128":[calc_hash2,"shash","-a HAVAL128","HAVAL128"],\
-	 "haval160":[calc_hash2,"shash","-a HAVAL160","HAVAL160"],\
-	 "haval192":[calc_hash2,"shash","-a HAVAL192","HAVAL192"],\
-	 "haval224":[calc_hash2,"shash","-a HAVAL224","HAVAL224"],\
-	 "haval256":[calc_hash2,"shash","-a HAVAL256","HAVAL256"],\
-	 "md2":[calc_hash2,"shash","-a MD2","MD2"],\
-	 "md4":[calc_hash2,"shash","-a MD4","MD4"],\
-	 "md5":[calc_hash2,"shash","-a MD5","MD5"],\
-	 "ripemd128":[calc_hash2,"shash","-a RIPEMD128","RIPEMD128"],\
-	 "ripemd160":[calc_hash2,"shash","-a RIPEMD160","RIPEMD160"],\
-	 "ripemd256":[calc_hash2,"shash","-a RIPEMD256","RIPEMD256"],\
-	 "ripemd320":[calc_hash2,"shash","-a RIPEMD320","RIPEMD320"],\
-	 "sha1":[calc_hash2,"shash","-a SHA1","SHA1"],\
-	 "sha224":[calc_hash2,"shash","-a SHA224","SHA224"],\
-	 "sha256":[calc_hash2,"shash","-a SHA256","SHA256"],\
-	 "sha384":[calc_hash2,"shash","-a SHA384","SHA384"],\
-	 "sha512":[calc_hash2,"shash","-a SHA512","SHA512"],\
-	 "snefru128":[calc_hash2,"shash","-a SNEFRU128","SNEFRU128"],\
-	 "snefru256":[calc_hash2,"shash","-a SNEFRU256","SNEFRU256"],\
-	 "tiger":[calc_hash2,"shash","-a TIGER","TIGER"],\
-	 "tiger128":[calc_hash2,"shash","-a TIGER128","TIGER128"],\
-	 "tiger160":[calc_hash2,"shash","-a TIGER160","TIGER160"],\
-	 "whirlpool":[calc_hash2,"shash","-a WHIRLPOOL","WHIRLPOOL"],\
-	 }
-
-def read_from_clst(file):
-	line = ''
-	myline = ''
-	try:
-		myf=open(file,"r")
-	except:
-		return -1
-		#raise CatalystError, "Could not open file "+file
-	for line in myf.readlines():
-	    #line = string.replace(line, "\n", "") # drop newline
-	    myline = myline + line
-	myf.close()
-	return myline
-# read_from_clst
-
-# these should never be touched
-required_build_targets=["generic_target","generic_stage_target"]
-
-# new build types should be added here
-valid_build_targets=["stage1_target","stage2_target","stage3_target","stage4_target","grp_target",
-			"livecd_stage1_target","livecd_stage2_target","embedded_target",
-			"tinderbox_target","snapshot_target","netboot_target","netboot2_target"]
-
-required_config_file_values=["storedir","sharedir","distdir","portdir"]
-valid_config_file_values=required_config_file_values[:]
-valid_config_file_values.append("PKGCACHE")
-valid_config_file_values.append("KERNCACHE")
-valid_config_file_values.append("CCACHE")
-valid_config_file_values.append("DISTCC")
-valid_config_file_values.append("ICECREAM")
-valid_config_file_values.append("ENVSCRIPT")
-valid_config_file_values.append("AUTORESUME")
-valid_config_file_values.append("FETCH")
-valid_config_file_values.append("CLEAR_AUTORESUME")
-valid_config_file_values.append("options")
-valid_config_file_values.append("DEBUG")
-valid_config_file_values.append("VERBOSE")
-valid_config_file_values.append("PURGE")
-valid_config_file_values.append("PURGEONLY")
-valid_config_file_values.append("SNAPCACHE")
-valid_config_file_values.append("snapshot_cache")
-valid_config_file_values.append("hash_function")
-valid_config_file_values.append("digests")
-valid_config_file_values.append("contents")
-valid_config_file_values.append("SEEDCACHE")
-
-verbosity=1
-
-def list_bashify(mylist):
-	if type(mylist)==types.StringType:
-		mypack=[mylist]
-	else:
-		mypack=mylist[:]
-	for x in range(0,len(mypack)):
-		# surround args with quotes for passing to bash,
-		# allows things like "<" to remain intact
-		mypack[x]="'"+mypack[x]+"'"
-	mypack=string.join(mypack)
-	return mypack
-
-def list_to_string(mylist):
-	if type(mylist)==types.StringType:
-		mypack=[mylist]
-	else:
-		mypack=mylist[:]
-	for x in range(0,len(mypack)):
-		# surround args with quotes for passing to bash,
-		# allows things like "<" to remain intact
-		mypack[x]=mypack[x]
-	mypack=string.join(mypack)
-	return mypack
-
-class CatalystError(Exception):
-	def __init__(self, message):
-		if message:
-			(type,value)=sys.exc_info()[:2]
-			if value!=None:
-				print
-				print traceback.print_exc(file=sys.stdout)
-			print
-			print "!!! catalyst: "+message
-			print
-
-class LockInUse(Exception):
-	def __init__(self, message):
-		if message:
-			#(type,value)=sys.exc_info()[:2]
-			#if value!=None:
-			    #print
-			    #kprint traceback.print_exc(file=sys.stdout)
-			print
-			print "!!! catalyst lock file in use: "+message
-			print
-
-def die(msg=None):
-	warn(msg)
-	sys.exit(1)
-
-def warn(msg):
-	print "!!! catalyst: "+msg
-
-def find_binary(myc):
-	"""look through the environmental path for an executable file named whatever myc is"""
-        # this sucks. badly.
-        p=os.getenv("PATH")
-        if p == None:
-                return None
-        for x in p.split(":"):
-                #if it exists, and is executable
-                if os.path.exists("%s/%s" % (x,myc)) and os.stat("%s/%s" % (x,myc))[0] & 0x0248:
-                        return "%s/%s" % (x,myc)
-        return None
-
-def spawn_bash(mycommand,env={},debug=False,opt_name=None,**keywords):
-	"""spawn mycommand as an arguement to bash"""
-	args=[BASH_BINARY]
-	if not opt_name:
-	    opt_name=mycommand.split()[0]
-	if "BASH_ENV" not in env:
-	    env["BASH_ENV"] = "/etc/spork/is/not/valid/profile.env"
-	if debug:
-	    args.append("-x")
-	args.append("-c")
-	args.append(mycommand)
-	return spawn(args,env=env,opt_name=opt_name,**keywords)
-
-#def spawn_get_output(mycommand,spawn_type=spawn,raw_exit_code=False,emulate_gso=True, \
-#        collect_fds=[1],fd_pipes=None,**keywords):
-
-def spawn_get_output(mycommand,raw_exit_code=False,emulate_gso=True, \
-        collect_fds=[1],fd_pipes=None,**keywords):
-        """call spawn, collecting the output to fd's specified in collect_fds list
-        emulate_gso is a compatability hack to emulate commands.getstatusoutput's return, minus the
-        requirement it always be a bash call (spawn_type controls the actual spawn call), and minus the
-        'lets let log only stdin and let stderr slide by'.
-
-        emulate_gso was deprecated from the day it was added, so convert your code over.
-        spawn_type is the passed in function to call- typically spawn_bash, spawn, spawn_sandbox, or spawn_fakeroot"""
-        global selinux_capable
-        pr,pw=os.pipe()
-
-        #if type(spawn_type) not in [types.FunctionType, types.MethodType]:
-        #        s="spawn_type must be passed a function, not",type(spawn_type),spawn_type
-        #        raise Exception,s
-
-        if fd_pipes==None:
-                fd_pipes={}
-                fd_pipes[0] = 0
-
-        for x in collect_fds:
-                fd_pipes[x] = pw
-        keywords["returnpid"]=True
-
-        mypid=spawn_bash(mycommand,fd_pipes=fd_pipes,**keywords)
-        os.close(pw)
-        if type(mypid) != types.ListType:
-                os.close(pr)
-                return [mypid, "%s: No such file or directory" % mycommand.split()[0]]
-
-        fd=os.fdopen(pr,"r")
-        mydata=fd.readlines()
-        fd.close()
-        if emulate_gso:
-                mydata=string.join(mydata)
-                if len(mydata) and mydata[-1] == "\n":
-                        mydata=mydata[:-1]
-        retval=os.waitpid(mypid[0],0)[1]
-        cleanup(mypid)
-        if raw_exit_code:
-                return [retval,mydata]
-        retval=process_exit_code(retval)
-        return [retval, mydata]
-
-# base spawn function
-def spawn(mycommand,env={},raw_exit_code=False,opt_name=None,fd_pipes=None,returnpid=False,\
-	 uid=None,gid=None,groups=None,umask=None,logfile=None,path_lookup=True,\
-	 selinux_context=None, raise_signals=False, func_call=False):
-	"""base fork/execve function.
-	mycommand is the desired command- if you need a command to execute in a bash/sandbox/fakeroot
-	environment, use the appropriate spawn call.  This is a straight fork/exec code path.
-	Can either have a tuple, or a string passed in.  If uid/gid/groups/umask specified, it changes
-	the forked process to said value.  If path_lookup is on, a non-absolute command will be converted
-	to an absolute command, otherwise it returns None.
-
-	selinux_context is the desired context, dependant on selinux being available.
-	opt_name controls the name the processor goes by.
-	fd_pipes controls which file descriptor numbers are left open in the forked process- it's a dict of
-	current fd's raw fd #, desired #.
-
-	func_call is a boolean for specifying to execute a python function- use spawn_func instead.
-	raise_signals is questionable.  Basically throw an exception if signal'd.  No exception is thrown
-	if raw_input is on.
-
-	logfile overloads the specified fd's to write to a tee process which logs to logfile
-	returnpid returns the relevant pids (a list, including the logging process if logfile is on).
-
-	non-returnpid calls to spawn will block till the process has exited, returning the exitcode/signal
-	raw_exit_code controls whether the actual waitpid result is returned, or intrepretted."""
-
-	myc=''
-	if not func_call:
-		if type(mycommand)==types.StringType:
-			mycommand=mycommand.split()
-		myc = mycommand[0]
-		if not os.access(myc, os.X_OK):
-			if not path_lookup:
-				return None
-			myc = find_binary(myc)
-			if myc == None:
-			    return None
-        mypid=[]
-	if logfile:
-		pr,pw=os.pipe()
-		mypid.extend(spawn(('tee','-i','-a',logfile),returnpid=True,fd_pipes={0:pr,1:1,2:2}))
-		retval=os.waitpid(mypid[-1],os.WNOHANG)[1]
-		if retval != 0:
-			# he's dead jim.
-			if raw_exit_code:
-				return retval
-			return process_exit_code(retval)
-
-		if fd_pipes == None:
-			fd_pipes={}
-			fd_pipes[0] = 0
-		fd_pipes[1]=pw
-		fd_pipes[2]=pw
-
-	if not opt_name:
-		opt_name = mycommand[0]
-	myargs=[opt_name]
-	myargs.extend(mycommand[1:])
-	global spawned_pids
-	mypid.append(os.fork())
-	if mypid[-1] != 0:
-		#log the bugger.
-		spawned_pids.extend(mypid)
-
-	if mypid[-1] == 0:
-		if func_call:
-			spawned_pids = []
-
-		# this may look ugly, but basically it moves file descriptors around to ensure no
-		# handles that are needed are accidentally closed during the final dup2 calls.
-		trg_fd=[]
-		if type(fd_pipes)==types.DictType:
-			src_fd=[]
-			k=fd_pipes.keys()
-			k.sort()
-
-			#build list of which fds will be where, and where they are at currently
-			for x in k:
-				trg_fd.append(x)
-				src_fd.append(fd_pipes[x])
-
-			# run through said list dup'ing descriptors so that they won't be waxed
-			# by other dup calls.
-			for x in range(0,len(trg_fd)):
-				if trg_fd[x] == src_fd[x]:
-					continue
-				if trg_fd[x] in src_fd[x+1:]:
-					new=os.dup2(trg_fd[x],max(src_fd) + 1)
-					os.close(trg_fd[x])
-					try:
-						while True:
-							src_fd[s.index(trg_fd[x])]=new
-					except SystemExit, e:
-						raise
-					except:
-						pass
-
-			# transfer the fds to their final pre-exec position.
-			for x in range(0,len(trg_fd)):
-				if trg_fd[x] != src_fd[x]:
-					os.dup2(src_fd[x], trg_fd[x])
-		else:
-			trg_fd=[0,1,2]
-
-		# wax all open descriptors that weren't requested be left open.
-		for x in range(0,max_fd_limit):
-			if x not in trg_fd:
-				try:
-					os.close(x)
-                                except SystemExit, e:
-                                        raise
-                                except:
-                                        pass
-
-                # note this order must be preserved- can't change gid/groups if you change uid first.
-                if selinux_capable and selinux_context:
-                        import selinux
-                        selinux.setexec(selinux_context)
-                if gid:
-                        os.setgid(gid)
-                if groups:
-                        os.setgroups(groups)
-                if uid:
-                        os.setuid(uid)
-                if umask:
-                        os.umask(umask)
-                else:
-                        os.umask(022)
-
-                try:
-                        #print "execing", myc, myargs
-                        if func_call:
-                                # either use a passed in func for interpretting the results, or return if no exception.
-                                # note the passed in list, and dict are expanded.
-                                if len(mycommand) == 4:
-                                        os._exit(mycommand[3](mycommand[0](*mycommand[1],**mycommand[2])))
-                                try:
-                                        mycommand[0](*mycommand[1],**mycommand[2])
-                                except Exception,e:
-                                        print "caught exception",e," in forked func",mycommand[0]
-                                sys.exit(0)
-
-			#os.execvp(myc,myargs)
-                        os.execve(myc,myargs,env)
-                except SystemExit, e:
-                        raise
-                except Exception, e:
-                        if not func_call:
-                                raise str(e)+":\n   "+myc+" "+string.join(myargs)
-                        print "func call failed"
-
-                # If the execve fails, we need to report it, and exit
-                # *carefully* --- report error here
-                os._exit(1)
-                sys.exit(1)
-                return # should never get reached
-
-        # if we were logging, kill the pipes.
-        if logfile:
-                os.close(pr)
-                os.close(pw)
-
-        if returnpid:
-                return mypid
-
-        # loop through pids (typically one, unless logging), either waiting on their death, or waxing them
-        # if the main pid (mycommand) returned badly.
-        while len(mypid):
-                retval=os.waitpid(mypid[-1],0)[1]
-                if retval != 0:
-                        cleanup(mypid[0:-1],block_exceptions=False)
-                        # at this point we've killed all other kid pids generated via this call.
-                        # return now.
-                        if raw_exit_code:
-                                return retval
-                        return process_exit_code(retval,throw_signals=raise_signals)
-                else:
-                        mypid.pop(-1)
-        cleanup(mypid)
-        return 0
-
-def cmd(mycmd,myexc="",env={}):
-	try:
-		sys.stdout.flush()
-		retval=spawn_bash(mycmd,env)
-		if retval != 0:
-			raise CatalystError,myexc
-	except:
-		raise
-
-def process_exit_code(retval,throw_signals=False):
-        """process a waitpid returned exit code, returning exit code if it exit'd, or the
-        signal if it died from signalling
-        if throw_signals is on, it raises a SystemExit if the process was signaled.
-        This is intended for usage with threads, although at the moment you can't signal individual
-        threads in python, only the master thread, so it's a questionable option."""
-        if (retval & 0xff)==0:
-                return retval >> 8 # return exit code
-        else:
-                if throw_signals:
-                        #use systemexit, since portage is stupid about exception catching.
-                        raise SystemExit()
-                return (retval & 0xff) << 8 # interrupted by signal
-
-def file_locate(settings,filelist,expand=1):
-	#if expand=1, non-absolute paths will be accepted and
-	# expanded to os.getcwd()+"/"+localpath if file exists
-	for myfile in filelist:
-		if myfile not in settings:
-			#filenames such as cdtar are optional, so we don't assume the variable is defined.
-			pass
-		else:
-		    if len(settings[myfile])==0:
-			    raise CatalystError, "File variable \""+myfile+"\" has a length of zero (not specified.)"
-		    if settings[myfile][0]=="/":
-			    if not os.path.exists(settings[myfile]):
-				    raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]
-		    elif expand and os.path.exists(os.getcwd()+"/"+settings[myfile]):
-			    settings[myfile]=os.getcwd()+"/"+settings[myfile]
-		    else:
-			    raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]+" (2nd try)"
-"""
-Spec file format:
-
-The spec file format is a very simple and easy-to-use format for storing data. Here's an example
-file:
-
-item1: value1
-item2: foo bar oni
-item3:
-	meep
-	bark
-	gleep moop
-
-This file would be interpreted as defining three items: item1, item2 and item3. item1 would contain
-the string value "value1". Item2 would contain an ordered list [ "foo", "bar", "oni" ]. item3
-would contain an ordered list as well: [ "meep", "bark", "gleep", "moop" ]. It's important to note
-that the order of multiple-value items is preserved, but the order that the items themselves are
-defined are not preserved. In other words, "foo", "bar", "oni" ordering is preserved but "item1"
-"item2" "item3" ordering is not, as the item strings are stored in a dictionary (hash).
-"""
-
-def parse_makeconf(mylines):
-	mymakeconf={}
-	pos=0
-	pat=re.compile("([0-9a-zA-Z_]*)=(.*)")
-	while pos<len(mylines):
-		if len(mylines[pos])<=1:
-			#skip blanks
-			pos += 1
-			continue
-		if mylines[pos][0] in ["#"," ","\t"]:
-			#skip indented lines, comments
-			pos += 1
-			continue
-		else:
-			myline=mylines[pos]
-			mobj=pat.match(myline)
-			pos += 1
-			if mobj.group(2):
-			    clean_string = re.sub(r"\"",r"",mobj.group(2))
-			    mymakeconf[mobj.group(1)]=clean_string
-	return mymakeconf
-
-def read_makeconf(mymakeconffile):
-	if os.path.exists(mymakeconffile):
-		try:
-			try:
-				import snakeoil.fileutils
-				return snakeoil.fileutils.read_bash_dict(mymakeconffile, sourcing_command="source")
-			except ImportError:
-				try:
-					import portage.util
-					return portage.util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
-				except:
-					try:
-						import portage_util
-						return portage_util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
-					except ImportError:
-						myf=open(mymakeconffile,"r")
-						mylines=myf.readlines()
-						myf.close()
-						return parse_makeconf(mylines)
-		except:
-			raise CatalystError, "Could not parse make.conf file "+mymakeconffile
-	else:
-		makeconf={}
-		return makeconf
-
-def msg(mymsg,verblevel=1):
-	if verbosity>=verblevel:
-		print mymsg
-
-def pathcompare(path1,path2):
-	# Change double slashes to slash
-	path1 = re.sub(r"//",r"/",path1)
-	path2 = re.sub(r"//",r"/",path2)
-	# Removing ending slash
-	path1 = re.sub("/$","",path1)
-	path2 = re.sub("/$","",path2)
-
-	if path1 == path2:
-		return 1
-	return 0
-
-def ismount(path):
-	"enhanced to handle bind mounts"
-	if os.path.ismount(path):
-		return 1
-	a=os.popen("mount")
-	mylines=a.readlines()
-	a.close()
-	for line in mylines:
-		mysplit=line.split()
-		if pathcompare(path,mysplit[2]):
-			return 1
-	return 0
-
-def addl_arg_parse(myspec,addlargs,requiredspec,validspec):
-	"helper function to help targets parse additional arguments"
-	global valid_config_file_values
-
-	messages = []
-	for x in addlargs.keys():
-		if x not in validspec and x not in valid_config_file_values and x not in requiredspec:
-			messages.append("Argument \""+x+"\" not recognized.")
-		else:
-			myspec[x]=addlargs[x]
-
-	for x in requiredspec:
-		if x not in myspec:
-			messages.append("Required argument \""+x+"\" not specified.")
-
-	if messages:
-		raise CatalystError, '\n\tAlso: '.join(messages)
-
-def touch(myfile):
-	try:
-		myf=open(myfile,"w")
-		myf.close()
-	except IOError:
-		raise CatalystError, "Could not touch "+myfile+"."
-
-def countdown(secs=5, doing="Starting"):
-        if secs:
-		print ">>> Waiting",secs,"seconds before starting..."
-		print ">>> (Control-C to abort)...\n"+doing+" in: ",
-		ticks=range(secs)
-		ticks.reverse()
-		for sec in ticks:
-			sys.stdout.write(str(sec+1)+" ")
-			sys.stdout.flush()
-			time.sleep(1)
-		print
-
-def normpath(mypath):
-	TrailingSlash=False
-        if mypath[-1] == "/":
-	    TrailingSlash=True
-        newpath = os.path.normpath(mypath)
-        if len(newpath) > 1:
-                if newpath[:2] == "//":
-                        newpath = newpath[1:]
-	if TrailingSlash:
-	    newpath=newpath+'/'
-        return newpath
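[Editor's note: the `normpath` helper removed above (it reappears verbatim later in the new `catalyst/support.py`) differs from `os.path.normpath` in two ways: it preserves a trailing slash and collapses a leading `//` that POSIX permits `normpath` to keep. A Python 3 sketch of the same behavior, for readers following the move:]

```python
import os

def normpath(mypath):
    # Like os.path.normpath, but preserve a trailing slash and collapse
    # the leading "//" that POSIX allows normpath to leave in place.
    trailing = mypath.endswith("/")
    newpath = os.path.normpath(mypath)
    if newpath.startswith("//") and not newpath.startswith("///"):
        newpath = newpath[1:]
    if trailing and not newpath.endswith("/"):
        newpath += "/"
    return newpath

print(normpath("//tmp//stage3/"))  # → /tmp/stage3/
```

Catalyst builds many paths by naive string concatenation, so the trailing slash matters to callers such as `set_chroot_path`.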
diff --git a/catalyst/modules/embedded_target.py b/catalyst/modules/embedded_target.py
index f38ea00..7cee7a6 100644
--- a/catalyst/modules/embedded_target.py
+++ b/catalyst/modules/embedded_target.py
@@ -11,7 +11,7 @@ ROOT=/tmp/submerge emerge --something foo bar .
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
 import os,string,imp,types,shutil
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 from stat import *
 
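[Editor's note: the one-line change above works because patch 1 adds an `__init__.py` to the new `catalyst/` directory, turning it into a regular package so `catalyst.support` resolves. A minimal, self-contained demonstration of that mechanism (the scratch paths and the `warn` body are illustrative, not the real module):]

```python
import os
import sys
import tempfile

# Recreate the new layout in a scratch directory: a 'catalyst' package
# marked by __init__.py, containing a support submodule.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "catalyst")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "support.py"), "w") as f:
    f.write("def warn(msg):\n    return '!!! catalyst: ' + msg\n")

# With the package root on sys.path, the new import form works.
sys.path.insert(0, root)
from catalyst.support import warn
print(warn("hello"))  # → !!! catalyst: hello
```

Without the `__init__.py`, the `from catalyst.support import ...` line in each target module would raise `ImportError`.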
diff --git a/catalyst/modules/generic_stage_target.py b/catalyst/modules/generic_stage_target.py
index 63d919d..2c1a921 100644
--- a/catalyst/modules/generic_stage_target.py
+++ b/catalyst/modules/generic_stage_target.py
@@ -1,8 +1,8 @@
 import os,string,imp,types,shutil
-from catalyst_support import *
+from catalyst.support import *
 from generic_target import *
 from stat import *
-import catalyst_lock
+from catalyst.lock import LockDir
 
 
 PORT_LOGDIR_CLEAN = \
@@ -473,7 +473,7 @@ class generic_stage_target(generic_target):
 				normpath(self.settings["snapshot_cache"]+"/"+\
 				self.settings["snapshot"])
 			self.snapcache_lock=\
-				catalyst_lock.LockDir(self.settings["snapshot_cache_path"])
+				LockDir(self.settings["snapshot_cache_path"])
 			print "Caching snapshot to "+self.settings["snapshot_cache_path"]
 
 	def set_chroot_path(self):
@@ -483,7 +483,7 @@ class generic_stage_target(generic_target):
 		"""
 		self.settings["chroot_path"]=normpath(self.settings["storedir"]+\
 			"/tmp/"+self.settings["target_subpath"])
-		self.chroot_lock=catalyst_lock.LockDir(self.settings["chroot_path"])
+		self.chroot_lock=LockDir(self.settings["chroot_path"])
 
 	def set_autoresume_path(self):
 		self.settings["autoresume_path"]=normpath(self.settings["storedir"]+\
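[Editor's note: the hunks above replace `catalyst_lock.LockDir(...)` with a directly imported `LockDir`. The real class lives in `catalyst/lock.py`, which is not part of this hunk; as a rough sketch of what a directory lock of this shape can look like (the method names here are assumptions, not the actual API), an `fcntl`-based stand-in:]

```python
import fcntl
import os

class LockDir:
    """Hold an exclusive advisory lock on a marker file inside a directory.

    Minimal stand-in only; catalyst's real lock class differs (read locks,
    hardlink fallbacks, etc.).
    """
    def __init__(self, lockdir):
        os.makedirs(lockdir, exist_ok=True)
        self.path = os.path.join(lockdir, ".catalyst_lock")
        self.fd = None

    def write_lock(self):
        # O_CREAT so the marker file appears on first use.
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR, 0o644)
        fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

    def unlock(self):
        if self.fd is not None:
            fcntl.flock(self.fd, fcntl.LOCK_UN)
            os.close(self.fd)
            self.fd = None
```

Taking the lock on `chroot_path` and `snapshot_cache_path` is what keeps two catalyst runs from mutating the same build tree concurrently.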
diff --git a/catalyst/modules/generic_target.py b/catalyst/modules/generic_target.py
index fe96bd7..de51994 100644
--- a/catalyst/modules/generic_target.py
+++ b/catalyst/modules/generic_target.py
@@ -1,4 +1,4 @@
-from catalyst_support import *
+from catalyst.support import *
 
 class generic_target:
 	"""
diff --git a/catalyst/modules/grp_target.py b/catalyst/modules/grp_target.py
index 6941522..8e70042 100644
--- a/catalyst/modules/grp_target.py
+++ b/catalyst/modules/grp_target.py
@@ -4,7 +4,7 @@ Gentoo Reference Platform (GRP) target
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
 import os,types,glob
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class grp_target(generic_stage_target):
diff --git a/catalyst/modules/livecd_stage1_target.py b/catalyst/modules/livecd_stage1_target.py
index 59de9bb..ac846ec 100644
--- a/catalyst/modules/livecd_stage1_target.py
+++ b/catalyst/modules/livecd_stage1_target.py
@@ -3,7 +3,7 @@ LiveCD stage1 target
 """
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class livecd_stage1_target(generic_stage_target):
diff --git a/catalyst/modules/livecd_stage2_target.py b/catalyst/modules/livecd_stage2_target.py
index c74c16d..8595ffc 100644
--- a/catalyst/modules/livecd_stage2_target.py
+++ b/catalyst/modules/livecd_stage2_target.py
@@ -4,7 +4,7 @@ LiveCD stage2 target, builds upon previous LiveCD stage1 tarball
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
 import os,string,types,stat,shutil
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class livecd_stage2_target(generic_stage_target):
diff --git a/catalyst/modules/netboot2_target.py b/catalyst/modules/netboot2_target.py
index 1ab7e7d..2b3cd20 100644
--- a/catalyst/modules/netboot2_target.py
+++ b/catalyst/modules/netboot2_target.py
@@ -4,7 +4,7 @@ netboot target, version 2
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
 import os,string,types
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class netboot2_target(generic_stage_target):
diff --git a/catalyst/modules/netboot_target.py b/catalyst/modules/netboot_target.py
index ff2c81f..9d01b7e 100644
--- a/catalyst/modules/netboot_target.py
+++ b/catalyst/modules/netboot_target.py
@@ -4,7 +4,7 @@ netboot target, version 1
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
 import os,string,types
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class netboot_target(generic_stage_target):
diff --git a/catalyst/modules/snapshot_target.py b/catalyst/modules/snapshot_target.py
index ba1bab5..d1b9e40 100644
--- a/catalyst/modules/snapshot_target.py
+++ b/catalyst/modules/snapshot_target.py
@@ -3,7 +3,7 @@ Snapshot target
 """
 
 import os
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class snapshot_target(generic_stage_target):
diff --git a/catalyst/modules/stage1_target.py b/catalyst/modules/stage1_target.py
index 5f4ffa0..8d5a674 100644
--- a/catalyst/modules/stage1_target.py
+++ b/catalyst/modules/stage1_target.py
@@ -3,7 +3,7 @@ stage1 target
 """
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class stage1_target(generic_stage_target):
diff --git a/catalyst/modules/stage2_target.py b/catalyst/modules/stage2_target.py
index 803ec59..0168718 100644
--- a/catalyst/modules/stage2_target.py
+++ b/catalyst/modules/stage2_target.py
@@ -3,7 +3,7 @@ stage2 target, builds upon previous stage1 tarball
 """
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class stage2_target(generic_stage_target):
diff --git a/catalyst/modules/stage3_target.py b/catalyst/modules/stage3_target.py
index 4d3a008..89edd66 100644
--- a/catalyst/modules/stage3_target.py
+++ b/catalyst/modules/stage3_target.py
@@ -3,7 +3,7 @@ stage3 target, builds upon previous stage2/stage3 tarball
 """
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class stage3_target(generic_stage_target):
diff --git a/catalyst/modules/stage4_target.py b/catalyst/modules/stage4_target.py
index ce41b2d..9168f2e 100644
--- a/catalyst/modules/stage4_target.py
+++ b/catalyst/modules/stage4_target.py
@@ -3,7 +3,7 @@ stage4 target, builds upon previous stage3/stage4 tarball
 """
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class stage4_target(generic_stage_target):
diff --git a/catalyst/modules/tinderbox_target.py b/catalyst/modules/tinderbox_target.py
index ca55610..1d31989 100644
--- a/catalyst/modules/tinderbox_target.py
+++ b/catalyst/modules/tinderbox_target.py
@@ -3,7 +3,7 @@ Tinderbox target
 """
 # NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
 
-from catalyst_support import *
+from catalyst.support import *
 from generic_stage_target import *
 
 class tinderbox_target(generic_stage_target):
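[Editor's note: the target-module hunks above are all the same mechanical one-line change. A quick way to confirm nothing still references the old flat module name after such a rename (the demo tree is illustrative, not the real checkout):]

```shell
# Build a tiny stand-in tree carrying the new-style import, then verify
# no file under it still mentions the old flat name catalyst_support.
demo=$(mktemp -d)
mkdir -p "$demo/catalyst/modules"
printf 'from catalyst.support import *\n' > "$demo/catalyst/modules/stage1_target.py"
if grep -rq "catalyst_support" "$demo/catalyst"; then
    echo "stale imports found"
else
    echo "imports clean"
fi
```

Note that `grep` for `catalyst_support` does not match the new `catalyst.support` form, so a clean tree prints "imports clean".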
diff --git a/catalyst/support.py b/catalyst/support.py
new file mode 100644
index 0000000..316dfa3
--- /dev/null
+++ b/catalyst/support.py
@@ -0,0 +1,718 @@
+
+import sys,string,os,types,re,signal,traceback,time
+#import md5,sha
+selinux_capable = False
+#userpriv_capable = (os.getuid() == 0)
+#fakeroot_capable = False
+BASH_BINARY             = "/bin/bash"
+
+try:
+        import resource
+        max_fd_limit=resource.getrlimit(resource.RLIMIT_NOFILE)[0]
+except SystemExit, e:
+        raise
+except:
+        # hokay, no resource module.
+        max_fd_limit=256
+
+# pids this process knows of.
+spawned_pids = []
+
+try:
+        import urllib
+except SystemExit, e:
+        raise
+
+def cleanup(pids,block_exceptions=True):
+        """function to go through and reap the list of pids passed to it"""
+        global spawned_pids
+        if type(pids) == int:
+                pids = [pids]
+        for x in pids:
+                try:
+                        os.kill(x,signal.SIGTERM)
+                        if os.waitpid(x,os.WNOHANG)[1] == 0:
+                                # feisty bugger, still alive.
+                                os.kill(x,signal.SIGKILL)
+                                os.waitpid(x,0)
+
+                except OSError, oe:
+                        if block_exceptions:
+                                pass
+                        if oe.errno not in (10,3):
+                                raise oe
+                except SystemExit:
+                        raise
+                except Exception:
+                        if block_exceptions:
+                                pass
+                try:                    spawned_pids.remove(x)
+                except ValueError:      pass
+
+
+
+# a function to turn a string of non-printable characters into a string of
+# hex characters
+def hexify(str):
+	hexStr = string.hexdigits
+	r = ''
+	for ch in str:
+		i = ord(ch)
+		r = r + hexStr[(i >> 4) & 0xF] + hexStr[i & 0xF]
+	return r
+# hexify()
+
+def generate_contents(file,contents_function="auto",verbose=False):
+	try:
+		_ = contents_function
+		if _ == 'auto' and file.endswith('.iso'):
+			_ = 'isoinfo-l'
+		if (_ in ['tar-tv','auto']):
+			if file.endswith('.tgz') or file.endswith('.tar.gz'):
+				_ = 'tar-tvz'
+			elif file.endswith('.tbz2') or file.endswith('.tar.bz2'):
+				_ = 'tar-tvj'
+			elif file.endswith('.tar'):
+				_ = 'tar-tv'
+
+		if _ == 'auto':
+			warn('File %r has unknown type for automatic detection.' % (file, ))
+			return None
+		else:
+			contents_function = _
+			_ = contents_map[contents_function]
+			return _[0](file,_[1],verbose)
+	except:
+		raise CatalystError,\
+			"Error generating contents, is appropriate utility (%s) installed on your system?" \
+			% (contents_function, )
+
+def calc_contents(file,cmd,verbose):
+	args={ 'file': file }
+	cmd=cmd % dict(args)
+	a=os.popen(cmd)
+	mylines=a.readlines()
+	a.close()
+	result="".join(mylines)
+	if verbose:
+		print result
+	return result
+
+# This contents map must be defined after the function calc_contents.
+# It is possible to call different functions from this, but they must be
+# defined before contents_map.
+# Key: [function, cmd]
+contents_map={
+	# 'find' is disabled because it requires the source path, which is not
+	# always available
+	#"find"		:[calc_contents,"find %(path)s"],
+	"tar-tv":[calc_contents,"tar tvf %(file)s"],
+	"tar-tvz":[calc_contents,"tar tvzf %(file)s"],
+	"tar-tvj":[calc_contents,"tar -I lbzip2 -tvf %(file)s"],
+	"isoinfo-l":[calc_contents,"isoinfo -l -i %(file)s"],
+	# isoinfo-f should be a last resort only
+	"isoinfo-f":[calc_contents,"isoinfo -f -i %(file)s"],
+}
+
+def generate_hash(file,hash_function="crc32",verbose=False):
+	try:
+		return hash_map[hash_function][0](file,hash_map[hash_function][1],hash_map[hash_function][2],\
+			hash_map[hash_function][3],verbose)
+	except:
+		raise CatalystError,"Error generating hash, is appropriate utility installed on your system?"
+
+def calc_hash(file,cmd,cmd_args,id_string="MD5",verbose=False):
+	a=os.popen(cmd+" "+cmd_args+" "+file)
+	mylines=a.readlines()
+	a.close()
+	mylines=mylines[0].split()
+	result=mylines[0]
+	if verbose:
+		print id_string+" (%s) = %s" % (file, result)
+	return result
+
+def calc_hash2(file,cmd,cmd_args,id_string="MD5",verbose=False):
+	a=os.popen(cmd+" "+cmd_args+" "+file)
+	header=a.readline()
+	mylines=a.readline().split()
+	hash=mylines[0]
+	short_file=os.path.split(mylines[1])[1]
+	a.close()
+	result=header+hash+"  "+short_file+"\n"
+	if verbose:
+		print header+" (%s) = %s" % (short_file, result)
+	return result
+
+# This hash map must be defined after the functions calc_hash and calc_hash2.
+# It is possible to call different functions from this, but they must be
+# defined before hash_map.
+# Key: [function, cmd, cmd_args, print string]
+hash_map={
+	 "adler32":[calc_hash2,"shash","-a ADLER32","ADLER32"],\
+	 "crc32":[calc_hash2,"shash","-a CRC32","CRC32"],\
+	 "crc32b":[calc_hash2,"shash","-a CRC32B","CRC32B"],\
+	 "gost":[calc_hash2,"shash","-a GOST","GOST"],\
+	 "haval128":[calc_hash2,"shash","-a HAVAL128","HAVAL128"],\
+	 "haval160":[calc_hash2,"shash","-a HAVAL160","HAVAL160"],\
+	 "haval192":[calc_hash2,"shash","-a HAVAL192","HAVAL192"],\
+	 "haval224":[calc_hash2,"shash","-a HAVAL224","HAVAL224"],\
+	 "haval256":[calc_hash2,"shash","-a HAVAL256","HAVAL256"],\
+	 "md2":[calc_hash2,"shash","-a MD2","MD2"],\
+	 "md4":[calc_hash2,"shash","-a MD4","MD4"],\
+	 "md5":[calc_hash2,"shash","-a MD5","MD5"],\
+	 "ripemd128":[calc_hash2,"shash","-a RIPEMD128","RIPEMD128"],\
+	 "ripemd160":[calc_hash2,"shash","-a RIPEMD160","RIPEMD160"],\
+	 "ripemd256":[calc_hash2,"shash","-a RIPEMD256","RIPEMD256"],\
+	 "ripemd320":[calc_hash2,"shash","-a RIPEMD320","RIPEMD320"],\
+	 "sha1":[calc_hash2,"shash","-a SHA1","SHA1"],\
+	 "sha224":[calc_hash2,"shash","-a SHA224","SHA224"],\
+	 "sha256":[calc_hash2,"shash","-a SHA256","SHA256"],\
+	 "sha384":[calc_hash2,"shash","-a SHA384","SHA384"],\
+	 "sha512":[calc_hash2,"shash","-a SHA512","SHA512"],\
+	 "snefru128":[calc_hash2,"shash","-a SNEFRU128","SNEFRU128"],\
+	 "snefru256":[calc_hash2,"shash","-a SNEFRU256","SNEFRU256"],\
+	 "tiger":[calc_hash2,"shash","-a TIGER","TIGER"],\
+	 "tiger128":[calc_hash2,"shash","-a TIGER128","TIGER128"],\
+	 "tiger160":[calc_hash2,"shash","-a TIGER160","TIGER160"],\
+	 "whirlpool":[calc_hash2,"shash","-a WHIRLPOOL","WHIRLPOOL"],\
+	 }
+
+def read_from_clst(file):
+	line = ''
+	myline = ''
+	try:
+		myf=open(file,"r")
+	except:
+		return -1
+		#raise CatalystError, "Could not open file "+file
+	for line in myf.readlines():
+	    #line = string.replace(line, "\n", "") # drop newline
+	    myline = myline + line
+	myf.close()
+	return myline
+# read_from_clst
+
+# these should never be touched
+required_build_targets=["generic_target","generic_stage_target"]
+
+# new build types should be added here
+valid_build_targets=["stage1_target","stage2_target","stage3_target","stage4_target","grp_target",
+			"livecd_stage1_target","livecd_stage2_target","embedded_target",
+			"tinderbox_target","snapshot_target","netboot_target","netboot2_target"]
+
+required_config_file_values=["storedir","sharedir","distdir","portdir"]
+valid_config_file_values=required_config_file_values[:]
+valid_config_file_values.append("PKGCACHE")
+valid_config_file_values.append("KERNCACHE")
+valid_config_file_values.append("CCACHE")
+valid_config_file_values.append("DISTCC")
+valid_config_file_values.append("ICECREAM")
+valid_config_file_values.append("ENVSCRIPT")
+valid_config_file_values.append("AUTORESUME")
+valid_config_file_values.append("FETCH")
+valid_config_file_values.append("CLEAR_AUTORESUME")
+valid_config_file_values.append("options")
+valid_config_file_values.append("DEBUG")
+valid_config_file_values.append("VERBOSE")
+valid_config_file_values.append("PURGE")
+valid_config_file_values.append("PURGEONLY")
+valid_config_file_values.append("SNAPCACHE")
+valid_config_file_values.append("snapshot_cache")
+valid_config_file_values.append("hash_function")
+valid_config_file_values.append("digests")
+valid_config_file_values.append("contents")
+valid_config_file_values.append("SEEDCACHE")
+
+verbosity=1
+
+def list_bashify(mylist):
+	if type(mylist)==types.StringType:
+		mypack=[mylist]
+	else:
+		mypack=mylist[:]
+	for x in range(0,len(mypack)):
+		# surround args with quotes for passing to bash,
+		# allows things like "<" to remain intact
+		mypack[x]="'"+mypack[x]+"'"
+	mypack=string.join(mypack)
+	return mypack
+
+def list_to_string(mylist):
+	if type(mylist)==types.StringType:
+		mypack=[mylist]
+	else:
+		mypack=mylist[:]
+	for x in range(0,len(mypack)):
+		# surround args with quotes for passing to bash,
+		# allows things like "<" to remain intact
+		mypack[x]=mypack[x]
+	mypack=string.join(mypack)
+	return mypack
+
+class CatalystError(Exception):
+	def __init__(self, message):
+		if message:
+			(type,value)=sys.exc_info()[:2]
+			if value!=None:
+				print
+				print traceback.print_exc(file=sys.stdout)
+			print
+			print "!!! catalyst: "+message
+			print
+
+class LockInUse(Exception):
+	def __init__(self, message):
+		if message:
+			#(type,value)=sys.exc_info()[:2]
+			#if value!=None:
+			    #print
+			    #kprint traceback.print_exc(file=sys.stdout)
+			print
+			print "!!! catalyst lock file in use: "+message
+			print
+
+def die(msg=None):
+	warn(msg)
+	sys.exit(1)
+
+def warn(msg):
+	print "!!! catalyst: "+msg
+
+def find_binary(myc):
+	"""look through the environmental path for an executable file named whatever myc is"""
+        # this sucks. badly.
+        p=os.getenv("PATH")
+        if p == None:
+                return None
+        for x in p.split(":"):
+                #if it exists, and is executable
+                if os.path.exists("%s/%s" % (x,myc)) and os.stat("%s/%s" % (x,myc))[0] & 0x0248:
+                        return "%s/%s" % (x,myc)
+        return None
+
+def spawn_bash(mycommand,env={},debug=False,opt_name=None,**keywords):
+	"""spawn mycommand as an argument to bash"""
+	args=[BASH_BINARY]
+	if not opt_name:
+	    opt_name=mycommand.split()[0]
+	if "BASH_ENV" not in env:
+	    env["BASH_ENV"] = "/etc/spork/is/not/valid/profile.env"
+	if debug:
+	    args.append("-x")
+	args.append("-c")
+	args.append(mycommand)
+	return spawn(args,env=env,opt_name=opt_name,**keywords)
+
+#def spawn_get_output(mycommand,spawn_type=spawn,raw_exit_code=False,emulate_gso=True, \
+#        collect_fds=[1],fd_pipes=None,**keywords):
+
+def spawn_get_output(mycommand,raw_exit_code=False,emulate_gso=True, \
+        collect_fds=[1],fd_pipes=None,**keywords):
+        """call spawn, collecting the output to fd's specified in collect_fds list
+        emulate_gso is a compatibility hack to emulate commands.getstatusoutput's return, minus the
+        requirement it always be a bash call (spawn_type controls the actual spawn call), and minus the
+        'lets let log only stdin and let stderr slide by'.
+
+        emulate_gso was deprecated from the day it was added, so convert your code over.
+        spawn_type is the passed in function to call- typically spawn_bash, spawn, spawn_sandbox, or spawn_fakeroot"""
+        global selinux_capable
+        pr,pw=os.pipe()
+
+        #if type(spawn_type) not in [types.FunctionType, types.MethodType]:
+        #        s="spawn_type must be passed a function, not",type(spawn_type),spawn_type
+        #        raise Exception,s
+
+        if fd_pipes==None:
+                fd_pipes={}
+                fd_pipes[0] = 0
+
+        for x in collect_fds:
+                fd_pipes[x] = pw
+        keywords["returnpid"]=True
+
+        mypid=spawn_bash(mycommand,fd_pipes=fd_pipes,**keywords)
+        os.close(pw)
+        if type(mypid) != types.ListType:
+                os.close(pr)
+                return [mypid, "%s: No such file or directory" % mycommand.split()[0]]
+
+        fd=os.fdopen(pr,"r")
+        mydata=fd.readlines()
+        fd.close()
+        if emulate_gso:
+                mydata=string.join(mydata)
+                if len(mydata) and mydata[-1] == "\n":
+                        mydata=mydata[:-1]
+        retval=os.waitpid(mypid[0],0)[1]
+        cleanup(mypid)
+        if raw_exit_code:
+                return [retval,mydata]
+        retval=process_exit_code(retval)
+        return [retval, mydata]
+
+# base spawn function
+def spawn(mycommand,env={},raw_exit_code=False,opt_name=None,fd_pipes=None,returnpid=False,\
+	 uid=None,gid=None,groups=None,umask=None,logfile=None,path_lookup=True,\
+	 selinux_context=None, raise_signals=False, func_call=False):
+	"""base fork/execve function.
+	mycommand is the desired command- if you need a command to execute in a bash/sandbox/fakeroot
+	environment, use the appropriate spawn call.  This is a straight fork/exec code path.
+	Can either have a tuple, or a string passed in.  If uid/gid/groups/umask specified, it changes
+	the forked process to said value.  If path_lookup is on, a non-absolute command will be converted
+	to an absolute command, otherwise it returns None.
+
+	selinux_context is the desired context, dependent on selinux being available.
+	opt_name controls the name the processor goes by.
+	fd_pipes controls which file descriptor numbers are left open in the forked process- it's a dict of
+	current fd's raw fd #, desired #.
+
+	func_call is a boolean for specifying to execute a python function- use spawn_func instead.
+	raise_signals is questionable.  Basically throw an exception if signal'd.  No exception is thrown
+	if raw_input is on.
+
+	logfile overloads the specified fd's to write to a tee process which logs to logfile
+	returnpid returns the relevant pids (a list, including the logging process if logfile is on).
+
+	non-returnpid calls to spawn will block till the process has exited, returning the exitcode/signal
+	raw_exit_code controls whether the actual waitpid result is returned, or interpreted."""
+
+	myc=''
+	if not func_call:
+		if type(mycommand)==types.StringType:
+			mycommand=mycommand.split()
+		myc = mycommand[0]
+		if not os.access(myc, os.X_OK):
+			if not path_lookup:
+				return None
+			myc = find_binary(myc)
+			if myc == None:
+			    return None
+        mypid=[]
+	if logfile:
+		pr,pw=os.pipe()
+		mypid.extend(spawn(('tee','-i','-a',logfile),returnpid=True,fd_pipes={0:pr,1:1,2:2}))
+		retval=os.waitpid(mypid[-1],os.WNOHANG)[1]
+		if retval != 0:
+			# he's dead jim.
+			if raw_exit_code:
+				return retval
+			return process_exit_code(retval)
+
+		if fd_pipes == None:
+			fd_pipes={}
+			fd_pipes[0] = 0
+		fd_pipes[1]=pw
+		fd_pipes[2]=pw
+
+	if not opt_name:
+		opt_name = mycommand[0]
+	myargs=[opt_name]
+	myargs.extend(mycommand[1:])
+	global spawned_pids
+	mypid.append(os.fork())
+	if mypid[-1] != 0:
+		#log the bugger.
+		spawned_pids.extend(mypid)
+
+	if mypid[-1] == 0:
+		if func_call:
+			spawned_pids = []
+
+		# this may look ugly, but basically it moves file descriptors around to ensure no
+		# handles that are needed are accidentally closed during the final dup2 calls.
+		trg_fd=[]
+		if type(fd_pipes)==types.DictType:
+			src_fd=[]
+			k=fd_pipes.keys()
+			k.sort()
+
+			#build list of which fds will be where, and where they are at currently
+			for x in k:
+				trg_fd.append(x)
+				src_fd.append(fd_pipes[x])
+
+			# run through said list dup'ing descriptors so that they won't be waxed
+			# by other dup calls.
+			for x in range(0,len(trg_fd)):
+				if trg_fd[x] == src_fd[x]:
+					continue
+				if trg_fd[x] in src_fd[x+1:]:
+					new=os.dup2(trg_fd[x],max(src_fd) + 1)
+					os.close(trg_fd[x])
+					try:
+						while True:
+							src_fd[src_fd.index(trg_fd[x])]=new
+					except SystemExit, e:
+						raise
+					except:
+						pass
+
+			# transfer the fds to their final pre-exec position.
+			for x in range(0,len(trg_fd)):
+				if trg_fd[x] != src_fd[x]:
+					os.dup2(src_fd[x], trg_fd[x])
+		else:
+			trg_fd=[0,1,2]
+
+		# wax all open descriptors that weren't requested be left open.
+		for x in range(0,max_fd_limit):
+			if x not in trg_fd:
+				try:
+					os.close(x)
+                                except SystemExit, e:
+                                        raise
+                                except:
+                                        pass
+
+                # note this order must be preserved- can't change gid/groups if you change uid first.
+                if selinux_capable and selinux_context:
+                        import selinux
+                        selinux.setexec(selinux_context)
+                if gid:
+                        os.setgid(gid)
+                if groups:
+                        os.setgroups(groups)
+                if uid:
+                        os.setuid(uid)
+                if umask:
+                        os.umask(umask)
+                else:
+                        os.umask(022)
+
+                try:
+                        #print "execing", myc, myargs
+                        if func_call:
+                                # either use a passed-in func for interpreting the results, or return if no exception.
+                                # note the passed-in list and dict are expanded.
+                                if len(mycommand) == 4:
+                                        os._exit(mycommand[3](mycommand[0](*mycommand[1],**mycommand[2])))
+                                try:
+                                        mycommand[0](*mycommand[1],**mycommand[2])
+                                except Exception,e:
+                                        print "caught exception",e," in forked func",mycommand[0]
+                                sys.exit(0)
+
+			#os.execvp(myc,myargs)
+                        os.execve(myc,myargs,env)
+                except SystemExit, e:
+                        raise
+                except Exception, e:
+                        if not func_call:
+                                raise str(e)+":\n   "+myc+" "+string.join(myargs)
+                        print "func call failed"
+
+                # If the execve fails, we need to report it, and exit
+                # *carefully* --- report error here
+                os._exit(1)
+                sys.exit(1)
+                return # should never get reached
+
+	# if we were logging, kill the pipes.
+	if logfile:
+		os.close(pr)
+		os.close(pw)
+
+	if returnpid:
+		return mypid
+
+	# loop through pids (typically one, unless logging), either waiting on their death, or waxing them
+	# if the main pid (mycommand) returned badly.
+	while len(mypid):
+		retval=os.waitpid(mypid[-1],0)[1]
+		if retval != 0:
+			cleanup(mypid[0:-1],block_exceptions=False)
+			# at this point we've killed all other kid pids generated via this call.
+			# return now.
+			if raw_exit_code:
+				return retval
+			return process_exit_code(retval,throw_signals=raise_signals)
+		else:
+			mypid.pop(-1)
+	cleanup(mypid)
+	return 0
+
+def cmd(mycmd,myexc="",env={}):
+	try:
+		sys.stdout.flush()
+		retval=spawn_bash(mycmd,env)
+		if retval != 0:
+			raise CatalystError,myexc
+	except:
+		raise
+
+def process_exit_code(retval,throw_signals=False):
+	"""process a waitpid returned exit code, returning exit code if it exit'd, or the
+	signal if it died from signalling
+	if throw_signals is on, it raises a SystemExit if the process was signaled.
+	This is intended for usage with threads, although at the moment you can't signal individual
+	threads in python, only the master thread, so it's a questionable option."""
+	if (retval & 0xff)==0:
+		return retval >> 8 # return exit code
+	else:
+		if throw_signals:
+			#use systemexit, since portage is stupid about exception catching.
+			raise SystemExit()
+		return (retval & 0xff) << 8 # interrupted by signal
+
+def file_locate(settings,filelist,expand=1):
+	#if expand=1, non-absolute paths will be accepted and
+	# expanded to os.getcwd()+"/"+localpath if file exists
+	for myfile in filelist:
+		if myfile not in settings:
+			#filenames such as cdtar are optional, so we don't assume the variable is defined.
+			pass
+		else:
+			if len(settings[myfile])==0:
+				raise CatalystError, "File variable \""+myfile+"\" has a length of zero (not specified.)"
+			if settings[myfile][0]=="/":
+				if not os.path.exists(settings[myfile]):
+					raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]
+			elif expand and os.path.exists(os.getcwd()+"/"+settings[myfile]):
+				settings[myfile]=os.getcwd()+"/"+settings[myfile]
+			else:
+				raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]+" (2nd try)"
+"""
+Spec file format:
+
+The spec file format is a very simple and easy-to-use format for storing data. Here's an example
+file:
+
+item1: value1
+item2: foo bar oni
+item3:
+	meep
+	bark
+	gleep moop
+
+This file would be interpreted as defining three items: item1, item2 and item3. item1 would contain
+the string value "value1". item2 would contain an ordered list [ "foo", "bar", "oni" ]. item3
+would contain an ordered list as well: [ "meep", "bark", "gleep", "moop" ]. It's important to note
+that the order of multiple-value items is preserved, but the order in which the items themselves are
+defined is not. In other words, "foo", "bar", "oni" ordering is preserved but "item1"
+"item2" "item3" ordering is not, as the item strings are stored in a dictionary (hash).
+"""
+
+def parse_makeconf(mylines):
+	mymakeconf={}
+	pos=0
+	pat=re.compile("([0-9a-zA-Z_]*)=(.*)")
+	while pos<len(mylines):
+		if len(mylines[pos])<=1:
+			#skip blanks
+			pos += 1
+			continue
+		if mylines[pos][0] in ["#"," ","\t"]:
+			#skip indented lines, comments
+			pos += 1
+			continue
+		else:
+			myline=mylines[pos]
+			mobj=pat.match(myline)
+			pos += 1
+			if mobj and mobj.group(2):
+				clean_string = re.sub(r"\"",r"",mobj.group(2))
+				mymakeconf[mobj.group(1)]=clean_string
+	return mymakeconf
+
+def read_makeconf(mymakeconffile):
+	if os.path.exists(mymakeconffile):
+		try:
+			try:
+				import snakeoil.fileutils
+				return snakeoil.fileutils.read_bash_dict(mymakeconffile, sourcing_command="source")
+			except ImportError:
+				try:
+					import portage.util
+					return portage.util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
+				except:
+					try:
+						import portage_util
+						return portage_util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
+					except ImportError:
+						myf=open(mymakeconffile,"r")
+						mylines=myf.readlines()
+						myf.close()
+						return parse_makeconf(mylines)
+		except:
+			raise CatalystError, "Could not parse make.conf file "+mymakeconffile
+	else:
+		makeconf={}
+		return makeconf
+
+def msg(mymsg,verblevel=1):
+	if verbosity>=verblevel:
+		print mymsg
+
+def pathcompare(path1,path2):
+	# Change double slashes to slash
+	path1 = re.sub(r"//",r"/",path1)
+	path2 = re.sub(r"//",r"/",path2)
+	# Removing ending slash
+	path1 = re.sub("/$","",path1)
+	path2 = re.sub("/$","",path2)
+
+	if path1 == path2:
+		return 1
+	return 0
+
+def ismount(path):
+	"enhanced to handle bind mounts"
+	if os.path.ismount(path):
+		return 1
+	a=os.popen("mount")
+	mylines=a.readlines()
+	a.close()
+	for line in mylines:
+		mysplit=line.split()
+		if pathcompare(path,mysplit[2]):
+			return 1
+	return 0
+
+def addl_arg_parse(myspec,addlargs,requiredspec,validspec):
+	"helper function to help targets parse additional arguments"
+	global valid_config_file_values
+
+	messages = []
+	for x in addlargs.keys():
+		if x not in validspec and x not in valid_config_file_values and x not in requiredspec:
+			messages.append("Argument \""+x+"\" not recognized.")
+		else:
+			myspec[x]=addlargs[x]
+
+	for x in requiredspec:
+		if x not in myspec:
+			messages.append("Required argument \""+x+"\" not specified.")
+
+	if messages:
+		raise CatalystError, '\n\tAlso: '.join(messages)
+
+def touch(myfile):
+	try:
+		myf=open(myfile,"w")
+		myf.close()
+	except IOError:
+		raise CatalystError, "Could not touch "+myfile+"."
+
+def countdown(secs=5, doing="Starting"):
+	if secs:
+		print ">>> Waiting",secs,"seconds before starting..."
+		print ">>> (Control-C to abort)...\n"+doing+" in: ",
+		ticks=range(secs)
+		ticks.reverse()
+		for sec in ticks:
+			sys.stdout.write(str(sec+1)+" ")
+			sys.stdout.flush()
+			time.sleep(1)
+		print
+
+def normpath(mypath):
+	TrailingSlash=False
+	if mypath[-1] == "/":
+		TrailingSlash=True
+	newpath = os.path.normpath(mypath)
+	if len(newpath) > 1:
+		if newpath[:2] == "//":
+			newpath = newpath[1:]
+	if TrailingSlash:
+		newpath=newpath+'/'
+	return newpath
-- 
1.8.3.2


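The spec file format documented in the catalyst_support docstring above ("item: value" lines plus indented continuation lines) is simple enough to illustrate with a tiny standalone parser. The sketch below is Python 3 and purely illustrative; `parse_spec` is a hypothetical name, not the parser catalyst itself ships:

```python
# Minimal sketch of the spec file format described in the patch above:
# "key: value" lines define items, indented lines continue the most
# recent item, and single-word items collapse to a plain string.
# This is NOT catalyst's own parser, just an illustration of the format.

def parse_spec(text):
    items = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if line[0] in (" ", "\t"):
            # continuation line: its words extend the open item
            if current is not None:
                items[current].extend(line.split())
            continue
        key, _, value = line.partition(":")
        current = key.strip()
        items[current] = value.split()
    # a single-value item is stored as a string, as the docstring says
    return {k: (v[0] if len(v) == 1 else v) for k, v in items.items()}

example = """\
item1: value1
item2: foo bar oni
item3:
\tmeep
\tbark
\tgleep moop
"""
print(parse_spec(example))
```

Note that, exactly as the docstring warns, word order within each item survives, while item order is only an artifact of the dictionary used for storage.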

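process_exit_code() in the patch above decodes the raw status word returned by os.waitpid(): the low byte holds the terminating signal (0 for a normal exit) and the next byte holds the exit code. A quick Python 3 sketch of that arithmetic, checked against the stdlib's own os.WIFEXITED/os.WEXITSTATUS macros (`decode_status` is an illustrative name, and unlike the patch it masks off the core-dump bit rather than shifting the signal byte):

```python
import os

# Decode a raw waitpid() status word the way process_exit_code() does:
# low byte = signal number (0 means a normal exit),
# next byte = the process exit code.
def decode_status(retval):
    if (retval & 0xff) == 0:
        return ("exited", retval >> 8)      # normal exit -> exit code
    return ("signaled", retval & 0x7f)      # killed by a signal

# 0x0100 means "exited with code 1"; 9 means "killed by SIGKILL".
print(decode_status(0x0100))
print(decode_status(9))
```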
^ permalink raw reply related	[flat|nested] 15+ messages in thread

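The normpath() helper moved in the patch above differs from os.path.normpath in two small ways: it preserves a trailing slash and collapses a leading "//". A Python 3 sketch of the same idea (the function name is illustrative; posixpath is used so the behaviour matches the POSIX paths catalyst deals with regardless of platform):

```python
import posixpath

# Sketch of the trailing-slash-preserving normpath from the patch above.
def normpath_keep_slash(path):
    trailing = path.endswith("/")
    new = posixpath.normpath(path)
    # posixpath.normpath keeps a leading "//" (special under POSIX);
    # collapse it to a single "/" exactly as the patch does.
    if len(new) > 1 and new[:2] == "//":
        new = new[1:]
    if trailing and not new.endswith("/"):
        new += "/"
    return new

print(normpath_keep_slash("/usr//portage/"))   # trailing slash survives
print(normpath_keep_slash("//tmp/kerncache"))  # leading "//" collapsed
```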
* [gentoo-catalyst] [PATCH 3/5] Rename the modules subpkg to targets, to better reflect what it contains.
  2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of modules, into the catalyst namespace Brian Dolbec
@ 2014-01-12  1:46 ` Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory Brian Dolbec
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  1:46 UTC (permalink / raw
  To: gentoo-catalyst; +Cc: Brian Dolbec

---
 catalyst/main.py                         |    6 +-
 catalyst/modules/__init__.py             |    1 -
 catalyst/modules/embedded_target.py      |   51 -
 catalyst/modules/generic_stage_target.py | 1741 ------------------------------
 catalyst/modules/generic_target.py       |   11 -
 catalyst/modules/grp_target.py           |  118 --
 catalyst/modules/livecd_stage1_target.py |   75 --
 catalyst/modules/livecd_stage2_target.py |  148 ---
 catalyst/modules/netboot2_target.py      |  166 ---
 catalyst/modules/netboot_target.py       |  128 ---
 catalyst/modules/snapshot_target.py      |   91 --
 catalyst/modules/stage1_target.py        |   97 --
 catalyst/modules/stage2_target.py        |   66 --
 catalyst/modules/stage3_target.py        |   31 -
 catalyst/modules/stage4_target.py        |   43 -
 catalyst/modules/tinderbox_target.py     |   48 -
 catalyst/targets/__init__.py             |    1 +
 catalyst/targets/embedded_target.py      |   51 +
 catalyst/targets/generic_stage_target.py | 1741 ++++++++++++++++++++++++++++++
 catalyst/targets/generic_target.py       |   11 +
 catalyst/targets/grp_target.py           |  118 ++
 catalyst/targets/livecd_stage1_target.py |   75 ++
 catalyst/targets/livecd_stage2_target.py |  148 +++
 catalyst/targets/netboot2_target.py      |  166 +++
 catalyst/targets/netboot_target.py       |  128 +++
 catalyst/targets/snapshot_target.py      |   91 ++
 catalyst/targets/stage1_target.py        |   97 ++
 catalyst/targets/stage2_target.py        |   66 ++
 catalyst/targets/stage3_target.py        |   31 +
 catalyst/targets/stage4_target.py        |   43 +
 catalyst/targets/tinderbox_target.py     |   48 +
 31 files changed, 2818 insertions(+), 2818 deletions(-)
 delete mode 100644 catalyst/modules/__init__.py
 delete mode 100644 catalyst/modules/embedded_target.py
 delete mode 100644 catalyst/modules/generic_stage_target.py
 delete mode 100644 catalyst/modules/generic_target.py
 delete mode 100644 catalyst/modules/grp_target.py
 delete mode 100644 catalyst/modules/livecd_stage1_target.py
 delete mode 100644 catalyst/modules/livecd_stage2_target.py
 delete mode 100644 catalyst/modules/netboot2_target.py
 delete mode 100644 catalyst/modules/netboot_target.py
 delete mode 100644 catalyst/modules/snapshot_target.py
 delete mode 100644 catalyst/modules/stage1_target.py
 delete mode 100644 catalyst/modules/stage2_target.py
 delete mode 100644 catalyst/modules/stage3_target.py
 delete mode 100644 catalyst/modules/stage4_target.py
 delete mode 100644 catalyst/modules/tinderbox_target.py
 create mode 100644 catalyst/targets/__init__.py
 create mode 100644 catalyst/targets/embedded_target.py
 create mode 100644 catalyst/targets/generic_stage_target.py
 create mode 100644 catalyst/targets/generic_target.py
 create mode 100644 catalyst/targets/grp_target.py
 create mode 100644 catalyst/targets/livecd_stage1_target.py
 create mode 100644 catalyst/targets/livecd_stage2_target.py
 create mode 100644 catalyst/targets/netboot2_target.py
 create mode 100644 catalyst/targets/netboot_target.py
 create mode 100644 catalyst/targets/snapshot_target.py
 create mode 100644 catalyst/targets/stage1_target.py
 create mode 100644 catalyst/targets/stage2_target.py
 create mode 100644 catalyst/targets/stage3_target.py
 create mode 100644 catalyst/targets/stage4_target.py
 create mode 100644 catalyst/targets/tinderbox_target.py

diff --git a/catalyst/main.py b/catalyst/main.py
index 7b66dab..082d7d9 100644
--- a/catalyst/main.py
+++ b/catalyst/main.py
@@ -201,11 +201,11 @@ def import_modules():
 	targetmap={}
 
 	try:
-		module_dir = __selfpath__ + "/modules/"
+		module_dir = __selfpath__ + "/targets/"
 		for x in required_build_targets:
 			try:
 				fh=open(module_dir + x + ".py")
-				module=imp.load_module(x, fh,"modules/" + x + ".py",
+				module=imp.load_module(x, fh,"targets/" + x + ".py",
 					(".py", "r", imp.PY_SOURCE))
 				fh.close()
 
@@ -215,7 +215,7 @@ def import_modules():
 		for x in valid_build_targets:
 			try:
 				fh=open(module_dir + x + ".py")
-				module=imp.load_module(x, fh, "modules/" + x + ".py",
+				module=imp.load_module(x, fh, "targets/" + x + ".py",
 					(".py", "r", imp.PY_SOURCE))
 				module.register(targetmap)
 				fh.close()
diff --git a/catalyst/modules/__init__.py b/catalyst/modules/__init__.py
deleted file mode 100644
index 8b13789..0000000
--- a/catalyst/modules/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/catalyst/modules/embedded_target.py b/catalyst/modules/embedded_target.py
deleted file mode 100644
index 7cee7a6..0000000
--- a/catalyst/modules/embedded_target.py
+++ /dev/null
@@ -1,51 +0,0 @@
-"""
-Enbedded target, similar to the stage2 target, builds upon a stage2 tarball.
-
-A stage2 tarball is unpacked, but instead
-of building a stage3, it emerges @system into another directory
-inside the stage2 system.  This way, we do not have to emerge GCC/portage
-into the staged system.
-It may sound complicated but basically it runs
-ROOT=/tmp/submerge emerge --something foo bar .
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-import os,string,imp,types,shutil
-from catalyst.support import *
-from generic_stage_target import *
-from stat import *
-
-class embedded_target(generic_stage_target):
-	"""
-	Builder class for embedded target
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=[]
-		self.valid_values.extend(["embedded/empty","embedded/rm","embedded/unmerge","embedded/fs-prepare","embedded/fs-finish","embedded/mergeroot","embedded/packages","embedded/fs-type","embedded/runscript","boot/kernel","embedded/linuxrc"])
-		self.valid_values.extend(["embedded/use"])
-		if "embedded/fs-type" in addlargs:
-			self.valid_values.append("embedded/fs-ops")
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		self.set_build_kernel_vars(addlargs)
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["dir_setup","unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir",\
-					"portage_overlay","bind","chroot_setup",\
-					"setup_environment","build_kernel","build_packages",\
-					"bootloader","root_overlay","fsscript","unmerge",\
-					"unbind","remove","empty","clean","capture","clear_autoresume"]
-
-	def set_stage_path(self):
-		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+"/tmp/mergeroot")
-		print "embedded stage path is "+self.settings["stage_path"]
-
-	def set_root_path(self):
-		self.settings["root_path"]=normpath("/tmp/mergeroot")
-		print "embedded root path is "+self.settings["root_path"]
-
-def register(foo):
-	foo.update({"embedded":embedded_target})
-	return foo
diff --git a/catalyst/modules/generic_stage_target.py b/catalyst/modules/generic_stage_target.py
deleted file mode 100644
index 2c1a921..0000000
--- a/catalyst/modules/generic_stage_target.py
+++ /dev/null
@@ -1,1741 +0,0 @@
-import os,string,imp,types,shutil
-from catalyst.support import *
-from generic_target import *
-from stat import *
-from catalyst.lock import LockDir
-
-
-PORT_LOGDIR_CLEAN = \
-	'find "${PORT_LOGDIR}" -type f ! -name "summary.log*" -mtime +30 -delete'
-
-TARGET_MOUNTS_DEFAULTS = {
-	"ccache": "/var/tmp/ccache",
-	"dev": "/dev",
-	"devpts": "/dev/pts",
-	"distdir": "/usr/portage/distfiles",
-	"icecream": "/usr/lib/icecc/bin",
-	"kerncache": "/tmp/kerncache",
-	"packagedir": "/usr/portage/packages",
-	"portdir": "/usr/portage",
-	"port_tmpdir": "/var/tmp/portage",
-	"port_logdir": "/var/log/portage",
-	"proc": "/proc",
-	"shm": "/dev/shm",
-	}
-
-SOURCE_MOUNTS_DEFAULTS = {
-	"dev": "/dev",
-	"devpts": "/dev/pts",
-	"distdir": "/usr/portage/distfiles",
-	"portdir": "/usr/portage",
-	"port_tmpdir": "tmpfs",
-	"proc": "/proc",
-	"shm": "shmfs",
-	}
-
-
-class generic_stage_target(generic_target):
-	"""
-	This class does all of the chroot setup, copying of files, etc. It is
-	the driver class for pretty much everything that Catalyst does.
-	"""
-	def __init__(self,myspec,addlargs):
-		self.required_values.extend(["version_stamp","target","subarch",\
-			"rel_type","profile","snapshot","source_subpath"])
-
-		self.valid_values.extend(["version_stamp","target","subarch",\
-			"rel_type","profile","snapshot","source_subpath","portage_confdir",\
-			"cflags","cxxflags","ldflags","cbuild","hostuse","portage_overlay",\
-			"distcc_hosts","makeopts","pkgcache_path","kerncache_path"])
-
-		self.set_valid_build_kernel_vars(addlargs)
-		generic_target.__init__(self,myspec,addlargs)
-
-		"""
-		The semantics of subarchmap and machinemap changed a bit in 2.0.3 to
-		work better with vapier's CBUILD stuff. I've removed the "monolithic"
-		machinemap from this file and split up its contents amongst the
-		various arch/foo.py files.
-
-		When register() is called on each module in the arch/ dir, it now
-		returns a tuple instead of acting on the subarchmap dict that is
-		passed to it. The tuple contains the values that were previously
-		added to subarchmap as well as a new list of CHOSTs that go along
-		with that arch. This allows us to build machinemap on the fly based
-		on the keys in subarchmap and the values of the 2nd list returned
-		(tmpmachinemap).
-
-		Also, after talking with vapier. I have a slightly better idea of what
-		certain variables are used for and what they should be set to. Neither
-		'buildarch' or 'hostarch' are used directly, so their value doesn't
-		really matter. They are just compared to determine if we are
-		cross-compiling. Because of this, they are just set to the name of the
-		module in arch/ that the subarch is part of to make things simpler.
-		The entire build process is still based off of 'subarch' like it was
-		previously. -agaffney
-		"""
-
-		self.archmap = {}
-		self.subarchmap = {}
-		machinemap = {}
-		arch_dir = self.settings["PythonDir"] + "/arch/"
-		for x in [x[:-3] for x in os.listdir(arch_dir) if x.endswith(".py")]:
-			if x == "__init__":
-				continue
-			try:
-				fh=open(arch_dir + x + ".py")
-				"""
-				This next line loads the plugin as a module and assigns it to
-				archmap[x]
-				"""
-				self.archmap[x]=imp.load_module(x,fh,"../arch/" + x + ".py",
-					(".py", "r", imp.PY_SOURCE))
-				"""
-				This next line registers all the subarches supported in the
-				plugin
-				"""
-				tmpsubarchmap, tmpmachinemap = self.archmap[x].register()
-				self.subarchmap.update(tmpsubarchmap)
-				for machine in tmpmachinemap:
-					machinemap[machine] = x
-				for subarch in tmpsubarchmap:
-					machinemap[subarch] = x
-				fh.close()
-			except IOError:
-				"""
-				This message should probably change a bit, since everything in
-				the dir should load just fine. If it doesn't, it's probably a
-				syntax error in the module
-				"""
-				msg("Can't find/load " + x + ".py plugin in " + arch_dir)
-
-		if "chost" in self.settings:
-			hostmachine = self.settings["chost"].split("-")[0]
-			if hostmachine not in machinemap:
-				raise CatalystError, "Unknown host machine type "+hostmachine
-			self.settings["hostarch"]=machinemap[hostmachine]
-		else:
-			hostmachine = self.settings["subarch"]
-			if hostmachine in machinemap:
-				hostmachine = machinemap[hostmachine]
-			self.settings["hostarch"]=hostmachine
-		if "cbuild" in self.settings:
-			buildmachine = self.settings["cbuild"].split("-")[0]
-		else:
-			buildmachine = os.uname()[4]
-		if buildmachine not in machinemap:
-			raise CatalystError, "Unknown build machine type "+buildmachine
-		self.settings["buildarch"]=machinemap[buildmachine]
-		self.settings["crosscompile"]=(self.settings["hostarch"]!=\
-			self.settings["buildarch"])
-
-		""" Call arch constructor, pass our settings """
-		try:
-			self.arch=self.subarchmap[self.settings["subarch"]](self.settings)
-		except KeyError:
-			print "Invalid subarch: "+self.settings["subarch"]
-			print "Choose one of the following:",
-			for x in self.subarchmap:
-				print x,
-			print
-			sys.exit(2)
-
-		print "Using target:",self.settings["target"]
-		""" Print a nice informational message """
-		if self.settings["buildarch"]==self.settings["hostarch"]:
-			print "Building natively for",self.settings["hostarch"]
-		elif self.settings["crosscompile"]:
-			print "Cross-compiling on",self.settings["buildarch"],\
-				"for different machine type",self.settings["hostarch"]
-		else:
-			print "Building on",self.settings["buildarch"],\
-				"for alternate personality type",self.settings["hostarch"]
-
-		""" This must be set first as other set_ options depend on this """
-		self.set_spec_prefix()
-
-		""" Define all of our core variables """
-		self.set_target_profile()
-		self.set_target_subpath()
-		self.set_source_subpath()
-
-		""" Set paths """
-		self.set_snapshot_path()
-		self.set_root_path()
-		self.set_source_path()
-		self.set_snapcache_path()
-		self.set_chroot_path()
-		self.set_autoresume_path()
-		self.set_dest_path()
-		self.set_stage_path()
-		self.set_target_path()
-
-		self.set_controller_file()
-		self.set_action_sequence()
-		self.set_use()
-		self.set_cleanables()
-		self.set_iso_volume_id()
-		self.set_build_kernel_vars()
-		self.set_fsscript()
-		self.set_install_mask()
-		self.set_rcadd()
-		self.set_rcdel()
-		self.set_cdtar()
-		self.set_fstype()
-		self.set_fsops()
-		self.set_iso()
-		self.set_packages()
-		self.set_rm()
-		self.set_linuxrc()
-		self.set_busybox_config()
-		self.set_overlay()
-		self.set_portage_overlay()
-		self.set_root_overlay()
-
-		"""
-		This next line checks to make sure that the specified variables exist
-		on disk.
-		"""
-		#pdb.set_trace()
-		file_locate(self.settings,["source_path","snapshot_path","distdir"],\
-			expand=0)
-		""" If we are using portage_confdir, check that as well. """
-		if "portage_confdir" in self.settings:
-			file_locate(self.settings,["portage_confdir"],expand=0)
-
-		""" Setup our mount points """
-		# initialize our target mounts.
-		self.target_mounts = TARGET_MOUNTS_DEFAULTS.copy()
-
-		self.mounts = ["proc", "dev", "portdir", "distdir", "port_tmpdir"]
-		# initialize our source mounts
-		self.mountmap = SOURCE_MOUNTS_DEFAULTS.copy()
-		# update them from settings
-		self.mountmap["distdir"] = self.settings["distdir"]
-		self.mountmap["portdir"] = normpath("/".join([
-			self.settings["snapshot_cache_path"],
-			self.settings["repo_name"],
-			]))
-		if "SNAPCACHE" not in self.settings:
-			self.mounts.remove("portdir")
-			#self.mountmap["portdir"] = None
-		if os.uname()[0] == "Linux":
-			self.mounts.append("devpts")
-			self.mounts.append("shm")
-
-		self.set_mounts()
-
-		"""
-		Configure any user specified options (either in catalyst.conf or on
-		the command line).
-		"""
-		if "PKGCACHE" in self.settings:
-			self.set_pkgcache_path()
-			print "Location of the package cache is "+\
-				self.settings["pkgcache_path"]
-			self.mounts.append("packagedir")
-			self.mountmap["packagedir"] = self.settings["pkgcache_path"]
-
-		if "KERNCACHE" in self.settings:
-			self.set_kerncache_path()
-			print "Location of the kerncache is "+\
-				self.settings["kerncache_path"]
-			self.mounts.append("kerncache")
-			self.mountmap["kerncache"] = self.settings["kerncache_path"]
-
-		if "CCACHE" in self.settings:
-			if "CCACHE_DIR" in os.environ:
-				ccdir=os.environ["CCACHE_DIR"]
-				del os.environ["CCACHE_DIR"]
-			else:
-				ccdir="/root/.ccache"
-			if not os.path.isdir(ccdir):
-				raise CatalystError,\
-					"Compiler cache support can't be enabled (can't find "+\
-					ccdir+")"
-			self.mounts.append("ccache")
-			self.mountmap["ccache"] = ccdir
-			""" for the chroot: """
-			self.env["CCACHE_DIR"] = self.target_mounts["ccache"]
-
-		if "ICECREAM" in self.settings:
-			self.mounts.append("icecream")
-			self.mountmap["icecream"] = self.settings["icecream"]
-			self.env["PATH"] = self.target_mounts["icecream"] + ":" + \
-				self.env["PATH"]
-
-		if "port_logdir" in self.settings:
-			self.mounts.append("port_logdir")
-			self.mountmap["port_logdir"] = self.settings["port_logdir"]
-			self.env["PORT_LOGDIR"] = self.settings["port_logdir"]
-			self.env["PORT_LOGDIR_CLEAN"] = PORT_LOGDIR_CLEAN
-
-	def override_cbuild(self):
-		if "CBUILD" in self.makeconf:
-			self.settings["CBUILD"]=self.makeconf["CBUILD"]
-
-	def override_chost(self):
-		if "CHOST" in self.makeconf:
-			self.settings["CHOST"]=self.makeconf["CHOST"]
-
-	def override_cflags(self):
-		if "CFLAGS" in self.makeconf:
-			self.settings["CFLAGS"]=self.makeconf["CFLAGS"]
-
-	def override_cxxflags(self):
-		if "CXXFLAGS" in self.makeconf:
-			self.settings["CXXFLAGS"]=self.makeconf["CXXFLAGS"]
-
-	def override_ldflags(self):
-		if "LDFLAGS" in self.makeconf:
-			self.settings["LDFLAGS"]=self.makeconf["LDFLAGS"]
-
-	def set_install_mask(self):
-		if "install_mask" in self.settings:
-			if type(self.settings["install_mask"])!=types.StringType:
-				self.settings["install_mask"]=\
-					string.join(self.settings["install_mask"])
-
-	def set_spec_prefix(self):
-		self.settings["spec_prefix"]=self.settings["target"]
-
-	def set_target_profile(self):
-		self.settings["target_profile"]=self.settings["profile"]
-
-	def set_target_subpath(self):
-		self.settings["target_subpath"]=self.settings["rel_type"]+"/"+\
-				self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
-				self.settings["version_stamp"]
-
-	def set_source_subpath(self):
-		if type(self.settings["source_subpath"])!=types.StringType:
-			raise CatalystError,\
-				"source_subpath should have been a string. Perhaps you have something wrong in your spec file?"
-
-	def set_pkgcache_path(self):
-		if "pkgcache_path" in self.settings:
-			if type(self.settings["pkgcache_path"])!=types.StringType:
-				self.settings["pkgcache_path"]=\
-					normpath(string.join(self.settings["pkgcache_path"]))
-		else:
-			self.settings["pkgcache_path"]=\
-				normpath(self.settings["storedir"]+"/packages/"+\
-				self.settings["target_subpath"]+"/")
-
-	def set_kerncache_path(self):
-		if "kerncache_path" in self.settings:
-			if type(self.settings["kerncache_path"])!=types.StringType:
-				self.settings["kerncache_path"]=\
-					normpath(string.join(self.settings["kerncache_path"]))
-		else:
-			self.settings["kerncache_path"]=normpath(self.settings["storedir"]+\
-				"/kerncache/"+self.settings["target_subpath"]+"/")
-
-	def set_target_path(self):
-		self.settings["target_path"] = normpath(self.settings["storedir"] +
-			"/builds/" + self.settings["target_subpath"].rstrip('/') +
-			".tar.bz2")
-		if "AUTORESUME" in self.settings\
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"setup_target_path"):
-			print \
-				"Resume point detected, skipping target path setup operation..."
-		else:
-			""" First clean up any existing target stuff """
-			# XXX WTF are we removing the old tarball before we start building the
-			# XXX new one? If the build fails, you don't want to be left with
-			# XXX nothing at all
-#			if os.path.isfile(self.settings["target_path"]):
-#				cmd("rm -f "+self.settings["target_path"],\
-#					"Could not remove existing file: "\
-#					+self.settings["target_path"],env=self.env)
-			touch(self.settings["autoresume_path"]+"setup_target_path")
-
-			if not os.path.exists(self.settings["storedir"]+"/builds/"):
-				os.makedirs(self.settings["storedir"]+"/builds/")
-
-	def set_fsscript(self):
-		if self.settings["spec_prefix"]+"/fsscript" in self.settings:
-			self.settings["fsscript"]=\
-				self.settings[self.settings["spec_prefix"]+"/fsscript"]
-			del self.settings[self.settings["spec_prefix"]+"/fsscript"]
-
-	def set_rcadd(self):
-		if self.settings["spec_prefix"]+"/rcadd" in self.settings:
-			self.settings["rcadd"]=\
-				self.settings[self.settings["spec_prefix"]+"/rcadd"]
-			del self.settings[self.settings["spec_prefix"]+"/rcadd"]
-
-	def set_rcdel(self):
-		if self.settings["spec_prefix"]+"/rcdel" in self.settings:
-			self.settings["rcdel"]=\
-				self.settings[self.settings["spec_prefix"]+"/rcdel"]
-			del self.settings[self.settings["spec_prefix"]+"/rcdel"]
-
-	def set_cdtar(self):
-		if self.settings["spec_prefix"]+"/cdtar" in self.settings:
-			self.settings["cdtar"]=\
-				normpath(self.settings[self.settings["spec_prefix"]+"/cdtar"])
-			del self.settings[self.settings["spec_prefix"]+"/cdtar"]
-
-	def set_iso(self):
-		if self.settings["spec_prefix"]+"/iso" in self.settings:
-			if self.settings[self.settings["spec_prefix"]+"/iso"].startswith('/'):
-				self.settings["iso"]=\
-					normpath(self.settings[self.settings["spec_prefix"]+"/iso"])
-			else:
-				# This automatically prepends the build dir to the ISO output path
-				# if it doesn't start with a /
-				self.settings["iso"] = normpath(self.settings["storedir"] + \
-					"/builds/" + self.settings["rel_type"] + "/" + \
-					self.settings[self.settings["spec_prefix"]+"/iso"])
-			del self.settings[self.settings["spec_prefix"]+"/iso"]
-
-	def set_fstype(self):
-		if self.settings["spec_prefix"]+"/fstype" in self.settings:
-			self.settings["fstype"]=\
-				self.settings[self.settings["spec_prefix"]+"/fstype"]
-			del self.settings[self.settings["spec_prefix"]+"/fstype"]
-
-		if "fstype" not in self.settings:
-			self.settings["fstype"]="normal"
-			for x in self.valid_values:
-				if x ==  self.settings["spec_prefix"]+"/fstype":
-					print "\n"+self.settings["spec_prefix"]+\
-						"/fstype is being set to the default of \"normal\"\n"
-
-	def set_fsops(self):
-		if "fstype" in self.settings:
-			self.valid_values.append("fsops")
-			if self.settings["spec_prefix"]+"/fsops" in self.settings:
-				self.settings["fsops"]=\
-					self.settings[self.settings["spec_prefix"]+"/fsops"]
-				del self.settings[self.settings["spec_prefix"]+"/fsops"]
-
-	def set_source_path(self):
-		if "SEEDCACHE" in self.settings\
-			and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+\
-				self.settings["source_subpath"]+"/")):
-			self.settings["source_path"]=normpath(self.settings["storedir"]+\
-				"/tmp/"+self.settings["source_subpath"]+"/")
-		else:
-			self.settings["source_path"] = normpath(self.settings["storedir"] +
-				"/builds/" + self.settings["source_subpath"].rstrip("/") +
-				".tar.bz2")
-			if os.path.isfile(self.settings["source_path"]):
-				# XXX: Is this even necessary if the previous check passes?
-				if os.path.exists(self.settings["source_path"]):
-					self.settings["source_path_hash"]=\
-						generate_hash(self.settings["source_path"],\
-						hash_function=self.settings["hash_function"],\
-						verbose=False)
-		print "Source path set to "+self.settings["source_path"]
-		if os.path.isdir(self.settings["source_path"]):
-			print "\tIf this is not desired, remove this directory or turn off"
-			print "\tseedcache in the options of catalyst.conf the source path"
-			print "\twill then be "+\
-				normpath(self.settings["storedir"] + "/builds/" +
-					self.settings["source_subpath"].rstrip("/") + ".tar.bz2\n")
-
-	def set_dest_path(self):
-		if "root_path" in self.settings:
-			self.settings["destpath"]=normpath(self.settings["chroot_path"]+\
-				self.settings["root_path"])
-		else:
-			self.settings["destpath"]=normpath(self.settings["chroot_path"])
-
-	def set_cleanables(self):
-		self.settings["cleanables"]=["/etc/resolv.conf","/var/tmp/*","/tmp/*",\
-			"/root/*", self.settings["portdir"]]
-
-	def set_snapshot_path(self):
-		self.settings["snapshot_path"] = normpath(self.settings["storedir"] +
-			"/snapshots/" + self.settings["snapshot_name"] +
-			self.settings["snapshot"].rstrip("/") + ".tar.xz")
-
-		if os.path.exists(self.settings["snapshot_path"]):
-			self.settings["snapshot_path_hash"]=\
-				generate_hash(self.settings["snapshot_path"],\
-				hash_function=self.settings["hash_function"],verbose=False)
-		else:
-			self.settings["snapshot_path"]=normpath(self.settings["storedir"]+\
-				"/snapshots/" + self.settings["snapshot_name"] +
-				self.settings["snapshot"].rstrip("/") + ".tar.bz2")
-
-			if os.path.exists(self.settings["snapshot_path"]):
-				self.settings["snapshot_path_hash"]=\
-					generate_hash(self.settings["snapshot_path"],\
-					hash_function=self.settings["hash_function"],verbose=False)
-
-	def set_snapcache_path(self):
-		if "SNAPCACHE" in self.settings:
-			self.settings["snapshot_cache_path"]=\
-				normpath(self.settings["snapshot_cache"]+"/"+\
-				self.settings["snapshot"])
-			self.snapcache_lock=\
-				LockDir(self.settings["snapshot_cache_path"])
-			print "Caching snapshot to "+self.settings["snapshot_cache_path"]
-
-	def set_chroot_path(self):
-		"""
-		NOTE: the trailing slash has been removed
-		Things *could* break if you don't use a proper join()
-		"""
-		self.settings["chroot_path"]=normpath(self.settings["storedir"]+\
-			"/tmp/"+self.settings["target_subpath"])
-		self.chroot_lock=LockDir(self.settings["chroot_path"])
-
-	def set_autoresume_path(self):
-		self.settings["autoresume_path"]=normpath(self.settings["storedir"]+\
-			"/tmp/"+self.settings["rel_type"]+"/"+".autoresume-"+\
-			self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
-			self.settings["version_stamp"]+"/")
-		if "AUTORESUME" in self.settings:
-			print "The autoresume path is " + self.settings["autoresume_path"]
-		if not os.path.exists(self.settings["autoresume_path"]):
-			os.makedirs(self.settings["autoresume_path"],0755)
-
-	def set_controller_file(self):
-		self.settings["controller_file"]=normpath(self.settings["sharedir"]+\
-			"/targets/"+self.settings["target"]+"/"+self.settings["target"]+\
-			"-controller.sh")
-
-	def set_iso_volume_id(self):
-		if self.settings["spec_prefix"]+"/volid" in self.settings:
-			self.settings["iso_volume_id"]=\
-				self.settings[self.settings["spec_prefix"]+"/volid"]
-			if len(self.settings["iso_volume_id"])>32:
-				raise CatalystError,\
-					"ISO volume ID must not exceed 32 characters."
-		else:
-			self.settings["iso_volume_id"]="catalyst "+self.settings["snapshot"]
-
-	def set_action_sequence(self):
-		""" Default action sequence for run method """
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-				"setup_confdir","portage_overlay",\
-				"base_dirs","bind","chroot_setup","setup_environment",\
-				"run_local","preclean","unbind","clean"]
-#		if "TARBALL" in self.settings or \
-#			"FETCH" not in self.settings:
-		if "FETCH" not in self.settings:
-			self.settings["action_sequence"].append("capture")
-		self.settings["action_sequence"].append("clear_autoresume")
-
-	def set_use(self):
-		if self.settings["spec_prefix"]+"/use" in self.settings:
-			self.settings["use"]=\
-				self.settings[self.settings["spec_prefix"]+"/use"]
-			del self.settings[self.settings["spec_prefix"]+"/use"]
-		if "use" not in self.settings:
-			self.settings["use"]=""
-		if type(self.settings["use"])==types.StringType:
-			self.settings["use"]=self.settings["use"].split()
-
-		# Force bindist when options ask for it
-		if "BINDIST" in self.settings:
-			self.settings["use"].append("bindist")
-
-	def set_stage_path(self):
-		self.settings["stage_path"]=normpath(self.settings["chroot_path"])
-
-	def set_mounts(self):
-		pass
-
-	def set_packages(self):
-		pass
-
-	def set_rm(self):
-		if self.settings["spec_prefix"]+"/rm" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/rm"])==types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/rm"]=\
-					self.settings[self.settings["spec_prefix"]+"/rm"].split()
-
-	def set_linuxrc(self):
-		if self.settings["spec_prefix"]+"/linuxrc" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/linuxrc"])==types.StringType:
-				self.settings["linuxrc"]=\
-					self.settings[self.settings["spec_prefix"]+"/linuxrc"]
-				del self.settings[self.settings["spec_prefix"]+"/linuxrc"]
-
-	def set_busybox_config(self):
-		if self.settings["spec_prefix"]+"/busybox_config" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/busybox_config"])==types.StringType:
-				self.settings["busybox_config"]=\
-					self.settings[self.settings["spec_prefix"]+"/busybox_config"]
-				del self.settings[self.settings["spec_prefix"]+"/busybox_config"]
-
-	def set_portage_overlay(self):
-		if "portage_overlay" in self.settings:
-			if type(self.settings["portage_overlay"])==types.StringType:
-				self.settings["portage_overlay"]=\
-					self.settings["portage_overlay"].split()
-			print "portage_overlay directories are set to: \""+\
-				string.join(self.settings["portage_overlay"])+"\""
-
-	def set_overlay(self):
-		if self.settings["spec_prefix"]+"/overlay" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/overlay"])==types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/overlay"]=\
-					self.settings[self.settings["spec_prefix"]+\
-					"/overlay"].split()
-
-	def set_root_overlay(self):
-		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+\
-				"/root_overlay"])==types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/root_overlay"]=\
-					self.settings[self.settings["spec_prefix"]+\
-					"/root_overlay"].split()
-
-	def set_root_path(self):
-		""" ROOT= variable for emerges """
-		self.settings["root_path"]="/"
-
-	def set_valid_build_kernel_vars(self,addlargs):
-		if "boot/kernel" in addlargs:
-			if type(addlargs["boot/kernel"])==types.StringType:
-				loopy=[addlargs["boot/kernel"]]
-			else:
-				loopy=addlargs["boot/kernel"]
-
-			for x in loopy:
-				self.valid_values.append("boot/kernel/"+x+"/aliases")
-				self.valid_values.append("boot/kernel/"+x+"/config")
-				self.valid_values.append("boot/kernel/"+x+"/console")
-				self.valid_values.append("boot/kernel/"+x+"/extraversion")
-				self.valid_values.append("boot/kernel/"+x+"/gk_action")
-				self.valid_values.append("boot/kernel/"+x+"/gk_kernargs")
-				self.valid_values.append("boot/kernel/"+x+"/initramfs_overlay")
-				self.valid_values.append("boot/kernel/"+x+"/machine_type")
-				self.valid_values.append("boot/kernel/"+x+"/sources")
-				self.valid_values.append("boot/kernel/"+x+"/softlevel")
-				self.valid_values.append("boot/kernel/"+x+"/use")
-				self.valid_values.append("boot/kernel/"+x+"/packages")
-				if "boot/kernel/"+x+"/packages" in addlargs:
-					if type(addlargs["boot/kernel/"+x+\
-						"/packages"])==types.StringType:
-						addlargs["boot/kernel/"+x+"/packages"]=\
-							[addlargs["boot/kernel/"+x+"/packages"]]
-
-	def set_build_kernel_vars(self):
-		if self.settings["spec_prefix"]+"/gk_mainargs" in self.settings:
-			self.settings["gk_mainargs"]=\
-				self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
-			del self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
-
-	def kill_chroot_pids(self):
-		print "Checking for processes running in chroot and killing them."
-
-		"""
-		Force environment variables to be exported so script can see them
-		"""
-		self.setup_environment()
-
-		if os.path.exists(self.settings["sharedir"]+\
-			"/targets/support/kill-chroot-pids.sh"):
-			cmd("/bin/bash "+self.settings["sharedir"]+\
-				"/targets/support/kill-chroot-pids.sh",\
-				"kill-chroot-pids script failed.",env=self.env)
-
-	def mount_safety_check(self):
-		"""
-		Check and verify that none of our paths in mypath are mounted. We don't
-		want to clean up with things still mounted, and this allows us to check.
-		Raises CatalystError if something is still mounted and cannot be
-		auto-unmounted.
-		"""
-
-		if not os.path.exists(self.settings["chroot_path"]):
-			return
-
-		print "self.mounts =", self.mounts
-		for x in self.mounts:
-			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
-			print "mount_safety_check() x =", x, target
-			if not os.path.exists(target):
-				continue
-
-			if ismount(target):
-				""" Something is still mounted """
-				try:
-					print target + " is still mounted; performing auto-bind-umount...",
-					""" Try to umount stuff ourselves """
-					self.unbind()
-					if ismount(target):
-						raise CatalystError, "Auto-unbind failed for " + target
-					else:
-						print "Auto-unbind successful..."
-				except CatalystError:
-					raise CatalystError, "Unable to auto-unbind " + target
-
-	def unpack(self):
-		unpack=True
-
-		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+\
-			"unpack")
-
-		if "SEEDCACHE" in self.settings:
-			if os.path.isdir(self.settings["source_path"]):
-				""" SEEDCACHE Is a directory, use rsync """
-				unpack_cmd="rsync -a --delete "+self.settings["source_path"]+\
-					" "+self.settings["chroot_path"]
-				display_msg="\nStarting rsync from "+\
-					self.settings["source_path"]+"\nto "+\
-					self.settings["chroot_path"]+\
-					" (This may take some time) ...\n"
-				error_msg="Rsync of "+self.settings["source_path"]+" to "+\
-					self.settings["chroot_path"]+" failed."
-			else:
-				""" SEEDCACHE is not a directory, try untar'ing """
-				print "Referenced SEEDCACHE does not appear to be a directory, trying to untar..."
-				display_msg="\nStarting tar extract from "+\
-					self.settings["source_path"]+"\nto "+\
-					self.settings["chroot_path"]+\
-						" (This may take some time) ...\n"
-				if "bz2" == self.settings["source_path"][-3:]:
-					unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
-						self.settings["chroot_path"]
-				else:
-					unpack_cmd="tar xpf "+self.settings["source_path"]+" -C "+\
-						self.settings["chroot_path"]
-				error_msg="Tarball extraction of "+\
-					self.settings["source_path"]+" to "+\
-					self.settings["chroot_path"]+" failed."
-		else:
-			""" No SEEDCACHE, use tar """
-			display_msg="\nStarting tar extract from "+\
-				self.settings["source_path"]+"\nto "+\
-				self.settings["chroot_path"]+\
-				" (This may take some time) ...\n"
-			if "bz2" == self.settings["source_path"][-3:]:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
-					self.settings["chroot_path"]
-			else:
-				unpack_cmd="tar xpf "+self.settings["source_path"]+" -C "+\
-					self.settings["chroot_path"]
-			error_msg="Tarball extraction of "+self.settings["source_path"]+\
-				" to "+self.settings["chroot_path"]+" failed."
-
-		if "AUTORESUME" in self.settings:
-			if os.path.isdir(self.settings["source_path"]) \
-				and os.path.exists(self.settings["autoresume_path"]+"unpack"):
-				""" Autoresume is valid, SEEDCACHE is valid """
-				unpack=False
-				invalid_snapshot=False
-
-			elif os.path.isfile(self.settings["source_path"]) \
-				and self.settings["source_path_hash"]==clst_unpack_hash:
-				""" Autoresume is valid, tarball is valid """
-				unpack=False
-				invalid_snapshot=True
-
-			elif os.path.isdir(self.settings["source_path"]) \
-				and not os.path.exists(self.settings["autoresume_path"]+\
-				"unpack"):
-				""" Autoresume is invalid, SEEDCACHE """
-				unpack=True
-				invalid_snapshot=False
-
-			elif os.path.isfile(self.settings["source_path"]) \
-				and self.settings["source_path_hash"]!=clst_unpack_hash:
-				""" Autoresume is invalid, tarball """
-				unpack=True
-				invalid_snapshot=True
-		else:
-			""" No autoresume, SEEDCACHE """
-			if "SEEDCACHE" in self.settings:
-				""" SEEDCACHE so let's run rsync and let it clean up """
-				if os.path.isdir(self.settings["source_path"]):
-					unpack=True
-					invalid_snapshot=False
-				elif os.path.isfile(self.settings["source_path"]):
-					""" Tarball so unpack and remove anything already there """
-					unpack=True
-					invalid_snapshot=True
-			else:
-				""" No autoresume, no SEEDCACHE """
-				""" Tarball so unpack and remove anything already there """
-				if os.path.isfile(self.settings["source_path"]):
-					unpack=True
-					invalid_snapshot=True
-				elif os.path.isdir(self.settings["source_path"]):
-					""" We should never reach this, so something is very wrong """
-					raise CatalystError,\
-						"source path is a dir but seedcache is not enabled"
-
-		if unpack:
-			self.mount_safety_check()
-
-			if invalid_snapshot:
-				if "AUTORESUME" in self.settings:
-					print "No Valid Resume point detected, cleaning up..."
-
-				self.clear_autoresume()
-				self.clear_chroot()
-
-			if not os.path.exists(self.settings["chroot_path"]):
-				os.makedirs(self.settings["chroot_path"])
-
-			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
-				os.makedirs(self.settings["chroot_path"]+"/tmp",01777)
-
-			if "PKGCACHE" in self.settings:
-				if not os.path.exists(self.settings["pkgcache_path"]):
-					os.makedirs(self.settings["pkgcache_path"],0755)
-
-			if "KERNCACHE" in self.settings:
-				if not os.path.exists(self.settings["kerncache_path"]):
-					os.makedirs(self.settings["kerncache_path"],0755)
-
-			print display_msg
-			cmd(unpack_cmd,error_msg,env=self.env)
-
-			if "source_path_hash" in self.settings:
-				myf=open(self.settings["autoresume_path"]+"unpack","w")
-				myf.write(self.settings["source_path_hash"])
-				myf.close()
-			else:
-				touch(self.settings["autoresume_path"]+"unpack")
-		else:
-			print "Resume point detected, skipping unpack operation..."
-
-	def unpack_snapshot(self):
-		unpack=True
-		snapshot_hash=read_from_clst(self.settings["autoresume_path"]+\
-			"unpack_portage")
-
-		if "SNAPCACHE" in self.settings:
-			snapshot_cache_hash=\
-				read_from_clst(self.settings["snapshot_cache_path"]+\
-				"catalyst-hash")
-			destdir=self.settings["snapshot_cache_path"]
-			if "bz2" == self.settings["snapshot_path"][-3:]:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+destdir
-			else:
-				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+destdir
-			unpack_errmsg="Error unpacking snapshot"
-			cleanup_msg="Cleaning up invalid snapshot cache at \n\t"+\
-				self.settings["snapshot_cache_path"]+\
-				" (This can take a long time)..."
-			cleanup_errmsg="Error removing existing snapshot cache directory."
-			self.snapshot_lock_object=self.snapcache_lock
-
-			if self.settings["snapshot_path_hash"]==snapshot_cache_hash:
-				print "Valid snapshot cache, skipping unpack of portage tree..."
-				unpack=False
-		else:
-			destdir = normpath(self.settings["chroot_path"] + self.settings["portdir"])
-			cleanup_errmsg="Error removing existing snapshot directory."
-			cleanup_msg=\
-				"Cleaning up existing portage tree (This can take a long time)..."
-			if "bz2" == self.settings["snapshot_path"][-3:]:
-				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+\
-					self.settings["chroot_path"]+"/usr"
-			else:
-				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+\
-					self.settings["chroot_path"]+"/usr"
-			unpack_errmsg="Error unpacking snapshot"
-
-			if "AUTORESUME" in self.settings \
-				and os.path.exists(self.settings["chroot_path"]+\
-					self.settings["portdir"]) \
-				and os.path.exists(self.settings["autoresume_path"]\
-					+"unpack_portage") \
-				and self.settings["snapshot_path_hash"] == snapshot_hash:
-					print \
-						"Valid Resume point detected, skipping unpack of portage tree..."
-					unpack=False
-
-		if unpack:
-			if "SNAPCACHE" in self.settings:
-				self.snapshot_lock_object.write_lock()
-			if os.path.exists(destdir):
-				print cleanup_msg
-				cleanup_cmd="rm -rf "+destdir
-				cmd(cleanup_cmd,cleanup_errmsg,env=self.env)
-			if not os.path.exists(destdir):
-				os.makedirs(destdir,0755)
-
-			print "Unpacking portage tree (This can take a long time) ..."
-			cmd(unpack_cmd,unpack_errmsg,env=self.env)
-
-			if "SNAPCACHE" in self.settings:
-				myf=open(self.settings["snapshot_cache_path"]+"catalyst-hash","w")
-				myf.write(self.settings["snapshot_path_hash"])
-				myf.close()
-			else:
-				print "Setting snapshot autoresume point"
-				myf=open(self.settings["autoresume_path"]+"unpack_portage","w")
-				myf.write(self.settings["snapshot_path_hash"])
-				myf.close()
-
-			if "SNAPCACHE" in self.settings:
-				self.snapshot_lock_object.unlock()
-
-	def config_profile_link(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"config_profile_link"):
-			print \
-				"Resume point detected, skipping config_profile_link operation..."
-		else:
-			# TODO: zmedico and I discussed making this a directory and pushing
-			# in a parent file, as well as other user-specified configuration.
-			print "Configuring profile link..."
-			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.profile",\
-					"Error zapping profile link",env=self.env)
-			cmd("mkdir -p "+self.settings["chroot_path"]+"/etc/portage/")
-			cmd("ln -sf ../.." + self.settings["portdir"] + "/profiles/" + \
-				self.settings["target_profile"]+" "+\
-				self.settings["chroot_path"]+"/etc/portage/make.profile",\
-				"Error creating profile link",env=self.env)
-			touch(self.settings["autoresume_path"]+"config_profile_link")
-
-	def setup_confdir(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"setup_confdir"):
-			print "Resume point detected, skipping setup_confdir operation..."
-		else:
-			if "portage_confdir" in self.settings:
-				print "Configuring /etc/portage..."
-				cmd("rsync -a "+self.settings["portage_confdir"]+"/ "+\
-					self.settings["chroot_path"]+"/etc/portage/",\
-					"Error copying /etc/portage",env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_confdir")
-
-	def portage_overlay(self):
-		""" We copy the contents of our overlays to /usr/local/portage """
-		if "portage_overlay" in self.settings:
-			for x in self.settings["portage_overlay"]:
-				if os.path.exists(x):
-					print "Copying overlay dir " +x
-					cmd("mkdir -p "+self.settings["chroot_path"]+\
-						self.settings["local_overlay"],\
-						"Could not make portage_overlay dir",env=self.env)
-					cmd("cp -R "+x+"/* "+self.settings["chroot_path"]+\
-						self.settings["local_overlay"],\
-						"Could not copy portage_overlay",env=self.env)
-
-	def root_overlay(self):
-		""" Copy over the root_overlay """
-		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
-			for x in self.settings[self.settings["spec_prefix"]+\
-				"/root_overlay"]:
-				if os.path.exists(x):
-					print "Copying root_overlay: "+x
-					cmd("rsync -a "+x+"/ "+\
-						self.settings["chroot_path"],\
-						self.settings["spec_prefix"]+"/root_overlay: "+x+\
-						" copy failed.",env=self.env)
-
-	def base_dirs(self):
-		pass
-
-	def bind(self):
-		for x in self.mounts:
-			#print "bind(); x =", x
-			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
-			if not os.path.exists(target):
-				os.makedirs(target, 0755)
-
-			if not os.path.exists(self.mountmap[x]):
-				if self.mountmap[x] not in ["tmpfs", "shmfs"]:
-					os.makedirs(self.mountmap[x], 0755)
-
-			src=self.mountmap[x]
-			retval=0
-			#print "bind(); src =", src
-			if "SNAPCACHE" in self.settings and x == "portdir":
-				self.snapshot_lock_object.read_lock()
-			if os.uname()[0] == "FreeBSD":
-				if src == "/dev":
-					cmd = "mount -t devfs none " + target
-					retval=os.system(cmd)
-				else:
-					cmd = "mount_nullfs " + src + " " + target
-					retval=os.system(cmd)
-			else:
-				if src == "tmpfs":
-					if "var_tmpfs_portage" in self.settings:
-						cmd = "mount -t tmpfs -o size=" + \
-							self.settings["var_tmpfs_portage"] + "G " + \
-							src + " " + target
-						retval=os.system(cmd)
-				elif src == "shmfs":
-					cmd = "mount -t tmpfs -o noexec,nosuid,nodev shm " + target
-					retval=os.system(cmd)
-				else:
-					cmd = "mount --bind " + src + " " + target
-					#print "bind(); cmd =", cmd
-					retval=os.system(cmd)
-			if retval!=0:
-				self.unbind()
-				raise CatalystError,"Couldn't bind mount " + src
-
-	def unbind(self):
-		ouch=0
-		mypath=self.settings["chroot_path"]
-		myrevmounts=self.mounts[:]
-		myrevmounts.reverse()
-		""" Unmount in reverse order for nested bind-mounts """
-		for x in myrevmounts:
-			target = normpath(mypath + self.target_mounts[x])
-			if not os.path.exists(target):
-				continue
-
-			if not ismount(target):
-				continue
-
-			retval=os.system("umount " + target)
-
-			if retval!=0:
-				warn("First attempt to unmount: " + target + " failed.")
-				warn("Killing any pids still running in the chroot")
-
-				self.kill_chroot_pids()
-
-				retval2 = os.system("umount " + target)
-				if retval2!=0:
-					ouch=1
-					warn("Couldn't umount bind mount: " + target)
-
-			if "SNAPCACHE" in self.settings and x == "portdir":
-				try:
-					"""
-					It's possible the snapshot lock object isn't created yet.
-					This is because mount safety check calls unbind before the
-					target is fully initialized
-					"""
-					self.snapshot_lock_object.unlock()
-				except:
-					pass
-		if ouch:
-			"""
-			if any bind mounts really failed, then we need to raise
-			this to potentially prevent an upcoming bash stage cleanup script
-			from wiping our bind mounts.
-			"""
-			raise CatalystError,\
-				"Couldn't umount one or more bind-mounts; aborting for safety."
-
-	def chroot_setup(self):
-		self.makeconf=read_makeconf(self.settings["chroot_path"]+\
-			"/etc/portage/make.conf")
-		self.override_cbuild()
-		self.override_chost()
-		self.override_cflags()
-		self.override_cxxflags()
-		self.override_ldflags()
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"chroot_setup"):
-			print "Resume point detected, skipping chroot_setup operation..."
-		else:
-			print "Setting up chroot..."
-
-			#self.makeconf=read_makeconf(self.settings["chroot_path"]+"/etc/portage/make.conf")
-
-			cmd("cp /etc/resolv.conf "+self.settings["chroot_path"]+"/etc",\
-				"Could not copy resolv.conf into place.",env=self.env)
-
-			""" Copy over the envscript, if applicable """
-			if "ENVSCRIPT" in self.settings:
-				if not os.path.exists(self.settings["ENVSCRIPT"]):
-					raise CatalystError,\
-						"Can't find envscript "+self.settings["ENVSCRIPT"]
-
-				print "\nWarning!!!!"
-				print "\tOverriding certain env variables may cause catastrophic failure."
-				print "\tIf your build fails look here first as the possible problem."
-				print "\tCatalyst assumes you know what you are doing when setting"
-				print "\t\tthese variables."
-				print "\tCatalyst Maintainers use VERY minimal envscripts if used at all"
-				print "\tYou have been warned\n"
-
-				cmd("cp "+self.settings["ENVSCRIPT"]+" "+\
-					self.settings["chroot_path"]+"/tmp/envscript",\
-					"Could not copy envscript into place.",env=self.env)
-
-			"""
-			Copy over /etc/hosts from the host in case there are any
-			specialties in there
-			"""
-			if os.path.exists(self.settings["chroot_path"]+"/etc/hosts"):
-				cmd("mv "+self.settings["chroot_path"]+"/etc/hosts "+\
-					self.settings["chroot_path"]+"/etc/hosts.catalyst",\
-					"Could not backup /etc/hosts",env=self.env)
-				cmd("cp /etc/hosts "+self.settings["chroot_path"]+"/etc/hosts",\
-					"Could not copy /etc/hosts",env=self.env)
-
-			""" Modify and write out make.conf (for the chroot) """
-			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.conf",\
-				"Could not remove "+self.settings["chroot_path"]+\
-				"/etc/portage/make.conf",env=self.env)
-			myf=open(self.settings["chroot_path"]+"/etc/portage/make.conf","w")
-			myf.write("# These settings were set by the catalyst build script that automatically\n# built this stage.\n")
-			myf.write("# Please consult /usr/share/portage/config/make.conf.example for a more\n# detailed example.\n")
-			if "CFLAGS" in self.settings:
-				myf.write('CFLAGS="'+self.settings["CFLAGS"]+'"\n')
-			if "CXXFLAGS" in self.settings:
-				if self.settings["CXXFLAGS"]!=self.settings["CFLAGS"]:
-					myf.write('CXXFLAGS="'+self.settings["CXXFLAGS"]+'"\n')
-				else:
-					myf.write('CXXFLAGS="${CFLAGS}"\n')
-			else:
-				myf.write('CXXFLAGS="${CFLAGS}"\n')
-
-			if "LDFLAGS" in self.settings:
-				myf.write("# LDFLAGS is unsupported.  USE AT YOUR OWN RISK!\n")
-				myf.write('LDFLAGS="'+self.settings["LDFLAGS"]+'"\n')
-			if "CBUILD" in self.settings:
-				myf.write("# This should not be changed unless you know exactly what you are doing.  You\n# should probably be using a different stage, instead.\n")
-				myf.write('CBUILD="'+self.settings["CBUILD"]+'"\n')
-
-			myf.write("# WARNING: Changing your CHOST is not something that should be done lightly.\n# Please consult http://www.gentoo.org/doc/en/change-chost.xml before changing.\n")
-			myf.write('CHOST="'+self.settings["CHOST"]+'"\n')
-
-			""" Figure out what our USE vars are for building """
-			myusevars=[]
-			if "HOSTUSE" in self.settings:
-				myusevars.extend(self.settings["HOSTUSE"])
-
-			if "use" in self.settings:
-				myusevars.extend(self.settings["use"])
-
-			if myusevars:
-				myf.write("# These are the USE flags that were used in addition to what is provided by the\n# profile used for building.\n")
-				myusevars = sorted(set(myusevars))
-				myf.write('USE="'+string.join(myusevars)+'"\n')
-				if '-*' in myusevars:
-					print "\nWarning!!!  "
-					print "\tThe use of -* in "+self.settings["spec_prefix"]+\
-						"/use will cause portage to ignore"
-					print "\tpackage.use in the profile and portage_confdir. You've been warned!"
-
-			myf.write('PORTDIR="%s"\n' % self.settings['portdir'])
-			myf.write('DISTDIR="%s"\n' % self.settings['distdir'])
-			myf.write('PKGDIR="%s"\n' % self.settings['packagedir'])
-
-			""" Setup the portage overlay """
-			if "portage_overlay" in self.settings:
-				myf.write('PORTDIR_OVERLAY="/usr/local/portage"\n')
-
-			myf.close()
-			cmd("cp "+self.settings["chroot_path"]+"/etc/portage/make.conf "+\
-				self.settings["chroot_path"]+"/etc/portage/make.conf.catalyst",\
-				"Could not backup /etc/portage/make.conf",env=self.env)
-			touch(self.settings["autoresume_path"]+"chroot_setup")
-
-	def fsscript(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"fsscript"):
-			print "Resume point detected, skipping fsscript operation..."
-		else:
-			if "fsscript" in self.settings:
-				if os.path.exists(self.settings["controller_file"]):
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" fsscript","fsscript script failed.",env=self.env)
-					touch(self.settings["autoresume_path"]+"fsscript")
-
-	def rcupdate(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"rcupdate"):
-			print "Resume point detected, skipping rcupdate operation..."
-		else:
-			if os.path.exists(self.settings["controller_file"]):
-				cmd("/bin/bash "+self.settings["controller_file"]+" rc-update",\
-					"rc-update script failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"rcupdate")
-
-	def clean(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"clean"):
-			print "Resume point detected, skipping clean operation..."
-		else:
-			for x in self.settings["cleanables"]:
-				print "Cleaning chroot: "+x+"... "
-				cmd("rm -rf "+self.settings["destpath"]+x,"Couldn't clean "+\
-					x,env=self.env)
-
-		""" Put /etc/hosts back into place """
-		if os.path.exists(self.settings["chroot_path"]+"/etc/hosts.catalyst"):
-			cmd("mv -f "+self.settings["chroot_path"]+"/etc/hosts.catalyst "+\
-				self.settings["chroot_path"]+"/etc/hosts",\
-				"Could not replace /etc/hosts",env=self.env)
-
-		""" Remove our overlay """
-		if os.path.exists(self.settings["chroot_path"] + self.settings["local_overlay"]):
-			cmd("rm -rf " + self.settings["chroot_path"] + self.settings["local_overlay"],
-				"Could not remove " + self.settings["local_overlay"], env=self.env)
-			cmd("sed -i '/^PORTDIR_OVERLAY/d' "+self.settings["chroot_path"]+\
-				"/etc/portage/make.conf",\
-				"Could not remove PORTDIR_OVERLAY from make.conf",env=self.env)
-
-		""" Clean up old and obsoleted files in /etc """
-		if os.path.exists(self.settings["stage_path"]+"/etc"):
-			cmd("find "+self.settings["stage_path"]+\
-				"/etc -maxdepth 1 -name \"*-\" | xargs rm -f",\
-				"Could not remove stray files in /etc",env=self.env)
-
-		if os.path.exists(self.settings["controller_file"]):
-			cmd("/bin/bash "+self.settings["controller_file"]+" clean",\
-				"clean script failed.",env=self.env)
-			touch(self.settings["autoresume_path"]+"clean")
-
-	def empty(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"empty"):
-			print "Resume point detected, skipping empty operation..."
-		else:
-			if self.settings["spec_prefix"]+"/empty" in self.settings:
-				if type(self.settings[self.settings["spec_prefix"]+\
-					"/empty"])==types.StringType:
-					self.settings[self.settings["spec_prefix"]+"/empty"]=\
-						self.settings[self.settings["spec_prefix"]+\
-						"/empty"].split()
-				for x in self.settings[self.settings["spec_prefix"]+"/empty"]:
-					myemp=self.settings["destpath"]+x
-					if not os.path.isdir(myemp) or os.path.islink(myemp):
-						print x,"is not a directory or is a symlink; skipping 'empty' operation."
-						continue
-					print "Emptying directory",x
-					"""
-					stat the dir, delete the dir, recreate the dir and set
-					the proper perms and ownership
-					"""
-					mystat=os.stat(myemp)
-					shutil.rmtree(myemp)
-					os.makedirs(myemp,0755)
-					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-					os.chmod(myemp,mystat[ST_MODE])
-			touch(self.settings["autoresume_path"]+"empty")
-
-	def remove(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"remove"):
-			print "Resume point detected, skipping remove operation..."
-		else:
-			if self.settings["spec_prefix"]+"/rm" in self.settings:
-				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
-					"""
-					We're going to shell out for all these cleaning
-					operations, so we get easy glob handling.
-					"""
-					print "livecd: removing "+x
-					os.system("rm -rf "+self.settings["chroot_path"]+x)
-				try:
-					if os.path.exists(self.settings["controller_file"]):
-						cmd("/bin/bash "+self.settings["controller_file"]+\
-							" clean","Clean failed.",env=self.env)
-						touch(self.settings["autoresume_path"]+"remove")
-				except:
-					self.unbind()
-					raise
-
-	def preclean(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"preclean"):
-			print "Resume point detected, skipping preclean operation..."
-		else:
-			try:
-				if os.path.exists(self.settings["controller_file"]):
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" preclean","preclean script failed.",env=self.env)
-					touch(self.settings["autoresume_path"]+"preclean")
-
-			except:
-				self.unbind()
-				raise CatalystError, "Build failed, could not execute preclean"
-
-	def capture(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"capture"):
-			print "Resume point detected, skipping capture operation..."
-		else:
-			""" Capture target in a tarball """
-			mypath=self.settings["target_path"].split("/")
-			""" Remove filename from path """
-			mypath=string.join(mypath[:-1],"/")
-
-			""" Now make sure path exists """
-			if not os.path.exists(mypath):
-				os.makedirs(mypath)
-
-			print "Creating stage tarball..."
-
-			cmd("tar -I lbzip2 -cpf "+self.settings["target_path"]+" -C "+\
-				self.settings["stage_path"]+" .",\
-				"Couldn't create stage tarball",env=self.env)
-
-			self.gen_contents_file(self.settings["target_path"])
-			self.gen_digest_file(self.settings["target_path"])
-
-			touch(self.settings["autoresume_path"]+"capture")
-
-	def run_local(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"run_local"):
-			print "Resume point detected, skipping run_local operation..."
-		else:
-			try:
-				if os.path.exists(self.settings["controller_file"]):
-					cmd("/bin/bash "+self.settings["controller_file"]+" run",\
-						"run script failed.",env=self.env)
-					touch(self.settings["autoresume_path"]+"run_local")
-
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"Stage build aborting due to error."
-
-	def setup_environment(self):
-		"""
-		Modify the current environment. This is an ugly hack that should be
-		fixed. We need this to use the os.system() call since we can't
-		specify our own environ
-		"""
-		for x in self.settings.keys():
-			""" Sanitize var names by doing "s|/-.|_|g" """
-			varname="clst_"+string.replace(x,"/","_")
-			varname=string.replace(varname,"-","_")
-			varname=string.replace(varname,".","_")
-			if type(self.settings[x])==types.StringType:
-				""" Prefix to prevent namespace clashes """
-				#os.environ[varname]=self.settings[x]
-				self.env[varname]=self.settings[x]
-			elif type(self.settings[x])==types.ListType:
-				#os.environ[varname]=string.join(self.settings[x])
-				self.env[varname]=string.join(self.settings[x])
-			elif type(self.settings[x])==types.BooleanType:
-				if self.settings[x]:
-					self.env[varname]="true"
-				else:
-					self.env[varname]="false"
-		if "makeopts" in self.settings:
-			self.env["MAKEOPTS"]=self.settings["makeopts"]
-
-	def run(self):
-		self.chroot_lock.write_lock()
-
-		""" Kill any pids in the chroot "" """
-		self.kill_chroot_pids()
-
-		""" Check for mounts right away and abort if we cannot unmount them """
-		self.mount_safety_check()
-
-		if "CLEAR_AUTORESUME" in self.settings:
-			self.clear_autoresume()
-
-		if "PURGETMPONLY" in self.settings:
-			self.purge()
-			return
-
-		if "PURGEONLY" in self.settings:
-			self.purge()
-			return
-
-		if "PURGE" in self.settings:
-			self.purge()
-
-		for x in self.settings["action_sequence"]:
-			print "--- Running action sequence: "+x
-			sys.stdout.flush()
-			try:
-				apply(getattr(self,x))
-			except:
-				self.mount_safety_check()
-				raise
-
-		self.chroot_lock.unlock()
-
-	def unmerge(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"unmerge"):
-			print "Resume point detected, skipping unmerge operation..."
-		else:
-			if self.settings["spec_prefix"]+"/unmerge" in self.settings:
-				if type(self.settings[self.settings["spec_prefix"]+\
-					"/unmerge"])==types.StringType:
-					self.settings[self.settings["spec_prefix"]+"/unmerge"]=\
-						[self.settings[self.settings["spec_prefix"]+"/unmerge"]]
-				myunmerge=\
-					self.settings[self.settings["spec_prefix"]+"/unmerge"][:]
-
-				for x in range(0,len(myunmerge)):
-					"""
-					Surround args with quotes for passing to bash, allows
-					things like "<" to remain intact
-					"""
-					myunmerge[x]="'"+myunmerge[x]+"'"
-				myunmerge=string.join(myunmerge)
-
-				""" Before cleaning, unmerge stuff """
-				try:
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" unmerge "+ myunmerge,"Unmerge script failed.",\
-						env=self.env)
-					print "unmerge shell script"
-				except CatalystError:
-					self.unbind()
-					raise
-				touch(self.settings["autoresume_path"]+"unmerge")
-
-	def target_setup(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"target_setup"):
-			print "Resume point detected, skipping target_setup operation..."
-		else:
-			print "Setting up filesystems per filesystem type"
-			cmd("/bin/bash "+self.settings["controller_file"]+\
-				" target_image_setup "+ self.settings["target_path"],\
-				"target_image_setup script failed.",env=self.env)
-			touch(self.settings["autoresume_path"]+"target_setup")
-
-	def setup_overlay(self):
-		if "AUTORESUME" in self.settings \
-		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
-			print "Resume point detected, skipping setup_overlay operation..."
-		else:
-			if self.settings["spec_prefix"]+"/overlay" in self.settings:
-				for x in self.settings[self.settings["spec_prefix"]+"/overlay"]:
-					if os.path.exists(x):
-						cmd("rsync -a "+x+"/ "+\
-							self.settings["target_path"],\
-							self.settings["spec_prefix"]+"overlay: "+x+\
-							" copy failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_overlay")
-
-	def create_iso(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"create_iso"):
-			print "Resume point detected, skipping create_iso operation..."
-		else:
-			""" Create the ISO """
-			if "iso" in self.settings:
-				cmd("/bin/bash "+self.settings["controller_file"]+" iso "+\
-					self.settings["iso"],"ISO creation script failed.",\
-					env=self.env)
-				self.gen_contents_file(self.settings["iso"])
-				self.gen_digest_file(self.settings["iso"])
-				touch(self.settings["autoresume_path"]+"create_iso")
-			else:
-				print "WARNING: livecd/iso was not defined."
-				print "An ISO Image will not be created."
-
-	def build_packages(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"build_packages"):
-			print "Resume point detected, skipping build_packages operation..."
-		else:
-			if self.settings["spec_prefix"]+"/packages" in self.settings:
-				if "AUTORESUME" in self.settings \
-					and os.path.exists(self.settings["autoresume_path"]+\
-						"build_packages"):
-					print "Resume point detected, skipping build_packages operation..."
-				else:
-					mypack=\
-						list_bashify(self.settings[self.settings["spec_prefix"]\
-						+"/packages"])
-					try:
-						cmd("/bin/bash "+self.settings["controller_file"]+\
-							" build_packages "+mypack,\
-							"Error in attempt to build packages",env=self.env)
-						touch(self.settings["autoresume_path"]+"build_packages")
-					except CatalystError:
-						self.unbind()
-						raise CatalystError,self.settings["spec_prefix"]+\
-							"build aborting due to error."
-
-	def build_kernel(self):
-		"Build all configured kernels"
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"build_kernel"):
-			print "Resume point detected, skipping build_kernel operation..."
-		else:
-			if "boot/kernel" in self.settings:
-				try:
-					mynames=self.settings["boot/kernel"]
-					if type(mynames)==types.StringType:
-						mynames=[mynames]
-					"""
-					Execute the script that sets up the kernel build environment
-					"""
-					cmd("/bin/bash "+self.settings["controller_file"]+\
-						" pre-kmerge ","Runscript pre-kmerge failed",\
-						env=self.env)
-					for kname in mynames:
-						self._build_kernel(kname=kname)
-					touch(self.settings["autoresume_path"]+"build_kernel")
-				except CatalystError:
-					self.unbind()
-					raise CatalystError,\
-						"build aborting due to kernel build error."
-
-	def _build_kernel(self, kname):
-		"Build a single configured kernel by name"
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]\
-				+"build_kernel_"+kname):
-			print "Resume point detected, skipping build_kernel for "+kname+" operation..."
-			return
-		self._copy_kernel_config(kname=kname)
-
-		"""
-		If we need to pass special options to the bootloader
-		for this kernel put them into the environment
-		"""
-		if "boot/kernel/"+kname+"/kernelopts" in self.settings:
-			myopts=self.settings["boot/kernel/"+kname+\
-				"/kernelopts"]
-
-			if type(myopts) != types.StringType:
-				myopts = string.join(myopts)
-				self.env[kname+"_kernelopts"]=myopts
-
-			else:
-				self.env[kname+"_kernelopts"]=""
-
-		if "boot/kernel/"+kname+"/extraversion" not in self.settings:
-			self.settings["boot/kernel/"+kname+\
-				"/extraversion"]=""
-
-		self.env["clst_kextraversion"]=\
-			self.settings["boot/kernel/"+kname+\
-			"/extraversion"]
-
-		self._copy_initramfs_overlay(kname=kname)
-
-		""" Execute the script that builds the kernel """
-		cmd("/bin/bash "+self.settings["controller_file"]+\
-			" kernel "+kname,\
-			"Runscript kernel build failed",env=self.env)
-
-		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
-			if os.path.exists(self.settings["chroot_path"]+\
-				"/tmp/initramfs_overlay/"):
-				print "Cleaning up temporary overlay dir"
-				cmd("rm -R "+self.settings["chroot_path"]+\
-					"/tmp/initramfs_overlay/",env=self.env)
-
-		touch(self.settings["autoresume_path"]+\
-			"build_kernel_"+kname)
-
-		"""
-		Execute the script that cleans up the kernel build
-		environment
-		"""
-		cmd("/bin/bash "+self.settings["controller_file"]+\
-			" post-kmerge ",
-			"Runscript post-kmerge failed",env=self.env)
-
-	def _copy_kernel_config(self, kname):
-		if "boot/kernel/"+kname+"/config" in self.settings:
-			if not os.path.exists(self.settings["boot/kernel/"+kname+"/config"]):
-				self.unbind()
-				raise CatalystError,\
-					"Can't find kernel config: "+\
-					self.settings["boot/kernel/"+kname+\
-					"/config"]
-
-			try:
-				cmd("cp "+self.settings["boot/kernel/"+kname+\
-					"/config"]+" "+\
-					self.settings["chroot_path"]+"/var/tmp/"+\
-					kname+".config",\
-					"Couldn't copy kernel config: "+\
-					self.settings["boot/kernel/"+kname+\
-					"/config"],env=self.env)
-
-			except CatalystError:
-				self.unbind()
-
-	def _copy_initramfs_overlay(self, kname):
-		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
-			if os.path.exists(self.settings["boot/kernel/"+\
-				kname+"/initramfs_overlay"]):
-				print "Copying initramfs_overlay dir "+\
-					self.settings["boot/kernel/"+kname+\
-					"/initramfs_overlay"]
-
-				cmd("mkdir -p "+\
-					self.settings["chroot_path"]+\
-					"/tmp/initramfs_overlay/"+\
-					self.settings["boot/kernel/"+kname+\
-					"/initramfs_overlay"],env=self.env)
-
-				cmd("cp -R "+self.settings["boot/kernel/"+\
-					kname+"/initramfs_overlay"]+"/* "+\
-					self.settings["chroot_path"]+\
-					"/tmp/initramfs_overlay/"+\
-					self.settings["boot/kernel/"+kname+\
-					"/initramfs_overlay"],env=self.env)
-
-	def bootloader(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"bootloader"):
-			print "Resume point detected, skipping bootloader operation..."
-		else:
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+\
-					" bootloader " + self.settings["target_path"],\
-					"Bootloader script failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"bootloader")
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"Script aborting due to error."
-
-	def livecd_update(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+\
-				"livecd_update"):
-			print "Resume point detected, skipping build_packages operation..."
-		else:
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+\
-					" livecd-update","livecd-update failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"livecd_update")
-
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"build aborting due to livecd_update error."
-
-	def clear_chroot(self):
-		myemp=self.settings["chroot_path"]
-		if os.path.isdir(myemp):
-			print "Emptying directory",myemp
-			"""
-			stat the dir, delete the dir, recreate the dir and set
-			the proper perms and ownership
-			"""
-			mystat=os.stat(myemp)
-			#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-			""" There's no easy way to change flags recursively in python """
-			if os.uname()[0] == "FreeBSD":
-				os.system("chflags -R noschg "+myemp)
-			shutil.rmtree(myemp)
-			os.makedirs(myemp,0755)
-			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-			os.chmod(myemp,mystat[ST_MODE])
-
-	def clear_packages(self):
-		if "PKGCACHE" in self.settings:
-			print "purging the pkgcache ..."
-
-			myemp=self.settings["pkgcache_path"]
-			if os.path.isdir(myemp):
-				print "Emptying directory",myemp
-				"""
-				stat the dir, delete the dir, recreate the dir and set
-				the proper perms and ownership
-				"""
-				mystat=os.stat(myemp)
-				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-				shutil.rmtree(myemp)
-				os.makedirs(myemp,0755)
-				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-				os.chmod(myemp,mystat[ST_MODE])
-
-	def clear_kerncache(self):
-		if "KERNCACHE" in self.settings:
-			print "purging the kerncache ..."
-
-			myemp=self.settings["kerncache_path"]
-			if os.path.isdir(myemp):
-				print "Emptying directory",myemp
-				"""
-				stat the dir, delete the dir, recreate the dir and set
-				the proper perms and ownership
-				"""
-				mystat=os.stat(myemp)
-				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
-				shutil.rmtree(myemp)
-				os.makedirs(myemp,0755)
-				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-				os.chmod(myemp,mystat[ST_MODE])
-
-	def clear_autoresume(self):
-		""" Clean resume points since they are no longer needed """
-		if "AUTORESUME" in self.settings:
-			print "Removing AutoResume Points: ..."
-		myemp=self.settings["autoresume_path"]
-		if os.path.isdir(myemp):
-				if "AUTORESUME" in self.settings:
-					print "Emptying directory",myemp
-				"""
-				stat the dir, delete the dir, recreate the dir and set
-				the proper perms and ownership
-				"""
-				mystat=os.stat(myemp)
-				if os.uname()[0] == "FreeBSD":
-					cmd("chflags -R noschg "+myemp,\
-						"Could not remove immutable flag for file "\
-						+myemp)
-				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env-self.env)
-				shutil.rmtree(myemp)
-				os.makedirs(myemp,0755)
-				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-				os.chmod(myemp,mystat[ST_MODE])
-
-	def gen_contents_file(self,file):
-		if os.path.exists(file+".CONTENTS"):
-			os.remove(file+".CONTENTS")
-		if "contents" in self.settings:
-			if os.path.exists(file):
-				myf=open(file+".CONTENTS","w")
-				keys={}
-				for i in self.settings["contents"].split():
-					keys[i]=1
-					array=keys.keys()
-					array.sort()
-				for j in array:
-					contents=generate_contents(file,contents_function=j,\
-						verbose="VERBOSE" in self.settings)
-					if contents:
-						myf.write(contents)
-				myf.close()
-
-	def gen_digest_file(self,file):
-		if os.path.exists(file+".DIGESTS"):
-			os.remove(file+".DIGESTS")
-		if "digests" in self.settings:
-			if os.path.exists(file):
-				myf=open(file+".DIGESTS","w")
-				keys={}
-				for i in self.settings["digests"].split():
-					keys[i]=1
-					array=keys.keys()
-					array.sort()
-				for f in [file, file+'.CONTENTS']:
-					if os.path.exists(f):
-						if "all" in array:
-							for k in hash_map.keys():
-								hash=generate_hash(f,hash_function=k,verbose=\
-									"VERBOSE" in self.settings)
-								myf.write(hash)
-						else:
-							for j in array:
-								hash=generate_hash(f,hash_function=j,verbose=\
-									"VERBOSE" in self.settings)
-								myf.write(hash)
-				myf.close()
-
-	def purge(self):
-		countdown(10,"Purging Caches ...")
-		if any(k in self.settings for k in ("PURGE","PURGEONLY","PURGETMPONLY")):
-			print "clearing autoresume ..."
-			self.clear_autoresume()
-
-			print "clearing chroot ..."
-			self.clear_chroot()
-
-			if "PURGETMPONLY" not in self.settings:
-				print "clearing package cache ..."
-				self.clear_packages()
-
-			print "clearing kerncache ..."
-			self.clear_kerncache()
-
-# vim: ts=4 sw=4 sta et sts=4 ai
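[Editor's note, not part of the patch: the `setup_environment()` method removed above sanitizes spec keys into `clst_`-prefixed environment variables using the Python 2 `string` module. A minimal Python 3 sketch of that same mapping, with hypothetical function names, looks like this:]

```python
def sanitize_varname(key):
    """Map a spec key to an env var name, i.e. the old "s|/-.|_|g" chain."""
    return "clst_" + key.replace("/", "_").replace("-", "_").replace(".", "_")

def settings_to_env(settings):
    """Flatten str/list/bool settings into a string-only environment dict."""
    env = {}
    for key, value in settings.items():
        name = sanitize_varname(key)
        if isinstance(value, bool):
            # note: bool before str/list, since the old code special-cased it
            env[name] = "true" if value else "false"
        elif isinstance(value, str):
            env[name] = value
        elif isinstance(value, list):
            env[name] = " ".join(value)
    return env
```

This replaces `string.replace()`/`string.join()` and the `types` checks, which no longer exist in Python 3.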
diff --git a/catalyst/modules/generic_target.py b/catalyst/modules/generic_target.py
deleted file mode 100644
index de51994..0000000
--- a/catalyst/modules/generic_target.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from catalyst.support import *
-
-class generic_target:
-	"""
-	The toplevel class for generic_stage_target. This is about as generic as we get.
-	"""
-	def __init__(self,myspec,addlargs):
-		addl_arg_parse(myspec,addlargs,self.required_values,self.valid_values)
-		self.settings=myspec
-		self.env={}
-		self.env["PATH"]="/bin:/sbin:/usr/bin:/usr/sbin"
diff --git a/catalyst/modules/grp_target.py b/catalyst/modules/grp_target.py
deleted file mode 100644
index 8e70042..0000000
--- a/catalyst/modules/grp_target.py
+++ /dev/null
@@ -1,118 +0,0 @@
-"""
-Gentoo Reference Platform (GRP) target
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-import os,types,glob
-from catalyst.support import *
-from generic_stage_target import *
-
-class grp_target(generic_stage_target):
-	"""
-	The builder class for GRP (Gentoo Reference Platform) builds.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["version_stamp","target","subarch",\
-			"rel_type","profile","snapshot","source_subpath"]
-
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend(["grp/use"])
-		if "grp" not in addlargs:
-			raise CatalystError,"Required value \"grp\" not specified in spec."
-
-		self.required_values.extend(["grp"])
-		if type(addlargs["grp"])==types.StringType:
-			addlargs["grp"]=[addlargs["grp"]]
-
-		if "grp/use" in addlargs:
-			if type(addlargs["grp/use"])==types.StringType:
-				addlargs["grp/use"]=[addlargs["grp/use"]]
-
-		for x in addlargs["grp"]:
-			self.required_values.append("grp/"+x+"/packages")
-			self.required_values.append("grp/"+x+"/type")
-
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-			print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			#if os.path.isdir(self.settings["target_path"]):
-				#cmd("rm -rf "+self.settings["target_path"],
-				#"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
-			if not os.path.exists(self.settings["target_path"]):
-				os.makedirs(self.settings["target_path"])
-
-			touch(self.settings["autoresume_path"]+"setup_target_path")
-
-	def run_local(self):
-		for pkgset in self.settings["grp"]:
-			# example call: "grp.sh run pkgset cd1 xmms vim sys-apps/gleep"
-			mypackages=list_bashify(self.settings["grp/"+pkgset+"/packages"])
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+" run "+self.settings["grp/"+pkgset+"/type"]\
-					+" "+pkgset+" "+mypackages,env=self.env)
-
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"GRP build aborting due to error."
-
-	def set_use(self):
-		generic_stage_target.set_use(self)
-		if "BINDIST" in self.settings:
-			if "use" in self.settings:
-				self.settings["use"].append("bindist")
-			else:
-				self.settings["use"]=["bindist"]
-
-	def set_mounts(self):
-	    self.mounts.append("/tmp/grp")
-            self.mountmap["/tmp/grp"]=self.settings["target_path"]
-
-	def generate_digests(self):
-		for pkgset in self.settings["grp"]:
-			if self.settings["grp/"+pkgset+"/type"] == "pkgset":
-				destdir=normpath(self.settings["target_path"]+"/"+pkgset+"/All")
-				print "Digesting files in the pkgset....."
-				digests=glob.glob(destdir+'/*.DIGESTS')
-				for i in digests:
-					if os.path.exists(i):
-						os.remove(i)
-
-				files=os.listdir(destdir)
-				#ignore files starting with '.' using list comprehension
-				files=[filename for filename in files if filename[0] != '.']
-				for i in files:
-					if os.path.isfile(normpath(destdir+"/"+i)):
-						self.gen_contents_file(normpath(destdir+"/"+i))
-						self.gen_digest_file(normpath(destdir+"/"+i))
-			else:
-				destdir=normpath(self.settings["target_path"]+"/"+pkgset)
-				print "Digesting files in the srcset....."
-
-				digests=glob.glob(destdir+'/*.DIGESTS')
-				for i in digests:
-					if os.path.exists(i):
-						os.remove(i)
-
-				files=os.listdir(destdir)
-				#ignore files starting with '.' using list comprehension
-				files=[filename for filename in files if filename[0] != '.']
-				for i in files:
-					if os.path.isfile(normpath(destdir+"/"+i)):
-						#self.gen_contents_file(normpath(destdir+"/"+i))
-						self.gen_digest_file(normpath(destdir+"/"+i))
-
-	def set_action_sequence(self):
-	    self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir","portage_overlay","bind","chroot_setup",\
-					"setup_environment","run_local","unbind",\
-					"generate_digests","clear_autoresume"]
-
-def register(foo):
-	foo.update({"grp":grp_target})
-	return foo
diff --git a/catalyst/modules/livecd_stage1_target.py b/catalyst/modules/livecd_stage1_target.py
deleted file mode 100644
index ac846ec..0000000
--- a/catalyst/modules/livecd_stage1_target.py
+++ /dev/null
@@ -1,75 +0,0 @@
-"""
-LiveCD stage1 target
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst.support import *
-from generic_stage_target import *
-
-class livecd_stage1_target(generic_stage_target):
-	"""
-	Builder class for LiveCD stage1.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["livecd/packages"]
-		self.valid_values=self.required_values[:]
-
-		self.valid_values.extend(["livecd/use"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir","portage_overlay",\
-					"bind","chroot_setup","setup_environment","build_packages",\
-					"unbind", "clean","clear_autoresume"]
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"])
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			if os.path.exists(self.settings["target_path"]):
-				cmd("rm -rf "+self.settings["target_path"],\
-					"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_target_path")
-
-			if not os.path.exists(self.settings["target_path"]):
-				os.makedirs(self.settings["target_path"])
-
-	def set_target_path(self):
-		pass
-
-	def set_spec_prefix(self):
-	                self.settings["spec_prefix"]="livecd"
-
-	def set_use(self):
-		generic_stage_target.set_use(self)
-		if "use" in self.settings:
-			self.settings["use"].append("livecd")
-			if "BINDIST" in self.settings:
-				self.settings["use"].append("bindist")
-		else:
-			self.settings["use"]=["livecd"]
-			if "BINDIST" in self.settings:
-				self.settings["use"].append("bindist")
-
-	def set_packages(self):
-		generic_stage_target.set_packages(self)
-		if self.settings["spec_prefix"]+"/packages" in self.settings:
-			if type(self.settings[self.settings["spec_prefix"]+"/packages"]) == types.StringType:
-				self.settings[self.settings["spec_prefix"]+"/packages"] = \
-					self.settings[self.settings["spec_prefix"]+"/packages"].split()
-		self.settings[self.settings["spec_prefix"]+"/packages"].append("app-misc/livecd-tools")
-
-	def set_pkgcache_path(self):
-		if "pkgcache_path" in self.settings:
-			if type(self.settings["pkgcache_path"]) != types.StringType:
-				self.settings["pkgcache_path"]=normpath(string.join(self.settings["pkgcache_path"]))
-		else:
-			generic_stage_target.set_pkgcache_path(self)
-
-def register(foo):
-	foo.update({"livecd-stage1":livecd_stage1_target})
-	return foo
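[Editor's note, not part of the patch: `gen_digest_file()` in the removed `generic_stage_target` de-duplicates the space-separated `digests` setting via a throwaway dict before hashing, with `"all"` expanding to every known hash. A small sketch of that selection logic, using hypothetical names:]

```python
def digest_functions(digests_setting, available):
    """Return the sorted, de-duplicated list of hash names to generate."""
    wanted = sorted(set(digests_setting.split()))
    if "all" in wanted:
        # "all" expands to every hash the hash_map knows about
        return sorted(available)
    return wanted
```

In the original, each selected function is then run over both the tarball and its `.CONTENTS` file.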
diff --git a/catalyst/modules/livecd_stage2_target.py b/catalyst/modules/livecd_stage2_target.py
deleted file mode 100644
index 8595ffc..0000000
--- a/catalyst/modules/livecd_stage2_target.py
+++ /dev/null
@@ -1,148 +0,0 @@
-"""
-LiveCD stage2 target, builds upon previous LiveCD stage1 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-import os,string,types,stat,shutil
-from catalyst.support import *
-from generic_stage_target import *
-
-class livecd_stage2_target(generic_stage_target):
-	"""
-	Builder class for a LiveCD stage2 build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["boot/kernel"]
-
-		self.valid_values=[]
-
-		self.valid_values.extend(self.required_values)
-		self.valid_values.extend(["livecd/cdtar","livecd/empty","livecd/rm",\
-			"livecd/unmerge","livecd/iso","livecd/gk_mainargs","livecd/type",\
-			"livecd/readme","livecd/motd","livecd/overlay",\
-			"livecd/modblacklist","livecd/splash_theme","livecd/rcadd",\
-			"livecd/rcdel","livecd/fsscript","livecd/xinitrc",\
-			"livecd/root_overlay","livecd/users","portage_overlay",\
-			"livecd/fstype","livecd/fsops","livecd/linuxrc","livecd/bootargs",\
-			"gamecd/conf","livecd/xdm","livecd/xsession","livecd/volid"])
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		if "livecd/type" not in self.settings:
-			self.settings["livecd/type"] = "generic-livecd"
-
-		file_locate(self.settings, ["cdtar","controller_file"])
-
-	def set_source_path(self):
-		self.settings["source_path"] = normpath(self.settings["storedir"] +
-			"/builds/" + self.settings["source_subpath"].rstrip("/") +
-			".tar.bz2")
-		if os.path.isfile(self.settings["source_path"]):
-			self.settings["source_path_hash"]=generate_hash(self.settings["source_path"])
-		else:
-			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/")
-		if not os.path.exists(self.settings["source_path"]):
-			raise CatalystError,"Source Path: "+self.settings["source_path"]+" does not exist."
-
-	def set_spec_prefix(self):
-	    self.settings["spec_prefix"]="livecd"
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			if os.path.isdir(self.settings["target_path"]):
-				cmd("rm -rf "+self.settings["target_path"],
-				"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_target_path")
-			if not os.path.exists(self.settings["target_path"]):
-				os.makedirs(self.settings["target_path"])
-
-	def run_local(self):
-		# what modules do we want to blacklist?
-		if "livecd/modblacklist" in self.settings:
-			try:
-				myf=open(self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf","a")
-			except:
-				self.unbind()
-				raise CatalystError,"Couldn't open "+self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf."
-
-			myf.write("\n#Added by Catalyst:")
-			# workaround until config.py is using configparser
-			if isinstance(self.settings["livecd/modblacklist"], str):
-				self.settings["livecd/modblacklist"] = self.settings["livecd/modblacklist"].split()
-			for x in self.settings["livecd/modblacklist"]:
-				myf.write("\nblacklist "+x)
-			myf.close()
-
-	def unpack(self):
-		unpack=True
-		display_msg=None
-
-		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+"unpack")
-
-		if os.path.isdir(self.settings["source_path"]):
-			unpack_cmd="rsync -a --delete "+self.settings["source_path"]+" "+self.settings["chroot_path"]
-			display_msg="\nStarting rsync from "+self.settings["source_path"]+"\nto "+\
-				self.settings["chroot_path"]+" (This may take some time) ...\n"
-			error_msg="Rsync of "+self.settings["source_path"]+" to "+self.settings["chroot_path"]+" failed."
-			invalid_snapshot=False
-
-		if "AUTORESUME" in self.settings:
-			if os.path.isdir(self.settings["source_path"]) and \
-				os.path.exists(self.settings["autoresume_path"]+"unpack"):
-				print "Resume point detected, skipping unpack operation..."
-				unpack=False
-			elif "source_path_hash" in self.settings:
-				if self.settings["source_path_hash"] != clst_unpack_hash:
-					invalid_snapshot=True
-
-		if unpack:
-			self.mount_safety_check()
-			if invalid_snapshot:
-				print "No Valid Resume point detected, cleaning up  ..."
-				#os.remove(self.settings["autoresume_path"]+"dir_setup")
-				self.clear_autoresume()
-				self.clear_chroot()
-				#self.dir_setup()
-
-			if not os.path.exists(self.settings["chroot_path"]):
-				os.makedirs(self.settings["chroot_path"])
-
-			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
-				os.makedirs(self.settings["chroot_path"]+"/tmp",1777)
-
-			if "PKGCACHE" in self.settings:
-				if not os.path.exists(self.settings["pkgcache_path"]):
-					os.makedirs(self.settings["pkgcache_path"],0755)
-
-			if not display_msg:
-				raise CatalystError,"Could not find appropriate source. Please check the 'source_subpath' setting in the spec file."
-
-			print display_msg
-			cmd(unpack_cmd,error_msg,env=self.env)
-
-			if "source_path_hash" in self.settings:
-				myf=open(self.settings["autoresume_path"]+"unpack","w")
-				myf.write(self.settings["source_path_hash"])
-				myf.close()
-			else:
-				touch(self.settings["autoresume_path"]+"unpack")
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-				"config_profile_link","setup_confdir","portage_overlay",\
-				"bind","chroot_setup","setup_environment","run_local",\
-				"build_kernel"]
-		if "FETCH" not in self.settings:
-			self.settings["action_sequence"] += ["bootloader","preclean",\
-				"livecd_update","root_overlay","fsscript","rcupdate","unmerge",\
-				"unbind","remove","empty","target_setup",\
-				"setup_overlay","create_iso"]
-		self.settings["action_sequence"].append("clear_autoresume")
-
-def register(foo):
-	foo.update({"livecd-stage2":livecd_stage2_target})
-	return foo
diff --git a/catalyst/modules/netboot2_target.py b/catalyst/modules/netboot2_target.py
deleted file mode 100644
index 2b3cd20..0000000
--- a/catalyst/modules/netboot2_target.py
+++ /dev/null
@@ -1,166 +0,0 @@
-"""
-netboot target, version 2
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-import os,string,types
-from catalyst.support import *
-from generic_stage_target import *
-
-class netboot2_target(generic_stage_target):
-	"""
-	Builder class for a netboot build, version 2
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[
-			"boot/kernel"
-		]
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend([
-			"netboot2/packages",
-			"netboot2/use",
-			"netboot2/extra_files",
-			"netboot2/overlay",
-			"netboot2/busybox_config",
-			"netboot2/root_overlay",
-			"netboot2/linuxrc"
-		])
-
-		try:
-			if "netboot2/packages" in addlargs:
-				if type(addlargs["netboot2/packages"]) == types.StringType:
-					loopy=[addlargs["netboot2/packages"]]
-				else:
-					loopy=addlargs["netboot2/packages"]
-
-				for x in loopy:
-					self.valid_values.append("netboot2/packages/"+x+"/files")
-		except:
-			raise CatalystError,"configuration error in netboot2/packages."
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		self.set_build_kernel_vars()
-		self.settings["merge_path"]=normpath("/tmp/image/")
-
-	def set_target_path(self):
-		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+\
-			self.settings["target_subpath"]+"/")
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			# first clean up any existing target stuff
-			if os.path.isfile(self.settings["target_path"]):
-				cmd("rm -f "+self.settings["target_path"], \
-					"Could not remove existing file: "+self.settings["target_path"],env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_target_path")
-
-		if not os.path.exists(self.settings["storedir"]+"/builds/"):
-			os.makedirs(self.settings["storedir"]+"/builds/")
-
-	def copy_files_to_image(self):
-		# copies specific files from the buildroot to merge_path
-		myfiles=[]
-
-		# check for autoresume point
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"copy_files_to_image"):
-				print "Resume point detected, skipping target path setup operation..."
-		else:
-			if "netboot2/packages" in self.settings:
-				if type(self.settings["netboot2/packages"]) == types.StringType:
-					loopy=[self.settings["netboot2/packages"]]
-				else:
-					loopy=self.settings["netboot2/packages"]
-
-			for x in loopy:
-				if "netboot2/packages/"+x+"/files" in self.settings:
-				    if type(self.settings["netboot2/packages/"+x+"/files"]) == types.ListType:
-					    myfiles.extend(self.settings["netboot2/packages/"+x+"/files"])
-				    else:
-					    myfiles.append(self.settings["netboot2/packages/"+x+"/files"])
-
-			if "netboot2/extra_files" in self.settings:
-				if type(self.settings["netboot2/extra_files"]) == types.ListType:
-					myfiles.extend(self.settings["netboot2/extra_files"])
-				else:
-					myfiles.append(self.settings["netboot2/extra_files"])
-
-			try:
-				cmd("/bin/bash "+self.settings["controller_file"]+\
-					" image " + list_bashify(myfiles),env=self.env)
-			except CatalystError:
-				self.unbind()
-				raise CatalystError,"Failed to copy files to image!"
-
-			touch(self.settings["autoresume_path"]+"copy_files_to_image")
-
-	def setup_overlay(self):
-		if "AUTORESUME" in self.settings \
-		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
-			print "Resume point detected, skipping setup_overlay operation..."
-		else:
-			if "netboot2/overlay" in self.settings:
-				for x in self.settings["netboot2/overlay"]:
-					if os.path.exists(x):
-						cmd("rsync -a "+x+"/ "+\
-							self.settings["chroot_path"] + self.settings["merge_path"], "netboot2/overlay: "+x+" copy failed.",env=self.env)
-				touch(self.settings["autoresume_path"]+"setup_overlay")
-
-	def move_kernels(self):
-		# we're done, move the kernels to builds/*
-		# no auto resume here as we always want the
-		# freshest images moved
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+\
-				" final",env=self.env)
-			print ">>> Netboot Build Finished!"
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"Failed to move kernel images!"
-
-	def remove(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"remove"):
-			print "Resume point detected, skipping remove operation..."
-		else:
-			if self.settings["spec_prefix"]+"/rm" in self.settings:
-				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
-					# we're going to shell out for all these cleaning operations,
-					# so we get easy glob handling
-					print "netboot2: removing " + x
-					os.system("rm -rf " + self.settings["chroot_path"] + self.settings["merge_path"] + x)
-
-	def empty(self):
-		if "AUTORESUME" in self.settings \
-			and os.path.exists(self.settings["autoresume_path"]+"empty"):
-			print "Resume point detected, skipping empty operation..."
-		else:
-			if "netboot2/empty" in self.settings:
-				if type(self.settings["netboot2/empty"])==types.StringType:
-					self.settings["netboot2/empty"]=self.settings["netboot2/empty"].split()
-				for x in self.settings["netboot2/empty"]:
-					myemp=self.settings["chroot_path"] + self.settings["merge_path"] + x
-					if not os.path.isdir(myemp):
-						print x,"not a directory or does not exist, skipping 'empty' operation."
-						continue
-					print "Emptying directory", x
-					# stat the dir, delete the dir, recreate the dir and set
-					# the proper perms and ownership
-					mystat=os.stat(myemp)
-					shutil.rmtree(myemp)
-					os.makedirs(myemp,0755)
-					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-					os.chmod(myemp,mystat[ST_MODE])
-		touch(self.settings["autoresume_path"]+"empty")
-
-	def set_action_sequence(self):
-	    self.settings["action_sequence"]=["unpack","unpack_snapshot","config_profile_link",
-	    				"setup_confdir","portage_overlay","bind","chroot_setup",\
-					"setup_environment","build_packages","root_overlay",\
-					"copy_files_to_image","setup_overlay","build_kernel","move_kernels",\
-					"remove","empty","unbind","clean","clear_autoresume"]
-
-def register(foo):
-	foo.update({"netboot2":netboot2_target})
-	return foo
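Every target module removed (and re-added) in this series ends with the same `register()` hook: catalyst builds its target map by calling it on each loaded module. A minimal python3 sketch of that accumulation pattern (the class here is a stand-in, not the real builder):

```python
# Sketch of catalyst's target-registration pattern: each target module
# defines register(), which adds its builder class to a shared dict.
class netboot2_target:          # stand-in for the real builder class
    pass

def register(foo):
    # mirrors the module-level register() shown in the diff above
    foo.update({"netboot2": netboot2_target})
    return foo

targetmap = {}
register(targetmap)             # catalyst does this for every target module
assert targetmap["netboot2"] is netboot2_target
```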
diff --git a/catalyst/modules/netboot_target.py b/catalyst/modules/netboot_target.py
deleted file mode 100644
index 9d01b7e..0000000
--- a/catalyst/modules/netboot_target.py
+++ /dev/null
@@ -1,128 +0,0 @@
-"""
-netboot target, version 1
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-import os,string,types
-from catalyst.support import *
-from generic_stage_target import *
-
-class netboot_target(generic_stage_target):
-	"""
-	Builder class for a netboot build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.valid_values = [
-			"netboot/kernel/sources",
-			"netboot/kernel/config",
-			"netboot/kernel/prebuilt",
-
-			"netboot/busybox_config",
-
-			"netboot/extra_files",
-			"netboot/packages"
-		]
-		self.required_values=[]
-
-		try:
-			if "netboot/packages" in addlargs:
-				if type(addlargs["netboot/packages"]) == types.StringType:
-					loopy=[addlargs["netboot/packages"]]
-				else:
-					loopy=addlargs["netboot/packages"]
-
-		#	for x in loopy:
-		#		self.required_values.append("netboot/packages/"+x+"/files")
-		except:
-			raise CatalystError,"configuration error in netboot/packages."
-
-		generic_stage_target.__init__(self,spec,addlargs)
-		self.set_build_kernel_vars(addlargs)
-		if "netboot/busybox_config" in addlargs:
-			file_locate(self.settings, ["netboot/busybox_config"])
-
-		# Custom Kernel Tarball --- use that instead ...
-
-		# unless the user wants specific CFLAGS/CXXFLAGS, let's use -Os
-
-		for envvar in "CFLAGS", "CXXFLAGS":
-			if envvar not in os.environ and envvar not in addlargs:
-				self.settings[envvar] = "-Os -pipe"
-
-	def set_root_path(self):
-		# ROOT= variable for emerges
-		self.settings["root_path"]=normpath("/tmp/image")
-		print "netboot root path is "+self.settings["root_path"]
-
-#	def build_packages(self):
-#		# build packages
-#		if "netboot/packages" in self.settings:
-#			mypack=list_bashify(self.settings["netboot/packages"])
-#		try:
-#			cmd("/bin/bash "+self.settings["controller_file"]+" packages "+mypack,env=self.env)
-#		except CatalystError:
-#			self.unbind()
-#			raise CatalystError,"netboot build aborting due to error."
-
-	def build_busybox(self):
-		# build busybox
-		if "netboot/busybox_config" in self.settings:
-			mycmd = self.settings["netboot/busybox_config"]
-		else:
-			mycmd = ""
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+" busybox "+ mycmd,env=self.env)
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"netboot build aborting due to error."
-
-	def copy_files_to_image(self):
-		# create image
-		myfiles=[]
-		if "netboot/packages" in self.settings:
-			if type(self.settings["netboot/packages"]) == types.StringType:
-				loopy=[self.settings["netboot/packages"]]
-			else:
-				loopy=self.settings["netboot/packages"]
-
-		for x in loopy:
-			if "netboot/packages/"+x+"/files" in self.settings:
-			    if type(self.settings["netboot/packages/"+x+"/files"]) == types.ListType:
-				    myfiles.extend(self.settings["netboot/packages/"+x+"/files"])
-			    else:
-				    myfiles.append(self.settings["netboot/packages/"+x+"/files"])
-
-		if "netboot/extra_files" in self.settings:
-			if type(self.settings["netboot/extra_files"]) == types.ListType:
-				myfiles.extend(self.settings["netboot/extra_files"])
-			else:
-				myfiles.append(self.settings["netboot/extra_files"])
-
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+\
-				" image " + list_bashify(myfiles),env=self.env)
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"netboot build aborting due to error."
-
-	def create_netboot_files(self):
-		# finish it all up
-		try:
-			cmd("/bin/bash "+self.settings["controller_file"]+" finish",env=self.env)
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"netboot build aborting due to error."
-
-		# end
-		print "netboot: build finished !"
-
-	def set_action_sequence(self):
-	    self.settings["action_sequence"]=["unpack","unpack_snapshot",
-	    				"config_profile_link","setup_confdir","bind","chroot_setup",\
-						"setup_environment","build_packages","build_busybox",\
-						"build_kernel","copy_files_to_image",\
-						"clean","create_netboot_files","unbind","clear_autoresume"]
-
-def register(foo):
-	foo.update({"netboot":netboot_target})
-	return foo
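Both netboot targets repeat the same `types.StringType` check to coerce a spec value into a list before iterating. `types.StringType` no longer exists on python3; a hedged sketch of the equivalent normalization helper (name is mine, not catalyst's):

```python
def listify(value):
    """Normalize a spec value to a list, as the netboot targets do
    with their types.StringType checks (python3 spelling)."""
    if isinstance(value, str):
        return [value]
    return list(value)

# the pattern from copy_files_to_image():
assert listify("sys-apps/busybox") == ["sys-apps/busybox"]
assert listify(["sys-apps/busybox", "sys-boot/grub"]) == \
    ["sys-apps/busybox", "sys-boot/grub"]
```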
diff --git a/catalyst/modules/snapshot_target.py b/catalyst/modules/snapshot_target.py
deleted file mode 100644
index d1b9e40..0000000
--- a/catalyst/modules/snapshot_target.py
+++ /dev/null
@@ -1,91 +0,0 @@
-"""
-Snapshot target
-"""
-
-import os
-from catalyst.support import *
-from generic_stage_target import *
-
-class snapshot_target(generic_stage_target):
-	"""
-	Builder class for snapshots.
-	"""
-	def __init__(self,myspec,addlargs):
-		self.required_values=["version_stamp","target"]
-		self.valid_values=["version_stamp","target"]
-
-		generic_target.__init__(self,myspec,addlargs)
-		self.settings=myspec
-		self.settings["target_subpath"]="portage"
-		st=self.settings["storedir"]
-		self.settings["snapshot_path"] = normpath(st + "/snapshots/"
-			+ self.settings["snapshot_name"]
-			+ self.settings["version_stamp"] + ".tar.bz2")
-		self.settings["tmp_path"]=normpath(st+"/tmp/"+self.settings["target_subpath"])
-
-	def setup(self):
-		x=normpath(self.settings["storedir"]+"/snapshots")
-		if not os.path.exists(x):
-			os.makedirs(x)
-
-	def mount_safety_check(self):
-		pass
-
-	def run(self):
-		if "PURGEONLY" in self.settings:
-			self.purge()
-			return
-
-		if "PURGE" in self.settings:
-			self.purge()
-
-		self.setup()
-		print "Creating Portage tree snapshot "+self.settings["version_stamp"]+\
-			" from "+self.settings["portdir"]+"..."
-
-		mytmp=self.settings["tmp_path"]
-		if not os.path.exists(mytmp):
-			os.makedirs(mytmp)
-
-		cmd("rsync -a --delete --exclude /packages/ --exclude /distfiles/ " +
-			"--exclude /local/ --exclude CVS/ --exclude .svn --filter=H_**/files/digest-* " +
-			self.settings["portdir"] + "/ " + mytmp + "/%s/" % self.settings["repo_name"],
-			"Snapshot failure", env=self.env)
-
-		print "Compressing Portage snapshot tarball..."
-		cmd("tar -I lbzip2 -cf " + self.settings["snapshot_path"] + " -C " +
-			mytmp + " " + self.settings["repo_name"],
-			"Snapshot creation failure",env=self.env)
-
-		self.gen_contents_file(self.settings["snapshot_path"])
-		self.gen_digest_file(self.settings["snapshot_path"])
-
-		self.cleanup()
-		print "snapshot: complete!"
-
-	def kill_chroot_pids(self):
-		pass
-
-	def cleanup(self):
-		print "Cleaning up..."
-
-	def purge(self):
-		myemp=self.settings["tmp_path"]
-		if os.path.isdir(myemp):
-			print "Emptying directory",myemp
-			"""
-			stat the dir, delete the dir, recreate the dir and set
-			the proper perms and ownership
-			"""
-			mystat=os.stat(myemp)
-			""" There's no easy way to change flags recursively in python """
-			if os.uname()[0] == "FreeBSD":
-				os.system("chflags -R noschg "+myemp)
-			shutil.rmtree(myemp)
-			os.makedirs(myemp,0755)
-			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
-			os.chmod(myemp,mystat[ST_MODE])
-
-def register(foo):
-	foo.update({"snapshot":snapshot_target})
-	return foo
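The `purge()` method above stats the directory, deletes it, then recreates it with the original permissions and ownership. A self-contained python3 sketch of that stat/rmtree/recreate sequence (the FreeBSD `chflags` branch and the `chown` are omitted, since changing ownership requires root):

```python
import os
import shutil
import stat
import tempfile

def empty_dir(path):
    # stat the dir, delete it, recreate it with the same permissions --
    # the sequence snapshot_target.purge() performs (chown omitted)
    st = os.stat(path)
    shutil.rmtree(path)
    os.makedirs(path, mode=stat.S_IMODE(st.st_mode))
    # makedirs' mode is masked by the umask, so set it explicitly
    os.chmod(path, stat.S_IMODE(st.st_mode))

tmp = tempfile.mkdtemp()
os.chmod(tmp, 0o755)
open(os.path.join(tmp, "junk"), "w").close()
empty_dir(tmp)
assert os.listdir(tmp) == []
assert stat.S_IMODE(os.stat(tmp).st_mode) == 0o755
```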
diff --git a/catalyst/modules/stage1_target.py b/catalyst/modules/stage1_target.py
deleted file mode 100644
index 8d5a674..0000000
--- a/catalyst/modules/stage1_target.py
+++ /dev/null
@@ -1,97 +0,0 @@
-"""
-stage1 target
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst.support import *
-from generic_stage_target import *
-
-class stage1_target(generic_stage_target):
-	"""
-	Builder class for a stage1 installation tarball build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=["chost"]
-		self.valid_values.extend(["update_seed","update_seed_command"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_stage_path(self):
-		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+self.settings["root_path"])
-		print "stage1 stage path is "+self.settings["stage_path"]
-
-	def set_root_path(self):
-		# sets the root path, relative to 'chroot_path', of the stage1 root
-		self.settings["root_path"]=normpath("/tmp/stage1root")
-		print "stage1 root path is "+self.settings["root_path"]
-
-	def set_cleanables(self):
-		generic_stage_target.set_cleanables(self)
-		self.settings["cleanables"].extend([\
-		"/usr/share/zoneinfo", "/etc/portage/package*"])
-
-	# XXX: How do these override_foo() functions differ from the ones in generic_stage_target and why aren't they in stage3_target?
-
-	def override_chost(self):
-		if "chost" in self.settings:
-			self.settings["CHOST"]=list_to_string(self.settings["chost"])
-
-	def override_cflags(self):
-		if "cflags" in self.settings:
-			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
-
-	def override_cxxflags(self):
-		if "cxxflags" in self.settings:
-			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
-
-	def override_ldflags(self):
-		if "ldflags" in self.settings:
-			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
-
-	def set_portage_overlay(self):
-		generic_stage_target.set_portage_overlay(self)
-		if "portage_overlay" in self.settings:
-			print "\nWARNING !!!!!"
-			print "\tUsing an portage overlay for earlier stages could cause build issues."
-			print "\tIf you break it, you buy it. Don't complain to us about it."
-			print "\tDont say we did not warn you\n"
-
-	def base_dirs(self):
-		if os.uname()[0] == "FreeBSD":
-			# baselayout no longer creates the .keep files in proc and dev for FreeBSD as it
-			# would create them too late...we need them earlier before bind mounting filesystems
-			# since proc and dev are not writeable, so...create them here
-			if not os.path.exists(self.settings["stage_path"]+"/proc"):
-				os.makedirs(self.settings["stage_path"]+"/proc")
-			if not os.path.exists(self.settings["stage_path"]+"/dev"):
-				os.makedirs(self.settings["stage_path"]+"/dev")
-			if not os.path.isfile(self.settings["stage_path"]+"/proc/.keep"):
-				try:
-					proc_keepfile = open(self.settings["stage_path"]+"/proc/.keep","w")
-					proc_keepfile.write('')
-					proc_keepfile.close()
-				except IOError:
-					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/dev/.keep")
-			if not os.path.isfile(self.settings["stage_path"]+"/dev/.keep"):
-				try:
-					dev_keepfile = open(self.settings["stage_path"]+"/dev/.keep","w")
-					dev_keepfile.write('')
-					dev_keepfile.close()
-				except IOError:
-					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/dev/.keep")
-		else:
-			pass
-
-	def set_mounts(self):
-		# stage_path/proc probably doesn't exist yet, so create it
-		if not os.path.exists(self.settings["stage_path"]+"/proc"):
-			os.makedirs(self.settings["stage_path"]+"/proc")
-
-		# alter the mount mappings to bind mount proc onto it
-		self.mounts.append("stage1root/proc")
-		self.target_mounts["stage1root/proc"] = "/tmp/stage1root/proc"
-		self.mountmap["stage1root/proc"] = "/proc"
-
-def register(foo):
-	foo.update({"stage1":stage1_target})
-	return foo
diff --git a/catalyst/modules/stage2_target.py b/catalyst/modules/stage2_target.py
deleted file mode 100644
index 0168718..0000000
--- a/catalyst/modules/stage2_target.py
+++ /dev/null
@@ -1,66 +0,0 @@
-"""
-stage2 target, builds upon previous stage1 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst.support import *
-from generic_stage_target import *
-
-class stage2_target(generic_stage_target):
-	"""
-	Builder class for a stage2 installation tarball build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=["chost"]
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_source_path(self):
-		if "SEEDCACHE" in self.settings and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")):
-			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")
-		else:
-			self.settings["source_path"] = normpath(self.settings["storedir"] +
-				"/builds/" + self.settings["source_subpath"].rstrip("/") +
-				".tar.bz2")
-			if os.path.isfile(self.settings["source_path"]):
-				if os.path.exists(self.settings["source_path"]):
-				# XXX: Is this even necessary if the previous check passes?
-					self.settings["source_path_hash"]=generate_hash(self.settings["source_path"],\
-						hash_function=self.settings["hash_function"],verbose=False)
-		print "Source path set to "+self.settings["source_path"]
-		if os.path.isdir(self.settings["source_path"]):
-			print "\tIf this is not desired, remove this directory or turn of seedcache in the options of catalyst.conf"
-			print "\tthe source path will then be " + \
-				normpath(self.settings["storedir"] + "/builds/" + \
-				self.settings["source_subpath"].restrip("/") + ".tar.bz2\n")
-
-	# XXX: How do these override_foo() functions differ from the ones in
-	# generic_stage_target and why aren't they in stage3_target?
-
-	def override_chost(self):
-		if "chost" in self.settings:
-			self.settings["CHOST"]=list_to_string(self.settings["chost"])
-
-	def override_cflags(self):
-		if "cflags" in self.settings:
-			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
-
-	def override_cxxflags(self):
-		if "cxxflags" in self.settings:
-			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
-
-	def override_ldflags(self):
-		if "ldflags" in self.settings:
-			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
-
-	def set_portage_overlay(self):
-			generic_stage_target.set_portage_overlay(self)
-			if "portage_overlay" in self.settings:
-				print "\nWARNING !!!!!"
-				print "\tUsing an portage overlay for earlier stages could cause build issues."
-				print "\tIf you break it, you buy it. Don't complain to us about it."
-				print "\tDont say we did not warn you\n"
-
-def register(foo):
-	foo.update({"stage2":stage2_target})
-	return foo
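The `set_source_path()` logic above boils down to: prefer the unpacked stage1root from the seed cache when SEEDCACHE is set and the directory exists, otherwise fall back to the stage1 tarball. A hedged sketch of that selection (function name and paths are illustrative; note the original's `restrip` in the final print is a typo for `rstrip`):

```python
import os

def choose_source_path(settings, isdir=os.path.isdir):
    # mirrors stage2_target.set_source_path(): a SEEDCACHE'd stage1root
    # directory wins over the stage1 tarball
    cached = os.path.normpath(settings["storedir"] + "/tmp/"
                              + settings["source_subpath"] + "/tmp/stage1root/")
    if "SEEDCACHE" in settings and isdir(cached):
        return cached
    return os.path.normpath(settings["storedir"] + "/builds/"
                            + settings["source_subpath"].rstrip("/") + ".tar.bz2")

settings = {"storedir": "/var/tmp/catalyst",
            "source_subpath": "default/stage1-amd64-2014"}
# without SEEDCACHE we fall through to the tarball
assert choose_source_path(settings).endswith("stage1-amd64-2014.tar.bz2")
settings["SEEDCACHE"] = True
assert choose_source_path(settings, isdir=lambda p: True).endswith("/tmp/stage1root")
```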
diff --git a/catalyst/modules/stage3_target.py b/catalyst/modules/stage3_target.py
deleted file mode 100644
index 89edd66..0000000
--- a/catalyst/modules/stage3_target.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
-stage3 target, builds upon previous stage2/stage3 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst.support import *
-from generic_stage_target import *
-
-class stage3_target(generic_stage_target):
-	"""
-	Builder class for a stage3 installation tarball build.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=[]
-		self.valid_values=[]
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_portage_overlay(self):
-		generic_stage_target.set_portage_overlay(self)
-		if "portage_overlay" in self.settings:
-			print "\nWARNING !!!!!"
-			print "\tUsing an overlay for earlier stages could cause build issues."
-			print "\tIf you break it, you buy it. Don't complain to us about it."
-			print "\tDont say we did not warn you\n"
-
-	def set_cleanables(self):
-		generic_stage_target.set_cleanables(self)
-
-def register(foo):
-	foo.update({"stage3":stage3_target})
-	return foo
diff --git a/catalyst/modules/stage4_target.py b/catalyst/modules/stage4_target.py
deleted file mode 100644
index 9168f2e..0000000
--- a/catalyst/modules/stage4_target.py
+++ /dev/null
@@ -1,43 +0,0 @@
-"""
-stage4 target, builds upon previous stage3/stage4 tarball
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst.support import *
-from generic_stage_target import *
-
-class stage4_target(generic_stage_target):
-	"""
-	Builder class for stage4.
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["stage4/packages"]
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend(["stage4/use","boot/kernel",\
-				"stage4/root_overlay","stage4/fsscript",\
-				"stage4/gk_mainargs","splash_theme",\
-				"portage_overlay","stage4/rcadd","stage4/rcdel",\
-				"stage4/linuxrc","stage4/unmerge","stage4/rm","stage4/empty"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def set_cleanables(self):
-		self.settings["cleanables"]=["/var/tmp/*","/tmp/*"]
-
-	def set_action_sequence(self):
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-					"config_profile_link","setup_confdir","portage_overlay",\
-					"bind","chroot_setup","setup_environment","build_packages",\
-					"build_kernel","bootloader","root_overlay","fsscript",\
-					"preclean","rcupdate","unmerge","unbind","remove","empty",\
-					"clean"]
-
-#		if "TARBALL" in self.settings or \
-#			"FETCH" not in self.settings:
-		if "FETCH" not in self.settings:
-			self.settings["action_sequence"].append("capture")
-		self.settings["action_sequence"].append("clear_autoresume")
-
-def register(foo):
-	foo.update({"stage4":stage4_target})
-	return foo
-
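stage4's `set_action_sequence()` above appends `capture` only when FETCH is unset, and always finishes with `clear_autoresume`. A small sketch of that conditional tail (the sequence head is abbreviated):

```python
def stage4_sequence(settings):
    # the tail of stage4_target.set_action_sequence(): capture the tarball
    # unless FETCH suppresses it, then always clear autoresume state
    seq = ["unpack", "build_packages", "clean"]  # abbreviated head
    if "FETCH" not in settings:
        seq.append("capture")
    seq.append("clear_autoresume")
    return seq

assert stage4_sequence({})[-2:] == ["capture", "clear_autoresume"]
assert stage4_sequence({"FETCH": True})[-1] == "clear_autoresume"
assert "capture" not in stage4_sequence({"FETCH": True})
```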
diff --git a/catalyst/modules/tinderbox_target.py b/catalyst/modules/tinderbox_target.py
deleted file mode 100644
index 1d31989..0000000
--- a/catalyst/modules/tinderbox_target.py
+++ /dev/null
@@ -1,48 +0,0 @@
-"""
-Tinderbox target
-"""
-# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-
-from catalyst.support import *
-from generic_stage_target import *
-
-class tinderbox_target(generic_stage_target):
-	"""
-	Builder class for the tinderbox target
-	"""
-	def __init__(self,spec,addlargs):
-		self.required_values=["tinderbox/packages"]
-		self.valid_values=self.required_values[:]
-		self.valid_values.extend(["tinderbox/use"])
-		generic_stage_target.__init__(self,spec,addlargs)
-
-	def run_local(self):
-		# tinderbox
-		# example call: "grp.sh run xmms vim sys-apps/gleep"
-		try:
-			if os.path.exists(self.settings["controller_file"]):
-			    cmd("/bin/bash "+self.settings["controller_file"]+" run "+\
-				list_bashify(self.settings["tinderbox/packages"]),"run script failed.",env=self.env)
-
-		except CatalystError:
-			self.unbind()
-			raise CatalystError,"Tinderbox aborting due to error."
-
-	def set_cleanables(self):
-		self.settings['cleanables'] = [
-			'/etc/resolv.conf',
-			'/var/tmp/*',
-			'/root/*',
-			self.settings['portdir'],
-			]
-
-	def set_action_sequence(self):
-		#Default action sequence for run method
-		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
-		              "config_profile_link","setup_confdir","bind","chroot_setup",\
-		              "setup_environment","run_local","preclean","unbind","clean",\
-		              "clear_autoresume"]
-
-def register(foo):
-	foo.update({"tinderbox":tinderbox_target})
-	return foo
diff --git a/catalyst/targets/__init__.py b/catalyst/targets/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/catalyst/targets/__init__.py
@@ -0,0 +1 @@
+
diff --git a/catalyst/targets/embedded_target.py b/catalyst/targets/embedded_target.py
new file mode 100644
index 0000000..7cee7a6
--- /dev/null
+++ b/catalyst/targets/embedded_target.py
@@ -0,0 +1,51 @@
+"""
+Embedded target, similar to the stage2 target, builds upon a stage2 tarball.
+
+A stage2 tarball is unpacked, but instead
+of building a stage3, it emerges @system into another directory
+inside the stage2 system.  This way, we do not have to emerge GCC/portage
+into the staged system.
+It may sound complicated, but basically it runs
+ROOT=/tmp/submerge emerge --something foo bar .
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,imp,types,shutil
+from catalyst.support import *
+from generic_stage_target import *
+from stat import *
+
+class embedded_target(generic_stage_target):
+	"""
+	Builder class for embedded target
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=[]
+		self.valid_values.extend(["embedded/empty","embedded/rm","embedded/unmerge","embedded/fs-prepare","embedded/fs-finish","embedded/mergeroot","embedded/packages","embedded/fs-type","embedded/runscript","boot/kernel","embedded/linuxrc"])
+		self.valid_values.extend(["embedded/use"])
+		if "embedded/fs-type" in addlargs:
+			self.valid_values.append("embedded/fs-ops")
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		self.set_build_kernel_vars(addlargs)
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["dir_setup","unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir",\
+					"portage_overlay","bind","chroot_setup",\
+					"setup_environment","build_kernel","build_packages",\
+					"bootloader","root_overlay","fsscript","unmerge",\
+					"unbind","remove","empty","clean","capture","clear_autoresume"]
+
+	def set_stage_path(self):
+		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+"/tmp/mergeroot")
+		print "embedded stage path is "+self.settings["stage_path"]
+
+	def set_root_path(self):
+		self.settings["root_path"]=normpath("/tmp/mergeroot")
+		print "embedded root path is "+self.settings["root_path"]
+
+def register(foo):
+	foo.update({"embedded":embedded_target})
+	return foo
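The embedded target's trick, per its docstring, is running emerge with an alternate `ROOT` so packages land in a subdirectory instead of the running stage2. A hedged sketch of invoking a command with such an environment (the emerge arguments are illustrative, and `dry_run` only formats the command):

```python
import os
import subprocess

def emerge_into(root, packages, emerge="emerge", dry_run=True):
    # Build the environment the embedded target relies on: portage
    # installs into ROOT instead of /
    env = dict(os.environ, ROOT=root)
    cmd = [emerge] + list(packages)
    if dry_run:
        # for illustration, just show what would run
        return "ROOT=%s %s" % (root, " ".join(cmd))
    return subprocess.call(cmd, env=env)

line = emerge_into("/tmp/mergeroot", ["@system"])
assert line == "ROOT=/tmp/mergeroot emerge @system"
```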
diff --git a/catalyst/targets/generic_stage_target.py b/catalyst/targets/generic_stage_target.py
new file mode 100644
index 0000000..2c1a921
--- /dev/null
+++ b/catalyst/targets/generic_stage_target.py
@@ -0,0 +1,1741 @@
+import os,string,imp,types,shutil
+from catalyst.support import *
+from generic_target import *
+from stat import *
+from catalyst.lock import LockDir
+
+
+PORT_LOGDIR_CLEAN = \
+	'find "${PORT_LOGDIR}" -type f ! -name "summary.log*" -mtime +30 -delete'
+
+TARGET_MOUNTS_DEFAULTS = {
+	"ccache": "/var/tmp/ccache",
+	"dev": "/dev",
+	"devpts": "/dev/pts",
+	"distdir": "/usr/portage/distfiles",
+	"icecream": "/usr/lib/icecc/bin",
+	"kerncache": "/tmp/kerncache",
+	"packagedir": "/usr/portage/packages",
+	"portdir": "/usr/portage",
+	"port_tmpdir": "/var/tmp/portage",
+	"port_logdir": "/var/log/portage",
+	"proc": "/proc",
+	"shm": "/dev/shm",
+	}
+
+SOURCE_MOUNTS_DEFAULTS = {
+	"dev": "/dev",
+	"devpts": "/dev/pts",
+	"distdir": "/usr/portage/distfiles",
+	"portdir": "/usr/portage",
+	"port_tmpdir": "tmpfs",
+	"proc": "/proc",
+	"shm": "shmfs",
+	}
+
+
+class generic_stage_target(generic_target):
+	"""
+	This class does all of the chroot setup, copying of files, etc. It is
+	the driver class for pretty much everything that Catalyst does.
+	"""
+	def __init__(self,myspec,addlargs):
+		self.required_values.extend(["version_stamp","target","subarch",\
+			"rel_type","profile","snapshot","source_subpath"])
+
+		self.valid_values.extend(["version_stamp","target","subarch",\
+			"rel_type","profile","snapshot","source_subpath","portage_confdir",\
+			"cflags","cxxflags","ldflags","cbuild","hostuse","portage_overlay",\
+			"distcc_hosts","makeopts","pkgcache_path","kerncache_path"])
+
+		self.set_valid_build_kernel_vars(addlargs)
+		generic_target.__init__(self,myspec,addlargs)
+
+		"""
+		The semantics of subarchmap and machinemap changed a bit in 2.0.3 to
+		work better with vapier's CBUILD stuff. I've removed the "monolithic"
+		machinemap from this file and split up its contents amongst the
+		various arch/foo.py files.
+
+		When register() is called on each module in the arch/ dir, it now
+		returns a tuple instead of acting on the subarchmap dict that is
+		passed to it. The tuple contains the values that were previously
+		added to subarchmap as well as a new list of CHOSTs that go along
+		with that arch. This allows us to build machinemap on the fly based
+		on the keys in subarchmap and the values of the 2nd list returned
+		(tmpmachinemap).
+
+		Also, after talking with vapier, I have a slightly better idea of what
+		certain variables are used for and what they should be set to. Neither
+		'buildarch' nor 'hostarch' is used directly, so their value doesn't
+		really matter. They are just compared to determine if we are
+		cross-compiling. Because of this, they are just set to the name of the
+		module in arch/ that the subarch is part of to make things simpler.
+		The entire build process is still based off of 'subarch' like it was
+		previously. -agaffney
+		"""
+
+		self.archmap = {}
+		self.subarchmap = {}
+		machinemap = {}
+		arch_dir = self.settings["PythonDir"] + "/arch/"
+		for x in [x[:-3] for x in os.listdir(arch_dir) if x.endswith(".py")]:
+			if x == "__init__":
+				continue
+			try:
+				fh=open(arch_dir + x + ".py")
+				"""
+				This next line loads the plugin as a module and assigns it to
+				archmap[x]
+				"""
+				self.archmap[x]=imp.load_module(x,fh,"../arch/" + x + ".py",
+					(".py", "r", imp.PY_SOURCE))
+				"""
+				This next line registers all the subarches supported in the
+				plugin
+				"""
+				tmpsubarchmap, tmpmachinemap = self.archmap[x].register()
+				self.subarchmap.update(tmpsubarchmap)
+				for machine in tmpmachinemap:
+					machinemap[machine] = x
+				for subarch in tmpsubarchmap:
+					machinemap[subarch] = x
+				fh.close()
+			except IOError:
+				"""
+				This message should probably change a bit, since everything in
+				the dir should load just fine. If it doesn't, it's probably a
+				syntax error in the module
+				"""
+				msg("Can't find/load " + x + ".py plugin in " + arch_dir)
+
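The arch-plugin loop above relies on `imp.load_module`, which is deprecated on python3. If this loader were ported, `importlib` provides the equivalent; a hedged sketch of the same scan-and-register loop (directory layout and the `register()` contract as described in the comment block above):

```python
import importlib.util
import os

def load_arch_plugins(arch_dir):
    """Scan arch_dir for *.py plugins (skipping __init__) and collect the
    subarch and machine maps their register() hooks return -- an
    importlib-based equivalent of the imp.load_module loop."""
    archmap, subarchmap, machinemap = {}, {}, {}
    for name in sorted(os.listdir(arch_dir)):
        if not name.endswith(".py") or name == "__init__.py":
            continue
        modname = name[:-3]
        spec = importlib.util.spec_from_file_location(
            modname, os.path.join(arch_dir, name))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        archmap[modname] = module
        # register all subarches the plugin supports, and map both its
        # machine names and subarch names back to the arch module
        tmpsubarchmap, tmpmachinemap = module.register()
        subarchmap.update(tmpsubarchmap)
        for machine in tmpmachinemap:
            machinemap[machine] = modname
        for subarch in tmpsubarchmap:
            machinemap[subarch] = modname
    return archmap, subarchmap, machinemap
```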
+		if "chost" in self.settings:
+			hostmachine = self.settings["chost"].split("-")[0]
+			if hostmachine not in machinemap:
+				raise CatalystError, "Unknown host machine type "+hostmachine
+			self.settings["hostarch"]=machinemap[hostmachine]
+		else:
+			hostmachine = self.settings["subarch"]
+			if hostmachine in machinemap:
+				hostmachine = machinemap[hostmachine]
+			self.settings["hostarch"]=hostmachine
+		if "cbuild" in self.settings:
+			buildmachine = self.settings["cbuild"].split("-")[0]
+		else:
+			buildmachine = os.uname()[4]
+		if buildmachine not in machinemap:
+			raise CatalystError, "Unknown build machine type "+buildmachine
+		self.settings["buildarch"]=machinemap[buildmachine]
+		self.settings["crosscompile"]=(self.settings["hostarch"]!=\
+			self.settings["buildarch"])
+
+		""" Call arch constructor, pass our settings """
+		try:
+			self.arch=self.subarchmap[self.settings["subarch"]](self.settings)
+		except KeyError:
+			print "Invalid subarch: "+self.settings["subarch"]
+			print "Choose one of the following:",
+			for x in self.subarchmap:
+				print x,
+			print
+			sys.exit(2)
+
+		print "Using target:",self.settings["target"]
+		""" Print a nice informational message """
+		if self.settings["buildarch"]==self.settings["hostarch"]:
+			print "Building natively for",self.settings["hostarch"]
+		elif self.settings["crosscompile"]:
+			print "Cross-compiling on",self.settings["buildarch"],\
+				"for different machine type",self.settings["hostarch"]
+		else:
+			print "Building on",self.settings["buildarch"],\
+				"for alternate personality type",self.settings["hostarch"]
+
+		""" This must be set first as other set_ options depend on this """
+		self.set_spec_prefix()
+
+		""" Define all of our core variables """
+		self.set_target_profile()
+		self.set_target_subpath()
+		self.set_source_subpath()
+
+		""" Set paths """
+		self.set_snapshot_path()
+		self.set_root_path()
+		self.set_source_path()
+		self.set_snapcache_path()
+		self.set_chroot_path()
+		self.set_autoresume_path()
+		self.set_dest_path()
+		self.set_stage_path()
+		self.set_target_path()
+
+		self.set_controller_file()
+		self.set_action_sequence()
+		self.set_use()
+		self.set_cleanables()
+		self.set_iso_volume_id()
+		self.set_build_kernel_vars()
+		self.set_fsscript()
+		self.set_install_mask()
+		self.set_rcadd()
+		self.set_rcdel()
+		self.set_cdtar()
+		self.set_fstype()
+		self.set_fsops()
+		self.set_iso()
+		self.set_packages()
+		self.set_rm()
+		self.set_linuxrc()
+		self.set_busybox_config()
+		self.set_overlay()
+		self.set_portage_overlay()
+		self.set_root_overlay()
+
+		"""
+		This next line checks to make sure that the specified variables exist
+		on disk.
+		"""
+		#pdb.set_trace()
+		file_locate(self.settings,["source_path","snapshot_path","distdir"],\
+			expand=0)
+		""" If we are using portage_confdir, check that as well. """
+		if "portage_confdir" in self.settings:
+			file_locate(self.settings,["portage_confdir"],expand=0)
+
+		""" Setup our mount points """
+		# initialize our target mounts.
+		self.target_mounts = TARGET_MOUNTS_DEFAULTS.copy()
+
+		self.mounts = ["proc", "dev", "portdir", "distdir", "port_tmpdir"]
+		# initialize our source mounts
+		self.mountmap = SOURCE_MOUNTS_DEFAULTS.copy()
+		# update them from settings
+		self.mountmap["distdir"] = self.settings["distdir"]
+		self.mountmap["portdir"] = normpath("/".join([
+			self.settings["snapshot_cache_path"],
+			self.settings["repo_name"],
+			]))
+		if "SNAPCACHE" not in self.settings:
+			self.mounts.remove("portdir")
+			#self.mountmap["portdir"] = None
+		if os.uname()[0] == "Linux":
+			self.mounts.append("devpts")
+			self.mounts.append("shm")
+
+		self.set_mounts()
+
+		"""
+		Configure any user specified options (either in catalyst.conf or on
+		the command line).
+		"""
+		if "PKGCACHE" in self.settings:
+			self.set_pkgcache_path()
+			print "Location of the package cache is "+\
+				self.settings["pkgcache_path"]
+			self.mounts.append("packagedir")
+			self.mountmap["packagedir"] = self.settings["pkgcache_path"]
+
+		if "KERNCACHE" in self.settings:
+			self.set_kerncache_path()
+			print "Location of the kerncache is "+\
+				self.settings["kerncache_path"]
+			self.mounts.append("kerncache")
+			self.mountmap["kerncache"] = self.settings["kerncache_path"]
+
+		if "CCACHE" in self.settings:
+			if "CCACHE_DIR" in os.environ:
+				ccdir=os.environ["CCACHE_DIR"]
+				del os.environ["CCACHE_DIR"]
+			else:
+				ccdir="/root/.ccache"
+			if not os.path.isdir(ccdir):
+				raise CatalystError,\
+					"Compiler cache support can't be enabled (can't find "+\
+					ccdir+")"
+			self.mounts.append("ccache")
+			self.mountmap["ccache"] = ccdir
+			""" for the chroot: """
+			self.env["CCACHE_DIR"] = self.target_mounts["ccache"]
+
+		if "ICECREAM" in self.settings:
+			self.mounts.append("icecream")
+			self.mountmap["icecream"] = self.settings["icecream"]
+			self.env["PATH"] = self.target_mounts["icecream"] + ":" + \
+				self.env["PATH"]
+
+		if "port_logdir" in self.settings:
+			self.mounts.append("port_logdir")
+			self.mountmap["port_logdir"] = self.settings["port_logdir"]
+			self.env["PORT_LOGDIR"] = self.settings["port_logdir"]
+			self.env["PORT_LOGDIR_CLEAN"] = PORT_LOGDIR_CLEAN
+
+	def override_cbuild(self):
+		if "CBUILD" in self.makeconf:
+			self.settings["CBUILD"]=self.makeconf["CBUILD"]
+
+	def override_chost(self):
+		if "CHOST" in self.makeconf:
+			self.settings["CHOST"]=self.makeconf["CHOST"]
+
+	def override_cflags(self):
+		if "CFLAGS" in self.makeconf:
+			self.settings["CFLAGS"]=self.makeconf["CFLAGS"]
+
+	def override_cxxflags(self):
+		if "CXXFLAGS" in self.makeconf:
+			self.settings["CXXFLAGS"]=self.makeconf["CXXFLAGS"]
+
+	def override_ldflags(self):
+		if "LDFLAGS" in self.makeconf:
+			self.settings["LDFLAGS"]=self.makeconf["LDFLAGS"]
+
+	def set_install_mask(self):
+		if "install_mask" in self.settings:
+			if type(self.settings["install_mask"])!=types.StringType:
+				self.settings["install_mask"]=\
+					string.join(self.settings["install_mask"])
+
+	def set_spec_prefix(self):
+		self.settings["spec_prefix"]=self.settings["target"]
+
+	def set_target_profile(self):
+		self.settings["target_profile"]=self.settings["profile"]
+
+	def set_target_subpath(self):
+		self.settings["target_subpath"]=self.settings["rel_type"]+"/"+\
+				self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
+				self.settings["version_stamp"]
+
+	def set_source_subpath(self):
+		if type(self.settings["source_subpath"])!=types.StringType:
+			raise CatalystError,\
+				"source_subpath should have been a string. Perhaps you have something wrong in your spec file?"
+
+	def set_pkgcache_path(self):
+		if "pkgcache_path" in self.settings:
+			if type(self.settings["pkgcache_path"])!=types.StringType:
+				self.settings["pkgcache_path"]=\
+					normpath(string.join(self.settings["pkgcache_path"]))
+		else:
+			self.settings["pkgcache_path"]=\
+				normpath(self.settings["storedir"]+"/packages/"+\
+				self.settings["target_subpath"]+"/")
+
+	def set_kerncache_path(self):
+		if "kerncache_path" in self.settings:
+			if type(self.settings["kerncache_path"])!=types.StringType:
+				self.settings["kerncache_path"]=\
+					normpath(string.join(self.settings["kerncache_path"]))
+		else:
+			self.settings["kerncache_path"]=normpath(self.settings["storedir"]+\
+				"/kerncache/"+self.settings["target_subpath"]+"/")
+
+	def set_target_path(self):
+		self.settings["target_path"] = normpath(self.settings["storedir"] +
+			"/builds/" + self.settings["target_subpath"].rstrip('/') +
+			".tar.bz2")
+		if "AUTORESUME" in self.settings\
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"setup_target_path"):
+			print \
+				"Resume point detected, skipping target path setup operation..."
+		else:
+			""" First clean up any existing target stuff """
+			# XXX WTF are we removing the old tarball before we start building the
+			# XXX new one? If the build fails, you don't want to be left with
+			# XXX nothing at all
+#			if os.path.isfile(self.settings["target_path"]):
+#				cmd("rm -f "+self.settings["target_path"],\
+#					"Could not remove existing file: "\
+#					+self.settings["target_path"],env=self.env)
+			touch(self.settings["autoresume_path"]+"setup_target_path")
+
+			if not os.path.exists(self.settings["storedir"]+"/builds/"):
+				os.makedirs(self.settings["storedir"]+"/builds/")
+
+	def set_fsscript(self):
+		if self.settings["spec_prefix"]+"/fsscript" in self.settings:
+			self.settings["fsscript"]=\
+				self.settings[self.settings["spec_prefix"]+"/fsscript"]
+			del self.settings[self.settings["spec_prefix"]+"/fsscript"]
+
+	def set_rcadd(self):
+		if self.settings["spec_prefix"]+"/rcadd" in self.settings:
+			self.settings["rcadd"]=\
+				self.settings[self.settings["spec_prefix"]+"/rcadd"]
+			del self.settings[self.settings["spec_prefix"]+"/rcadd"]
+
+	def set_rcdel(self):
+		if self.settings["spec_prefix"]+"/rcdel" in self.settings:
+			self.settings["rcdel"]=\
+				self.settings[self.settings["spec_prefix"]+"/rcdel"]
+			del self.settings[self.settings["spec_prefix"]+"/rcdel"]
+
+	def set_cdtar(self):
+		if self.settings["spec_prefix"]+"/cdtar" in self.settings:
+			self.settings["cdtar"]=\
+				normpath(self.settings[self.settings["spec_prefix"]+"/cdtar"])
+			del self.settings[self.settings["spec_prefix"]+"/cdtar"]
+
+	def set_iso(self):
+		if self.settings["spec_prefix"]+"/iso" in self.settings:
+			if self.settings[self.settings["spec_prefix"]+"/iso"].startswith('/'):
+				self.settings["iso"]=\
+					normpath(self.settings[self.settings["spec_prefix"]+"/iso"])
+			else:
+				# This automatically prepends the build dir to the ISO output path
+				# if it doesn't start with a /
+				self.settings["iso"] = normpath(self.settings["storedir"] + \
+					"/builds/" + self.settings["rel_type"] + "/" + \
+					self.settings[self.settings["spec_prefix"]+"/iso"])
+			del self.settings[self.settings["spec_prefix"]+"/iso"]
+
+	def set_fstype(self):
+		if self.settings["spec_prefix"]+"/fstype" in self.settings:
+			self.settings["fstype"]=\
+				self.settings[self.settings["spec_prefix"]+"/fstype"]
+			del self.settings[self.settings["spec_prefix"]+"/fstype"]
+
+		if "fstype" not in self.settings:
+			self.settings["fstype"]="normal"
+			for x in self.valid_values:
+				if x ==  self.settings["spec_prefix"]+"/fstype":
+					print "\n"+self.settings["spec_prefix"]+\
+						"/fstype is being set to the default of \"normal\"\n"
+
+	def set_fsops(self):
+		if "fstype" in self.settings:
+			self.valid_values.append("fsops")
+			if self.settings["spec_prefix"]+"/fsops" in self.settings:
+				self.settings["fsops"]=\
+					self.settings[self.settings["spec_prefix"]+"/fsops"]
+				del self.settings[self.settings["spec_prefix"]+"/fsops"]
+
+	def set_source_path(self):
+		if "SEEDCACHE" in self.settings\
+			and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+\
+				self.settings["source_subpath"]+"/")):
+			self.settings["source_path"]=normpath(self.settings["storedir"]+\
+				"/tmp/"+self.settings["source_subpath"]+"/")
+		else:
+			self.settings["source_path"] = normpath(self.settings["storedir"] +
+				"/builds/" + self.settings["source_subpath"].rstrip("/") +
+				".tar.bz2")
+			if os.path.isfile(self.settings["source_path"]):
+				# XXX: Is this even necessary if the previous check passes?
+				if os.path.exists(self.settings["source_path"]):
+					self.settings["source_path_hash"]=\
+						generate_hash(self.settings["source_path"],\
+						hash_function=self.settings["hash_function"],\
+						verbose=False)
+		print "Source path set to "+self.settings["source_path"]
+		if os.path.isdir(self.settings["source_path"]):
+			print "\tIf this is not desired, remove this directory or turn off"
+			print "\tseedcache in the options of catalyst.conf; the source path"
+			print "\twill then be "+\
+				normpath(self.settings["storedir"] + "/builds/" +
+					self.settings["source_subpath"].rstrip("/") + ".tar.bz2\n")
+
+	def set_dest_path(self):
+		if "root_path" in self.settings:
+			self.settings["destpath"]=normpath(self.settings["chroot_path"]+\
+				self.settings["root_path"])
+		else:
+			self.settings["destpath"]=normpath(self.settings["chroot_path"])
+
+	def set_cleanables(self):
+		self.settings["cleanables"]=["/etc/resolv.conf","/var/tmp/*","/tmp/*",\
+			"/root/*", self.settings["portdir"]]
+
+	def set_snapshot_path(self):
+		self.settings["snapshot_path"] = normpath(self.settings["storedir"] +
+			"/snapshots/" + self.settings["snapshot_name"] +
+			self.settings["snapshot"].rstrip("/") + ".tar.xz")
+
+		if os.path.exists(self.settings["snapshot_path"]):
+			self.settings["snapshot_path_hash"]=\
+				generate_hash(self.settings["snapshot_path"],\
+				hash_function=self.settings["hash_function"],verbose=False)
+		else:
+			self.settings["snapshot_path"]=normpath(self.settings["storedir"]+\
+				"/snapshots/" + self.settings["snapshot_name"] +
+				self.settings["snapshot"].rstrip("/") + ".tar.bz2")
+
+			if os.path.exists(self.settings["snapshot_path"]):
+				self.settings["snapshot_path_hash"]=\
+					generate_hash(self.settings["snapshot_path"],\
+					hash_function=self.settings["hash_function"],verbose=False)
+
+	def set_snapcache_path(self):
+		if "SNAPCACHE" in self.settings:
+			self.settings["snapshot_cache_path"]=\
+				normpath(self.settings["snapshot_cache"]+"/"+\
+				self.settings["snapshot"])
+			self.snapcache_lock=\
+				LockDir(self.settings["snapshot_cache_path"])
+			print "Caching snapshot to "+self.settings["snapshot_cache_path"]
+
+	def set_chroot_path(self):
+		"""
+		NOTE: the trailing slash has been removed
+		Things *could* break if you don't use a proper join()
+		"""
+		self.settings["chroot_path"]=normpath(self.settings["storedir"]+\
+			"/tmp/"+self.settings["target_subpath"])
+		self.chroot_lock=LockDir(self.settings["chroot_path"])
+
+	def set_autoresume_path(self):
+		self.settings["autoresume_path"]=normpath(self.settings["storedir"]+\
+			"/tmp/"+self.settings["rel_type"]+"/"+".autoresume-"+\
+			self.settings["target"]+"-"+self.settings["subarch"]+"-"+\
+			self.settings["version_stamp"]+"/")
+		if "AUTORESUME" in self.settings:
+			print "The autoresume path is " + self.settings["autoresume_path"]
+		if not os.path.exists(self.settings["autoresume_path"]):
+			os.makedirs(self.settings["autoresume_path"],0755)
+
+	def set_controller_file(self):
+		self.settings["controller_file"]=normpath(self.settings["sharedir"]+\
+			"/targets/"+self.settings["target"]+"/"+self.settings["target"]+\
+			"-controller.sh")
+
+	def set_iso_volume_id(self):
+		if self.settings["spec_prefix"]+"/volid" in self.settings:
+			self.settings["iso_volume_id"]=\
+				self.settings[self.settings["spec_prefix"]+"/volid"]
+			if len(self.settings["iso_volume_id"])>32:
+				raise CatalystError,\
+					"ISO volume ID must not exceed 32 characters."
+		else:
+			self.settings["iso_volume_id"]="catalyst "+self.settings["snapshot"]
+
+	def set_action_sequence(self):
+		""" Default action sequence for run method """
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+				"setup_confdir","portage_overlay",\
+				"base_dirs","bind","chroot_setup","setup_environment",\
+				"run_local","preclean","unbind","clean"]
+#		if "TARBALL" in self.settings or \
+#			"FETCH" not in self.settings:
+		if "FETCH" not in self.settings:
+			self.settings["action_sequence"].append("capture")
+		self.settings["action_sequence"].append("clear_autoresume")
+
+	def set_use(self):
+		if self.settings["spec_prefix"]+"/use" in self.settings:
+			self.settings["use"]=\
+				self.settings[self.settings["spec_prefix"]+"/use"]
+			del self.settings[self.settings["spec_prefix"]+"/use"]
+		if "use" not in self.settings:
+			self.settings["use"]=""
+		if type(self.settings["use"])==types.StringType:
+			self.settings["use"]=self.settings["use"].split()
+
+		# Force bindist when options ask for it
+		if "BINDIST" in self.settings:
+			self.settings["use"].append("bindist")
+
+	def set_stage_path(self):
+		self.settings["stage_path"]=normpath(self.settings["chroot_path"])
+
+	def set_mounts(self):
+		pass
+
+	def set_packages(self):
+		pass
+
+	def set_rm(self):
+		if self.settings["spec_prefix"]+"/rm" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/rm"])==types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/rm"]=\
+					self.settings[self.settings["spec_prefix"]+"/rm"].split()
+
+	def set_linuxrc(self):
+		if self.settings["spec_prefix"]+"/linuxrc" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/linuxrc"])==types.StringType:
+				self.settings["linuxrc"]=\
+					self.settings[self.settings["spec_prefix"]+"/linuxrc"]
+				del self.settings[self.settings["spec_prefix"]+"/linuxrc"]
+
+	def set_busybox_config(self):
+		if self.settings["spec_prefix"]+"/busybox_config" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/busybox_config"])==types.StringType:
+				self.settings["busybox_config"]=\
+					self.settings[self.settings["spec_prefix"]+"/busybox_config"]
+				del self.settings[self.settings["spec_prefix"]+"/busybox_config"]
+
+	def set_portage_overlay(self):
+		if "portage_overlay" in self.settings:
+			if type(self.settings["portage_overlay"])==types.StringType:
+				self.settings["portage_overlay"]=\
+					self.settings["portage_overlay"].split()
+			print "portage_overlay directories are set to: \""+\
+				string.join(self.settings["portage_overlay"])+"\""
+
+	def set_overlay(self):
+		if self.settings["spec_prefix"]+"/overlay" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/overlay"])==types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/overlay"]=\
+					self.settings[self.settings["spec_prefix"]+\
+					"/overlay"].split()
+
+	def set_root_overlay(self):
+		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+\
+				"/root_overlay"])==types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/root_overlay"]=\
+					self.settings[self.settings["spec_prefix"]+\
+					"/root_overlay"].split()
+
+	def set_root_path(self):
+		""" ROOT= variable for emerges """
+		self.settings["root_path"]="/"
+
+	def set_valid_build_kernel_vars(self,addlargs):
+		if "boot/kernel" in addlargs:
+			if type(addlargs["boot/kernel"])==types.StringType:
+				loopy=[addlargs["boot/kernel"]]
+			else:
+				loopy=addlargs["boot/kernel"]
+
+			for x in loopy:
+				self.valid_values.append("boot/kernel/"+x+"/aliases")
+				self.valid_values.append("boot/kernel/"+x+"/config")
+				self.valid_values.append("boot/kernel/"+x+"/console")
+				self.valid_values.append("boot/kernel/"+x+"/extraversion")
+				self.valid_values.append("boot/kernel/"+x+"/gk_action")
+				self.valid_values.append("boot/kernel/"+x+"/gk_kernargs")
+				self.valid_values.append("boot/kernel/"+x+"/initramfs_overlay")
+				self.valid_values.append("boot/kernel/"+x+"/machine_type")
+				self.valid_values.append("boot/kernel/"+x+"/sources")
+				self.valid_values.append("boot/kernel/"+x+"/softlevel")
+				self.valid_values.append("boot/kernel/"+x+"/use")
+				self.valid_values.append("boot/kernel/"+x+"/packages")
+				if "boot/kernel/"+x+"/packages" in addlargs:
+					if type(addlargs["boot/kernel/"+x+\
+						"/packages"])==types.StringType:
+						addlargs["boot/kernel/"+x+"/packages"]=\
+							[addlargs["boot/kernel/"+x+"/packages"]]
+
+	def set_build_kernel_vars(self):
+		if self.settings["spec_prefix"]+"/gk_mainargs" in self.settings:
+			self.settings["gk_mainargs"]=\
+				self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
+			del self.settings[self.settings["spec_prefix"]+"/gk_mainargs"]
+
+	def kill_chroot_pids(self):
+		print "Checking for processes running in chroot and killing them."
+
+		"""
+		Force environment variables to be exported so script can see them
+		"""
+		self.setup_environment()
+
+		if os.path.exists(self.settings["sharedir"]+\
+			"/targets/support/kill-chroot-pids.sh"):
+			cmd("/bin/bash "+self.settings["sharedir"]+\
+				"/targets/support/kill-chroot-pids.sh",\
+				"kill-chroot-pids script failed.",env=self.env)
+
+	def mount_safety_check(self):
+		"""
+		Check and verify that none of our chroot mount points are still
+		mounted. We don't want to clean up with things still mounted.
+		Raises CatalystError if something is mounted and auto-unbind fails.
+		"""
+
+		if not os.path.exists(self.settings["chroot_path"]):
+			return
+
+		print "self.mounts =", self.mounts
+		for x in self.mounts:
+			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
+			print "mount_safety_check() x =", x, target
+			if not os.path.exists(target):
+				continue
+
+			if ismount(target):
+				""" Something is still mounted """
+				try:
+					print target + " is still mounted; performing auto-bind-umount...",
+					""" Try to umount stuff ourselves """
+					self.unbind()
+					if ismount(target):
+						raise CatalystError, "Auto-unbind failed for " + target
+					else:
+						print "Auto-unbind successful..."
+				except CatalystError:
+					raise CatalystError, "Unable to auto-unbind " + target
+
+	def unpack(self):
+		unpack=True
+
+		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+\
+			"unpack")
+
+		if "SEEDCACHE" in self.settings:
+			if os.path.isdir(self.settings["source_path"]):
+				""" SEEDCACHE Is a directory, use rsync """
+				unpack_cmd="rsync -a --delete "+self.settings["source_path"]+\
+					" "+self.settings["chroot_path"]
+				display_msg="\nStarting rsync from "+\
+					self.settings["source_path"]+"\nto "+\
+					self.settings["chroot_path"]+\
+					" (This may take some time) ...\n"
+				error_msg="Rsync of "+self.settings["source_path"]+" to "+\
+					self.settings["chroot_path"]+" failed."
+			else:
+				""" SEEDCACHE is not a directory, try untarring """
+				print "Referenced SEEDCACHE does not appear to be a directory, trying to untar..."
+				display_msg="\nStarting tar extract from "+\
+					self.settings["source_path"]+"\nto "+\
+					self.settings["chroot_path"]+\
+						" (This may take some time) ...\n"
+				if "bz2" == self.settings["source_path"][-3:]:
+					unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
+						self.settings["chroot_path"]
+				else:
+					unpack_cmd="tar xpf "+self.settings["source_path"]+" -C "+\
+						self.settings["chroot_path"]
+				error_msg="Tarball extraction of "+\
+					self.settings["source_path"]+" to "+\
+					self.settings["chroot_path"]+" failed."
+		else:
+			""" No SEEDCACHE, use tar """
+			display_msg="\nStarting tar extract from "+\
+				self.settings["source_path"]+"\nto "+\
+				self.settings["chroot_path"]+\
+				" (This may take some time) ...\n"
+			if "bz2" == self.settings["source_path"][-3:]:
+				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["source_path"]+" -C "+\
+					self.settings["chroot_path"]
+			else:
+				unpack_cmd="tar xpf "+self.settings["source_path"]+" -C "+\
+					self.settings["chroot_path"]
+			error_msg="Tarball extraction of "+self.settings["source_path"]+\
+				" to "+self.settings["chroot_path"]+" failed."
+
+		if "AUTORESUME" in self.settings:
+			if os.path.isdir(self.settings["source_path"]) \
+				and os.path.exists(self.settings["autoresume_path"]+"unpack"):
+				""" Autoresume is valid, SEEDCACHE is valid """
+				unpack=False
+				invalid_snapshot=False
+
+			elif os.path.isfile(self.settings["source_path"]) \
+				and self.settings["source_path_hash"]==clst_unpack_hash:
+				""" Autoresume is valid, tarball is valid """
+				unpack=False
+				invalid_snapshot=True
+
+			elif os.path.isdir(self.settings["source_path"]) \
+				and not os.path.exists(self.settings["autoresume_path"]+\
+				"unpack"):
+				""" Autoresume is invalid, SEEDCACHE """
+				unpack=True
+				invalid_snapshot=False
+
+			elif os.path.isfile(self.settings["source_path"]) \
+				and self.settings["source_path_hash"]!=clst_unpack_hash:
+				""" Autoresume is invalid, tarball """
+				unpack=True
+				invalid_snapshot=True
+		else:
+			""" No autoresume, SEEDCACHE """
+			if "SEEDCACHE" in self.settings:
+				""" SEEDCACHE so let's run rsync and let it clean up """
+				if os.path.isdir(self.settings["source_path"]):
+					unpack=True
+					invalid_snapshot=False
+				elif os.path.isfile(self.settings["source_path"]):
+					""" Tarball so unpack and remove anything already there """
+					unpack=True
+					invalid_snapshot=True
+			else:
+				""" No autoresume, no SEEDCACHE """
+				""" Tarball so unpack and remove anything already there """
+				if os.path.isfile(self.settings["source_path"]):
+					unpack=True
+					invalid_snapshot=True
+				elif os.path.isdir(self.settings["source_path"]):
+					""" We should never reach this, so something is very wrong """
+					raise CatalystError,\
+						"source path is a dir but seedcache is not enabled"
+
+		if unpack:
+			self.mount_safety_check()
+
+			if invalid_snapshot:
+				if "AUTORESUME" in self.settings:
+					print "No Valid Resume point detected, cleaning up..."
+
+				self.clear_autoresume()
+				self.clear_chroot()
+
+			if not os.path.exists(self.settings["chroot_path"]):
+				os.makedirs(self.settings["chroot_path"])
+
+			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
+				os.makedirs(self.settings["chroot_path"]+"/tmp",1777)
+
+			if "PKGCACHE" in self.settings:
+				if not os.path.exists(self.settings["pkgcache_path"]):
+					os.makedirs(self.settings["pkgcache_path"],0755)
+
+			if "KERNCACHE" in self.settings:
+				if not os.path.exists(self.settings["kerncache_path"]):
+					os.makedirs(self.settings["kerncache_path"],0755)
+
+			print display_msg
+			cmd(unpack_cmd,error_msg,env=self.env)
+
+			if "source_path_hash" in self.settings:
+				myf=open(self.settings["autoresume_path"]+"unpack","w")
+				myf.write(self.settings["source_path_hash"])
+				myf.close()
+			else:
+				touch(self.settings["autoresume_path"]+"unpack")
+		else:
+			print "Resume point detected, skipping unpack operation..."
+
+	def unpack_snapshot(self):
+		unpack=True
+		snapshot_hash=read_from_clst(self.settings["autoresume_path"]+\
+			"unpack_portage")
+
+		if "SNAPCACHE" in self.settings:
+			snapshot_cache_hash=\
+				read_from_clst(self.settings["snapshot_cache_path"]+\
+				"catalyst-hash")
+			destdir=self.settings["snapshot_cache_path"]
+			if "bz2" == self.settings["snapshot_path"][-3:]:
+				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+destdir
+			else:
+				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+destdir
+			unpack_errmsg="Error unpacking snapshot"
+			cleanup_msg="Cleaning up invalid snapshot cache at \n\t"+\
+				self.settings["snapshot_cache_path"]+\
+				" (This can take a long time)..."
+			cleanup_errmsg="Error removing existing snapshot cache directory."
+			self.snapshot_lock_object=self.snapcache_lock
+
+			if self.settings["snapshot_path_hash"]==snapshot_cache_hash:
+				print "Valid snapshot cache, skipping unpack of portage tree..."
+				unpack=False
+		else:
+			destdir = normpath(self.settings["chroot_path"] + self.settings["portdir"])
+			cleanup_errmsg="Error removing existing snapshot directory."
+			cleanup_msg=\
+				"Cleaning up existing portage tree (This can take a long time)..."
+			if "bz2" == self.settings["snapshot_path"][-3:]:
+				unpack_cmd="tar -I lbzip2 -xpf "+self.settings["snapshot_path"]+" -C "+\
+					self.settings["chroot_path"]+"/usr"
+			else:
+				unpack_cmd="tar xpf "+self.settings["snapshot_path"]+" -C "+\
+					self.settings["chroot_path"]+"/usr"
+			unpack_errmsg="Error unpacking snapshot"
+
+			if "AUTORESUME" in self.settings \
+				and os.path.exists(self.settings["chroot_path"]+\
+					self.settings["portdir"]) \
+				and os.path.exists(self.settings["autoresume_path"]\
+					+"unpack_portage") \
+				and self.settings["snapshot_path_hash"] == snapshot_hash:
+					print \
+						"Valid Resume point detected, skipping unpack of portage tree..."
+					unpack=False
+
+		if unpack:
+			if "SNAPCACHE" in self.settings:
+				self.snapshot_lock_object.write_lock()
+			if os.path.exists(destdir):
+				print cleanup_msg
+				cleanup_cmd="rm -rf "+destdir
+				cmd(cleanup_cmd,cleanup_errmsg,env=self.env)
+			if not os.path.exists(destdir):
+				os.makedirs(destdir,0755)
+
+			print "Unpacking portage tree (This can take a long time) ..."
+			cmd(unpack_cmd,unpack_errmsg,env=self.env)
+
+			if "SNAPCACHE" in self.settings:
+				myf=open(self.settings["snapshot_cache_path"]+"catalyst-hash","w")
+				myf.write(self.settings["snapshot_path_hash"])
+				myf.close()
+			else:
+				print "Setting snapshot autoresume point"
+				myf=open(self.settings["autoresume_path"]+"unpack_portage","w")
+				myf.write(self.settings["snapshot_path_hash"])
+				myf.close()
+
+			if "SNAPCACHE" in self.settings:
+				self.snapshot_lock_object.unlock()
+
+	def config_profile_link(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"config_profile_link"):
+			print \
+				"Resume point detected, skipping config_profile_link operation..."
+		else:
+			# TODO: zmedico and I discussed making this a directory and pushing
+			# in a parent file, as well as other user-specified configuration.
+			print "Configuring profile link..."
+			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.profile",\
+					"Error zapping profile link",env=self.env)
+			cmd("mkdir -p "+self.settings["chroot_path"]+"/etc/portage/")
+			cmd("ln -sf ../.." + self.settings["portdir"] + "/profiles/" + \
+				self.settings["target_profile"]+" "+\
+				self.settings["chroot_path"]+"/etc/portage/make.profile",\
+				"Error creating profile link",env=self.env)
+			touch(self.settings["autoresume_path"]+"config_profile_link")
+
+	def setup_confdir(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"setup_confdir"):
+			print "Resume point detected, skipping setup_confdir operation..."
+		else:
+			if "portage_confdir" in self.settings:
+				print "Configuring /etc/portage..."
+				cmd("rsync -a "+self.settings["portage_confdir"]+"/ "+\
+					self.settings["chroot_path"]+"/etc/portage/",\
+					"Error copying /etc/portage",env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_confdir")
+
+	def portage_overlay(self):
+		""" We copy the contents of our overlays to /usr/local/portage """
+		if "portage_overlay" in self.settings:
+			for x in self.settings["portage_overlay"]:
+				if os.path.exists(x):
+					print "Copying overlay dir " +x
+					cmd("mkdir -p "+self.settings["chroot_path"]+\
+						self.settings["local_overlay"],\
+						"Could not make portage_overlay dir",env=self.env)
+					cmd("cp -R "+x+"/* "+self.settings["chroot_path"]+\
+						self.settings["local_overlay"],\
+						"Could not copy portage_overlay",env=self.env)
+
+	def root_overlay(self):
+		""" Copy over the root_overlay """
+		if self.settings["spec_prefix"]+"/root_overlay" in self.settings:
+			for x in self.settings[self.settings["spec_prefix"]+\
+				"/root_overlay"]:
+				if os.path.exists(x):
+					print "Copying root_overlay: "+x
+					cmd("rsync -a "+x+"/ "+\
+						self.settings["chroot_path"],\
+						self.settings["spec_prefix"]+"/root_overlay: "+x+\
+						" copy failed.",env=self.env)
+
+	def base_dirs(self):
+		pass
+
+	def bind(self):
+		for x in self.mounts:
+			#print "bind(); x =", x
+			target = normpath(self.settings["chroot_path"] + self.target_mounts[x])
+			if not os.path.exists(target):
+				os.makedirs(target, 0755)
+
+			if not os.path.exists(self.mountmap[x]):
+				if self.mountmap[x] not in ["tmpfs", "shmfs"]:
+					os.makedirs(self.mountmap[x], 0755)
+
+			src=self.mountmap[x]
+			retval=0
+			#print "bind(); src =", src
+			if "SNAPCACHE" in self.settings and x == "portdir":
+				self.snapshot_lock_object.read_lock()
+			if os.uname()[0] == "FreeBSD":
+				if src == "/dev":
+					cmd = "mount -t devfs none " + target
+					retval=os.system(cmd)
+				else:
+					cmd = "mount_nullfs " + src + " " + target
+					retval=os.system(cmd)
+			else:
+				if src == "tmpfs":
+					if "var_tmpfs_portage" in self.settings:
+						cmd = "mount -t tmpfs -o size=" + \
+							self.settings["var_tmpfs_portage"] + "G " + \
+							src + " " + target
+						retval=os.system(cmd)
+				elif src == "shmfs":
+					cmd = "mount -t tmpfs -o noexec,nosuid,nodev shm " + target
+					retval=os.system(cmd)
+				else:
+					cmd = "mount --bind " + src + " " + target
+					#print "bind(); cmd =", cmd
+					retval=os.system(cmd)
+			if retval!=0:
+				self.unbind()
+				raise CatalystError,"Couldn't bind mount " + src
+
+	def unbind(self):
+		ouch=0
+		mypath=self.settings["chroot_path"]
+		myrevmounts=self.mounts[:]
+		myrevmounts.reverse()
+		""" Unmount in reverse order for nested bind-mounts """
+		for x in myrevmounts:
+			target = normpath(mypath + self.target_mounts[x])
+			if not os.path.exists(target):
+				continue
+
+			if not ismount(target):
+				continue
+
+			retval=os.system("umount " + target)
+
+			if retval!=0:
+				warn("First attempt to unmount: " + target + " failed.")
+				warn("Killing any pids still running in the chroot")
+
+				self.kill_chroot_pids()
+
+				retval2 = os.system("umount " + target)
+				if retval2!=0:
+					ouch=1
+					warn("Couldn't umount bind mount: " + target)
+
+			if "SNAPCACHE" in self.settings and x == "portdir":
+				try:
+					"""
+					It's possible the snapshot lock object isn't created yet.
+					This is because mount safety check calls unbind before the
+					target is fully initialized.
+					"""
+					self.snapshot_lock_object.unlock()
+				except:
+					pass
+		if ouch:
+			"""
+			If any bind mounts failed to unmount, raise an error to prevent
+			an upcoming bash stage cleanup script from wiping files under
+			our still-active bind mounts.
+			"""
+			raise CatalystError,\
+				"Couldn't umount one or more bind-mounts; aborting for safety."
+
+	def chroot_setup(self):
+		self.makeconf=read_makeconf(self.settings["chroot_path"]+\
+			"/etc/portage/make.conf")
+		self.override_cbuild()
+		self.override_chost()
+		self.override_cflags()
+		self.override_cxxflags()
+		self.override_ldflags()
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"chroot_setup"):
+			print "Resume point detected, skipping chroot_setup operation..."
+		else:
+			print "Setting up chroot..."
+
+			#self.makeconf=read_makeconf(self.settings["chroot_path"]+"/etc/portage/make.conf")
+
+			cmd("cp /etc/resolv.conf "+self.settings["chroot_path"]+"/etc",\
+				"Could not copy resolv.conf into place.",env=self.env)
+
+			""" Copy over the envscript, if applicable """
+			if "ENVSCRIPT" in self.settings:
+				if not os.path.exists(self.settings["ENVSCRIPT"]):
+					raise CatalystError,\
+						"Can't find envscript "+self.settings["ENVSCRIPT"]
+
+				print "\nWarning!!!!"
+				print "\tOverriding certain env variables may cause catastrophic failure."
+				print "\tIf your build fails look here first as the possible problem."
+				print "\tCatalyst assumes you know what you are doing when setting"
+				print "\t\tthese variables."
+				print "\tCatalyst Maintainers use VERY minimal envscripts if used at all"
+				print "\tYou have been warned\n"
+
+				cmd("cp "+self.settings["ENVSCRIPT"]+" "+\
+					self.settings["chroot_path"]+"/tmp/envscript",\
+					"Could not copy envscript into place.",env=self.env)
+
+			"""
+			Copy over /etc/hosts from the host in case there are any
+			specialties in there
+			"""
+			if os.path.exists(self.settings["chroot_path"]+"/etc/hosts"):
+				cmd("mv "+self.settings["chroot_path"]+"/etc/hosts "+\
+					self.settings["chroot_path"]+"/etc/hosts.catalyst",\
+					"Could not backup /etc/hosts",env=self.env)
+				cmd("cp /etc/hosts "+self.settings["chroot_path"]+"/etc/hosts",\
+					"Could not copy /etc/hosts",env=self.env)
+
+			""" Modify and write out make.conf (for the chroot) """
+			cmd("rm -f "+self.settings["chroot_path"]+"/etc/portage/make.conf",\
+				"Could not remove "+self.settings["chroot_path"]+\
+				"/etc/portage/make.conf",env=self.env)
+			myf=open(self.settings["chroot_path"]+"/etc/portage/make.conf","w")
+			myf.write("# These settings were set by the catalyst build script that automatically\n# built this stage.\n")
+			myf.write("# Please consult /usr/share/portage/config/make.conf.example for a more\n# detailed example.\n")
+			if "CFLAGS" in self.settings:
+				myf.write('CFLAGS="'+self.settings["CFLAGS"]+'"\n')
+			if "CXXFLAGS" in self.settings:
+				if self.settings["CXXFLAGS"]!=self.settings["CFLAGS"]:
+					myf.write('CXXFLAGS="'+self.settings["CXXFLAGS"]+'"\n')
+				else:
+					myf.write('CXXFLAGS="${CFLAGS}"\n')
+			else:
+				myf.write('CXXFLAGS="${CFLAGS}"\n')
+
+			if "LDFLAGS" in self.settings:
+				myf.write("# LDFLAGS is unsupported.  USE AT YOUR OWN RISK!\n")
+				myf.write('LDFLAGS="'+self.settings["LDFLAGS"]+'"\n')
+			if "CBUILD" in self.settings:
+				myf.write("# This should not be changed unless you know exactly what you are doing.  You\n# should probably be using a different stage, instead.\n")
+				myf.write('CBUILD="'+self.settings["CBUILD"]+'"\n')
+
+			myf.write("# WARNING: Changing your CHOST is not something that should be done lightly.\n# Please consult http://www.gentoo.org/doc/en/change-chost.xml before changing.\n")
+			myf.write('CHOST="'+self.settings["CHOST"]+'"\n')
+
+			""" Figure out what our USE vars are for building """
+			myusevars=[]
+			if "HOSTUSE" in self.settings:
+				myusevars.extend(self.settings["HOSTUSE"])
+
+			if "use" in self.settings:
+				myusevars.extend(self.settings["use"])
+
+			if myusevars:
+				myf.write("# These are the USE flags that were used in addition to what is provided by the\n# profile used for building.\n")
+				myusevars = sorted(set(myusevars))
+				myf.write('USE="'+string.join(myusevars)+'"\n')
+				if '-*' in myusevars:
+					print "\nWarning!!!  "
+					print "\tThe use of -* in "+self.settings["spec_prefix"]+\
+						"/use will cause portage to ignore"
+					print "\tpackage.use in the profile and portage_confdir. You've been warned!"
+
+			myf.write('PORTDIR="%s"\n' % self.settings['portdir'])
+			myf.write('DISTDIR="%s"\n' % self.settings['distdir'])
+			myf.write('PKGDIR="%s"\n' % self.settings['packagedir'])
+
+			""" Setup the portage overlay """
+			if "portage_overlay" in self.settings:
+				myf.write('PORTDIR_OVERLAY="/usr/local/portage"\n')
+
+			myf.close()
+			cmd("cp "+self.settings["chroot_path"]+"/etc/portage/make.conf "+\
+				self.settings["chroot_path"]+"/etc/portage/make.conf.catalyst",\
+				"Could not backup /etc/portage/make.conf",env=self.env)
+			touch(self.settings["autoresume_path"]+"chroot_setup")
+
+	def fsscript(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"fsscript"):
+			print "Resume point detected, skipping fsscript operation..."
+		else:
+			if "fsscript" in self.settings:
+				if os.path.exists(self.settings["controller_file"]):
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" fsscript","fsscript script failed.",env=self.env)
+					touch(self.settings["autoresume_path"]+"fsscript")
+
+	def rcupdate(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"rcupdate"):
+			print "Resume point detected, skipping rcupdate operation..."
+		else:
+			if os.path.exists(self.settings["controller_file"]):
+				cmd("/bin/bash "+self.settings["controller_file"]+" rc-update",\
+					"rc-update script failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"rcupdate")
+
+	def clean(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"clean"):
+			print "Resume point detected, skipping clean operation..."
+		else:
+			for x in self.settings["cleanables"]:
+				print "Cleaning chroot: "+x+"... "
+				cmd("rm -rf "+self.settings["destpath"]+x,"Couldn't clean "+\
+					x,env=self.env)
+
+		""" Put /etc/hosts back into place """
+		if os.path.exists(self.settings["chroot_path"]+"/etc/hosts.catalyst"):
+			cmd("mv -f "+self.settings["chroot_path"]+"/etc/hosts.catalyst "+\
+				self.settings["chroot_path"]+"/etc/hosts",\
+				"Could not replace /etc/hosts",env=self.env)
+
+		""" Remove our overlay """
+		if os.path.exists(self.settings["chroot_path"] + self.settings["local_overlay"]):
+			cmd("rm -rf " + self.settings["chroot_path"] + self.settings["local_overlay"],
+				"Could not remove " + self.settings["local_overlay"], env=self.env)
+			cmd("sed -i '/^PORTDIR_OVERLAY/d' "+self.settings["chroot_path"]+\
+				"/etc/portage/make.conf",\
+				"Could not remove PORTDIR_OVERLAY from make.conf",env=self.env)
+
+		""" Clean up old and obsoleted files in /etc """
+		if os.path.exists(self.settings["stage_path"]+"/etc"):
+			cmd("find "+self.settings["stage_path"]+\
+				"/etc -maxdepth 1 -name \"*-\" | xargs rm -f",\
+				"Could not remove stray files in /etc",env=self.env)
+
+		if os.path.exists(self.settings["controller_file"]):
+			cmd("/bin/bash "+self.settings["controller_file"]+" clean",\
+				"clean script failed.",env=self.env)
+			touch(self.settings["autoresume_path"]+"clean")
+
+	def empty(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"empty"):
+			print "Resume point detected, skipping empty operation..."
+		else:
+			if self.settings["spec_prefix"]+"/empty" in self.settings:
+				if type(self.settings[self.settings["spec_prefix"]+\
+					"/empty"])==types.StringType:
+					self.settings[self.settings["spec_prefix"]+"/empty"]=\
+						self.settings[self.settings["spec_prefix"]+\
+						"/empty"].split()
+				for x in self.settings[self.settings["spec_prefix"]+"/empty"]:
+					myemp=self.settings["destpath"]+x
+					if not os.path.isdir(myemp) or os.path.islink(myemp):
+						print x,"not a directory or does not exist, skipping 'empty' operation."
+						continue
+					print "Emptying directory",x
+					"""
+					stat the dir, delete the dir, recreate the dir and set
+					the proper perms and ownership
+					"""
+					mystat=os.stat(myemp)
+					shutil.rmtree(myemp)
+					os.makedirs(myemp,0755)
+					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+					os.chmod(myemp,mystat[ST_MODE])
+			touch(self.settings["autoresume_path"]+"empty")
+
+	def remove(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"remove"):
+			print "Resume point detected, skipping remove operation..."
+		else:
+			if self.settings["spec_prefix"]+"/rm" in self.settings:
+				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
+					"""
+					We're going to shell out for all these cleaning
+					operations, so we get easy glob handling.
+					"""
+					print "livecd: removing "+x
+					os.system("rm -rf "+self.settings["chroot_path"]+x)
+				try:
+					if os.path.exists(self.settings["controller_file"]):
+						cmd("/bin/bash "+self.settings["controller_file"]+\
+							" clean","Clean failed.",env=self.env)
+						touch(self.settings["autoresume_path"]+"remove")
+				except:
+					self.unbind()
+					raise
+
+	def preclean(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"preclean"):
+			print "Resume point detected, skipping preclean operation..."
+		else:
+			try:
+				if os.path.exists(self.settings["controller_file"]):
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" preclean","preclean script failed.",env=self.env)
+					touch(self.settings["autoresume_path"]+"preclean")
+
+			except:
+				self.unbind()
+				raise CatalystError, "Build failed, could not execute preclean"
+
+	def capture(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"capture"):
+			print "Resume point detected, skipping capture operation..."
+		else:
+			""" Capture target in a tarball """
+			mypath=self.settings["target_path"].split("/")
+			""" Remove filename from path """
+			mypath=string.join(mypath[:-1],"/")
+
+			""" Now make sure path exists """
+			if not os.path.exists(mypath):
+				os.makedirs(mypath)
+
+			print "Creating stage tarball..."
+
+			cmd("tar -I lbzip2 -cpf "+self.settings["target_path"]+" -C "+\
+				self.settings["stage_path"]+" .",\
+				"Couldn't create stage tarball",env=self.env)
+
+			self.gen_contents_file(self.settings["target_path"])
+			self.gen_digest_file(self.settings["target_path"])
+
+			touch(self.settings["autoresume_path"]+"capture")
+
+	def run_local(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"run_local"):
+			print "Resume point detected, skipping run_local operation..."
+		else:
+			try:
+				if os.path.exists(self.settings["controller_file"]):
+					cmd("/bin/bash "+self.settings["controller_file"]+" run",\
+						"run script failed.",env=self.env)
+					touch(self.settings["autoresume_path"]+"run_local")
+
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"Stage build aborting due to error."
+
+	def setup_environment(self):
+		"""
+		Modify the current environment. This is an ugly hack that should be
+		fixed. We need this to use the os.system() call since we can't
+		specify our own environ
+		"""
+		for x in self.settings.keys():
+			""" Sanitize var names by doing "s|/-.|_|g" """
+			varname="clst_"+string.replace(x,"/","_")
+			varname=string.replace(varname,"-","_")
+			varname=string.replace(varname,".","_")
+			if type(self.settings[x])==types.StringType:
+				""" Prefix to prevent namespace clashes """
+				#os.environ[varname]=self.settings[x]
+				self.env[varname]=self.settings[x]
+			elif type(self.settings[x])==types.ListType:
+				#os.environ[varname]=string.join(self.settings[x])
+				self.env[varname]=string.join(self.settings[x])
+			elif type(self.settings[x])==types.BooleanType:
+				if self.settings[x]:
+					self.env[varname]="true"
+				else:
+					self.env[varname]="false"
+		if "makeopts" in self.settings:
+			self.env["MAKEOPTS"]=self.settings["makeopts"]
+
+	def run(self):
+		self.chroot_lock.write_lock()
+
+		""" Kill any pids in the chroot """
+		self.kill_chroot_pids()
+
+		""" Check for mounts right away and abort if we cannot unmount them """
+		self.mount_safety_check()
+
+		if "CLEAR_AUTORESUME" in self.settings:
+			self.clear_autoresume()
+
+		if "PURGETMPONLY" in self.settings:
+			self.purge()
+			return
+
+		if "PURGEONLY" in self.settings:
+			self.purge()
+			return
+
+		if "PURGE" in self.settings:
+			self.purge()
+
+		for x in self.settings["action_sequence"]:
+			print "--- Running action sequence: "+x
+			sys.stdout.flush()
+			try:
+				getattr(self,x)()
+			except:
+				self.mount_safety_check()
+				raise
+
+		self.chroot_lock.unlock()
+
+	def unmerge(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"unmerge"):
+			print "Resume point detected, skipping unmerge operation..."
+		else:
+			if self.settings["spec_prefix"]+"/unmerge" in self.settings:
+				if type(self.settings[self.settings["spec_prefix"]+\
+					"/unmerge"])==types.StringType:
+					self.settings[self.settings["spec_prefix"]+"/unmerge"]=\
+						[self.settings[self.settings["spec_prefix"]+"/unmerge"]]
+				myunmerge=\
+					self.settings[self.settings["spec_prefix"]+"/unmerge"][:]
+
+				for x in range(0,len(myunmerge)):
+					"""
+					Surround args with quotes for passing to bash, allows
+					things like "<" to remain intact
+					"""
+					myunmerge[x]="'"+myunmerge[x]+"'"
+				myunmerge=string.join(myunmerge)
+
+				""" Before cleaning, unmerge stuff """
+				try:
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" unmerge "+ myunmerge,"Unmerge script failed.",\
+						env=self.env)
+					print "unmerge shell script"
+				except CatalystError:
+					self.unbind()
+					raise
+				touch(self.settings["autoresume_path"]+"unmerge")
+
+	def target_setup(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"target_setup"):
+			print "Resume point detected, skipping target_setup operation..."
+		else:
+			print "Setting up filesystems per filesystem type"
+			cmd("/bin/bash "+self.settings["controller_file"]+\
+				" target_image_setup "+ self.settings["target_path"],\
+				"target_image_setup script failed.",env=self.env)
+			touch(self.settings["autoresume_path"]+"target_setup")
+
+	def setup_overlay(self):
+		if "AUTORESUME" in self.settings \
+		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
+			print "Resume point detected, skipping setup_overlay operation..."
+		else:
+			if self.settings["spec_prefix"]+"/overlay" in self.settings:
+				for x in self.settings[self.settings["spec_prefix"]+"/overlay"]:
+					if os.path.exists(x):
+						cmd("rsync -a "+x+"/ "+\
+							self.settings["target_path"],\
+							self.settings["spec_prefix"]+"overlay: "+x+\
+							" copy failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_overlay")
+
+	def create_iso(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"create_iso"):
+			print "Resume point detected, skipping create_iso operation..."
+		else:
+			""" Create the ISO """
+			if "iso" in self.settings:
+				cmd("/bin/bash "+self.settings["controller_file"]+" iso "+\
+					self.settings["iso"],"ISO creation script failed.",\
+					env=self.env)
+				self.gen_contents_file(self.settings["iso"])
+				self.gen_digest_file(self.settings["iso"])
+				touch(self.settings["autoresume_path"]+"create_iso")
+			else:
+				print "WARNING: livecd/iso was not defined."
+				print "An ISO Image will not be created."
+
+	def build_packages(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"build_packages"):
+			print "Resume point detected, skipping build_packages operation..."
+		else:
+			if self.settings["spec_prefix"]+"/packages" in self.settings:
+				if "AUTORESUME" in self.settings \
+					and os.path.exists(self.settings["autoresume_path"]+\
+						"build_packages"):
+					print "Resume point detected, skipping build_packages operation..."
+				else:
+					mypack=\
+						list_bashify(self.settings[self.settings["spec_prefix"]\
+						+"/packages"])
+					try:
+						cmd("/bin/bash "+self.settings["controller_file"]+\
+							" build_packages "+mypack,\
+							"Error in attempt to build packages",env=self.env)
+						touch(self.settings["autoresume_path"]+"build_packages")
+					except CatalystError:
+						self.unbind()
+						raise CatalystError,self.settings["spec_prefix"]+\
+							"build aborting due to error."
+
+	def build_kernel(self):
+		"Build all configured kernels"
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"build_kernel"):
+			print "Resume point detected, skipping build_kernel operation..."
+		else:
+			if "boot/kernel" in self.settings:
+				try:
+					mynames=self.settings["boot/kernel"]
+					if type(mynames)==types.StringType:
+						mynames=[mynames]
+					"""
+					Execute the script that sets up the kernel build environment
+					"""
+					cmd("/bin/bash "+self.settings["controller_file"]+\
+						" pre-kmerge ","Runscript pre-kmerge failed",\
+						env=self.env)
+					for kname in mynames:
+						self._build_kernel(kname=kname)
+					touch(self.settings["autoresume_path"]+"build_kernel")
+				except CatalystError:
+					self.unbind()
+					raise CatalystError,\
+						"build aborting due to kernel build error."
+
+	def _build_kernel(self, kname):
+		"Build a single configured kernel by name"
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]\
+				+"build_kernel_"+kname):
+			print "Resume point detected, skipping build_kernel for "+kname+" operation..."
+			return
+		self._copy_kernel_config(kname=kname)
+
+		"""
+		If we need to pass special options to the bootloader
+		for this kernel put them into the environment
+		"""
+		if "boot/kernel/"+kname+"/kernelopts" in self.settings:
+			myopts=self.settings["boot/kernel/"+kname+\
+				"/kernelopts"]
+
+			""" Join list-valued kernelopts into a single string """
+			if type(myopts) != types.StringType:
+				myopts = string.join(myopts)
+			self.env[kname+"_kernelopts"]=myopts
+		else:
+			self.env[kname+"_kernelopts"]=""
+
+		if "boot/kernel/"+kname+"/extraversion" not in self.settings:
+			self.settings["boot/kernel/"+kname+\
+				"/extraversion"]=""
+
+		self.env["clst_kextraversion"]=\
+			self.settings["boot/kernel/"+kname+\
+			"/extraversion"]
+
+		self._copy_initramfs_overlay(kname=kname)
+
+		""" Execute the script that builds the kernel """
+		cmd("/bin/bash "+self.settings["controller_file"]+\
+			" kernel "+kname,\
+			"Runscript kernel build failed",env=self.env)
+
+		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
+			if os.path.exists(self.settings["chroot_path"]+\
+				"/tmp/initramfs_overlay/"):
+				print "Cleaning up temporary overlay dir"
+				cmd("rm -R "+self.settings["chroot_path"]+\
+					"/tmp/initramfs_overlay/",env=self.env)
+
+		touch(self.settings["autoresume_path"]+\
+			"build_kernel_"+kname)
+
+		"""
+		Execute the script that cleans up the kernel build
+		environment
+		"""
+		cmd("/bin/bash "+self.settings["controller_file"]+\
+			" post-kmerge ",
+			"Runscript post-kmerge failed",env=self.env)
+
+	def _copy_kernel_config(self, kname):
+		if "boot/kernel/"+kname+"/config" in self.settings:
+			if not os.path.exists(self.settings["boot/kernel/"+kname+"/config"]):
+				self.unbind()
+				raise CatalystError,\
+					"Can't find kernel config: "+\
+					self.settings["boot/kernel/"+kname+\
+					"/config"]
+
+			try:
+				cmd("cp "+self.settings["boot/kernel/"+kname+\
+					"/config"]+" "+\
+					self.settings["chroot_path"]+"/var/tmp/"+\
+					kname+".config",\
+					"Couldn't copy kernel config: "+\
+					self.settings["boot/kernel/"+kname+\
+					"/config"],env=self.env)
+
+			except CatalystError:
+				self.unbind()
+				raise
+
+	def _copy_initramfs_overlay(self, kname):
+		if "boot/kernel/"+kname+"/initramfs_overlay" in self.settings:
+			if os.path.exists(self.settings["boot/kernel/"+\
+				kname+"/initramfs_overlay"]):
+				print "Copying initramfs_overlay dir "+\
+					self.settings["boot/kernel/"+kname+\
+					"/initramfs_overlay"]
+
+				cmd("mkdir -p "+\
+					self.settings["chroot_path"]+\
+					"/tmp/initramfs_overlay/"+\
+					self.settings["boot/kernel/"+kname+\
+					"/initramfs_overlay"],env=self.env)
+
+				cmd("cp -R "+self.settings["boot/kernel/"+\
+					kname+"/initramfs_overlay"]+"/* "+\
+					self.settings["chroot_path"]+\
+					"/tmp/initramfs_overlay/"+\
+					self.settings["boot/kernel/"+kname+\
+					"/initramfs_overlay"],env=self.env)
+
+	def bootloader(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"bootloader"):
+			print "Resume point detected, skipping bootloader operation..."
+		else:
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+\
+					" bootloader " + self.settings["target_path"],\
+					"Bootloader script failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"bootloader")
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"Script aborting due to error."
+
+	def livecd_update(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+\
+				"livecd_update"):
+			print "Resume point detected, skipping livecd_update operation..."
+		else:
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+\
+					" livecd-update","livecd-update failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"livecd_update")
+
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"build aborting due to livecd_update error."
+
+	def clear_chroot(self):
+		myemp=self.settings["chroot_path"]
+		if os.path.isdir(myemp):
+			print "Emptying directory",myemp
+			"""
+			stat the dir, delete the dir, recreate the dir and set
+			the proper perms and ownership
+			"""
+			mystat=os.stat(myemp)
+			#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+			""" There's no easy way to change flags recursively in python """
+			if os.uname()[0] == "FreeBSD":
+				os.system("chflags -R noschg "+myemp)
+			shutil.rmtree(myemp)
+			os.makedirs(myemp,0755)
+			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+			os.chmod(myemp,mystat[ST_MODE])
+
+	def clear_packages(self):
+		if "PKGCACHE" in self.settings:
+			print "purging the pkgcache ..."
+
+			myemp=self.settings["pkgcache_path"]
+			if os.path.isdir(myemp):
+				print "Emptying directory",myemp
+				"""
+				stat the dir, delete the dir, recreate the dir and set
+				the proper perms and ownership
+				"""
+				mystat=os.stat(myemp)
+				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+				shutil.rmtree(myemp)
+				os.makedirs(myemp,0755)
+				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+				os.chmod(myemp,mystat[ST_MODE])
+
+	def clear_kerncache(self):
+		if "KERNCACHE" in self.settings:
+			print "purging the kerncache ..."
+
+			myemp=self.settings["kerncache_path"]
+			if os.path.isdir(myemp):
+				print "Emptying directory",myemp
+				"""
+				stat the dir, delete the dir, recreate the dir and set
+				the proper perms and ownership
+				"""
+				mystat=os.stat(myemp)
+				#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+				shutil.rmtree(myemp)
+				os.makedirs(myemp,0755)
+				os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+				os.chmod(myemp,mystat[ST_MODE])
+
+	def clear_autoresume(self):
+		""" Clean resume points since they are no longer needed """
+		if "AUTORESUME" in self.settings:
+			print "Removing AutoResume Points: ..."
+		myemp=self.settings["autoresume_path"]
+		if os.path.isdir(myemp):
+			if "AUTORESUME" in self.settings:
+				print "Emptying directory",myemp
+			"""
+			stat the dir, delete the dir, recreate the dir and set
+			the proper perms and ownership
+			"""
+			mystat=os.stat(myemp)
+			if os.uname()[0] == "FreeBSD":
+				cmd("chflags -R noschg "+myemp,\
+					"Could not remove immutable flag for file "\
+					+myemp)
+			#cmd("rm -rf "+myemp, "Could not remove existing file: "+myemp,env=self.env)
+			shutil.rmtree(myemp)
+			os.makedirs(myemp,0755)
+			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+			os.chmod(myemp,mystat[ST_MODE])
+
+	def gen_contents_file(self,file):
+		if os.path.exists(file+".CONTENTS"):
+			os.remove(file+".CONTENTS")
+		if "contents" in self.settings:
+			if os.path.exists(file):
+				myf=open(file+".CONTENTS","w")
+				keys={}
+				for i in self.settings["contents"].split():
+					keys[i]=1
+				array=keys.keys()
+				array.sort()
+				for j in array:
+					contents=generate_contents(file,contents_function=j,\
+						verbose="VERBOSE" in self.settings)
+					if contents:
+						myf.write(contents)
+				myf.close()
+
+	def gen_digest_file(self,file):
+		if os.path.exists(file+".DIGESTS"):
+			os.remove(file+".DIGESTS")
+		if "digests" in self.settings:
+			if os.path.exists(file):
+				myf=open(file+".DIGESTS","w")
+				keys={}
+				for i in self.settings["digests"].split():
+					keys[i]=1
+				array=keys.keys()
+				array.sort()
+				for f in [file, file+'.CONTENTS']:
+					if os.path.exists(f):
+						if "all" in array:
+							for k in hash_map.keys():
+								hash=generate_hash(f,hash_function=k,verbose=\
+									"VERBOSE" in self.settings)
+								myf.write(hash)
+						else:
+							for j in array:
+								hash=generate_hash(f,hash_function=j,verbose=\
+									"VERBOSE" in self.settings)
+								myf.write(hash)
+				myf.close()
+
+	def purge(self):
+		countdown(10,"Purging Caches ...")
+		if any(k in self.settings for k in ("PURGE","PURGEONLY","PURGETMPONLY")):
+			print "clearing autoresume ..."
+			self.clear_autoresume()
+
+			print "clearing chroot ..."
+			self.clear_chroot()
+
+			if "PURGETMPONLY" not in self.settings:
+				print "clearing package cache ..."
+				self.clear_packages()
+
+			print "clearing kerncache ..."
+			self.clear_kerncache()
+
+# vim: ts=4 sw=4 sta et sts=4 ai
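A side note on `setup_environment()` above: it exports every settings key as a `clst_`-prefixed environment variable after the `"s|/-.|_|g"` sanitization. A minimal, version-neutral sketch of just that name transformation (illustrative only, not part of the patch):

```python
def sanitize_varname(key):
    # Prefix with "clst_" to prevent namespace clashes, then apply the
    # "s|/-.|_|g" sanitization used by setup_environment().
    varname = "clst_" + key
    for ch in "/-.":
        varname = varname.replace(ch, "_")
    return varname
```

So a spec key such as `boot/kernel` becomes the `clst_boot_kernel` variable seen by the bash runscripts.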
diff --git a/catalyst/targets/generic_target.py b/catalyst/targets/generic_target.py
new file mode 100644
index 0000000..de51994
--- /dev/null
+++ b/catalyst/targets/generic_target.py
@@ -0,0 +1,11 @@
+from catalyst.support import *
+
+class generic_target:
+	"""
+	The toplevel class for generic_stage_target. This is about as generic as we get.
+	"""
+	def __init__(self,myspec,addlargs):
+		addl_arg_parse(myspec,addlargs,self.required_values,self.valid_values)
+		self.settings=myspec
+		self.env={}
+		self.env["PATH"]="/bin:/sbin:/usr/bin:/usr/sbin"
diff --git a/catalyst/targets/grp_target.py b/catalyst/targets/grp_target.py
new file mode 100644
index 0000000..8e70042
--- /dev/null
+++ b/catalyst/targets/grp_target.py
@@ -0,0 +1,118 @@
+"""
+Gentoo Reference Platform (GRP) target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,types,glob
+from catalyst.support import *
+from generic_stage_target import *
+
+class grp_target(generic_stage_target):
+	"""
+	The builder class for GRP (Gentoo Reference Platform) builds.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["version_stamp","target","subarch",\
+			"rel_type","profile","snapshot","source_subpath"]
+
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend(["grp/use"])
+		if "grp" not in addlargs:
+			raise CatalystError,"Required value \"grp\" not specified in spec."
+
+		self.required_values.extend(["grp"])
+		if type(addlargs["grp"])==types.StringType:
+			addlargs["grp"]=[addlargs["grp"]]
+
+		if "grp/use" in addlargs:
+			if type(addlargs["grp/use"])==types.StringType:
+				addlargs["grp/use"]=[addlargs["grp/use"]]
+
+		for x in addlargs["grp"]:
+			self.required_values.append("grp/"+x+"/packages")
+			self.required_values.append("grp/"+x+"/type")
+
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+			print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			#if os.path.isdir(self.settings["target_path"]):
+				#cmd("rm -rf "+self.settings["target_path"],
+				#"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
+			if not os.path.exists(self.settings["target_path"]):
+				os.makedirs(self.settings["target_path"])
+
+			touch(self.settings["autoresume_path"]+"setup_target_path")
+
+	def run_local(self):
+		for pkgset in self.settings["grp"]:
+			# example call: "grp.sh run pkgset cd1 xmms vim sys-apps/gleep"
+			mypackages=list_bashify(self.settings["grp/"+pkgset+"/packages"])
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+" run "+self.settings["grp/"+pkgset+"/type"]\
+					+" "+pkgset+" "+mypackages,env=self.env)
+
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"GRP build aborting due to error."
+
+	def set_use(self):
+		generic_stage_target.set_use(self)
+		if "BINDIST" in self.settings:
+			if "use" in self.settings:
+				self.settings["use"].append("bindist")
+			else:
+				self.settings["use"]=["bindist"]
+
+	def set_mounts(self):
+		self.mounts.append("/tmp/grp")
+		self.mountmap["/tmp/grp"]=self.settings["target_path"]
+
+	def generate_digests(self):
+		for pkgset in self.settings["grp"]:
+			if self.settings["grp/"+pkgset+"/type"] == "pkgset":
+				destdir=normpath(self.settings["target_path"]+"/"+pkgset+"/All")
+				print "Digesting files in the pkgset....."
+				digests=glob.glob(destdir+'/*.DIGESTS')
+				for i in digests:
+					if os.path.exists(i):
+						os.remove(i)
+
+				files=os.listdir(destdir)
+				#ignore files starting with '.' using list comprehension
+				files=[filename for filename in files if filename[0] != '.']
+				for i in files:
+					if os.path.isfile(normpath(destdir+"/"+i)):
+						self.gen_contents_file(normpath(destdir+"/"+i))
+						self.gen_digest_file(normpath(destdir+"/"+i))
+			else:
+				destdir=normpath(self.settings["target_path"]+"/"+pkgset)
+				print "Digesting files in the srcset....."
+
+				digests=glob.glob(destdir+'/*.DIGESTS')
+				for i in digests:
+					if os.path.exists(i):
+						os.remove(i)
+
+				files=os.listdir(destdir)
+				#ignore files starting with '.' using list comprehension
+				files=[filename for filename in files if filename[0] != '.']
+				for i in files:
+					if os.path.isfile(normpath(destdir+"/"+i)):
+						#self.gen_contents_file(normpath(destdir+"/"+i))
+						self.gen_digest_file(normpath(destdir+"/"+i))
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+			"config_profile_link","setup_confdir","portage_overlay","bind","chroot_setup",\
+			"setup_environment","run_local","unbind",\
+			"generate_digests","clear_autoresume"]
+
+def register(foo):
+	foo.update({"grp":grp_target})
+	return foo
diff --git a/catalyst/targets/livecd_stage1_target.py b/catalyst/targets/livecd_stage1_target.py
new file mode 100644
index 0000000..ac846ec
--- /dev/null
+++ b/catalyst/targets/livecd_stage1_target.py
@@ -0,0 +1,75 @@
+"""
+LiveCD stage1 target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst.support import *
+from generic_stage_target import *
+
+class livecd_stage1_target(generic_stage_target):
+	"""
+	Builder class for LiveCD stage1.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["livecd/packages"]
+		self.valid_values=self.required_values[:]
+
+		self.valid_values.extend(["livecd/use"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir","portage_overlay",\
+					"bind","chroot_setup","setup_environment","build_packages",\
+					"unbind", "clean","clear_autoresume"]
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"])
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+				print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			if os.path.exists(self.settings["target_path"]):
+				cmd("rm -rf "+self.settings["target_path"],\
+					"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_target_path")
+
+			if not os.path.exists(self.settings["target_path"]):
+				os.makedirs(self.settings["target_path"])
+
+	def set_spec_prefix(self):
+		self.settings["spec_prefix"]="livecd"
+
+	def set_use(self):
+		generic_stage_target.set_use(self)
+		if "use" in self.settings:
+			self.settings["use"].append("livecd")
+		else:
+			self.settings["use"]=["livecd"]
+		if "BINDIST" in self.settings:
+			self.settings["use"].append("bindist")
+
+	def set_packages(self):
+		generic_stage_target.set_packages(self)
+		if self.settings["spec_prefix"]+"/packages" in self.settings:
+			if type(self.settings[self.settings["spec_prefix"]+"/packages"]) == types.StringType:
+				self.settings[self.settings["spec_prefix"]+"/packages"] = \
+					self.settings[self.settings["spec_prefix"]+"/packages"].split()
+		self.settings[self.settings["spec_prefix"]+"/packages"].append("app-misc/livecd-tools")
+
+	def set_pkgcache_path(self):
+		if "pkgcache_path" in self.settings:
+			if type(self.settings["pkgcache_path"]) != types.StringType:
+				self.settings["pkgcache_path"]=normpath(string.join(self.settings["pkgcache_path"]))
+		else:
+			generic_stage_target.set_pkgcache_path(self)
+
+def register(foo):
+	foo.update({"livecd-stage1":livecd_stage1_target})
+	return foo
diff --git a/catalyst/targets/livecd_stage2_target.py b/catalyst/targets/livecd_stage2_target.py
new file mode 100644
index 0000000..8595ffc
--- /dev/null
+++ b/catalyst/targets/livecd_stage2_target.py
@@ -0,0 +1,148 @@
+"""
+LiveCD stage2 target, builds upon previous LiveCD stage1 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,types,stat,shutil
+from catalyst.support import *
+from generic_stage_target import *
+
+class livecd_stage2_target(generic_stage_target):
+	"""
+	Builder class for a LiveCD stage2 build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["boot/kernel"]
+
+		self.valid_values=[]
+
+		self.valid_values.extend(self.required_values)
+		self.valid_values.extend(["livecd/cdtar","livecd/empty","livecd/rm",\
+			"livecd/unmerge","livecd/iso","livecd/gk_mainargs","livecd/type",\
+			"livecd/readme","livecd/motd","livecd/overlay",\
+			"livecd/modblacklist","livecd/splash_theme","livecd/rcadd",\
+			"livecd/rcdel","livecd/fsscript","livecd/xinitrc",\
+			"livecd/root_overlay","livecd/users","portage_overlay",\
+			"livecd/fstype","livecd/fsops","livecd/linuxrc","livecd/bootargs",\
+			"gamecd/conf","livecd/xdm","livecd/xsession","livecd/volid"])
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		if "livecd/type" not in self.settings:
+			self.settings["livecd/type"] = "generic-livecd"
+
+		file_locate(self.settings, ["cdtar","controller_file"])
+
+	def set_source_path(self):
+		self.settings["source_path"] = normpath(self.settings["storedir"] +
+			"/builds/" + self.settings["source_subpath"].rstrip("/") +
+			".tar.bz2")
+		if os.path.isfile(self.settings["source_path"]):
+			self.settings["source_path_hash"]=generate_hash(self.settings["source_path"])
+		else:
+			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/")
+		if not os.path.exists(self.settings["source_path"]):
+			raise CatalystError,"Source Path: "+self.settings["source_path"]+" does not exist."
+
+	def set_spec_prefix(self):
+		self.settings["spec_prefix"]="livecd"
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+self.settings["target_subpath"]+"/")
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+				print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			if os.path.isdir(self.settings["target_path"]):
+				cmd("rm -rf "+self.settings["target_path"],
+				"Could not remove existing directory: "+self.settings["target_path"],env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_target_path")
+			if not os.path.exists(self.settings["target_path"]):
+				os.makedirs(self.settings["target_path"])
+
+	def run_local(self):
+		# what modules do we want to blacklist?
+		if "livecd/modblacklist" in self.settings:
+			try:
+				myf=open(self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf","a")
+			except:
+				self.unbind()
+				raise CatalystError,"Couldn't open "+self.settings["chroot_path"]+"/etc/modprobe.d/blacklist.conf."
+
+			myf.write("\n#Added by Catalyst:")
+			# workaround until config.py is using configparser
+			if isinstance(self.settings["livecd/modblacklist"], str):
+				self.settings["livecd/modblacklist"] = self.settings["livecd/modblacklist"].split()
+			for x in self.settings["livecd/modblacklist"]:
+				myf.write("\nblacklist "+x)
+			myf.close()
+
+	def unpack(self):
+		unpack=True
+		display_msg=None
+
+		clst_unpack_hash=read_from_clst(self.settings["autoresume_path"]+"unpack")
+
+		if os.path.isdir(self.settings["source_path"]):
+			unpack_cmd="rsync -a --delete "+self.settings["source_path"]+" "+self.settings["chroot_path"]
+			display_msg="\nStarting rsync from "+self.settings["source_path"]+"\nto "+\
+				self.settings["chroot_path"]+" (This may take some time) ...\n"
+			error_msg="Rsync of "+self.settings["source_path"]+" to "+self.settings["chroot_path"]+" failed."
+			invalid_snapshot=False
+
+		if "AUTORESUME" in self.settings:
+			if os.path.isdir(self.settings["source_path"]) and \
+				os.path.exists(self.settings["autoresume_path"]+"unpack"):
+				print "Resume point detected, skipping unpack operation..."
+				unpack=False
+			elif "source_path_hash" in self.settings:
+				if self.settings["source_path_hash"] != clst_unpack_hash:
+					invalid_snapshot=True
+
+		if unpack:
+			self.mount_safety_check()
+			if invalid_snapshot:
+				print "No valid resume point detected, cleaning up..."
+				#os.remove(self.settings["autoresume_path"]+"dir_setup")
+				self.clear_autoresume()
+				self.clear_chroot()
+				#self.dir_setup()
+
+			if not os.path.exists(self.settings["chroot_path"]):
+				os.makedirs(self.settings["chroot_path"])
+
+			if not os.path.exists(self.settings["chroot_path"]+"/tmp"):
+				os.makedirs(self.settings["chroot_path"]+"/tmp",1777)
+
+			if "PKGCACHE" in self.settings:
+				if not os.path.exists(self.settings["pkgcache_path"]):
+					os.makedirs(self.settings["pkgcache_path"],0755)
+
+			if not display_msg:
+				raise CatalystError,"Could not find appropriate source. Please check the 'source_subpath' setting in the spec file."
+
+			print display_msg
+			cmd(unpack_cmd,error_msg,env=self.env)
+
+			if "source_path_hash" in self.settings:
+				myf=open(self.settings["autoresume_path"]+"unpack","w")
+				myf.write(self.settings["source_path_hash"])
+				myf.close()
+			else:
+				touch(self.settings["autoresume_path"]+"unpack")
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+				"config_profile_link","setup_confdir","portage_overlay",\
+				"bind","chroot_setup","setup_environment","run_local",\
+				"build_kernel"]
+		if "FETCH" not in self.settings:
+			self.settings["action_sequence"] += ["bootloader","preclean",\
+				"livecd_update","root_overlay","fsscript","rcupdate","unmerge",\
+				"unbind","remove","empty","target_setup",\
+				"setup_overlay","create_iso"]
+		self.settings["action_sequence"].append("clear_autoresume")
+
+def register(foo):
+	foo.update({"livecd-stage2":livecd_stage2_target})
+	return foo
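[Editor's note: the `action_sequence` lists assembled by the targets above are not called directly; `generic_stage_target.run()` walks the list and dispatches each entry by name. A rough sketch of that driver loop, assuming getattr-based dispatch — the class and method bodies here are illustrative, not the real builder:]

```python
# Sketch of how an action_sequence drives a build: each string in the list
# names a method on the target, invoked in order by the run() driver.

class FakeTarget:
    def __init__(self):
        self.settings = {"action_sequence": ["unpack", "bind", "unbind"]}
        self.log = []

    def unpack(self):
        self.log.append("unpack")

    def bind(self):
        self.log.append("bind")

    def unbind(self):
        self.log.append("unbind")

    def run(self):
        # dispatch each named step via getattr, as generic_stage_target does
        for step in self.settings["action_sequence"]:
            getattr(self, step)()

t = FakeTarget()
t.run()
```

This is why targets customize behavior simply by rebuilding the list in `set_action_sequence()` (e.g. livecd-stage2 appends the ISO steps only when FETCH is unset).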
diff --git a/catalyst/targets/netboot2_target.py b/catalyst/targets/netboot2_target.py
new file mode 100644
index 0000000..2b3cd20
--- /dev/null
+++ b/catalyst/targets/netboot2_target.py
@@ -0,0 +1,166 @@
+"""
+netboot target, version 2
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,types
+from catalyst.support import *
+from generic_stage_target import *
+
+class netboot2_target(generic_stage_target):
+	"""
+	Builder class for a netboot build, version 2
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[
+			"boot/kernel"
+		]
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend([
+			"netboot2/packages",
+			"netboot2/use",
+			"netboot2/extra_files",
+			"netboot2/overlay",
+			"netboot2/busybox_config",
+			"netboot2/root_overlay",
+			"netboot2/linuxrc"
+		])
+
+		try:
+			if "netboot2/packages" in addlargs:
+				if type(addlargs["netboot2/packages"]) == types.StringType:
+					loopy=[addlargs["netboot2/packages"]]
+				else:
+					loopy=addlargs["netboot2/packages"]
+
+				for x in loopy:
+					self.valid_values.append("netboot2/packages/"+x+"/files")
+		except:
+			raise CatalystError,"configuration error in netboot2/packages."
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		self.set_build_kernel_vars()
+		self.settings["merge_path"]=normpath("/tmp/image/")
+
+	def set_target_path(self):
+		self.settings["target_path"]=normpath(self.settings["storedir"]+"/builds/"+\
+			self.settings["target_subpath"]+"/")
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"setup_target_path"):
+				print "Resume point detected, skipping target path setup operation..."
+		else:
+			# first clean up any existing target stuff
+			if os.path.isfile(self.settings["target_path"]):
+				cmd("rm -f "+self.settings["target_path"], \
+					"Could not remove existing file: "+self.settings["target_path"],env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_target_path")
+
+		if not os.path.exists(self.settings["storedir"]+"/builds/"):
+			os.makedirs(self.settings["storedir"]+"/builds/")
+
+	def copy_files_to_image(self):
+		# copies specific files from the buildroot to merge_path
+		myfiles=[]
+		loopy=[]
+
+		# check for autoresume point
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"copy_files_to_image"):
+				print "Resume point detected, skipping copy_files_to_image operation..."
+		else:
+			if "netboot2/packages" in self.settings:
+				if type(self.settings["netboot2/packages"]) == types.StringType:
+					loopy=[self.settings["netboot2/packages"]]
+				else:
+					loopy=self.settings["netboot2/packages"]
+
+			for x in loopy:
+				if "netboot2/packages/"+x+"/files" in self.settings:
+					if type(self.settings["netboot2/packages/"+x+"/files"]) == types.ListType:
+						myfiles.extend(self.settings["netboot2/packages/"+x+"/files"])
+					else:
+						myfiles.append(self.settings["netboot2/packages/"+x+"/files"])
+
+			if "netboot2/extra_files" in self.settings:
+				if type(self.settings["netboot2/extra_files"]) == types.ListType:
+					myfiles.extend(self.settings["netboot2/extra_files"])
+				else:
+					myfiles.append(self.settings["netboot2/extra_files"])
+
+			try:
+				cmd("/bin/bash "+self.settings["controller_file"]+\
+					" image " + list_bashify(myfiles),env=self.env)
+			except CatalystError:
+				self.unbind()
+				raise CatalystError,"Failed to copy files to image!"
+
+			touch(self.settings["autoresume_path"]+"copy_files_to_image")
+
+	def setup_overlay(self):
+		if "AUTORESUME" in self.settings \
+		and os.path.exists(self.settings["autoresume_path"]+"setup_overlay"):
+			print "Resume point detected, skipping setup_overlay operation..."
+		else:
+			if "netboot2/overlay" in self.settings:
+				for x in self.settings["netboot2/overlay"]:
+					if os.path.exists(x):
+						cmd("rsync -a "+x+"/ "+\
+							self.settings["chroot_path"] + self.settings["merge_path"], "netboot2/overlay: "+x+" copy failed.",env=self.env)
+				touch(self.settings["autoresume_path"]+"setup_overlay")
+
+	def move_kernels(self):
+		# we're done, move the kernels to builds/*
+		# no auto resume here as we always want the
+		# freshest images moved
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+\
+				" final",env=self.env)
+			print ">>> Netboot Build Finished!"
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"Failed to move kernel images!"
+
+	def remove(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"remove"):
+			print "Resume point detected, skipping remove operation..."
+		else:
+			if self.settings["spec_prefix"]+"/rm" in self.settings:
+				for x in self.settings[self.settings["spec_prefix"]+"/rm"]:
+					# we're going to shell out for all these cleaning operations,
+					# so we get easy glob handling
+					print "netboot2: removing " + x
+					os.system("rm -rf " + self.settings["chroot_path"] + self.settings["merge_path"] + x)
+
+	def empty(self):
+		if "AUTORESUME" in self.settings \
+			and os.path.exists(self.settings["autoresume_path"]+"empty"):
+			print "Resume point detected, skipping empty operation..."
+		else:
+			if "netboot2/empty" in self.settings:
+				if type(self.settings["netboot2/empty"])==types.StringType:
+					self.settings["netboot2/empty"]=self.settings["netboot2/empty"].split()
+				for x in self.settings["netboot2/empty"]:
+					myemp=self.settings["chroot_path"] + self.settings["merge_path"] + x
+					if not os.path.isdir(myemp):
+						print x,"not a directory or does not exist, skipping 'empty' operation."
+						continue
+					print "Emptying directory", x
+					# stat the dir, delete the dir, recreate the dir and set
+					# the proper perms and ownership
+					mystat=os.stat(myemp)
+					shutil.rmtree(myemp)
+					os.makedirs(myemp,0755)
+					os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+					os.chmod(myemp,mystat[ST_MODE])
+		touch(self.settings["autoresume_path"]+"empty")
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot","config_profile_link",\
+					"setup_confdir","portage_overlay","bind","chroot_setup",\
+					"setup_environment","build_packages","root_overlay",\
+					"copy_files_to_image","setup_overlay","build_kernel","move_kernels",\
+					"remove","empty","unbind","clean","clear_autoresume"]
+
+def register(foo):
+	foo.update({"netboot2":netboot2_target})
+	return foo
diff --git a/catalyst/targets/netboot_target.py b/catalyst/targets/netboot_target.py
new file mode 100644
index 0000000..9d01b7e
--- /dev/null
+++ b/catalyst/targets/netboot_target.py
@@ -0,0 +1,128 @@
+"""
+netboot target, version 1
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+import os,string,types
+from catalyst.support import *
+from generic_stage_target import *
+
+class netboot_target(generic_stage_target):
+	"""
+	Builder class for a netboot build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.valid_values = [
+			"netboot/kernel/sources",
+			"netboot/kernel/config",
+			"netboot/kernel/prebuilt",
+
+			"netboot/busybox_config",
+
+			"netboot/extra_files",
+			"netboot/packages"
+		]
+		self.required_values=[]
+
+		try:
+			if "netboot/packages" in addlargs:
+				if type(addlargs["netboot/packages"]) == types.StringType:
+					loopy=[addlargs["netboot/packages"]]
+				else:
+					loopy=addlargs["netboot/packages"]
+
+		#	for x in loopy:
+		#		self.required_values.append("netboot/packages/"+x+"/files")
+		except:
+			raise CatalystError,"configuration error in netboot/packages."
+
+		generic_stage_target.__init__(self,spec,addlargs)
+		self.set_build_kernel_vars(addlargs)
+		if "netboot/busybox_config" in addlargs:
+			file_locate(self.settings, ["netboot/busybox_config"])
+
+		# Custom Kernel Tarball --- use that instead ...
+
+		# unless the user wants specific CFLAGS/CXXFLAGS, let's use -Os
+
+		for envvar in "CFLAGS", "CXXFLAGS":
+			if envvar not in os.environ and envvar not in addlargs:
+				self.settings[envvar] = "-Os -pipe"
+
+	def set_root_path(self):
+		# ROOT= variable for emerges
+		self.settings["root_path"]=normpath("/tmp/image")
+		print "netboot root path is "+self.settings["root_path"]
+
+#	def build_packages(self):
+#		# build packages
+#		if "netboot/packages" in self.settings:
+#			mypack=list_bashify(self.settings["netboot/packages"])
+#		try:
+#			cmd("/bin/bash "+self.settings["controller_file"]+" packages "+mypack,env=self.env)
+#		except CatalystError:
+#			self.unbind()
+#			raise CatalystError,"netboot build aborting due to error."
+
+	def build_busybox(self):
+		# build busybox
+		if "netboot/busybox_config" in self.settings:
+			mycmd = self.settings["netboot/busybox_config"]
+		else:
+			mycmd = ""
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+" busybox "+ mycmd,env=self.env)
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"netboot build aborting due to error."
+
+	def copy_files_to_image(self):
+		# create image
+		myfiles=[]
+		loopy=[]
+		if "netboot/packages" in self.settings:
+			if type(self.settings["netboot/packages"]) == types.StringType:
+				loopy=[self.settings["netboot/packages"]]
+			else:
+				loopy=self.settings["netboot/packages"]
+
+		for x in loopy:
+			if "netboot/packages/"+x+"/files" in self.settings:
+				if type(self.settings["netboot/packages/"+x+"/files"]) == types.ListType:
+					myfiles.extend(self.settings["netboot/packages/"+x+"/files"])
+				else:
+					myfiles.append(self.settings["netboot/packages/"+x+"/files"])
+
+		if "netboot/extra_files" in self.settings:
+			if type(self.settings["netboot/extra_files"]) == types.ListType:
+				myfiles.extend(self.settings["netboot/extra_files"])
+			else:
+				myfiles.append(self.settings["netboot/extra_files"])
+
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+\
+				" image " + list_bashify(myfiles),env=self.env)
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"netboot build aborting due to error."
+
+	def create_netboot_files(self):
+		# finish it all up
+		try:
+			cmd("/bin/bash "+self.settings["controller_file"]+" finish",env=self.env)
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"netboot build aborting due to error."
+
+		# end
+		print "netboot: build finished!"
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir","bind","chroot_setup",\
+					"setup_environment","build_packages","build_busybox",\
+					"build_kernel","copy_files_to_image",\
+					"clean","create_netboot_files","unbind","clear_autoresume"]
+
+def register(foo):
+	foo.update({"netboot":netboot_target})
+	return foo
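[Editor's note: both netboot targets, and several others above, repeat the same "string or list" normalization on spec values (the `types.StringType` checks followed by `.split()` or list-wrapping). A compact Python 3 helper expressing the same normalization — the `as_list` name is hypothetical, not part of the patch:]

```python
def as_list(value):
    # Spec values may arrive as a whitespace-joined string or as a list;
    # this mirrors the repeated types.StringType checks in the targets above.
    if isinstance(value, str):
        return value.split()
    return list(value)

# the two shapes the targets handle:
assert as_list("app-misc/livecd-tools sys-apps/foo") == ["app-misc/livecd-tools", "sys-apps/foo"]
assert as_list(["a", "b"]) == ["a", "b"]
```

Centralizing this would remove a half-dozen near-identical blocks from the targets, which a later config.py/configparser cleanup (hinted at in the livecd_stage2 comment) could absorb.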
diff --git a/catalyst/targets/snapshot_target.py b/catalyst/targets/snapshot_target.py
new file mode 100644
index 0000000..d1b9e40
--- /dev/null
+++ b/catalyst/targets/snapshot_target.py
@@ -0,0 +1,91 @@
+"""
+Snapshot target
+"""
+
+import os,shutil
+from catalyst.support import *
+from generic_stage_target import *
+
+class snapshot_target(generic_stage_target):
+	"""
+	Builder class for snapshots.
+	"""
+	def __init__(self,myspec,addlargs):
+		self.required_values=["version_stamp","target"]
+		self.valid_values=["version_stamp","target"]
+
+		generic_target.__init__(self,myspec,addlargs)
+		self.settings=myspec
+		self.settings["target_subpath"]="portage"
+		st=self.settings["storedir"]
+		self.settings["snapshot_path"] = normpath(st + "/snapshots/"
+			+ self.settings["snapshot_name"]
+			+ self.settings["version_stamp"] + ".tar.bz2")
+		self.settings["tmp_path"]=normpath(st+"/tmp/"+self.settings["target_subpath"])
+
+	def setup(self):
+		x=normpath(self.settings["storedir"]+"/snapshots")
+		if not os.path.exists(x):
+			os.makedirs(x)
+
+	def mount_safety_check(self):
+		pass
+
+	def run(self):
+		if "PURGEONLY" in self.settings:
+			self.purge()
+			return
+
+		if "PURGE" in self.settings:
+			self.purge()
+
+		self.setup()
+		print "Creating Portage tree snapshot "+self.settings["version_stamp"]+\
+			" from "+self.settings["portdir"]+"..."
+
+		mytmp=self.settings["tmp_path"]
+		if not os.path.exists(mytmp):
+			os.makedirs(mytmp)
+
+		cmd("rsync -a --delete --exclude /packages/ --exclude /distfiles/ " +
+			"--exclude /local/ --exclude CVS/ --exclude .svn --filter=H_**/files/digest-* " +
+			self.settings["portdir"] + "/ " + mytmp + "/%s/" % self.settings["repo_name"],
+			"Snapshot failure", env=self.env)
+
+		print "Compressing Portage snapshot tarball..."
+		cmd("tar -I lbzip2 -cf " + self.settings["snapshot_path"] + " -C " +
+			mytmp + " " + self.settings["repo_name"],
+			"Snapshot creation failure",env=self.env)
+
+		self.gen_contents_file(self.settings["snapshot_path"])
+		self.gen_digest_file(self.settings["snapshot_path"])
+
+		self.cleanup()
+		print "snapshot: complete!"
+
+	def kill_chroot_pids(self):
+		pass
+
+	def cleanup(self):
+		print "Cleaning up..."
+
+	def purge(self):
+		myemp=self.settings["tmp_path"]
+		if os.path.isdir(myemp):
+			print "Emptying directory",myemp
+			"""
+			stat the dir, delete the dir, recreate the dir and set
+			the proper perms and ownership
+			"""
+			mystat=os.stat(myemp)
+			""" There's no easy way to change flags recursively in python """
+			if os.uname()[0] == "FreeBSD":
+				os.system("chflags -R noschg "+myemp)
+			shutil.rmtree(myemp)
+			os.makedirs(myemp,0755)
+			os.chown(myemp,mystat[ST_UID],mystat[ST_GID])
+			os.chmod(myemp,mystat[ST_MODE])
+
+def register(foo):
+	foo.update({"snapshot":snapshot_target})
+	return foo
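[Editor's note: `purge()` above, like the `empty()` methods in the other targets, uses the same stat/delete/recreate dance to empty a directory while preserving its mode and ownership. A self-contained sketch of that idiom — the `empty_dir` helper name is illustrative:]

```python
import os
import shutil
import stat
import tempfile

def empty_dir(path):
    # stat the dir, delete it, then recreate it with the original
    # permissions and ownership, as purge()/empty() do above
    st = os.stat(path)
    shutil.rmtree(path)
    os.makedirs(path, 0o755)
    os.chmod(path, stat.S_IMODE(st.st_mode))
    os.chown(path, st.st_uid, st.st_gid)  # no-op for the owning user

d = tempfile.mkdtemp()
open(os.path.join(d, "junk"), "w").close()
empty_dir(d)  # d now exists, is empty, and keeps its original mode
```

The FreeBSD `chflags -R noschg` shell-out in `purge()` exists because Python has no recursive file-flags API, so `shutil.rmtree` alone can fail on schg-flagged files there.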
diff --git a/catalyst/targets/stage1_target.py b/catalyst/targets/stage1_target.py
new file mode 100644
index 0000000..8d5a674
--- /dev/null
+++ b/catalyst/targets/stage1_target.py
@@ -0,0 +1,97 @@
+"""
+stage1 target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst.support import *
+from generic_stage_target import *
+
+class stage1_target(generic_stage_target):
+	"""
+	Builder class for a stage1 installation tarball build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=["chost"]
+		self.valid_values.extend(["update_seed","update_seed_command"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_stage_path(self):
+		self.settings["stage_path"]=normpath(self.settings["chroot_path"]+self.settings["root_path"])
+		print "stage1 stage path is "+self.settings["stage_path"]
+
+	def set_root_path(self):
+		# sets the root path, relative to 'chroot_path', of the stage1 root
+		self.settings["root_path"]=normpath("/tmp/stage1root")
+		print "stage1 root path is "+self.settings["root_path"]
+
+	def set_cleanables(self):
+		generic_stage_target.set_cleanables(self)
+		self.settings["cleanables"].extend([\
+		"/usr/share/zoneinfo", "/etc/portage/package*"])
+
+	# XXX: How do these override_foo() functions differ from the ones in generic_stage_target and why aren't they in stage3_target?
+
+	def override_chost(self):
+		if "chost" in self.settings:
+			self.settings["CHOST"]=list_to_string(self.settings["chost"])
+
+	def override_cflags(self):
+		if "cflags" in self.settings:
+			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
+
+	def override_cxxflags(self):
+		if "cxxflags" in self.settings:
+			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
+
+	def override_ldflags(self):
+		if "ldflags" in self.settings:
+			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
+
+	def set_portage_overlay(self):
+		generic_stage_target.set_portage_overlay(self)
+		if "portage_overlay" in self.settings:
+			print "\nWARNING !!!!!"
+			print "\tUsing a portage overlay for earlier stages could cause build issues."
+			print "\tIf you break it, you buy it. Don't complain to us about it."
+			print "\tDon't say we did not warn you\n"
+
+	def base_dirs(self):
+		if os.uname()[0] == "FreeBSD":
+			# baselayout no longer creates the .keep files in proc and dev for FreeBSD as it
+			# would create them too late...we need them earlier before bind mounting filesystems
+			# since proc and dev are not writeable, so...create them here
+			if not os.path.exists(self.settings["stage_path"]+"/proc"):
+				os.makedirs(self.settings["stage_path"]+"/proc")
+			if not os.path.exists(self.settings["stage_path"]+"/dev"):
+				os.makedirs(self.settings["stage_path"]+"/dev")
+			if not os.path.isfile(self.settings["stage_path"]+"/proc/.keep"):
+				try:
+					proc_keepfile = open(self.settings["stage_path"]+"/proc/.keep","w")
+					proc_keepfile.write('')
+					proc_keepfile.close()
+				except IOError:
+					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/proc/.keep")
+			if not os.path.isfile(self.settings["stage_path"]+"/dev/.keep"):
+				try:
+					dev_keepfile = open(self.settings["stage_path"]+"/dev/.keep","w")
+					dev_keepfile.write('')
+					dev_keepfile.close()
+				except IOError:
+					print "!!! Failed to create %s" % (self.settings["stage_path"]+"/dev/.keep")
+		else:
+			pass
+
+	def set_mounts(self):
+		# stage_path/proc probably doesn't exist yet, so create it
+		if not os.path.exists(self.settings["stage_path"]+"/proc"):
+			os.makedirs(self.settings["stage_path"]+"/proc")
+
+		# alter the mount mappings to bind mount proc onto it
+		self.mounts.append("stage1root/proc")
+		self.target_mounts["stage1root/proc"] = "/tmp/stage1root/proc"
+		self.mountmap["stage1root/proc"] = "/proc"
+
+def register(foo):
+	foo.update({"stage1":stage1_target})
+	return foo
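[Editor's note: `set_mounts()` above registers the stage1root proc bind mount by adding parallel entries to three structures: the mount key list, the in-chroot target map, and the host source map. A tiny sketch of how the three stay in sync — the helper function is hypothetical, the structure names are taken from the code above:]

```python
# Parallel structures used by the mount machinery: the key list orders the
# mounts, target_mounts maps key -> path inside the chroot, and mountmap
# maps key -> host source path.
mounts = []
target_mounts = {}
mountmap = {}

def add_bind_mount(key, target, source):
    # one key must be added to all three structures, as set_mounts() does
    mounts.append(key)
    target_mounts[key] = target
    mountmap[key] = source

add_bind_mount("stage1root/proc", "/tmp/stage1root/proc", "/proc")
```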
diff --git a/catalyst/targets/stage2_target.py b/catalyst/targets/stage2_target.py
new file mode 100644
index 0000000..0168718
--- /dev/null
+++ b/catalyst/targets/stage2_target.py
@@ -0,0 +1,66 @@
+"""
+stage2 target, builds upon previous stage1 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst.support import *
+from generic_stage_target import *
+
+class stage2_target(generic_stage_target):
+	"""
+	Builder class for a stage2 installation tarball build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=["chost"]
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_source_path(self):
+		if "SEEDCACHE" in self.settings and os.path.isdir(normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")):
+			self.settings["source_path"]=normpath(self.settings["storedir"]+"/tmp/"+self.settings["source_subpath"]+"/tmp/stage1root/")
+		else:
+			self.settings["source_path"] = normpath(self.settings["storedir"] +
+				"/builds/" + self.settings["source_subpath"].rstrip("/") +
+				".tar.bz2")
+			if os.path.isfile(self.settings["source_path"]):
+				self.settings["source_path_hash"]=generate_hash(self.settings["source_path"],\
+					hash_function=self.settings["hash_function"],verbose=False)
+		print "Source path set to "+self.settings["source_path"]
+		if os.path.isdir(self.settings["source_path"]):
+			print "\tIf this is not desired, remove this directory or turn off seedcache in the options of catalyst.conf"
+			print "\tthe source path will then be " + \
+				normpath(self.settings["storedir"] + "/builds/" + \
+				self.settings["source_subpath"].rstrip("/") + ".tar.bz2\n")
+
+	# XXX: How do these override_foo() functions differ from the ones in
+	# generic_stage_target and why aren't they in stage3_target?
+
+	def override_chost(self):
+		if "chost" in self.settings:
+			self.settings["CHOST"]=list_to_string(self.settings["chost"])
+
+	def override_cflags(self):
+		if "cflags" in self.settings:
+			self.settings["CFLAGS"]=list_to_string(self.settings["cflags"])
+
+	def override_cxxflags(self):
+		if "cxxflags" in self.settings:
+			self.settings["CXXFLAGS"]=list_to_string(self.settings["cxxflags"])
+
+	def override_ldflags(self):
+		if "ldflags" in self.settings:
+			self.settings["LDFLAGS"]=list_to_string(self.settings["ldflags"])
+
+	def set_portage_overlay(self):
+		generic_stage_target.set_portage_overlay(self)
+		if "portage_overlay" in self.settings:
+			print "\nWARNING !!!!!"
+			print "\tUsing a portage overlay for earlier stages could cause build issues."
+			print "\tIf you break it, you buy it. Don't complain to us about it."
+			print "\tDon't say we did not warn you\n"
+
+def register(foo):
+	foo.update({"stage2":stage2_target})
+	return foo
diff --git a/catalyst/targets/stage3_target.py b/catalyst/targets/stage3_target.py
new file mode 100644
index 0000000..89edd66
--- /dev/null
+++ b/catalyst/targets/stage3_target.py
@@ -0,0 +1,31 @@
+"""
+stage3 target, builds upon previous stage2/stage3 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst.support import *
+from generic_stage_target import *
+
+class stage3_target(generic_stage_target):
+	"""
+	Builder class for a stage3 installation tarball build.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=[]
+		self.valid_values=[]
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_portage_overlay(self):
+		generic_stage_target.set_portage_overlay(self)
+		if "portage_overlay" in self.settings:
+			print "\nWARNING !!!!!"
+			print "\tUsing an overlay for earlier stages could cause build issues."
+			print "\tIf you break it, you buy it. Don't complain to us about it."
+			print "\tDon't say we did not warn you\n"
+
+	def set_cleanables(self):
+		generic_stage_target.set_cleanables(self)
+
+def register(foo):
+	foo.update({"stage3":stage3_target})
+	return foo
diff --git a/catalyst/targets/stage4_target.py b/catalyst/targets/stage4_target.py
new file mode 100644
index 0000000..9168f2e
--- /dev/null
+++ b/catalyst/targets/stage4_target.py
@@ -0,0 +1,43 @@
+"""
+stage4 target, builds upon previous stage3/stage4 tarball
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst.support import *
+from generic_stage_target import *
+
+class stage4_target(generic_stage_target):
+	"""
+	Builder class for stage4.
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["stage4/packages"]
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend(["stage4/use","boot/kernel",\
+				"stage4/root_overlay","stage4/fsscript",\
+				"stage4/gk_mainargs","splash_theme",\
+				"portage_overlay","stage4/rcadd","stage4/rcdel",\
+				"stage4/linuxrc","stage4/unmerge","stage4/rm","stage4/empty"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def set_cleanables(self):
+		self.settings["cleanables"]=["/var/tmp/*","/tmp/*"]
+
+	def set_action_sequence(self):
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+					"config_profile_link","setup_confdir","portage_overlay",\
+					"bind","chroot_setup","setup_environment","build_packages",\
+					"build_kernel","bootloader","root_overlay","fsscript",\
+					"preclean","rcupdate","unmerge","unbind","remove","empty",\
+					"clean"]
+
+#		if "TARBALL" in self.settings or \
+#			"FETCH" not in self.settings:
+		if "FETCH" not in self.settings:
+			self.settings["action_sequence"].append("capture")
+		self.settings["action_sequence"].append("clear_autoresume")
+
+def register(foo):
+	foo.update({"stage4":stage4_target})
+	return foo
+
diff --git a/catalyst/targets/tinderbox_target.py b/catalyst/targets/tinderbox_target.py
new file mode 100644
index 0000000..1d31989
--- /dev/null
+++ b/catalyst/targets/tinderbox_target.py
@@ -0,0 +1,48 @@
+"""
+Tinderbox target
+"""
+# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
+
+from catalyst.support import *
+from generic_stage_target import *
+
+class tinderbox_target(generic_stage_target):
+	"""
+	Builder class for the tinderbox target
+	"""
+	def __init__(self,spec,addlargs):
+		self.required_values=["tinderbox/packages"]
+		self.valid_values=self.required_values[:]
+		self.valid_values.extend(["tinderbox/use"])
+		generic_stage_target.__init__(self,spec,addlargs)
+
+	def run_local(self):
+		# tinderbox
+		# example call: "grp.sh run xmms vim sys-apps/gleep"
+		try:
+			if os.path.exists(self.settings["controller_file"]):
+				cmd("/bin/bash "+self.settings["controller_file"]+" run "+\
+				list_bashify(self.settings["tinderbox/packages"]),"run script failed.",env=self.env)
+
+		except CatalystError:
+			self.unbind()
+			raise CatalystError,"Tinderbox aborting due to error."
+
+	def set_cleanables(self):
+		self.settings['cleanables'] = [
+			'/etc/resolv.conf',
+			'/var/tmp/*',
+			'/root/*',
+			self.settings['portdir'],
+			]
+
+	def set_action_sequence(self):
+		#Default action sequence for run method
+		self.settings["action_sequence"]=["unpack","unpack_snapshot",\
+		              "config_profile_link","setup_confdir","bind","chroot_setup",\
+		              "setup_environment","run_local","preclean","unbind","clean",\
+		              "clear_autoresume"]
+
+def register(foo):
+	foo.update({"tinderbox":tinderbox_target})
+	return foo
-- 
1.8.3.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [gentoo-catalyst] [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory
  2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
                   ` (2 preceding siblings ...)
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 3/5] Rename the modules subpkg to targets, to better reflect what it contains Brian Dolbec
@ 2014-01-12  1:46 ` Brian Dolbec
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 5/5] setup.py: Add distutils-based packaging Brian Dolbec
  2014-01-22  5:10 ` [gentoo-catalyst] Re-organize the python structure W. Trevor King
  5 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  1:46 UTC (permalink / raw
  To: gentoo-catalyst; +Cc: Brian Dolbec

---
 etc/catalyst.conf   | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 etc/catalystrc      |  5 +++
 files/catalyst.conf | 97 -----------------------------------------------------
 files/catalystrc    |  5 ---
 4 files changed, 102 insertions(+), 102 deletions(-)
 create mode 100644 etc/catalyst.conf
 create mode 100755 etc/catalystrc
 delete mode 100644 files/catalyst.conf
 delete mode 100755 files/catalystrc

diff --git a/etc/catalyst.conf b/etc/catalyst.conf
new file mode 100644
index 0000000..57606ca
--- /dev/null
+++ b/etc/catalyst.conf
@@ -0,0 +1,97 @@
+# /etc/catalyst/catalyst.conf
+
+# Simple descriptions of catalyst settings. Please refer to the online
+# documentation for more information.
+
+# Creates a .DIGESTS file containing the hash output from any of the supported
+# options below.  Adding them all may take a long time.
+# Supported hashes:
+# adler32, crc32, crc32b, gost, haval128, haval160, haval192, haval224,
+# haval256, md2, md4, md5, ripemd128, ripemd160, ripemd256, ripemd320, sha1,
+# sha224, sha256, sha384, sha512, snefru128, snefru256, tiger, tiger128,
+# tiger160, whirlpool
+digests="md5 sha1 sha512 whirlpool"
+
+# Creates a .CONTENTS file listing the contents of the file. Pick from any of
+# the supported options below:
+# auto		- strongly recommended
+# tar-tv	- does 'tar tvf FILE'
+# tar-tvz	- does 'tar tvzf FILE'
+# tar-tvy	- does 'tar tvyf FILE'
+# isoinfo-l	- does 'isoinfo -l -i FILE'
+# isoinfo-f	- does 'isoinfo -f -i FILE'
+# 'isoinfo-f' is the only option not chosen by the automatic algorithm.
+# If this variable is empty, no .CONTENTS will be generated at all.
+contents="auto"
+
+# distdir specifies where your distfiles are located. This setting should
+# work fine for most default installations.
+distdir="/usr/portage/distfiles"
+
+# envscript allows users to set options such as http proxies, MAKEOPTS,
+# GENTOO_MIRRORS, or any other environment variables needed for building.
+# The envscript file sets environment variables like so:
+# export FOO="bar"
+envscript="/etc/catalyst/catalystrc"
+
+# Internal hash function catalyst should use for things like autoresume,
+# seedcache, etc.  The default and fastest is crc32.  You should not ever need
+# to change this unless your OS does not support it.
+# Supported hashes:
+# adler32, crc32, crc32b, gost, haval128, haval160, haval192, haval224,
+# haval256, md2, md4, md5, ripemd128, ripemd160, ripemd256, ripemd320, sha1,
+# sha224, sha256, sha384, sha512, snefru128, snefru256, tiger, tiger128,
+# tiger160, whirlpool
+hash_function="crc32"
+
+# options set different build-time options for catalyst. Some examples are:
+# autoresume = Attempt to resume a failed build, clear the autoresume flags with
+#	the -a option to the catalyst cmdline.  -p will clear the autoresume flags
+#	as well as your pkgcache and kerncache
+#	( This option is not fully tested, bug reports welcome )
+# bindist = enables the bindist USE flag, please see package specific definition,
+#	however, it is suggested to enable this if redistributing builds.
+# ccache = enables build time ccache support
+# distcc = enable distcc support for building. You have to set distcc_hosts in
+# 	your spec file.
+# icecream = enables icecream compiler cluster support for building
+# kerncache = keeps a tbz2 of your built kernel and modules (useful if your
+#	build stops in livecd-stage2)
+# pkgcache = keeps a tbz2 of every built package (useful if your build stops
+#	prematurely)
+# preserve_libs = enables portage to preserve used libs when unmerging packages
+#   (used on installcd-stage2 and stage4 targets)
+# seedcache = use the build output of a previous target if it exists to speed up
+#	the copy
+# snapcache = cache the snapshot so that it can be bind-mounted into the chroot.
+#	WARNING: moving parts of the portage tree from within fsscript *will* break
+#	your cache. The cache is unlinked before any empty or rm processing, though.
+#
+# (These options can be used together)
+options="autoresume bindist kerncache pkgcache seedcache snapcache"
+
+# portdir specifies the source portage tree used by the snapshot target.
+portdir="/usr/portage"
+
+# sharedir specifies where all of the catalyst runtime executables are. Most
+# users do not need to change this.
+sharedir="/usr/lib/catalyst"
+
+# snapshot_cache specifies where the snapshots will be cached to if snapcache is
+# enabled in the options.
+snapshot_cache="/var/tmp/catalyst/snapshot_cache"
+
+# storedir specifies where catalyst will store everything that it builds, and
+# also where it will put its temporary files and caches.
+storedir="/var/tmp/catalyst"
+
+# port_logdir is where all build logs will be kept. This dir will be automatically cleaned
+# of all logs over 30 days old. If left undefined the logs will remain in the build directory
+# as usual and get cleaned every time a stage build is restarted.
+# port_logdir="/var/tmp/catalyst/tmp"
+
+# var_tmpfs_portage will mount a tmpfs for /var/tmp/portage so building takes place in RAM
+# this feature requires a pretty large tmpfs ({open,libre}office needs ~8GB to build)
+# WARNING: If you use too much RAM everything will fail horribly and it is not our fault.
+# set size of /var/tmp/portage tmpfs in gigabytes
+# var_tmpfs_portage=16
diff --git a/etc/catalystrc b/etc/catalystrc
new file mode 100755
index 0000000..bcd729a
--- /dev/null
+++ b/etc/catalystrc
@@ -0,0 +1,5 @@
+#!/bin/bash
+# This is an example catalystrc. As such, it doesn't actually *do* anything.
+
+# Uncomment the following to increase the number of threads used to compile.
+# export MAKEOPTS="-j16"
diff --git a/files/catalyst.conf b/files/catalyst.conf
deleted file mode 100644
index 57606ca..0000000
--- a/files/catalyst.conf
+++ /dev/null
@@ -1,97 +0,0 @@
-# /etc/catalyst/catalyst.conf
-
-# Simple descriptions of catalyst settings. Please refer to the online
-# documentation for more information.
-
-# Creates a .DIGESTS file containing the hash output from any of the supported
-# options below.  Adding them all may take a long time.
-# Supported hashes:
-# adler32, crc32, crc32b, gost, haval128, haval160, haval192, haval224,
-# haval256, md2, md4, md5, ripemd128, ripemd160, ripemd256, ripemd320, sha1,
-# sha224, sha256, sha384, sha512, snefru128, snefru256, tiger, tiger128,
-# tiger160, whirlpool
-digests="md5 sha1 sha512 whirlpool"
-
-# Creates a .CONTENTS file listing the contents of the file. Pick from any of
-# the supported options below:
-# auto		- strongly recommended
-# tar-tv	- does 'tar tvf FILE'
-# tar-tvz	- does 'tar tvzf FILE'
-# tar-tvy	- does 'tar tvyf FILE'
-# isoinfo-l	- does 'isoinfo -l -i FILE'
-# isoinfo-f	- does 'isoinfo -f -i FILE'
-# 'isoinfo-f' is the only option not chosen by the automatic algorithm.
-# If this variable is empty, no .CONTENTS will be generated at all.
-contents="auto"
-
-# distdir specifies where your distfiles are located. This setting should
-# work fine for most default installations.
-distdir="/usr/portage/distfiles"
-
-# envscript allows users to set options such as http proxies, MAKEOPTS,
-# GENTOO_MIRRORS, or any other environment variables needed for building.
-# The envscript file sets environment variables like so:
-# export FOO="bar"
-envscript="/etc/catalyst/catalystrc"
-
-# Internal hash function catalyst should use for things like autoresume,
-# seedcache, etc.  The default and fastest is crc32.  You should not ever need
-# to change this unless your OS does not support it.
-# Supported hashes:
-# adler32, crc32, crc32b, gost, haval128, haval160, haval192, haval224,
-# haval256, md2, md4, md5, ripemd128, ripemd160, ripemd256, ripemd320, sha1,
-# sha224, sha256, sha384, sha512, snefru128, snefru256, tiger, tiger128,
-# tiger160, whirlpool
-hash_function="crc32"
-
-# options set different build-time options for catalyst. Some examples are:
-# autoresume = Attempt to resume a failed build, clear the autoresume flags with
-#	the -a option to the catalyst cmdline.  -p will clear the autoresume flags
-#	as well as your pkgcache and kerncache
-#	( This option is not fully tested, bug reports welcome )
-# bindist = enables the bindist USE flag, please see package specific definition,
-#	however, it is suggested to enable this if redistributing builds.
-# ccache = enables build time ccache support
-# distcc = enable distcc support for building. You have to set distcc_hosts in
-# 	your spec file.
-# icecream = enables icecream compiler cluster support for building
-# kerncache = keeps a tbz2 of your built kernel and modules (useful if your
-#	build stops in livecd-stage2)
-# pkgcache = keeps a tbz2 of every built package (useful if your build stops
-#	prematurely)
-# preserve_libs = enables portage to preserve used libs when unmerging packages
-#   (used on installcd-stage2 and stage4 targets)
-# seedcache = use the build output of a previous target if it exists to speed up
-#	the copy
-# snapcache = cache the snapshot so that it can be bind-mounted into the chroot.
-#	WARNING: moving parts of the portage tree from within fsscript *will* break
-#	your cache. The cache is unlinked before any empty or rm processing, though.
-#
-# (These options can be used together)
-options="autoresume bindist kerncache pkgcache seedcache snapcache"
-
-# portdir specifies the source portage tree used by the snapshot target.
-portdir="/usr/portage"
-
-# sharedir specifies where all of the catalyst runtime executables are. Most
-# users do not need to change this.
-sharedir="/usr/lib/catalyst"
-
-# snapshot_cache specifies where the snapshots will be cached to if snapcache is
-# enabled in the options.
-snapshot_cache="/var/tmp/catalyst/snapshot_cache"
-
-# storedir specifies where catalyst will store everything that it builds, and
-# also where it will put its temporary files and caches.
-storedir="/var/tmp/catalyst"
-
-# port_logdir is where all build logs will be kept. This dir will be automatically cleaned
-# of all logs over 30 days old. If left undefined the logs will remain in the build directory
-# as usual and get cleaned every time a stage build is restarted.
-# port_logdir="/var/tmp/catalyst/tmp"
-
-# var_tmpfs_portage will mount a tmpfs for /var/tmp/portage so building takes place in RAM
-# this feature requires a pretty large tmpfs ({open,libre}office needs ~8GB to build)
-# WARNING: If you use too much RAM everything will fail horribly and it is not our fault.
-# set size of /var/tmp/portage tmpfs in gigabytes
-# var_tmpfs_portage=16
diff --git a/files/catalystrc b/files/catalystrc
deleted file mode 100755
index bcd729a..0000000
--- a/files/catalystrc
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-# This is an example catalystrc. As such, it doesn't actually *do* anything.
-
-# Uncomment the following to increase the number of threads used to compile.
-# export MAKEOPTS="-j16"
-- 
1.8.3.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread
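
[Editorial aside: the `digests` setting in the catalyst.conf patch above selects
which checksums land in the generated .DIGESTS file.  As a rough sketch of the
idea (not catalyst's actual implementation; the helper name and output layout
here are made up for illustration), such a file can be assembled with Python's
hashlib:]

```python
import hashlib

def digests_lines(path, hashes=("md5", "sha1", "sha512")):
    """Build .DIGESTS-style lines for `path`: a comment naming each
    hash, then '<hexdigest>  <filename>'.  Layout is illustrative only.
    """
    with open(path, "rb") as handle:
        data = handle.read()
    lines = []
    for name in hashes:
        # hashlib.new() looks the algorithm up by name at runtime.
        digest = hashlib.new(name, data).hexdigest()
        lines.append("# {0} HASH".format(name.upper()))
        lines.append("{0}  {1}".format(digest, path))
    return "\n".join(lines) + "\n"
```

[Note that the more exotic hashes in the config's list (gost, snefru,
whirlpool) are not guaranteed to be available in hashlib, which is one reason
catalyst historically shelled out to external checksum tools for some of them.]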

* [gentoo-catalyst] [PATCH 5/5] setup.py: Add distutils-based packaging
  2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
                   ` (3 preceding siblings ...)
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory Brian Dolbec
@ 2014-01-12  1:46 ` Brian Dolbec
  2014-01-12  2:11   ` [gentoo-catalyst] " W. Trevor King
  2014-01-22  5:10 ` [gentoo-catalyst] Re-organize the python structure W. Trevor King
  5 siblings, 1 reply; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  1:46 UTC (permalink / raw
  To: gentoo-catalyst; +Cc: W. Trevor King

From: "W. Trevor King" <wking@tremily.us>

Package catalyst in the usual manner for Python projects.  Now it is
ready for PyPI :).

I also expose the version string in catalyst.__version__, since that's
a more traditional location.
---
 .gitignore           |  4 +++
 MANIFEST.in          |  6 ++++
 catalyst/__init__.py |  3 ++
 setup.py             | 89 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 102 insertions(+)
 create mode 100644 MANIFEST.in
 create mode 100644 setup.py

diff --git a/.gitignore b/.gitignore
index 539da74..d52b297 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,5 @@
 *.py[co]
+dist
+build
+files
+MANIFEST
diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 0000000..4274094
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1,6 @@
+include AUTHORS
+include ChangeLog
+include COPYING
+include Makefile
+recursive-include doc *.conf *.py HOWTO.txt catalyst*.txt
+recursive-include examples README *.example *.spec
diff --git a/catalyst/__init__.py b/catalyst/__init__.py
index e69de29..c058e16 100644
--- a/catalyst/__init__.py
+++ b/catalyst/__init__.py
@@ -0,0 +1,3 @@
+"Catalyst is a release building tool used by Gentoo Linux"
+
+__version__="2.0.15"
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..34eae53
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,89 @@
+# Copyright (C) 2013 W. Trevor King <wking@tremily.us>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+"Catalyst is a release building tool used by Gentoo Linux"
+
+import codecs as _codecs
+from distutils.core import setup as _setup
+import itertools as _itertools
+import os as _os
+
+from catalyst import __version__
+
+
+_this_dir = _os.path.dirname(__file__)
+package_name = 'catalyst'
+tag = '{0}-{1}'.format(package_name, __version__)
+
+
+def files(root):
+	"""Iterate through all the file paths under `root`
+
+	Distutils wants all paths to be written in the Unix convention
+	(i.e. slash-separated) [1], so that's what we'll do here.
+
+	[1]: http://docs.python.org/2/distutils/setupscript.html#writing-the-setup-script
+	"""
+	for dirpath, dirnames, filenames in _os.walk(root):
+		for filename in filenames:
+			path = _os.path.join(dirpath, filename)
+			if _os.path.sep != '/':
+				path = path.replace(_os.path.sep, '/')
+			yield path
+
+
+_setup(
+	name=package_name,
+	version=__version__,
+	maintainer='Gentoo Release Engineering',
+	maintainer_email='releng@gentoo.org',
+	url='http://www.gentoo.org/proj/en/releng/{0}/'.format(package_name),
+	download_url='http://git.overlays.gentoo.org/gitweb/?p=proj/{0}.git;a=snapshot;h={1};sf=tgz'.format(package_name, tag),
+	license='GNU General Public License (GPL)',
+	platforms=['all'],
+	description=__doc__,
+	long_description=_codecs.open(
+		_os.path.join(_this_dir, 'README'), 'r', 'utf-8').read(),
+	classifiers=[
+		'Development Status :: 5 - Production/Stable',
+		'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
+		'Intended Audience :: System Administrators',
+		'Operating System :: POSIX',
+		'Topic :: System :: Archiving :: Packaging',
+		'Topic :: System :: Installation/Setup',
+		'Topic :: System :: Software Distribution',
+		'Programming Language :: Python :: 2',
+		'Programming Language :: Python :: 2.6',
+		'Programming Language :: Python :: 2.7',
+		],
+	scripts=['bin/{0}'.format(package_name)],
+	packages=[
+		package_name,
+		'{0}.arch'.format(package_name),
+		'{0}.base'.format(package_name),
+		'{0}.targets'.format(package_name),
+		],
+	data_files=[
+		('/etc/catalyst', [
+			'etc/catalyst.conf',
+			'etc/catalystrc',
+			]),
+		('lib/catalyst/', list(_itertools.chain(
+			files('livecd'),
+			files('targets'),
+			))),
+		],
+	provides=[package_name],
+	)
-- 
1.8.3.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [gentoo-catalyst] Re: [PATCH 5/5] setup.py: Add distutils-based packaging
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 5/5] setup.py: Add distutils-based packaging Brian Dolbec
@ 2014-01-12  2:11   ` W. Trevor King
  2014-01-12  3:36     ` Brian Dolbec
  0 siblings, 1 reply; 15+ messages in thread
From: W. Trevor King @ 2014-01-12  2:11 UTC (permalink / raw
  To: gentoo-catalyst

[-- Attachment #1: Type: text/plain, Size: 722 bytes --]

On Sat, Jan 11, 2014 at 05:46:58PM -0800, Brian Dolbec wrote:
> --- a/catalyst/__init__.py
> +++ b/catalyst/__init__.py
> @@ -0,0 +1,3 @@
> +"Catalyst is a release building tool used by Gentoo Linux"
> +
> +__version__="2.0.15"

I'd definitely add spaces around the equal sign here ;).

> +	download_url='http://git.overlays.gentoo.org/gitweb/?p=proj/{0}.git;a=snapshot;h={1};sf=tgz'.format(package_name, tag),

Do we need to update this with g.o.g.o down indefinitely?  Maybe we
should point folks at one of the distfiles mirrors?

Cheers,
Trevor

-- 
This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] Re: [PATCH 5/5] setup.py: Add distutils-based packaging
  2014-01-12  2:11   ` [gentoo-catalyst] " W. Trevor King
@ 2014-01-12  3:36     ` Brian Dolbec
  0 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12  3:36 UTC (permalink / raw
  To: gentoo-catalyst

[-- Attachment #1: Type: text/plain, Size: 790 bytes --]

On Sat, 2014-01-11 at 18:11 -0800, W. Trevor King wrote:
> On Sat, Jan 11, 2014 at 05:46:58PM -0800, Brian Dolbec wrote:
> > --- a/catalyst/__init__.py
> > +++ b/catalyst/__init__.py
> > @@ -0,0 +1,3 @@
> > +"Catalyst is a release building tool used by Gentoo Linux"
> > +
> > +__version__="2.0.15"
> 
> I'd definitely add spaces around the equal sign here ;).
> 

DOH! I didn't pay attention properly
just copy/pasted from the main.py

/me fixes

> > +	download_url='http://git.overlays.gentoo.org/gitweb/?p=proj/{0}.git;a=snapshot;h={1};sf=tgz'.format(package_name, tag),
> 
> Do we need to update this with g.o.g.o down indefinitely?  Maybe we
> should point folks at one of the distfiles mirrors?
> 
> Cheers,
> Trevor
> 

We could just... nvm, decided on IRC.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 620 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
@ 2014-01-12 20:25   ` Brian Dolbec
  2014-02-22 17:10   ` W. Trevor King
  2014-02-22 19:37   ` [gentoo-catalyst] [PATCH] Makefile: Fix PACKAGE_VERSION extraction W. Trevor King
  2 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-01-12 20:25 UTC (permalink / raw
  To: gentoo-catalyst

[-- Attachment #1: Type: text/plain, Size: 766 bytes --]

On Sat, 2014-01-11 at 17:46 -0800, Brian Dolbec wrote:

> diff --git a/catalyst b/catalyst
> deleted file mode 100755
> index cb6c022..0000000
> --- a/catalyst
> +++ /dev/null
> @@ -1,419 +0,0 @@
> -#!/usr/bin/python2 -OO
> -
> -# Maintained in full by:
> -# Catalyst Team <catalyst@gentoo.org>
> -# Release Engineering Team <releng@gentoo.org>
> -# Andrew Gaffney <agaffney@gentoo.org>
> -# Chris Gianelloni <wolf31o2@wolf31o2.org>
> -# $Id$
> -


Removed the shebang that got missed when the script was moved and renamed
to main.py.
I won't re-submit the patch.  It's just all moves with some path import
adjustments; no real code changes.

Fixed in pending, available in my dev space repo, which is temporarily a
backup while g.o.g.o is down.


[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 620 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] Re-organize the python structure
  2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
                   ` (4 preceding siblings ...)
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 5/5] setup.py: Add distutils-based packaging Brian Dolbec
@ 2014-01-22  5:10 ` W. Trevor King
  2014-01-22 15:49   ` Rick "Zero_Chaos" Farina
  5 siblings, 1 reply; 15+ messages in thread
From: W. Trevor King @ 2014-01-22  5:10 UTC (permalink / raw
  To: gentoo-catalyst

[-- Attachment #1: Type: text/plain, Size: 740 bytes --]

On Sat, Jan 11, 2014 at 05:46:53PM -0800, Brian Dolbec wrote:
>  [PATCH 1/5] Initial rearrangement of the python directories
>  [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of...
>  [PATCH 3/5] Rename the modules subpkg to targets, to better reflect...
>  [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory
>  [PATCH 5/5] setup.py: Add distutils-based packaging

Where do we stand on landing this?  It's going to conflict with
vapier's new arm64 patch [1].

Cheers,
Trevor

[1]: http://article.gmane.org/gmane.linux.gentoo.catalyst/2640

-- 
This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] Re-organize the python structure
  2014-01-22  5:10 ` [gentoo-catalyst] Re-organize the python structure W. Trevor King
@ 2014-01-22 15:49   ` Rick "Zero_Chaos" Farina
  0 siblings, 0 replies; 15+ messages in thread
From: Rick "Zero_Chaos" Farina @ 2014-01-22 15:49 UTC (permalink / raw
  To: gentoo-catalyst

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 01/22/2014 12:10 AM, W. Trevor King wrote:
> On Sat, Jan 11, 2014 at 05:46:53PM -0800, Brian Dolbec wrote:
>>  [PATCH 1/5] Initial rearrangement of the python directories
>>  [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of...
>>  [PATCH 3/5] Rename the modules subpkg to targets, to better reflect...
>>  [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory
>>  [PATCH 5/5] setup.py: Add distutils-based packaging
> 
> Where do we stand on landing this?  It's going to conflict with
> vapier's new arm64 patch [1].
> 
> Cheers,
> Trevor
> 
> [1]: http://article.gmane.org/gmane.linux.gentoo.catalyst/2640
> 

Vapier's arm64 patch is trivial at best; we can rewrite it.  If you want
to land some other conflicting change, I can rewrite his patch afterward.
Keep going with the rewrite; it is more important at this time.

Thanks,
Zero
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBAgAGBQJS3+iYAAoJEKXdFCfdEflKPWYQALItr2bpIOIgeJhOWvc59mkI
M9HxI0Lgid7jGX7q9pHMGhIFG+qTvLyLghwV7snt6ZWKhoGgyoWM3FmHahudYpGh
YQaN3DhlMra+X6NMgai4Eskk8AtcyP7BMvNK/GI8GkyqomtPqLCdFQI5O8e9aC8w
uzKvKdAKTMbiT7L7qm5HgLAH0gOM5tWxQM51kaefEEAUSZlT6TmE2h/qC6Puwunz
FgL95bg1gxVBnQ6NR50Xj7v7z9qHvcBryLqJ6re2UYdPHlwVPrKPbJrMvLuuyrr3
NPlMLkoNybyw0N0NTWNaQU4PP5ytwychM6bdSQqUaujyNP0vKAvfpyBU4whJ8BB1
SWMNAz2tpq3cD2Jq6ClogSXn99Rw0HXF39Dw2lOnlSzHXP1pfvMpCP4yDAxbhBXO
MGGrvtPUm4QwVBvc4SlrPsob3aDIt24FSfdYIbdu8j8d22Y2b3d3M5BTHEDhDWyx
sVr11dXCZHFFsc1XjsTTX3sZg+i05CaE5amddS82h47lMCe56NQGxDLEJfZzsXSy
Md1a/b5FczeaggIKIol/R8xjqOHiT+2c7V6P1b+fPtfYAgjRYjeGQyMaU2xp9zUc
ApDwy/0mUtUTmwesBN2RsH2ZTeefbT/2WHXcaUFVICURGifQq7iSukeYR7zasjCm
PC21RpaUEWW79w4JEYb0
=4Kgx
-----END PGP SIGNATURE-----


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
  2014-01-12 20:25   ` Brian Dolbec
@ 2014-02-22 17:10   ` W. Trevor King
  2014-02-22 18:40     ` Brian Dolbec
  2014-02-22 19:37   ` [gentoo-catalyst] [PATCH] Makefile: Fix PACKAGE_VERSION extraction W. Trevor King
  2 siblings, 1 reply; 15+ messages in thread
From: W. Trevor King @ 2014-02-22 17:10 UTC (permalink / raw
  To: gentoo-catalyst; +Cc: Brian Dolbec

[-- Attachment #1: Type: text/plain, Size: 386 bytes --]

On Sat, Jan 11, 2014 at 05:46:54PM -0800, Brian Dolbec wrote:
>  arch/ia64.py                             |   16 -
>  arch/mips.py                             |  464 --------

This is missing arch/m68k.py

Cheers,
Trevor

-- 
This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories
  2014-02-22 17:10   ` W. Trevor King
@ 2014-02-22 18:40     ` Brian Dolbec
  0 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-02-22 18:40 UTC (permalink / raw
  To: gentoo-catalyst

[-- Attachment #1: Type: text/plain, Size: 477 bytes --]

On Sat, 22 Feb 2014 09:10:37 -0800
"W. Trevor King" <wking@tremily.us> wrote:

> On Sat, Jan 11, 2014 at 05:46:54PM -0800, Brian Dolbec wrote:
> >  arch/ia64.py                             |   16 -
> >  arch/mips.py                             |  464 --------
> 
> This is missing arch/m68k.py
> 
> Cheers,
> Trevor
> 

Thanks for spotting it.  Fixed and pushed back to origin.

P.S. I also pushed it with your updated setup.py.

-- 
Brian Dolbec <dolsen>


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 620 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [gentoo-catalyst] [PATCH] Makefile: Fix PACKAGE_VERSION extraction
  2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
  2014-01-12 20:25   ` Brian Dolbec
  2014-02-22 17:10   ` W. Trevor King
@ 2014-02-22 19:37   ` W. Trevor King
  2014-02-22 21:46     ` Brian Dolbec
  2 siblings, 1 reply; 15+ messages in thread
From: W. Trevor King @ 2014-02-22 19:37 UTC (permalink / raw
  To: gentoo-catalyst; +Cc: W. Trevor King

The old method grepped for __version__ in catalyst.  That broke with
24c5352 (Initial rearrangement of the python directories, 2013-01-10),
which moved catalyst to bin/catalyst, kept the __version__ in
bin/catalyst, and added a new __version__ in catalyst/main.py.  Then
46b261e (setup.py: Add distutils-based packaging, 2013-06-05)
consolidated the __version__ definitions in catalyst/__init__.py,
removing them from bin/catalyst and catalyst/main.py.  This patch
adjusts the Makefile, invoking Python to extract catalyst.__version__
instead of grepping through the file that defines it.
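
[Editorial aside: the approach described here, importing the package and
reading the attribute instead of grepping the source, can be sketched
standalone.  The helper name below is hypothetical and not part of the patch;
only the `PYTHONPATH=. python -c ...` technique mirrors the Makefile change.]

```python
import os
import subprocess
import sys

def get_package_version(package_name, source_dir="."):
    """Ask a child interpreter for package.__version__, mirroring the
    Makefile's `PYTHONPATH=. python -c ...` one-liner."""
    # Extend the inherited environment so the package is importable
    # from source_dir without installing it.
    env = dict(os.environ, PYTHONPATH=source_dir)
    out = subprocess.check_output(
        [sys.executable, "-c",
         "import {0}; print({0}.__version__)".format(package_name)],
        env=env,
    )
    return out.decode().strip()
```

[Unlike the old fgrep/sed pipeline, this keeps working no matter which source
file ends up defining `__version__`, as long as the package imports cleanly.]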
---
This patch is in git://tremily.us/catalyst.git setup-py as 575419c.

 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 98accbe..757113c 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # Copyright (C) 2011 Sebastian Pipping <sebastian@pipping.org>
 # Licensed under GPL v2 or later
 
-PACKAGE_VERSION = `fgrep '__version__=' catalyst | sed 's|^__version__="\(.*\)"$$|\1|'`
+PACKAGE_VERSION = $(shell PYTHONPATH=. python -c 'import catalyst; print(catalyst.__version__)')
 MAN_PAGE_SOURCES = $(wildcard doc/*.?.txt)
 MAN_PAGES = $(patsubst doc/%.txt,files/%,$(MAN_PAGE_SOURCES))
 MAN_PAGE_INCLUDES = doc/subarches.generated.txt doc/targets.generated.txt
-- 
1.8.5.2.8.g0f6c0d1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [gentoo-catalyst] [PATCH] Makefile: Fix PACKAGE_VERSION extraction
  2014-02-22 19:37   ` [gentoo-catalyst] [PATCH] Makefile: Fix PACKAGE_VERSION extraction W. Trevor King
@ 2014-02-22 21:46     ` Brian Dolbec
  0 siblings, 0 replies; 15+ messages in thread
From: Brian Dolbec @ 2014-02-22 21:46 UTC (permalink / raw
  To: gentoo-catalyst

On Sat, 22 Feb 2014 11:37:36 -0800
"W. Trevor King" <wking@tremily.us> wrote:

> The old method grepped for __version__ in catalyst.  That broke with
> 24c5352 (Initial rearrangement of the python directories, 2013-01-10),
> which moved catalyst to bin/catalyst, kept the __version__ in
> bin/catalyst, and added a new __version__ in catalyst/main.py.  Then
> 46b261e (setup.py: Add distutils-based packaging, 2013-06-05)
> consolidated the __version__ definitions in catalyst/__init__.py,
> removing them from bin/catalyst and catalyst/main.py.  This patch
> adjusts the Makefile, invoking Python to extract catalyst.__version__
> instead of grepping through the file that defines it.
> ---
> This patch is in git://tremily.us/catalyst.git setup-py as 575419c.
> 
>  Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Makefile b/Makefile
> index 98accbe..757113c 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -1,7 +1,7 @@
>  # Copyright (C) 2011 Sebastian Pipping <sebastian@pipping.org>
>  # Licensed under GPL v2 or later
>  
> -PACKAGE_VERSION = `fgrep '__version__=' catalyst | sed 's|^__version__="\(.*\)"$$|\1|'`
> +PACKAGE_VERSION = $(shell PYTHONPATH=. python -c 'import catalyst; print(catalyst.__version__)')
>  MAN_PAGE_SOURCES = $(wildcard doc/*.?.txt)
>  MAN_PAGES = $(patsubst doc/%.txt,files/%,$(MAN_PAGE_SOURCES))
>  MAN_PAGE_INCLUDES = doc/subarches.generated.txt doc/targets.generated.txt

looks fine, added and queued up right after the setup.py commit.

In git pending branch.

-- 
Brian Dolbec <dolsen>



^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2014-02-22 21:51 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-01-12  1:46 [gentoo-catalyst] Re-organize the python structure Brian Dolbec
2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 1/5] Initial rearrangement of the python directories Brian Dolbec
2014-01-12 20:25   ` Brian Dolbec
2014-02-22 17:10   ` W. Trevor King
2014-02-22 18:40     ` Brian Dolbec
2014-02-22 19:37   ` [gentoo-catalyst] [PATCH] Makefile: Fix PACKAGE_VERSION extraction W. Trevor King
2014-02-22 21:46     ` Brian Dolbec
2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 2/5] Move catalyst_support, builder, catalyst_lock out of modules, into the catalyst namespace Brian Dolbec
2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 3/5] Rename the modules subpkg to targets, to better reflect what it contains Brian Dolbec
2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 4/5] Move catalyst.conf and catalystrc to an etc/ directory Brian Dolbec
2014-01-12  1:46 ` [gentoo-catalyst] [PATCH 5/5] setup.py: Add distutils-based packaging Brian Dolbec
2014-01-12  2:11   ` [gentoo-catalyst] " W. Trevor King
2014-01-12  3:36     ` Brian Dolbec
2014-01-22  5:10 ` [gentoo-catalyst] Re-organize the python structure W. Trevor King
2014-01-22 15:49   ` Rick "Zero_Chaos" Farina

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox