From: Brian Dolbec <dolsen@gentoo.org>
To: gentoo-catalyst@lists.gentoo.org
Cc: Brian Dolbec <dolsen@gentoo.org>
Subject: [gentoo-catalyst] [PATCH 2/4] move catalyst_support, builder, catalyst_lock out of modules and into catalyst's base namespace
Date: Fri, 13 Dec 2013 19:20:09 -0800
Message-ID: <1386991211-9296-3-git-send-email-dolsen@gentoo.org>
In-Reply-To: <1386991211-9296-1-git-send-email-dolsen@gentoo.org>
---
catalyst/arch/alpha.py                   |   6 +-
catalyst/arch/amd64.py                   |   2 +-
catalyst/arch/arm.py                     |   6 +-
catalyst/arch/hppa.py                    |   6 +-
catalyst/arch/ia64.py                    |   6 +-
catalyst/arch/mips.py                    |   6 +-
catalyst/arch/powerpc.py                 |   6 +-
catalyst/arch/s390.py                    |   6 +-
catalyst/arch/sh.py                      |   6 +-
catalyst/arch/sparc.py                   |   6 +-
catalyst/arch/x86.py                     |   6 +-
catalyst/builder.py                      |  20 +
catalyst/config.py                       |   3 +-
catalyst/lock.py                         | 468 ++++++++++++++++++++
catalyst/main.py                         |   6 +-
catalyst/modules/builder.py              |  20 -
catalyst/modules/catalyst_lock.py        | 468 --------------------
catalyst/modules/catalyst_support.py     | 718 -------------------------
catalyst/modules/embedded_target.py      |   2 +-
catalyst/modules/generic_stage_target.py |   8 +-
catalyst/modules/generic_target.py       |   2 +-
catalyst/modules/grp_target.py           |   2 +-
catalyst/modules/livecd_stage1_target.py |   2 +-
catalyst/modules/livecd_stage2_target.py |   2 +-
catalyst/modules/netboot2_target.py      |   2 +-
catalyst/modules/netboot_target.py       |   2 +-
catalyst/modules/snapshot_target.py      |   2 +-
catalyst/modules/stage1_target.py        |   2 +-
catalyst/modules/stage2_target.py        |   2 +-
catalyst/modules/stage3_target.py        |   2 +-
catalyst/modules/stage4_target.py        |   2 +-
catalyst/modules/tinderbox_target.py     |   2 +-
catalyst/support.py                      | 718 +++++++++++++++++++++++++
33 files changed, 1269 insertions(+), 1248 deletions(-)
create mode 100644 catalyst/builder.py
create mode 100644 catalyst/lock.py
delete mode 100644 catalyst/modules/builder.py
delete mode 100644 catalyst/modules/catalyst_lock.py
delete mode 100644 catalyst/modules/catalyst_support.py
create mode 100644 catalyst/support.py
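
The moves are mechanical, but they change how everything is imported: the old
implicit relative imports (import builder, from catalyst_support import *)
only resolved because main.py appended the modules directory to sys.path
(see the main.py hunk below); the new absolute imports resolve against the
catalyst package itself. A quick sanity sketch of the new layout -- a
hypothetical check, assuming the package __init__ from earlier in this series
is in place and the tree is on PYTHONPATH:

    # hypothetical smoke test for the new namespace
    from catalyst import builder
    from catalyst.support import normpath
    from catalyst.lock import LockDir

    print "imports resolve:", builder, normpath, LockDir
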
diff --git a/catalyst/arch/alpha.py b/catalyst/arch/alpha.py
index f0fc95a..7248020 100644
--- a/catalyst/arch/alpha.py
+++ b/catalyst/arch/alpha.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_alpha(builder.generic):
"abstract base class for all alpha builders"
diff --git a/catalyst/arch/amd64.py b/catalyst/arch/amd64.py
index 262b55a..13e7563 100644
--- a/catalyst/arch/amd64.py
+++ b/catalyst/arch/amd64.py
@@ -1,5 +1,5 @@
-import builder
+from catalyst import builder
class generic_amd64(builder.generic):
"abstract base class for all amd64 builders"
diff --git a/catalyst/arch/arm.py b/catalyst/arch/arm.py
index 2de3942..8f207ff 100644
--- a/catalyst/arch/arm.py
+++ b/catalyst/arch/arm.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_arm(builder.generic):
"Abstract base class for all arm (little endian) builders"
diff --git a/catalyst/arch/hppa.py b/catalyst/arch/hppa.py
index f804398..3aac9b6 100644
--- a/catalyst/arch/hppa.py
+++ b/catalyst/arch/hppa.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_hppa(builder.generic):
"Abstract base class for all hppa builders"
diff --git a/catalyst/arch/ia64.py b/catalyst/arch/ia64.py
index 825af70..4003085 100644
--- a/catalyst/arch/ia64.py
+++ b/catalyst/arch/ia64.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class arch_ia64(builder.generic):
"builder class for ia64"
diff --git a/catalyst/arch/mips.py b/catalyst/arch/mips.py
index b3730fa..7cce392 100644
--- a/catalyst/arch/mips.py
+++ b/catalyst/arch/mips.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_mips(builder.generic):
"Abstract base class for all mips builders [Big-endian]"
diff --git a/catalyst/arch/powerpc.py b/catalyst/arch/powerpc.py
index e9f611b..6cec580 100644
--- a/catalyst/arch/powerpc.py
+++ b/catalyst/arch/powerpc.py
@@ -1,6 +1,8 @@
-import os,builder
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_ppc(builder.generic):
"abstract base class for all 32-bit powerpc builders"
diff --git a/catalyst/arch/s390.py b/catalyst/arch/s390.py
index bf22f66..c49e0b7 100644
--- a/catalyst/arch/s390.py
+++ b/catalyst/arch/s390.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_s390(builder.generic):
"abstract base class for all s390 builders"
diff --git a/catalyst/arch/sh.py b/catalyst/arch/sh.py
index 2fc9531..1fa1b0b 100644
--- a/catalyst/arch/sh.py
+++ b/catalyst/arch/sh.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_sh(builder.generic):
"Abstract base class for all sh builders [Little-endian]"
diff --git a/catalyst/arch/sparc.py b/catalyst/arch/sparc.py
index 5eb5344..2889528 100644
--- a/catalyst/arch/sparc.py
+++ b/catalyst/arch/sparc.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_sparc(builder.generic):
"abstract base class for all sparc builders"
diff --git a/catalyst/arch/x86.py b/catalyst/arch/x86.py
index 0391b79..c8d1911 100644
--- a/catalyst/arch/x86.py
+++ b/catalyst/arch/x86.py
@@ -1,6 +1,8 @@
-import builder,os
-from catalyst_support import *
+import os
+
+from catalyst import builder
+from catalyst.support import *
class generic_x86(builder.generic):
"abstract base class for all x86 builders"
diff --git a/catalyst/builder.py b/catalyst/builder.py
new file mode 100644
index 0000000..ad27d78
--- /dev/null
+++ b/catalyst/builder.py
@@ -0,0 +1,20 @@
+
+class generic:
+ def __init__(self,myspec):
+ self.settings=myspec
+
+ def mount_safety_check(self):
+ """
+ Make sure that no bind mounts exist in chrootdir (to use before
+ cleaning the directory, to make sure we don't wipe the contents of
+ a bind mount).
+ """
+ pass
+
+ def mount_all(self):
+ """do all bind mounts"""
+ pass
+
+ def umount_all(self):
+ """unmount all bind mounts"""
+ pass
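
builder.generic is the abstract base that every file under catalyst/arch
subclasses; the arch hunks above show only the import churn, so here is a
minimal sketch of what a complete (hypothetical) arch module looks like under
the new layout. The example class names and settings values are illustrative,
not taken from this patch:

    from catalyst import builder

    class generic_example(builder.generic):
        "abstract base class for a hypothetical example arch"
        def __init__(self, myspec):
            builder.generic.__init__(self, myspec)
            # arch modules refine the spec dict handed in by the target
            self.settings["CHROOT"] = "chroot"

    class arch_example(generic_example):
        "builder class for the hypothetical example arch"
        def __init__(self, myspec):
            generic_example.__init__(self, myspec)
            self.settings["CFLAGS"] = "-O2 -pipe"
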
diff --git a/catalyst/config.py b/catalyst/config.py
index 726bf74..460bbd5 100644
--- a/catalyst/config.py
+++ b/catalyst/config.py
@@ -1,5 +1,6 @@
+
import re
-from modules.catalyst_support import *
+from catalyst.support import *
class ParserBase:
diff --git a/catalyst/lock.py b/catalyst/lock.py
new file mode 100644
index 0000000..2d10d2f
--- /dev/null
+++ b/catalyst/lock.py
@@ -0,0 +1,468 @@
+#!/usr/bin/python
+import os
+import fcntl
+import errno
+import sys
+import string
+import time
+from catalyst.support import *
+
+def writemsg(mystr):
+ sys.stderr.write(mystr)
+ sys.stderr.flush()
+
+class LockDir:
+ locking_method=fcntl.flock
+ lock_dirs_in_use=[]
+ die_on_failed_lock=True
+ def __del__(self):
+ self.clean_my_hardlocks()
+ self.delete_lock_from_path_list()
+ if self.islocked():
+ self.fcntl_unlock()
+
+ def __init__(self,lockdir):
+ self.locked=False
+ self.myfd=None
+ self.set_gid(250)
+ self.locking_method=LockDir.locking_method
+ self.set_lockdir(lockdir)
+ self.set_lockfilename(".catalyst_lock")
+ self.set_lockfile()
+
+ if LockDir.lock_dirs_in_use.count(lockdir)>0:
+ raise "This directory already associated with a lock object"
+ else:
+ LockDir.lock_dirs_in_use.append(lockdir)
+
+ self.hardlock_paths={}
+
+ def delete_lock_from_path_list(self):
+ i=0
+ try:
+ if LockDir.lock_dirs_in_use:
+ for x in LockDir.lock_dirs_in_use:
+ if LockDir.lock_dirs_in_use[i] == self.lockdir:
+ del LockDir.lock_dirs_in_use[i]
+ break
+ i=i+1
+ except AttributeError:
+ pass
+
+ def islocked(self):
+ if self.locked:
+ return True
+ else:
+ return False
+
+ def set_gid(self,gid):
+ if not self.islocked():
+# if "DEBUG" in self.settings:
+# print "setting gid to", gid
+ self.gid=gid
+
+ def set_lockdir(self,lockdir):
+ if not os.path.exists(lockdir):
+ os.makedirs(lockdir)
+ if os.path.isdir(lockdir):
+ if not self.islocked():
+ if lockdir[-1] == "/":
+ lockdir=lockdir[:-1]
+ self.lockdir=normpath(lockdir)
+# if "DEBUG" in self.settings:
+# print "setting lockdir to", self.lockdir
+ else:
+ raise "the lock object needs a path to a dir"
+
+ def set_lockfilename(self,lockfilename):
+ if not self.islocked():
+ self.lockfilename=lockfilename
+# if "DEBUG" in self.settings:
+# print "setting lockfilename to", self.lockfilename
+
+ def set_lockfile(self):
+ if not self.islocked():
+ self.lockfile=normpath(self.lockdir+'/'+self.lockfilename)
+# if "DEBUG" in self.settings:
+# print "setting lockfile to", self.lockfile
+
+ def read_lock(self):
+ if not self.locking_method == "HARDLOCK":
+ self.fcntl_lock("read")
+ else:
+ print "HARDLOCKING doesnt support shared-read locks"
+ print "using exclusive write locks"
+ self.hard_lock()
+
+ def write_lock(self):
+ if not self.locking_method == "HARDLOCK":
+ self.fcntl_lock("write")
+ else:
+ self.hard_lock()
+
+ def unlock(self):
+ if not self.locking_method == "HARDLOCK":
+ self.fcntl_unlock()
+ else:
+ self.hard_unlock()
+
+ def fcntl_lock(self,locktype):
+ if self.myfd==None:
+ if not os.path.exists(os.path.dirname(self.lockdir)):
+ raise DirectoryNotFound, os.path.dirname(self.lockdir)
+ if not os.path.exists(self.lockfile):
+ old_mask=os.umask(000)
+ self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
+ try:
+ if os.stat(self.lockfile).st_gid != self.gid:
+ os.chown(self.lockfile,os.getuid(),self.gid)
+ except SystemExit, e:
+ raise
+ except OSError, e:
+ if e[0] == 2: #XXX: No such file or directory
+ return self.fcntl_lock(locktype)
+ else:
+ writemsg("Cannot chown a lockfile. This could cause inconvenience later.\n")
+
+ os.umask(old_mask)
+ else:
+ self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
+
+ try:
+ if locktype == "read":
+ self.locking_method(self.myfd,fcntl.LOCK_SH|fcntl.LOCK_NB)
+ else:
+ self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
+ except IOError, e:
+ if "errno" not in dir(e):
+ raise
+ if e.errno == errno.EAGAIN:
+ if not LockDir.die_on_failed_lock:
+ # Resource temp unavailable; eg, someone beat us to the lock.
+ writemsg("waiting for lock on %s\n" % self.lockfile)
+
+ # Try for the exclusive or shared lock again.
+ if locktype == "read":
+ self.locking_method(self.myfd,fcntl.LOCK_SH)
+ else:
+ self.locking_method(self.myfd,fcntl.LOCK_EX)
+ else:
+ raise LockInUse,self.lockfile
+ elif e.errno == errno.ENOLCK:
+ pass
+ else:
+ raise
+ if not os.path.exists(self.lockfile):
+ os.close(self.myfd)
+ self.myfd=None
+ #writemsg("lockfile recurse\n")
+ self.fcntl_lock(locktype)
+ else:
+ self.locked=True
+ #writemsg("Lockfile obtained\n")
+
+ def fcntl_unlock(self):
+ import fcntl
+ unlinkfile = 1
+ if not os.path.exists(self.lockfile):
+ print "lockfile does not exist '%s'" % self.lockfile
+ if (self.myfd != None):
+ try:
+ os.close(self.myfd)
+ self.myfd=None
+ except:
+ pass
+ return False
+
+ try:
+ if self.myfd == None:
+ self.myfd = os.open(self.lockfile, os.O_WRONLY,0660)
+ unlinkfile = 1
+ self.locking_method(self.myfd,fcntl.LOCK_UN)
+ except SystemExit, e:
+ raise
+ except Exception, e:
+ os.close(self.myfd)
+ self.myfd=None
+ raise IOError, "Failed to unlock file '%s'\n" % self.lockfile
+ try:
+ # This sleep call was added to allow other processes that are
+ # waiting for a lock to be able to grab it before it is deleted.
+ # lockfile() already accounts for this situation, however, and
+ # the sleep here adds more time than is saved overall, so am
+ # commenting until it is proved necessary.
+ #time.sleep(0.0001)
+ if unlinkfile:
+ InUse=False
+ try:
+ self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
+ except:
+ print "Read lock may be in effect. skipping lockfile delete..."
+ InUse=True
+ # We won the lock, so there isn't competition for it.
+ # We can safely delete the file.
+ #writemsg("Got the lockfile...\n")
+ #writemsg("Unlinking...\n")
+ self.locking_method(self.myfd,fcntl.LOCK_UN)
+ if not InUse:
+ os.unlink(self.lockfile)
+ os.close(self.myfd)
+ self.myfd=None
+# if "DEBUG" in self.settings:
+# print "Unlinked lockfile..."
+ except SystemExit, e:
+ raise
+ except Exception, e:
+ # We really don't care... Someone else has the lock.
+ # So it is their problem now.
+ print "Failed to get lock... someone took it."
+ print str(e)
+
+ # Why test lockfilename? Because we may have been handed an
+ # fd originally, and the caller might not like having their
+ # open fd closed automatically on them.
+ #if type(lockfilename) == types.StringType:
+ # os.close(myfd)
+
+ if (self.myfd != None):
+ os.close(self.myfd)
+ self.myfd=None
+ self.locked=False
+ time.sleep(.0001)
+
+ def hard_lock(self,max_wait=14400):
+ """Does the NFS, hardlink shuffle to ensure locking on the disk.
+ We create a PRIVATE lockfile, that is just a placeholder on the disk.
+ Then we HARDLINK the real lockfile to that private file.
+ If our file can 2 references, then we have the lock. :)
+ Otherwise we lather, rise, and repeat.
+ We default to a 4 hour timeout.
+ """
+
+ self.myhardlock = self.hardlock_name(self.lockdir)
+
+ start_time = time.time()
+ reported_waiting = False
+
+ while(time.time() < (start_time + max_wait)):
+ # We only need it to exist.
+ self.myfd = os.open(self.myhardlock, os.O_CREAT|os.O_RDWR,0660)
+ os.close(self.myfd)
+
+ self.add_hardlock_file_to_cleanup()
+ if not os.path.exists(self.myhardlock):
+ raise FileNotFound, "Created lockfile is missing: %(filename)s" % {"filename":self.myhardlock}
+ try:
+ res = os.link(self.myhardlock, self.lockfile)
+ except SystemExit, e:
+ raise
+ except Exception, e:
+# if "DEBUG" in self.settings:
+# print "lockfile(): Hardlink: Link failed."
+# print "Exception: ",e
+ pass
+
+ if self.hardlink_is_mine(self.myhardlock, self.lockfile):
+ # We have the lock.
+ if reported_waiting:
+ print
+ return True
+
+ if reported_waiting:
+ writemsg(".")
+ else:
+ reported_waiting = True
+ print
+ print "Waiting on (hardlink) lockfile: (one '.' per 3 seconds)"
+ print "Lockfile: " + self.lockfile
+ time.sleep(3)
+
+ os.unlink(self.myhardlock)
+ return False
+
+ def hard_unlock(self):
+ try:
+ if os.path.exists(self.myhardlock):
+ os.unlink(self.myhardlock)
+ if os.path.exists(self.lockfile):
+ os.unlink(self.lockfile)
+ except SystemExit, e:
+ raise
+ except:
+ writemsg("Something strange happened to our hardlink locks.\n")
+
+ def add_hardlock_file_to_cleanup(self):
+ #mypath = self.normpath(path)
+ if os.path.isdir(self.lockdir) and os.path.isfile(self.myhardlock):
+ self.hardlock_paths[self.lockdir]=self.myhardlock
+
+ def remove_hardlock_file_from_cleanup(self):
+ if self.lockdir in self.hardlock_paths:
+ del self.hardlock_paths[self.lockdir]
+ print self.hardlock_paths
+
+ def hardlock_name(self, path):
+ mypath=path+"/.hardlock-"+os.uname()[1]+"-"+str(os.getpid())
+ newpath = os.path.normpath(mypath)
+ if len(newpath) > 1:
+ if newpath[1] == "/":
+ newpath = "/"+newpath.lstrip("/")
+ return newpath
+
+ def hardlink_is_mine(self,link,lock):
+ import stat
+ try:
+ myhls = os.stat(link)
+ mylfs = os.stat(lock)
+ except SystemExit, e:
+ raise
+ except:
+ myhls = None
+ mylfs = None
+
+ if myhls:
+ if myhls[stat.ST_NLINK] == 2:
+ return True
+ if mylfs:
+ if mylfs[stat.ST_INO] == myhls[stat.ST_INO]:
+ return True
+ return False
+
+ def hardlink_active(self, lock):
+ if not os.path.exists(lock):
+ return False
+
+ def clean_my_hardlocks(self):
+ try:
+ for x in self.hardlock_paths.keys():
+ self.hardlock_cleanup(x)
+ except AttributeError:
+ pass
+
+ def hardlock_cleanup(self,path):
+ mypid = str(os.getpid())
+ myhost = os.uname()[1]
+ mydl = os.listdir(path)
+ results = []
+ mycount = 0
+
+ mylist = {}
+ for x in mydl:
+ filepath=path+"/"+x
+ if os.path.isfile(filepath):
+ parts = filepath.split(".hardlock-")
+ if len(parts) == 2:
+ filename = parts[0]
+ hostpid = parts[1].split("-")
+ host = "-".join(hostpid[:-1])
+ pid = hostpid[-1]
+ if filename not in mylist:
+ mylist[filename] = {}
+
+ if host not in mylist[filename]:
+ mylist[filename][host] = []
+ mylist[filename][host].append(pid)
+ mycount += 1
+ else:
+ mylist[filename][host].append(pid)
+ mycount += 1
+
+
+ results.append("Found %(count)s locks" % {"count":mycount})
+ for x in mylist.keys():
+ if myhost in mylist[x]:
+ mylockname = self.hardlock_name(x)
+ if self.hardlink_is_mine(mylockname, self.lockfile) or \
+ not os.path.exists(self.lockfile):
+ for y in mylist[x].keys():
+ for z in mylist[x][y]:
+ filename = x+".hardlock-"+y+"-"+z
+ if filename == mylockname:
+ self.hard_unlock()
+ continue
+ try:
+ # We're sweeping through, unlinking everyone's locks.
+ os.unlink(filename)
+ results.append("Unlinked: " + filename)
+ except SystemExit, e:
+ raise
+ except Exception,e:
+ pass
+ try:
+ os.unlink(x)
+ results.append("Unlinked: " + x)
+ os.unlink(mylockname)
+ results.append("Unlinked: " + mylockname)
+ except SystemExit, e:
+ raise
+ except Exception,e:
+ pass
+ else:
+ try:
+ os.unlink(mylockname)
+ results.append("Unlinked: " + mylockname)
+ except SystemExit, e:
+ raise
+ except Exception,e:
+ pass
+ return results
+
+if __name__ == "__main__":
+
+ def lock_work():
+ print
+ for i in range(1,6):
+ print i,time.time()
+ time.sleep(1)
+ print
+ def normpath(mypath):
+ newpath = os.path.normpath(mypath)
+ if len(newpath) > 1:
+ if newpath[1] == "/":
+ newpath = "/"+newpath.lstrip("/")
+ return newpath
+
+ print "Lock 5 starting"
+ import time
+ Lock1=LockDir("/tmp/lock_path")
+ Lock1.write_lock()
+ print "Lock1 write lock"
+
+ lock_work()
+
+ Lock1.unlock()
+ print "Lock1 unlock"
+
+ Lock1.read_lock()
+ print "Lock1 read lock"
+
+ lock_work()
+
+ Lock1.unlock()
+ print "Lock1 unlock"
+
+ Lock1.read_lock()
+ print "Lock1 read lock"
+
+ Lock1.write_lock()
+ print "Lock1 write lock"
+
+ lock_work()
+
+ Lock1.unlock()
+ print "Lock1 unlock"
+
+ Lock1.read_lock()
+ print "Lock1 read lock"
+
+ lock_work()
+
+ Lock1.unlock()
+ print "Lock1 unlock"
+
+#Lock1.write_lock()
+#time.sleep(2)
+#Lock1.unlock()
+ ##Lock1.write_lock()
+ #time.sleep(2)
+ #Lock1.unlock()
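
LockDir is consumed further down by generic_stage_target: one instance per
directory, write_lock() around the mutation, unlock() when done. When the
class is pointed at the "HARDLOCK" scheme instead of fcntl (for filesystems
like NFS where fcntl locks are unreliable), the lock is held once the private
hardlink's link count reaches 2, per the docstring above. A minimal usage
sketch, with a hypothetical path:

    from catalyst.lock import LockDir

    lock = LockDir("/var/tmp/catalyst/snapshot_cache")  # hypothetical directory
    lock.write_lock()   # exclusive; use read_lock() for a shared lock
    try:
        pass            # ... mutate the locked directory ...
    finally:
        lock.unlock()
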
diff --git a/catalyst/main.py b/catalyst/main.py
index d972b97..90ee722 100644
--- a/catalyst/main.py
+++ b/catalyst/main.py
@@ -21,7 +21,7 @@ sys.path.append(__selfpath__ + "/modules")
import catalyst.config
import catalyst.util
-from catalyst.modules.catalyst_support import (required_build_targets,
+from catalyst.support import (required_build_targets,
valid_build_targets, CatalystError, hash_map, find_binary, LockInUse)
__maintainer__="Catalyst <catalyst@gentoo.org>"
@@ -197,7 +197,7 @@ def parse_config(myconfig):
def import_modules():
# import catalyst's own modules
- # (i.e. catalyst_support and the arch modules)
+ # (i.e. the stage targets and the arch modules)
targetmap={}
try:
@@ -354,7 +354,7 @@ def main():
parse_config(myconfig)
# Start checking that digests are valid now that the hash_map was imported
- # from catalyst_support
+ # from catalyst.support
if "digests" in conf_values:
for i in conf_values["digests"].split():
if i not in hash_map:
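
The digest check quoted above is a plain key lookup: hash_map in
catalyst.support maps each digest name to a [function, cmd, cmd_args,
id_string] entry (see the hash_map table later in this patch). In isolation
the same check looks like this, with a hypothetical digest list:

    from catalyst.support import hash_map

    for i in "md5 sha512 notahash".split():
        if i not in hash_map:
            print "unsupported digest:", i
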
diff --git a/catalyst/modules/builder.py b/catalyst/modules/builder.py
deleted file mode 100644
index ad27d78..0000000
--- a/catalyst/modules/builder.py
+++ /dev/null
@@ -1,20 +0,0 @@
-
-class generic:
- def __init__(self,myspec):
- self.settings=myspec
-
- def mount_safety_check(self):
- """
- Make sure that no bind mounts exist in chrootdir (to use before
- cleaning the directory, to make sure we don't wipe the contents of
- a bind mount
- """
- pass
-
- def mount_all(self):
- """do all bind mounts"""
- pass
-
- def umount_all(self):
- """unmount all bind mounts"""
- pass
diff --git a/catalyst/modules/catalyst_lock.py b/catalyst/modules/catalyst_lock.py
deleted file mode 100644
index 5311cf8..0000000
--- a/catalyst/modules/catalyst_lock.py
+++ /dev/null
@@ -1,468 +0,0 @@
-#!/usr/bin/python
-import os
-import fcntl
-import errno
-import sys
-import string
-import time
-from catalyst_support import *
-
-def writemsg(mystr):
- sys.stderr.write(mystr)
- sys.stderr.flush()
-
-class LockDir:
- locking_method=fcntl.flock
- lock_dirs_in_use=[]
- die_on_failed_lock=True
- def __del__(self):
- self.clean_my_hardlocks()
- self.delete_lock_from_path_list()
- if self.islocked():
- self.fcntl_unlock()
-
- def __init__(self,lockdir):
- self.locked=False
- self.myfd=None
- self.set_gid(250)
- self.locking_method=LockDir.locking_method
- self.set_lockdir(lockdir)
- self.set_lockfilename(".catalyst_lock")
- self.set_lockfile()
-
- if LockDir.lock_dirs_in_use.count(lockdir)>0:
- raise "This directory already associated with a lock object"
- else:
- LockDir.lock_dirs_in_use.append(lockdir)
-
- self.hardlock_paths={}
-
- def delete_lock_from_path_list(self):
- i=0
- try:
- if LockDir.lock_dirs_in_use:
- for x in LockDir.lock_dirs_in_use:
- if LockDir.lock_dirs_in_use[i] == self.lockdir:
- del LockDir.lock_dirs_in_use[i]
- break
- i=i+1
- except AttributeError:
- pass
-
- def islocked(self):
- if self.locked:
- return True
- else:
- return False
-
- def set_gid(self,gid):
- if not self.islocked():
-# if "DEBUG" in self.settings:
-# print "setting gid to", gid
- self.gid=gid
-
- def set_lockdir(self,lockdir):
- if not os.path.exists(lockdir):
- os.makedirs(lockdir)
- if os.path.isdir(lockdir):
- if not self.islocked():
- if lockdir[-1] == "/":
- lockdir=lockdir[:-1]
- self.lockdir=normpath(lockdir)
-# if "DEBUG" in self.settings:
-# print "setting lockdir to", self.lockdir
- else:
- raise "the lock object needs a path to a dir"
-
- def set_lockfilename(self,lockfilename):
- if not self.islocked():
- self.lockfilename=lockfilename
-# if "DEBUG" in self.settings:
-# print "setting lockfilename to", self.lockfilename
-
- def set_lockfile(self):
- if not self.islocked():
- self.lockfile=normpath(self.lockdir+'/'+self.lockfilename)
-# if "DEBUG" in self.settings:
-# print "setting lockfile to", self.lockfile
-
- def read_lock(self):
- if not self.locking_method == "HARDLOCK":
- self.fcntl_lock("read")
- else:
- print "HARDLOCKING doesnt support shared-read locks"
- print "using exclusive write locks"
- self.hard_lock()
-
- def write_lock(self):
- if not self.locking_method == "HARDLOCK":
- self.fcntl_lock("write")
- else:
- self.hard_lock()
-
- def unlock(self):
- if not self.locking_method == "HARDLOCK":
- self.fcntl_unlock()
- else:
- self.hard_unlock()
-
- def fcntl_lock(self,locktype):
- if self.myfd==None:
- if not os.path.exists(os.path.dirname(self.lockdir)):
- raise DirectoryNotFound, os.path.dirname(self.lockdir)
- if not os.path.exists(self.lockfile):
- old_mask=os.umask(000)
- self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
- try:
- if os.stat(self.lockfile).st_gid != self.gid:
- os.chown(self.lockfile,os.getuid(),self.gid)
- except SystemExit, e:
- raise
- except OSError, e:
- if e[0] == 2: #XXX: No such file or directory
- return self.fcntl_locking(locktype)
- else:
- writemsg("Cannot chown a lockfile. This could cause inconvenience later.\n")
-
- os.umask(old_mask)
- else:
- self.myfd = os.open(self.lockfile, os.O_CREAT|os.O_RDWR,0660)
-
- try:
- if locktype == "read":
- self.locking_method(self.myfd,fcntl.LOCK_SH|fcntl.LOCK_NB)
- else:
- self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
- except IOError, e:
- if "errno" not in dir(e):
- raise
- if e.errno == errno.EAGAIN:
- if not LockDir.die_on_failed_lock:
- # Resource temp unavailable; eg, someone beat us to the lock.
- writemsg("waiting for lock on %s\n" % self.lockfile)
-
- # Try for the exclusive or shared lock again.
- if locktype == "read":
- self.locking_method(self.myfd,fcntl.LOCK_SH)
- else:
- self.locking_method(self.myfd,fcntl.LOCK_EX)
- else:
- raise LockInUse,self.lockfile
- elif e.errno == errno.ENOLCK:
- pass
- else:
- raise
- if not os.path.exists(self.lockfile):
- os.close(self.myfd)
- self.myfd=None
- #writemsg("lockfile recurse\n")
- self.fcntl_lock(locktype)
- else:
- self.locked=True
- #writemsg("Lockfile obtained\n")
-
- def fcntl_unlock(self):
- import fcntl
- unlinkfile = 1
- if not os.path.exists(self.lockfile):
- print "lockfile does not exist '%s'" % self.lockfile
- if (self.myfd != None):
- try:
- os.close(myfd)
- self.myfd=None
- except:
- pass
- return False
-
- try:
- if self.myfd == None:
- self.myfd = os.open(self.lockfile, os.O_WRONLY,0660)
- unlinkfile = 1
- self.locking_method(self.myfd,fcntl.LOCK_UN)
- except SystemExit, e:
- raise
- except Exception, e:
- os.close(self.myfd)
- self.myfd=None
- raise IOError, "Failed to unlock file '%s'\n" % self.lockfile
- try:
- # This sleep call was added to allow other processes that are
- # waiting for a lock to be able to grab it before it is deleted.
- # lockfile() already accounts for this situation, however, and
- # the sleep here adds more time than is saved overall, so am
- # commenting until it is proved necessary.
- #time.sleep(0.0001)
- if unlinkfile:
- InUse=False
- try:
- self.locking_method(self.myfd,fcntl.LOCK_EX|fcntl.LOCK_NB)
- except:
- print "Read lock may be in effect. skipping lockfile delete..."
- InUse=True
- # We won the lock, so there isn't competition for it.
- # We can safely delete the file.
- #writemsg("Got the lockfile...\n")
- #writemsg("Unlinking...\n")
- self.locking_method(self.myfd,fcntl.LOCK_UN)
- if not InUse:
- os.unlink(self.lockfile)
- os.close(self.myfd)
- self.myfd=None
-# if "DEBUG" in self.settings:
-# print "Unlinked lockfile..."
- except SystemExit, e:
- raise
- except Exception, e:
- # We really don't care... Someone else has the lock.
- # So it is their problem now.
- print "Failed to get lock... someone took it."
- print str(e)
-
- # Why test lockfilename? Because we may have been handed an
- # fd originally, and the caller might not like having their
- # open fd closed automatically on them.
- #if type(lockfilename) == types.StringType:
- # os.close(myfd)
-
- if (self.myfd != None):
- os.close(self.myfd)
- self.myfd=None
- self.locked=False
- time.sleep(.0001)
-
- def hard_lock(self,max_wait=14400):
- """Does the NFS, hardlink shuffle to ensure locking on the disk.
- We create a PRIVATE lockfile, that is just a placeholder on the disk.
- Then we HARDLINK the real lockfile to that private file.
- If our file can 2 references, then we have the lock. :)
- Otherwise we lather, rise, and repeat.
- We default to a 4 hour timeout.
- """
-
- self.myhardlock = self.hardlock_name(self.lockdir)
-
- start_time = time.time()
- reported_waiting = False
-
- while(time.time() < (start_time + max_wait)):
- # We only need it to exist.
- self.myfd = os.open(self.myhardlock, os.O_CREAT|os.O_RDWR,0660)
- os.close(self.myfd)
-
- self.add_hardlock_file_to_cleanup()
- if not os.path.exists(self.myhardlock):
- raise FileNotFound, "Created lockfile is missing: %(filename)s" % {"filename":self.myhardlock}
- try:
- res = os.link(self.myhardlock, self.lockfile)
- except SystemExit, e:
- raise
- except Exception, e:
-# if "DEBUG" in self.settings:
-# print "lockfile(): Hardlink: Link failed."
-# print "Exception: ",e
- pass
-
- if self.hardlink_is_mine(self.myhardlock, self.lockfile):
- # We have the lock.
- if reported_waiting:
- print
- return True
-
- if reported_waiting:
- writemsg(".")
- else:
- reported_waiting = True
- print
- print "Waiting on (hardlink) lockfile: (one '.' per 3 seconds)"
- print "Lockfile: " + self.lockfile
- time.sleep(3)
-
- os.unlink(self.myhardlock)
- return False
-
- def hard_unlock(self):
- try:
- if os.path.exists(self.myhardlock):
- os.unlink(self.myhardlock)
- if os.path.exists(self.lockfile):
- os.unlink(self.lockfile)
- except SystemExit, e:
- raise
- except:
- writemsg("Something strange happened to our hardlink locks.\n")
-
- def add_hardlock_file_to_cleanup(self):
- #mypath = self.normpath(path)
- if os.path.isdir(self.lockdir) and os.path.isfile(self.myhardlock):
- self.hardlock_paths[self.lockdir]=self.myhardlock
-
- def remove_hardlock_file_from_cleanup(self):
- if self.lockdir in self.hardlock_paths:
- del self.hardlock_paths[self.lockdir]
- print self.hardlock_paths
-
- def hardlock_name(self, path):
- mypath=path+"/.hardlock-"+os.uname()[1]+"-"+str(os.getpid())
- newpath = os.path.normpath(mypath)
- if len(newpath) > 1:
- if newpath[1] == "/":
- newpath = "/"+newpath.lstrip("/")
- return newpath
-
- def hardlink_is_mine(self,link,lock):
- import stat
- try:
- myhls = os.stat(link)
- mylfs = os.stat(lock)
- except SystemExit, e:
- raise
- except:
- myhls = None
- mylfs = None
-
- if myhls:
- if myhls[stat.ST_NLINK] == 2:
- return True
- if mylfs:
- if mylfs[stat.ST_INO] == myhls[stat.ST_INO]:
- return True
- return False
-
- def hardlink_active(lock):
- if not os.path.exists(lock):
- return False
-
- def clean_my_hardlocks(self):
- try:
- for x in self.hardlock_paths.keys():
- self.hardlock_cleanup(x)
- except AttributeError:
- pass
-
- def hardlock_cleanup(self,path):
- mypid = str(os.getpid())
- myhost = os.uname()[1]
- mydl = os.listdir(path)
- results = []
- mycount = 0
-
- mylist = {}
- for x in mydl:
- filepath=path+"/"+x
- if os.path.isfile(filepath):
- parts = filepath.split(".hardlock-")
- if len(parts) == 2:
- filename = parts[0]
- hostpid = parts[1].split("-")
- host = "-".join(hostpid[:-1])
- pid = hostpid[-1]
- if filename not in mylist:
- mylist[filename] = {}
-
- if host not in mylist[filename]:
- mylist[filename][host] = []
- mylist[filename][host].append(pid)
- mycount += 1
- else:
- mylist[filename][host].append(pid)
- mycount += 1
-
-
- results.append("Found %(count)s locks" % {"count":mycount})
- for x in mylist.keys():
- if myhost in mylist[x]:
- mylockname = self.hardlock_name(x)
- if self.hardlink_is_mine(mylockname, self.lockfile) or \
- not os.path.exists(self.lockfile):
- for y in mylist[x].keys():
- for z in mylist[x][y]:
- filename = x+".hardlock-"+y+"-"+z
- if filename == mylockname:
- self.hard_unlock()
- continue
- try:
- # We're sweeping through, unlinking everyone's locks.
- os.unlink(filename)
- results.append("Unlinked: " + filename)
- except SystemExit, e:
- raise
- except Exception,e:
- pass
- try:
- os.unlink(x)
- results.append("Unlinked: " + x)
- os.unlink(mylockname)
- results.append("Unlinked: " + mylockname)
- except SystemExit, e:
- raise
- except Exception,e:
- pass
- else:
- try:
- os.unlink(mylockname)
- results.append("Unlinked: " + mylockname)
- except SystemExit, e:
- raise
- except Exception,e:
- pass
- return results
-
-if __name__ == "__main__":
-
- def lock_work():
- print
- for i in range(1,6):
- print i,time.time()
- time.sleep(1)
- print
- def normpath(mypath):
- newpath = os.path.normpath(mypath)
- if len(newpath) > 1:
- if newpath[1] == "/":
- newpath = "/"+newpath.lstrip("/")
- return newpath
-
- print "Lock 5 starting"
- import time
- Lock1=LockDir("/tmp/lock_path")
- Lock1.write_lock()
- print "Lock1 write lock"
-
- lock_work()
-
- Lock1.unlock()
- print "Lock1 unlock"
-
- Lock1.read_lock()
- print "Lock1 read lock"
-
- lock_work()
-
- Lock1.unlock()
- print "Lock1 unlock"
-
- Lock1.read_lock()
- print "Lock1 read lock"
-
- Lock1.write_lock()
- print "Lock1 write lock"
-
- lock_work()
-
- Lock1.unlock()
- print "Lock1 unlock"
-
- Lock1.read_lock()
- print "Lock1 read lock"
-
- lock_work()
-
- Lock1.unlock()
- print "Lock1 unlock"
-
-#Lock1.write_lock()
-#time.sleep(2)
-#Lock1.unlock()
- ##Lock1.write_lock()
- #time.sleep(2)
- #Lock1.unlock()
diff --git a/catalyst/modules/catalyst_support.py b/catalyst/modules/catalyst_support.py
deleted file mode 100644
index 316dfa3..0000000
--- a/catalyst/modules/catalyst_support.py
+++ /dev/null
@@ -1,718 +0,0 @@
-
-import sys,string,os,types,re,signal,traceback,time
-#import md5,sha
-selinux_capable = False
-#userpriv_capable = (os.getuid() == 0)
-#fakeroot_capable = False
-BASH_BINARY = "/bin/bash"
-
-try:
- import resource
- max_fd_limit=resource.getrlimit(RLIMIT_NOFILE)
-except SystemExit, e:
- raise
-except:
- # hokay, no resource module.
- max_fd_limit=256
-
-# pids this process knows of.
-spawned_pids = []
-
-try:
- import urllib
-except SystemExit, e:
- raise
-
-def cleanup(pids,block_exceptions=True):
- """function to go through and reap the list of pids passed to it"""
- global spawned_pids
- if type(pids) == int:
- pids = [pids]
- for x in pids:
- try:
- os.kill(x,signal.SIGTERM)
- if os.waitpid(x,os.WNOHANG)[1] == 0:
- # feisty bugger, still alive.
- os.kill(x,signal.SIGKILL)
- os.waitpid(x,0)
-
- except OSError, oe:
- if block_exceptions:
- pass
- if oe.errno not in (10,3):
- raise oe
- except SystemExit:
- raise
- except Exception:
- if block_exceptions:
- pass
- try: spawned_pids.remove(x)
- except IndexError: pass
-
-
-
-# a function to turn a string of non-printable characters into a string of
-# hex characters
-def hexify(str):
- hexStr = string.hexdigits
- r = ''
- for ch in str:
- i = ord(ch)
- r = r + hexStr[(i >> 4) & 0xF] + hexStr[i & 0xF]
- return r
-# hexify()
-
-def generate_contents(file,contents_function="auto",verbose=False):
- try:
- _ = contents_function
- if _ == 'auto' and file.endswith('.iso'):
- _ = 'isoinfo-l'
- if (_ in ['tar-tv','auto']):
- if file.endswith('.tgz') or file.endswith('.tar.gz'):
- _ = 'tar-tvz'
- elif file.endswith('.tbz2') or file.endswith('.tar.bz2'):
- _ = 'tar-tvj'
- elif file.endswith('.tar'):
- _ = 'tar-tv'
-
- if _ == 'auto':
- warn('File %r has unknown type for automatic detection.' % (file, ))
- return None
- else:
- contents_function = _
- _ = contents_map[contents_function]
- return _[0](file,_[1],verbose)
- except:
- raise CatalystError,\
- "Error generating contents, is appropriate utility (%s) installed on your system?" \
- % (contents_function, )
-
-def calc_contents(file,cmd,verbose):
- args={ 'file': file }
- cmd=cmd % dict(args)
- a=os.popen(cmd)
- mylines=a.readlines()
- a.close()
- result="".join(mylines)
- if verbose:
- print result
- return result
-
-# This has map must be defined after the function calc_content
-# It is possible to call different functions from this but they must be defined
-# before hash_map
-# Key,function,cmd
-contents_map={
- # 'find' is disabled because it requires the source path, which is not
- # always available
- #"find" :[calc_contents,"find %(path)s"],
- "tar-tv":[calc_contents,"tar tvf %(file)s"],
- "tar-tvz":[calc_contents,"tar tvzf %(file)s"],
- "tar-tvj":[calc_contents,"tar -I lbzip2 -tvf %(file)s"],
- "isoinfo-l":[calc_contents,"isoinfo -l -i %(file)s"],
- # isoinfo-f should be a last resort only
- "isoinfo-f":[calc_contents,"isoinfo -f -i %(file)s"],
-}
-
-def generate_hash(file,hash_function="crc32",verbose=False):
- try:
- return hash_map[hash_function][0](file,hash_map[hash_function][1],hash_map[hash_function][2],\
- hash_map[hash_function][3],verbose)
- except:
- raise CatalystError,"Error generating hash, is appropriate utility installed on your system?"
-
-def calc_hash(file,cmd,cmd_args,id_string="MD5",verbose=False):
- a=os.popen(cmd+" "+cmd_args+" "+file)
- mylines=a.readlines()
- a.close()
- mylines=mylines[0].split()
- result=mylines[0]
- if verbose:
- print id_string+" (%s) = %s" % (file, result)
- return result
-
-def calc_hash2(file,cmd,cmd_args,id_string="MD5",verbose=False):
- a=os.popen(cmd+" "+cmd_args+" "+file)
- header=a.readline()
- mylines=a.readline().split()
- hash=mylines[0]
- short_file=os.path.split(mylines[1])[1]
- a.close()
- result=header+hash+" "+short_file+"\n"
- if verbose:
- print header+" (%s) = %s" % (short_file, result)
- return result
-
-# This has map must be defined after the function calc_hash
-# It is possible to call different functions from this but they must be defined
-# before hash_map
-# Key,function,cmd,cmd_args,Print string
-hash_map={
- "adler32":[calc_hash2,"shash","-a ADLER32","ADLER32"],\
- "crc32":[calc_hash2,"shash","-a CRC32","CRC32"],\
- "crc32b":[calc_hash2,"shash","-a CRC32B","CRC32B"],\
- "gost":[calc_hash2,"shash","-a GOST","GOST"],\
- "haval128":[calc_hash2,"shash","-a HAVAL128","HAVAL128"],\
- "haval160":[calc_hash2,"shash","-a HAVAL160","HAVAL160"],\
- "haval192":[calc_hash2,"shash","-a HAVAL192","HAVAL192"],\
- "haval224":[calc_hash2,"shash","-a HAVAL224","HAVAL224"],\
- "haval256":[calc_hash2,"shash","-a HAVAL256","HAVAL256"],\
- "md2":[calc_hash2,"shash","-a MD2","MD2"],\
- "md4":[calc_hash2,"shash","-a MD4","MD4"],\
- "md5":[calc_hash2,"shash","-a MD5","MD5"],\
- "ripemd128":[calc_hash2,"shash","-a RIPEMD128","RIPEMD128"],\
- "ripemd160":[calc_hash2,"shash","-a RIPEMD160","RIPEMD160"],\
- "ripemd256":[calc_hash2,"shash","-a RIPEMD256","RIPEMD256"],\
- "ripemd320":[calc_hash2,"shash","-a RIPEMD320","RIPEMD320"],\
- "sha1":[calc_hash2,"shash","-a SHA1","SHA1"],\
- "sha224":[calc_hash2,"shash","-a SHA224","SHA224"],\
- "sha256":[calc_hash2,"shash","-a SHA256","SHA256"],\
- "sha384":[calc_hash2,"shash","-a SHA384","SHA384"],\
- "sha512":[calc_hash2,"shash","-a SHA512","SHA512"],\
- "snefru128":[calc_hash2,"shash","-a SNEFRU128","SNEFRU128"],\
- "snefru256":[calc_hash2,"shash","-a SNEFRU256","SNEFRU256"],\
- "tiger":[calc_hash2,"shash","-a TIGER","TIGER"],\
- "tiger128":[calc_hash2,"shash","-a TIGER128","TIGER128"],\
- "tiger160":[calc_hash2,"shash","-a TIGER160","TIGER160"],\
- "whirlpool":[calc_hash2,"shash","-a WHIRLPOOL","WHIRLPOOL"],\
- }
-
-def read_from_clst(file):
- line = ''
- myline = ''
- try:
- myf=open(file,"r")
- except:
- return -1
- #raise CatalystError, "Could not open file "+file
- for line in myf.readlines():
- #line = string.replace(line, "\n", "") # drop newline
- myline = myline + line
- myf.close()
- return myline
-# read_from_clst
-
-# these should never be touched
-required_build_targets=["generic_target","generic_stage_target"]
-
-# new build types should be added here
-valid_build_targets=["stage1_target","stage2_target","stage3_target","stage4_target","grp_target",
- "livecd_stage1_target","livecd_stage2_target","embedded_target",
- "tinderbox_target","snapshot_target","netboot_target","netboot2_target"]
-
-required_config_file_values=["storedir","sharedir","distdir","portdir"]
-valid_config_file_values=required_config_file_values[:]
-valid_config_file_values.append("PKGCACHE")
-valid_config_file_values.append("KERNCACHE")
-valid_config_file_values.append("CCACHE")
-valid_config_file_values.append("DISTCC")
-valid_config_file_values.append("ICECREAM")
-valid_config_file_values.append("ENVSCRIPT")
-valid_config_file_values.append("AUTORESUME")
-valid_config_file_values.append("FETCH")
-valid_config_file_values.append("CLEAR_AUTORESUME")
-valid_config_file_values.append("options")
-valid_config_file_values.append("DEBUG")
-valid_config_file_values.append("VERBOSE")
-valid_config_file_values.append("PURGE")
-valid_config_file_values.append("PURGEONLY")
-valid_config_file_values.append("SNAPCACHE")
-valid_config_file_values.append("snapshot_cache")
-valid_config_file_values.append("hash_function")
-valid_config_file_values.append("digests")
-valid_config_file_values.append("contents")
-valid_config_file_values.append("SEEDCACHE")
-
-verbosity=1
-
-def list_bashify(mylist):
- if type(mylist)==types.StringType:
- mypack=[mylist]
- else:
- mypack=mylist[:]
- for x in range(0,len(mypack)):
- # surround args with quotes for passing to bash,
- # allows things like "<" to remain intact
- mypack[x]="'"+mypack[x]+"'"
- mypack=string.join(mypack)
- return mypack
-
-def list_to_string(mylist):
- if type(mylist)==types.StringType:
- mypack=[mylist]
- else:
- mypack=mylist[:]
- for x in range(0,len(mypack)):
- # surround args with quotes for passing to bash,
- # allows things like "<" to remain intact
- mypack[x]=mypack[x]
- mypack=string.join(mypack)
- return mypack
-
-class CatalystError(Exception):
- def __init__(self, message):
- if message:
- (type,value)=sys.exc_info()[:2]
- if value!=None:
- print
- print traceback.print_exc(file=sys.stdout)
- print
- print "!!! catalyst: "+message
- print
-
-class LockInUse(Exception):
- def __init__(self, message):
- if message:
- #(type,value)=sys.exc_info()[:2]
- #if value!=None:
- #print
- #kprint traceback.print_exc(file=sys.stdout)
- print
- print "!!! catalyst lock file in use: "+message
- print
-
-def die(msg=None):
- warn(msg)
- sys.exit(1)
-
-def warn(msg):
- print "!!! catalyst: "+msg
-
-def find_binary(myc):
- """look through the environmental path for an executable file named whatever myc is"""
- # this sucks. badly.
- p=os.getenv("PATH")
- if p == None:
- return None
- for x in p.split(":"):
- #if it exists, and is executable
- if os.path.exists("%s/%s" % (x,myc)) and os.stat("%s/%s" % (x,myc))[0] & 0x0248:
- return "%s/%s" % (x,myc)
- return None
-
-def spawn_bash(mycommand,env={},debug=False,opt_name=None,**keywords):
- """spawn mycommand as an arguement to bash"""
- args=[BASH_BINARY]
- if not opt_name:
- opt_name=mycommand.split()[0]
- if "BASH_ENV" not in env:
- env["BASH_ENV"] = "/etc/spork/is/not/valid/profile.env"
- if debug:
- args.append("-x")
- args.append("-c")
- args.append(mycommand)
- return spawn(args,env=env,opt_name=opt_name,**keywords)
-
-#def spawn_get_output(mycommand,spawn_type=spawn,raw_exit_code=False,emulate_gso=True, \
-# collect_fds=[1],fd_pipes=None,**keywords):
-
-def spawn_get_output(mycommand,raw_exit_code=False,emulate_gso=True, \
- collect_fds=[1],fd_pipes=None,**keywords):
- """call spawn, collecting the output to fd's specified in collect_fds list
- emulate_gso is a compatability hack to emulate commands.getstatusoutput's return, minus the
- requirement it always be a bash call (spawn_type controls the actual spawn call), and minus the
- 'lets let log only stdin and let stderr slide by'.
-
- emulate_gso was deprecated from the day it was added, so convert your code over.
- spawn_type is the passed in function to call- typically spawn_bash, spawn, spawn_sandbox, or spawn_fakeroot"""
- global selinux_capable
- pr,pw=os.pipe()
-
- #if type(spawn_type) not in [types.FunctionType, types.MethodType]:
- # s="spawn_type must be passed a function, not",type(spawn_type),spawn_type
- # raise Exception,s
-
- if fd_pipes==None:
- fd_pipes={}
- fd_pipes[0] = 0
-
- for x in collect_fds:
- fd_pipes[x] = pw
- keywords["returnpid"]=True
-
- mypid=spawn_bash(mycommand,fd_pipes=fd_pipes,**keywords)
- os.close(pw)
- if type(mypid) != types.ListType:
- os.close(pr)
- return [mypid, "%s: No such file or directory" % mycommand.split()[0]]
-
- fd=os.fdopen(pr,"r")
- mydata=fd.readlines()
- fd.close()
- if emulate_gso:
- mydata=string.join(mydata)
- if len(mydata) and mydata[-1] == "\n":
- mydata=mydata[:-1]
- retval=os.waitpid(mypid[0],0)[1]
- cleanup(mypid)
- if raw_exit_code:
- return [retval,mydata]
- retval=process_exit_code(retval)
- return [retval, mydata]
-
-# base spawn function
-def spawn(mycommand,env={},raw_exit_code=False,opt_name=None,fd_pipes=None,returnpid=False,\
- uid=None,gid=None,groups=None,umask=None,logfile=None,path_lookup=True,\
- selinux_context=None, raise_signals=False, func_call=False):
- """base fork/execve function.
- mycommand is the desired command- if you need a command to execute in a bash/sandbox/fakeroot
- environment, use the appropriate spawn call. This is a straight fork/exec code path.
- Can either have a tuple, or a string passed in. If uid/gid/groups/umask specified, it changes
- the forked process to said value. If path_lookup is on, a non-absolute command will be converted
- to an absolute command, otherwise it returns None.
-
- selinux_context is the desired context, dependant on selinux being available.
- opt_name controls the name the processor goes by.
- fd_pipes controls which file descriptor numbers are left open in the forked process- it's a dict of
- current fd's raw fd #, desired #.
-
- func_call is a boolean for specifying to execute a python function- use spawn_func instead.
- raise_signals is questionable. Basically throw an exception if signal'd. No exception is thrown
- if raw_input is on.
-
- logfile overloads the specified fd's to write to a tee process which logs to logfile
- returnpid returns the relevant pids (a list, including the logging process if logfile is on).
-
- non-returnpid calls to spawn will block till the process has exited, returning the exitcode/signal
- raw_exit_code controls whether the actual waitpid result is returned, or intrepretted."""
-
- myc=''
- if not func_call:
- if type(mycommand)==types.StringType:
- mycommand=mycommand.split()
- myc = mycommand[0]
- if not os.access(myc, os.X_OK):
- if not path_lookup:
- return None
- myc = find_binary(myc)
- if myc == None:
- return None
- mypid=[]
- if logfile:
- pr,pw=os.pipe()
- mypid.extend(spawn(('tee','-i','-a',logfile),returnpid=True,fd_pipes={0:pr,1:1,2:2}))
- retval=os.waitpid(mypid[-1],os.WNOHANG)[1]
- if retval != 0:
- # he's dead jim.
- if raw_exit_code:
- return retval
- return process_exit_code(retval)
-
- if fd_pipes == None:
- fd_pipes={}
- fd_pipes[0] = 0
- fd_pipes[1]=pw
- fd_pipes[2]=pw
-
- if not opt_name:
- opt_name = mycommand[0]
- myargs=[opt_name]
- myargs.extend(mycommand[1:])
- global spawned_pids
- mypid.append(os.fork())
- if mypid[-1] != 0:
- #log the bugger.
- spawned_pids.extend(mypid)
-
- if mypid[-1] == 0:
- if func_call:
- spawned_pids = []
-
- # this may look ugly, but basically it moves file descriptors around to ensure no
- # handles that are needed are accidentally closed during the final dup2 calls.
- trg_fd=[]
- if type(fd_pipes)==types.DictType:
- src_fd=[]
- k=fd_pipes.keys()
- k.sort()
-
- #build list of which fds will be where, and where they are at currently
- for x in k:
- trg_fd.append(x)
- src_fd.append(fd_pipes[x])
-
- # run through said list dup'ing descriptors so that they won't be waxed
- # by other dup calls.
- for x in range(0,len(trg_fd)):
- if trg_fd[x] == src_fd[x]:
- continue
- if trg_fd[x] in src_fd[x+1:]:
- new=os.dup2(trg_fd[x],max(src_fd) + 1)
- os.close(trg_fd[x])
- try:
- while True:
- src_fd[s.index(trg_fd[x])]=new
- except SystemExit, e:
- raise
- except:
- pass
-
- # transfer the fds to their final pre-exec position.
- for x in range(0,len(trg_fd)):
- if trg_fd[x] != src_fd[x]:
- os.dup2(src_fd[x], trg_fd[x])
- else:
- trg_fd=[0,1,2]
-
- # wax all open descriptors that weren't requested be left open.
- for x in range(0,max_fd_limit):
- if x not in trg_fd:
- try:
- os.close(x)
- except SystemExit, e:
- raise
- except:
- pass
-
- # note this order must be preserved- can't change gid/groups if you change uid first.
- if selinux_capable and selinux_context:
- import selinux
- selinux.setexec(selinux_context)
- if gid:
- os.setgid(gid)
- if groups:
- os.setgroups(groups)
- if uid:
- os.setuid(uid)
- if umask:
- os.umask(umask)
- else:
- os.umask(022)
-
- try:
- #print "execing", myc, myargs
- if func_call:
- # either use a passed in func for interpretting the results, or return if no exception.
- # note the passed in list, and dict are expanded.
- if len(mycommand) == 4:
- os._exit(mycommand[3](mycommand[0](*mycommand[1],**mycommand[2])))
- try:
- mycommand[0](*mycommand[1],**mycommand[2])
- except Exception,e:
- print "caught exception",e," in forked func",mycommand[0]
- sys.exit(0)
-
- #os.execvp(myc,myargs)
- os.execve(myc,myargs,env)
- except SystemExit, e:
- raise
- except Exception, e:
- if not func_call:
- raise str(e)+":\n "+myc+" "+string.join(myargs)
- print "func call failed"
-
- # If the execve fails, we need to report it, and exit
- # *carefully* --- report error here
- os._exit(1)
- sys.exit(1)
- return # should never get reached
-
- # if we were logging, kill the pipes.
- if logfile:
- os.close(pr)
- os.close(pw)
-
- if returnpid:
- return mypid
-
- # loop through pids (typically one, unless logging), either waiting on their death, or waxing them
- # if the main pid (mycommand) returned badly.
- while len(mypid):
- retval=os.waitpid(mypid[-1],0)[1]
- if retval != 0:
- cleanup(mypid[0:-1],block_exceptions=False)
- # at this point we've killed all other kid pids generated via this call.
- # return now.
- if raw_exit_code:
- return retval
- return process_exit_code(retval,throw_signals=raise_signals)
- else:
- mypid.pop(-1)
- cleanup(mypid)
- return 0
-
-def cmd(mycmd,myexc="",env={}):
- try:
- sys.stdout.flush()
- retval=spawn_bash(mycmd,env)
- if retval != 0:
- raise CatalystError,myexc
- except:
- raise
-
-def process_exit_code(retval,throw_signals=False):
- """process a waitpid returned exit code, returning exit code if it exit'd, or the
- signal if it died from signalling
- if throw_signals is on, it raises a SystemExit if the process was signaled.
- This is intended for usage with threads, although at the moment you can't signal individual
- threads in python, only the master thread, so it's a questionable option."""
- if (retval & 0xff)==0:
- return retval >> 8 # return exit code
- else:
- if throw_signals:
- #use systemexit, since portage is stupid about exception catching.
- raise SystemExit()
- return (retval & 0xff) << 8 # interrupted by signal
-
-def file_locate(settings,filelist,expand=1):
- #if expand=1, non-absolute paths will be accepted and
- # expanded to os.getcwd()+"/"+localpath if file exists
- for myfile in filelist:
- if myfile not in settings:
- #filenames such as cdtar are optional, so we don't assume the variable is defined.
- pass
- else:
- if len(settings[myfile])==0:
- raise CatalystError, "File variable \""+myfile+"\" has a length of zero (not specified.)"
- if settings[myfile][0]=="/":
- if not os.path.exists(settings[myfile]):
- raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]
- elif expand and os.path.exists(os.getcwd()+"/"+settings[myfile]):
- settings[myfile]=os.getcwd()+"/"+settings[myfile]
- else:
- raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]+" (2nd try)"
-"""
-Spec file format:
-
-The spec file format is a very simple and easy-to-use format for storing data. Here's an example
-file:
-
-item1: value1
-item2: foo bar oni
-item3:
- meep
- bark
- gleep moop
-
-This file would be interpreted as defining three items: item1, item2 and item3. item1 would contain
-the string value "value1". Item2 would contain an ordered list [ "foo", "bar", "oni" ]. item3
-would contain an ordered list as well: [ "meep", "bark", "gleep", "moop" ]. It's important to note
-that the order of multiple-value items is preserved, but the order that the items themselves are
-defined are not preserved. In other words, "foo", "bar", "oni" ordering is preserved but "item1"
-"item2" "item3" ordering is not, as the item strings are stored in a dictionary (hash).
-"""
-
-def parse_makeconf(mylines):
- mymakeconf={}
- pos=0
- pat=re.compile("([0-9a-zA-Z_]*)=(.*)")
- while pos<len(mylines):
- if len(mylines[pos])<=1:
- #skip blanks
- pos += 1
- continue
- if mylines[pos][0] in ["#"," ","\t"]:
- #skip indented lines, comments
- pos += 1
- continue
- else:
- myline=mylines[pos]
- mobj=pat.match(myline)
- pos += 1
- if mobj.group(2):
- clean_string = re.sub(r"\"",r"",mobj.group(2))
- mymakeconf[mobj.group(1)]=clean_string
- return mymakeconf
-
-def read_makeconf(mymakeconffile):
- if os.path.exists(mymakeconffile):
- try:
- try:
- import snakeoil.fileutils
- return snakeoil.fileutils.read_bash_dict(mymakeconffile, sourcing_command="source")
- except ImportError:
- try:
- import portage.util
- return portage.util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
- except:
- try:
- import portage_util
- return portage_util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
- except ImportError:
- myf=open(mymakeconffile,"r")
- mylines=myf.readlines()
- myf.close()
- return parse_makeconf(mylines)
- except:
- raise CatalystError, "Could not parse make.conf file "+mymakeconffile
- else:
- makeconf={}
- return makeconf
-
-def msg(mymsg,verblevel=1):
- if verbosity>=verblevel:
- print mymsg
-
-def pathcompare(path1,path2):
- # Change double slashes to slash
- path1 = re.sub(r"//",r"/",path1)
- path2 = re.sub(r"//",r"/",path2)
- # Removing ending slash
- path1 = re.sub("/$","",path1)
- path2 = re.sub("/$","",path2)
-
- if path1 == path2:
- return 1
- return 0
-
-def ismount(path):
- "enhanced to handle bind mounts"
- if os.path.ismount(path):
- return 1
- a=os.popen("mount")
- mylines=a.readlines()
- a.close()
- for line in mylines:
- mysplit=line.split()
- if pathcompare(path,mysplit[2]):
- return 1
- return 0
-
-def addl_arg_parse(myspec,addlargs,requiredspec,validspec):
- "helper function to help targets parse additional arguments"
- global valid_config_file_values
-
- messages = []
- for x in addlargs.keys():
- if x not in validspec and x not in valid_config_file_values and x not in requiredspec:
- messages.append("Argument \""+x+"\" not recognized.")
- else:
- myspec[x]=addlargs[x]
-
- for x in requiredspec:
- if x not in myspec:
- messages.append("Required argument \""+x+"\" not specified.")
-
- if messages:
- raise CatalystError, '\n\tAlso: '.join(messages)
-
-def touch(myfile):
- try:
- myf=open(myfile,"w")
- myf.close()
- except IOError:
- raise CatalystError, "Could not touch "+myfile+"."
-
-def countdown(secs=5, doing="Starting"):
- if secs:
- print ">>> Waiting",secs,"seconds before starting..."
- print ">>> (Control-C to abort)...\n"+doing+" in: ",
- ticks=range(secs)
- ticks.reverse()
- for sec in ticks:
- sys.stdout.write(str(sec+1)+" ")
- sys.stdout.flush()
- time.sleep(1)
- print
-
-def normpath(mypath):
- TrailingSlash=False
- if mypath[-1] == "/":
- TrailingSlash=True
- newpath = os.path.normpath(mypath)
- if len(newpath) > 1:
- if newpath[:2] == "//":
- newpath = newpath[1:]
- if TrailingSlash:
- newpath=newpath+'/'
- return newpath
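
Worth noting while this file changes homes: support's normpath is not
os.path.normpath -- it preserves a trailing slash and collapses a doubled
leading slash, which the path concatenation in the targets depends on. The
diffstat shows a pure 718-line move, so the behavior should carry over
unchanged into catalyst/support.py. For reference, with hypothetical inputs:

    from catalyst.support import normpath

    print normpath("//tmp//foo/")   # -> /tmp/foo/
    print normpath("/tmp/foo")      # -> /tmp/foo
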
diff --git a/catalyst/modules/embedded_target.py b/catalyst/modules/embedded_target.py
index f38ea00..7cee7a6 100644
--- a/catalyst/modules/embedded_target.py
+++ b/catalyst/modules/embedded_target.py
@@ -11,7 +11,7 @@ ROOT=/tmp/submerge emerge --something foo bar .
# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
import os,string,imp,types,shutil
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
from stat import *
diff --git a/catalyst/modules/generic_stage_target.py b/catalyst/modules/generic_stage_target.py
index e99e652..5200d8a 100644
--- a/catalyst/modules/generic_stage_target.py
+++ b/catalyst/modules/generic_stage_target.py
@@ -1,8 +1,8 @@
import os,string,imp,types,shutil
-from catalyst_support import *
+from catalyst.support import *
from generic_target import *
from stat import *
-import catalyst_lock
+from catalyst.lock import LockDir
class generic_stage_target(generic_target):
"""
@@ -431,7 +431,7 @@ class generic_stage_target(generic_target):
normpath(self.settings["snapshot_cache"]+"/"+\
self.settings["snapshot"]+"/")
self.snapcache_lock=\
- catalyst_lock.LockDir(self.settings["snapshot_cache_path"])
+ LockDir(self.settings["snapshot_cache_path"])
print "Caching snapshot to "+self.settings["snapshot_cache_path"]
def set_chroot_path(self):
@@ -441,7 +441,7 @@ class generic_stage_target(generic_target):
"""
self.settings["chroot_path"]=normpath(self.settings["storedir"]+\
"/tmp/"+self.settings["target_subpath"]+"/")
- self.chroot_lock=catalyst_lock.LockDir(self.settings["chroot_path"])
+ self.chroot_lock=LockDir(self.settings["chroot_path"])
def set_autoresume_path(self):
self.settings["autoresume_path"]=normpath(self.settings["storedir"]+\
diff --git a/catalyst/modules/generic_target.py b/catalyst/modules/generic_target.py
index fe96bd7..de51994 100644
--- a/catalyst/modules/generic_target.py
+++ b/catalyst/modules/generic_target.py
@@ -1,4 +1,4 @@
-from catalyst_support import *
+from catalyst.support import *
class generic_target:
"""
diff --git a/catalyst/modules/grp_target.py b/catalyst/modules/grp_target.py
index 6941522..8e70042 100644
--- a/catalyst/modules/grp_target.py
+++ b/catalyst/modules/grp_target.py
@@ -4,7 +4,7 @@ Gentoo Reference Platform (GRP) target
# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
import os,types,glob
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class grp_target(generic_stage_target):
diff --git a/catalyst/modules/livecd_stage1_target.py b/catalyst/modules/livecd_stage1_target.py
index 59de9bb..ac846ec 100644
--- a/catalyst/modules/livecd_stage1_target.py
+++ b/catalyst/modules/livecd_stage1_target.py
@@ -3,7 +3,7 @@ LiveCD stage1 target
"""
# NOTE: That^^ docstring has influence catalyst-spec(5) man page generation.
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class livecd_stage1_target(generic_stage_target):
diff --git a/catalyst/modules/livecd_stage2_target.py b/catalyst/modules/livecd_stage2_target.py
index 5be8fd2..1bfd820 100644
--- a/catalyst/modules/livecd_stage2_target.py
+++ b/catalyst/modules/livecd_stage2_target.py
@@ -4,7 +4,7 @@ LiveCD stage2 target, builds upon previous LiveCD stage1 tarball
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
import os,string,types,stat,shutil
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class livecd_stage2_target(generic_stage_target):
diff --git a/catalyst/modules/netboot2_target.py b/catalyst/modules/netboot2_target.py
index 1ab7e7d..2b3cd20 100644
--- a/catalyst/modules/netboot2_target.py
+++ b/catalyst/modules/netboot2_target.py
@@ -4,7 +4,7 @@ netboot target, version 2
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
import os,string,types
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class netboot2_target(generic_stage_target):
diff --git a/catalyst/modules/netboot_target.py b/catalyst/modules/netboot_target.py
index ff2c81f..9d01b7e 100644
--- a/catalyst/modules/netboot_target.py
+++ b/catalyst/modules/netboot_target.py
@@ -4,7 +4,7 @@ netboot target, version 1
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
import os,string,types
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class netboot_target(generic_stage_target):
diff --git a/catalyst/modules/snapshot_target.py b/catalyst/modules/snapshot_target.py
index 29d6e87..e21bd1a 100644
--- a/catalyst/modules/snapshot_target.py
+++ b/catalyst/modules/snapshot_target.py
@@ -3,7 +3,7 @@ Snapshot target
"""
import os
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class snapshot_target(generic_stage_target):
diff --git a/catalyst/modules/stage1_target.py b/catalyst/modules/stage1_target.py
index aa43926..25f7116 100644
--- a/catalyst/modules/stage1_target.py
+++ b/catalyst/modules/stage1_target.py
@@ -3,7 +3,7 @@ stage1 target
"""
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class stage1_target(generic_stage_target):
diff --git a/catalyst/modules/stage2_target.py b/catalyst/modules/stage2_target.py
index 6083e2b..15acdee 100644
--- a/catalyst/modules/stage2_target.py
+++ b/catalyst/modules/stage2_target.py
@@ -3,7 +3,7 @@ stage2 target, builds upon previous stage1 tarball
"""
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class stage2_target(generic_stage_target):
diff --git a/catalyst/modules/stage3_target.py b/catalyst/modules/stage3_target.py
index 4d3a008..89edd66 100644
--- a/catalyst/modules/stage3_target.py
+++ b/catalyst/modules/stage3_target.py
@@ -3,7 +3,7 @@ stage3 target, builds upon previous stage2/stage3 tarball
"""
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class stage3_target(generic_stage_target):
diff --git a/catalyst/modules/stage4_target.py b/catalyst/modules/stage4_target.py
index ce41b2d..9168f2e 100644
--- a/catalyst/modules/stage4_target.py
+++ b/catalyst/modules/stage4_target.py
@@ -3,7 +3,7 @@ stage4 target, builds upon previous stage3/stage4 tarball
"""
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class stage4_target(generic_stage_target):
diff --git a/catalyst/modules/tinderbox_target.py b/catalyst/modules/tinderbox_target.py
index d6d3ea3..5985c5b 100644
--- a/catalyst/modules/tinderbox_target.py
+++ b/catalyst/modules/tinderbox_target.py
@@ -3,7 +3,7 @@ Tinderbox target
"""
# NOTE: That^^ docstring influences catalyst-spec(5) man page generation.
-from catalyst_support import *
+from catalyst.support import *
from generic_stage_target import *
class tinderbox_target(generic_stage_target):
diff --git a/catalyst/support.py b/catalyst/support.py
new file mode 100644
index 0000000..316dfa3
--- /dev/null
+++ b/catalyst/support.py
@@ -0,0 +1,718 @@
+
+import sys,string,os,types,re,signal,traceback,time
+#import md5,sha
+selinux_capable = False
+#userpriv_capable = (os.getuid() == 0)
+#fakeroot_capable = False
+BASH_BINARY = "/bin/bash"
+
+try:
+ import resource
+ max_fd_limit=resource.getrlimit(resource.RLIMIT_NOFILE)[0]
+except SystemExit, e:
+ raise
+except:
+ # hokay, no resource module.
+ max_fd_limit=256
+
+# pids this process knows of.
+spawned_pids = []
+
+import urllib
+
+def cleanup(pids,block_exceptions=True):
+ """function to go through and reap the list of pids passed to it"""
+ global spawned_pids
+ if type(pids) == int:
+ pids = [pids]
+ for x in pids:
+ try:
+ os.kill(x,signal.SIGTERM)
+ if os.waitpid(x,os.WNOHANG)[1] == 0:
+ # feisty bugger, still alive.
+ os.kill(x,signal.SIGKILL)
+ os.waitpid(x,0)
+
+ except OSError, oe:
+ # errno 10 (ECHILD) / 3 (ESRCH) just mean the pid is already gone
+ if not block_exceptions and oe.errno not in (10,3):
+ raise oe
+ except SystemExit:
+ raise
+ except Exception:
+ if block_exceptions:
+ pass
+ try: spawned_pids.remove(x)
+ except ValueError: pass
+
+
+
+# a function to turn an arbitrary byte string (printable or not) into a
+# string of lowercase hex digits
+def hexify(str):
+ hexStr = string.hexdigits
+ r = ''
+ for ch in str:
+ i = ord(ch)
+ r = r + hexStr[(i >> 4) & 0xF] + hexStr[i & 0xF]
+ return r
+# hexify()
+
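+# A quick usage sketch: every input byte maps to two lowercase hex digits.
+#   hexify("\x01\xff")   # -> "01ff"
+#   hexify("abc")        # -> "616263"
+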
+def generate_contents(file,contents_function="auto",verbose=False):
+ try:
+ _ = contents_function
+ if _ == 'auto' and file.endswith('.iso'):
+ _ = 'isoinfo-l'
+ if (_ in ['tar-tv','auto']):
+ if file.endswith('.tgz') or file.endswith('.tar.gz'):
+ _ = 'tar-tvz'
+ elif file.endswith('.tbz2') or file.endswith('.tar.bz2'):
+ _ = 'tar-tvj'
+ elif file.endswith('.tar'):
+ _ = 'tar-tv'
+
+ if _ == 'auto':
+ warn('File %r has unknown type for automatic detection.' % (file, ))
+ return None
+ else:
+ contents_function = _
+ _ = contents_map[contents_function]
+ return _[0](file,_[1],verbose)
+ except:
+ raise CatalystError,\
+ "Error generating contents, is appropriate utility (%s) installed on your system?" \
+ % (contents_function, )
+
+def calc_contents(file,cmd,verbose):
+ args={ 'file': file }
+ cmd=cmd % dict(args)
+ a=os.popen(cmd)
+ mylines=a.readlines()
+ a.close()
+ result="".join(mylines)
+ if verbose:
+ print result
+ return result
+
+# This contents_map must be defined after the function calc_contents.
+# It is possible to call different functions from this map, but they must be
+# defined before contents_map.
+# Key: [function, cmd]
+contents_map={
+ # 'find' is disabled because it requires the source path, which is not
+ # always available
+ #"find" :[calc_contents,"find %(path)s"],
+ "tar-tv":[calc_contents,"tar tvf %(file)s"],
+ "tar-tvz":[calc_contents,"tar tvzf %(file)s"],
+ "tar-tvj":[calc_contents,"tar -I lbzip2 -tvf %(file)s"],
+ "isoinfo-l":[calc_contents,"isoinfo -l -i %(file)s"],
+ # isoinfo-f should be a last resort only
+ "isoinfo-f":[calc_contents,"isoinfo -f -i %(file)s"],
+}
+
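+# A usage sketch for the map above (the tarball path is hypothetical):
+#   generate_contents("/var/tmp/stage3.tar.bz2")
+# auto-detects 'tar-tvj' from the extension and returns the output of
+# "tar -I lbzip2 -tvf /var/tmp/stage3.tar.bz2".
+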
+def generate_hash(file,hash_function="crc32",verbose=False):
+ try:
+ return hash_map[hash_function][0](file,hash_map[hash_function][1],hash_map[hash_function][2],\
+ hash_map[hash_function][3],verbose)
+ except:
+ raise CatalystError,"Error generating hash, is appropriate utility installed on your system?"
+
+def calc_hash(file,cmd,cmd_args,id_string="MD5",verbose=False):
+ a=os.popen(cmd+" "+cmd_args+" "+file)
+ mylines=a.readlines()
+ a.close()
+ mylines=mylines[0].split()
+ result=mylines[0]
+ if verbose:
+ print id_string+" (%s) = %s" % (file, result)
+ return result
+
+def calc_hash2(file,cmd,cmd_args,id_string="MD5",verbose=False):
+ a=os.popen(cmd+" "+cmd_args+" "+file)
+ header=a.readline()
+ mylines=a.readline().split()
+ hash=mylines[0]
+ short_file=os.path.split(mylines[1])[1]
+ a.close()
+ result=header+hash+" "+short_file+"\n"
+ if verbose:
+ print header+" (%s) = %s" % (short_file, result)
+ return result
+
+# This hash_map must be defined after the functions calc_hash/calc_hash2.
+# It is possible to call different functions from this map, but they must be
+# defined before hash_map.
+# Key: [function, cmd, cmd_args, print string]
+hash_map={
+ "adler32":[calc_hash2,"shash","-a ADLER32","ADLER32"],\
+ "crc32":[calc_hash2,"shash","-a CRC32","CRC32"],\
+ "crc32b":[calc_hash2,"shash","-a CRC32B","CRC32B"],\
+ "gost":[calc_hash2,"shash","-a GOST","GOST"],\
+ "haval128":[calc_hash2,"shash","-a HAVAL128","HAVAL128"],\
+ "haval160":[calc_hash2,"shash","-a HAVAL160","HAVAL160"],\
+ "haval192":[calc_hash2,"shash","-a HAVAL192","HAVAL192"],\
+ "haval224":[calc_hash2,"shash","-a HAVAL224","HAVAL224"],\
+ "haval256":[calc_hash2,"shash","-a HAVAL256","HAVAL256"],\
+ "md2":[calc_hash2,"shash","-a MD2","MD2"],\
+ "md4":[calc_hash2,"shash","-a MD4","MD4"],\
+ "md5":[calc_hash2,"shash","-a MD5","MD5"],\
+ "ripemd128":[calc_hash2,"shash","-a RIPEMD128","RIPEMD128"],\
+ "ripemd160":[calc_hash2,"shash","-a RIPEMD160","RIPEMD160"],\
+ "ripemd256":[calc_hash2,"shash","-a RIPEMD256","RIPEMD256"],\
+ "ripemd320":[calc_hash2,"shash","-a RIPEMD320","RIPEMD320"],\
+ "sha1":[calc_hash2,"shash","-a SHA1","SHA1"],\
+ "sha224":[calc_hash2,"shash","-a SHA224","SHA224"],\
+ "sha256":[calc_hash2,"shash","-a SHA256","SHA256"],\
+ "sha384":[calc_hash2,"shash","-a SHA384","SHA384"],\
+ "sha512":[calc_hash2,"shash","-a SHA512","SHA512"],\
+ "snefru128":[calc_hash2,"shash","-a SNEFRU128","SNEFRU128"],\
+ "snefru256":[calc_hash2,"shash","-a SNEFRU256","SNEFRU256"],\
+ "tiger":[calc_hash2,"shash","-a TIGER","TIGER"],\
+ "tiger128":[calc_hash2,"shash","-a TIGER128","TIGER128"],\
+ "tiger160":[calc_hash2,"shash","-a TIGER160","TIGER160"],\
+ "whirlpool":[calc_hash2,"shash","-a WHIRLPOOL","WHIRLPOOL"],\
+ }
+
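+# A usage sketch for hash_map (assumes app-crypt/shash is installed; the
+# path is hypothetical):
+#   generate_hash("/var/tmp/stage3.tar.bz2",hash_function="sha512",verbose=True)
+# runs "shash -a SHA512 /var/tmp/stage3.tar.bz2" and returns a
+# header + hash + filename block suitable for a .DIGESTS file.
+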
+def read_from_clst(file):
+ line = ''
+ myline = ''
+ try:
+ myf=open(file,"r")
+ except:
+ return -1
+ #raise CatalystError, "Could not open file "+file
+ for line in myf.readlines():
+ #line = string.replace(line, "\n", "") # drop newline
+ myline = myline + line
+ myf.close()
+ return myline
+# read_from_clst
+
+# these should never be touched
+required_build_targets=["generic_target","generic_stage_target"]
+
+# new build types should be added here
+valid_build_targets=["stage1_target","stage2_target","stage3_target","stage4_target","grp_target",
+ "livecd_stage1_target","livecd_stage2_target","embedded_target",
+ "tinderbox_target","snapshot_target","netboot_target","netboot2_target"]
+
+required_config_file_values=["storedir","sharedir","distdir","portdir"]
+valid_config_file_values=required_config_file_values[:]
+valid_config_file_values.append("PKGCACHE")
+valid_config_file_values.append("KERNCACHE")
+valid_config_file_values.append("CCACHE")
+valid_config_file_values.append("DISTCC")
+valid_config_file_values.append("ICECREAM")
+valid_config_file_values.append("ENVSCRIPT")
+valid_config_file_values.append("AUTORESUME")
+valid_config_file_values.append("FETCH")
+valid_config_file_values.append("CLEAR_AUTORESUME")
+valid_config_file_values.append("options")
+valid_config_file_values.append("DEBUG")
+valid_config_file_values.append("VERBOSE")
+valid_config_file_values.append("PURGE")
+valid_config_file_values.append("PURGEONLY")
+valid_config_file_values.append("SNAPCACHE")
+valid_config_file_values.append("snapshot_cache")
+valid_config_file_values.append("hash_function")
+valid_config_file_values.append("digests")
+valid_config_file_values.append("contents")
+valid_config_file_values.append("SEEDCACHE")
+
+verbosity=1
+
+def list_bashify(mylist):
+ if type(mylist)==types.StringType:
+ mypack=[mylist]
+ else:
+ mypack=mylist[:]
+ for x in range(0,len(mypack)):
+ # surround args with quotes for passing to bash,
+ # allows things like "<" to remain intact
+ mypack[x]="'"+mypack[x]+"'"
+ mypack=string.join(mypack)
+ return mypack
+
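+# A usage sketch: single-quoting keeps shell metacharacters intact when the
+# result is later handed to bash:
+#   list_bashify(["foo","bar<baz"])   # -> "'foo' 'bar<baz'"
+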
+def list_to_string(mylist):
+ if type(mylist)==types.StringType:
+ mypack=[mylist]
+ else:
+ mypack=mylist[:]
+ # join the elements into a single space-separated string
+ mypack=string.join(mypack)
+ return mypack
+
+class CatalystError(Exception):
+ def __init__(self, message):
+ if message:
+ (type,value)=sys.exc_info()[:2]
+ if value!=None:
+ print
+ traceback.print_exc(file=sys.stdout)
+ print
+ print "!!! catalyst: "+message
+ print
+
+class LockInUse(Exception):
+ def __init__(self, message):
+ if message:
+ print
+ print "!!! catalyst lock file in use: "+message
+ print
+
+def die(msg=None):
+ if msg:
+ warn(msg)
+ sys.exit(1)
+
+def warn(msg):
+ print "!!! catalyst: "+msg
+
+def find_binary(myc):
+ """look through the environment PATH for an executable file named myc"""
+ p=os.getenv("PATH")
+ if p == None:
+ return None
+ for x in p.split(":"):
+ # return the first candidate that exists and is executable
+ candidate="%s/%s" % (x,myc)
+ if os.path.exists(candidate) and os.access(candidate,os.X_OK):
+ return candidate
+ return None
+
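+# A usage sketch (the result depends on the host PATH):
+#   find_binary("tar")   # -> e.g. "/bin/tar", or None if not found
+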
+def spawn_bash(mycommand,env={},debug=False,opt_name=None,**keywords):
+ """spawn mycommand as an argument to bash"""
+ args=[BASH_BINARY]
+ if not opt_name:
+ opt_name=mycommand.split()[0]
+ if "BASH_ENV" not in env:
+ env["BASH_ENV"] = "/etc/spork/is/not/valid/profile.env"
+ if debug:
+ args.append("-x")
+ args.append("-c")
+ args.append(mycommand)
+ return spawn(args,env=env,opt_name=opt_name,**keywords)
+
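+# A usage sketch: the command string is run as "bash -c <string>", so shell
+# syntax works; the return value is the exit code (0 on success):
+#   spawn_bash("echo hello && echo world")
+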
+def spawn_get_output(mycommand,raw_exit_code=False,emulate_gso=True, \
+ collect_fds=[1],fd_pipes=None,**keywords):
+ """call spawn, collecting the output written to the fd's named in the collect_fds list.
+ emulate_gso is a compatibility hack to emulate commands.getstatusoutput's return value, minus
+ the requirement that the call always be a bash call, and minus the 'only log stdout and let
+ stderr slide by' behaviour.
+
+ emulate_gso was deprecated from the day it was added, so convert your code over."""
+ global selinux_capable
+ pr,pw=os.pipe()
+
+
+ if fd_pipes==None:
+ fd_pipes={}
+ fd_pipes[0] = 0
+
+ for x in collect_fds:
+ fd_pipes[x] = pw
+ keywords["returnpid"]=True
+
+ mypid=spawn_bash(mycommand,fd_pipes=fd_pipes,**keywords)
+ os.close(pw)
+ if type(mypid) != types.ListType:
+ os.close(pr)
+ return [mypid, "%s: No such file or directory" % mycommand.split()[0]]
+
+ fd=os.fdopen(pr,"r")
+ mydata=fd.readlines()
+ fd.close()
+ if emulate_gso:
+ mydata=string.join(mydata)
+ if len(mydata) and mydata[-1] == "\n":
+ mydata=mydata[:-1]
+ retval=os.waitpid(mypid[0],0)[1]
+ cleanup(mypid)
+ if raw_exit_code:
+ return [retval,mydata]
+ retval=process_exit_code(retval)
+ return [retval, mydata]
+
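+# A usage sketch: returns [exit_code, collected_output]; with the default
+# emulate_gso=True the collected lines are joined into one string:
+#   retval,output=spawn_get_output("ls /")
+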
+# base spawn function
+def spawn(mycommand,env={},raw_exit_code=False,opt_name=None,fd_pipes=None,returnpid=False,\
+ uid=None,gid=None,groups=None,umask=None,logfile=None,path_lookup=True,\
+ selinux_context=None, raise_signals=False, func_call=False):
+ """base fork/execve function.
+ mycommand is the desired command- if you need a command to execute in a bash/sandbox/fakeroot
+ environment, use the appropriate spawn call. This is a straight fork/exec code path.
+ Can either have a tuple, or a string passed in. If uid/gid/groups/umask specified, it changes
+ the forked process to said value. If path_lookup is on, a non-absolute command will be converted
+ to an absolute command, otherwise it returns None.
+
+ selinux_context is the desired context, dependent on selinux being available.
+ opt_name controls the name the process goes by.
+ fd_pipes controls which file descriptor numbers are left open in the forked process- it's a dict
+ mapping the fd number desired in the child to the currently open fd it is duplicated from.
+
+ func_call is a boolean for specifying to execute a python function- use spawn_func instead.
+ raise_signals is questionable. Basically throw an exception if signal'd. No exception is thrown
+ if raw_exit_code is on.
+
+ logfile overloads the specified fd's to write to a tee process which logs to logfile.
+ returnpid returns the relevant pids (a list, including the logging process if logfile is on).
+
+ non-returnpid calls to spawn will block till the process has exited, returning the exitcode/signal.
+ raw_exit_code controls whether the actual waitpid result is returned, or interpreted."""
+
+ myc=''
+ if not func_call:
+ if type(mycommand)==types.StringType:
+ mycommand=mycommand.split()
+ myc = mycommand[0]
+ if not os.access(myc, os.X_OK):
+ if not path_lookup:
+ return None
+ myc = find_binary(myc)
+ if myc == None:
+ return None
+ mypid=[]
+ if logfile:
+ pr,pw=os.pipe()
+ mypid.extend(spawn(('tee','-i','-a',logfile),returnpid=True,fd_pipes={0:pr,1:1,2:2}))
+ retval=os.waitpid(mypid[-1],os.WNOHANG)[1]
+ if retval != 0:
+ # he's dead jim.
+ if raw_exit_code:
+ return retval
+ return process_exit_code(retval)
+
+ if fd_pipes == None:
+ fd_pipes={}
+ fd_pipes[0] = 0
+ fd_pipes[1]=pw
+ fd_pipes[2]=pw
+
+ if not opt_name:
+ opt_name = mycommand[0]
+ myargs=[opt_name]
+ myargs.extend(mycommand[1:])
+ global spawned_pids
+ mypid.append(os.fork())
+ if mypid[-1] != 0:
+ #log the bugger.
+ spawned_pids.extend(mypid)
+
+ if mypid[-1] == 0:
+ if func_call:
+ spawned_pids = []
+
+ # this may look ugly, but basically it moves file descriptors around to ensure no
+ # handles that are needed are accidentally closed during the final dup2 calls.
+ trg_fd=[]
+ if type(fd_pipes)==types.DictType:
+ src_fd=[]
+ k=fd_pipes.keys()
+ k.sort()
+
+ #build list of which fds will be where, and where they are at currently
+ for x in k:
+ trg_fd.append(x)
+ src_fd.append(fd_pipes[x])
+
+ # run through said list dup'ing descriptors so that they won't be waxed
+ # by other dup calls.
+ for x in range(0,len(trg_fd)):
+ if trg_fd[x] == src_fd[x]:
+ continue
+ if trg_fd[x] in src_fd[x+1:]:
+ new=max(src_fd) + 1
+ os.dup2(trg_fd[x],new)
+ os.close(trg_fd[x])
+ try:
+ # update every remaining src_fd reference to the fd we just moved
+ while True:
+ src_fd[src_fd.index(trg_fd[x])]=new
+ except SystemExit, e:
+ raise
+ except ValueError:
+ pass
+
+ # transfer the fds to their final pre-exec position.
+ for x in range(0,len(trg_fd)):
+ if trg_fd[x] != src_fd[x]:
+ os.dup2(src_fd[x], trg_fd[x])
+ else:
+ trg_fd=[0,1,2]
+
+ # wax all open descriptors that weren't requested be left open.
+ for x in range(0,max_fd_limit):
+ if x not in trg_fd:
+ try:
+ os.close(x)
+ except SystemExit, e:
+ raise
+ except:
+ pass
+
+ # note this order must be preserved- can't change gid/groups if you change uid first.
+ if selinux_capable and selinux_context:
+ import selinux
+ selinux.setexec(selinux_context)
+ if gid:
+ os.setgid(gid)
+ if groups:
+ os.setgroups(groups)
+ if uid:
+ os.setuid(uid)
+ if umask:
+ os.umask(umask)
+ else:
+ os.umask(022)
+
+ try:
+ #print "execing", myc, myargs
+ if func_call:
+ # either use a passed in func for interpretting the results, or return if no exception.
+ # note the passed in list, and dict are expanded.
+ if len(mycommand) == 4:
+ os._exit(mycommand[3](mycommand[0](*mycommand[1],**mycommand[2])))
+ try:
+ mycommand[0](*mycommand[1],**mycommand[2])
+ except Exception,e:
+ print "caught exception",e," in forked func",mycommand[0]
+ sys.exit(0)
+
+ #os.execvp(myc,myargs)
+ os.execve(myc,myargs,env)
+ except SystemExit, e:
+ raise
+ except Exception, e:
+ if not func_call:
+ raise Exception(str(e)+":\n "+myc+" "+string.join(myargs))
+ print "func call failed"
+
+ # If the execve fails, we need to report it, and exit
+ # *carefully* --- report error here
+ os._exit(1)
+ sys.exit(1)
+ return # should never get reached
+
+ # if we were logging, kill the pipes.
+ if logfile:
+ os.close(pr)
+ os.close(pw)
+
+ if returnpid:
+ return mypid
+
+ # loop through pids (typically one, unless logging), either waiting on their death, or waxing them
+ # if the main pid (mycommand) returned badly.
+ while len(mypid):
+ retval=os.waitpid(mypid[-1],0)[1]
+ if retval != 0:
+ cleanup(mypid[0:-1],block_exceptions=False)
+ # at this point we've killed all other kid pids generated via this call.
+ # return now.
+ if raw_exit_code:
+ return retval
+ return process_exit_code(retval,throw_signals=raise_signals)
+ else:
+ mypid.pop(-1)
+ cleanup(mypid)
+ return 0
+
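+# A usage sketch: a plain fork/exec with no shell involved, blocking until
+# the child exits (both paths are hypothetical):
+#   spawn("tar tvf /var/tmp/stage3.tar",logfile="/tmp/contents.log")
+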
+def cmd(mycmd,myexc="",env={}):
+ try:
+ sys.stdout.flush()
+ retval=spawn_bash(mycmd,env)
+ if retval != 0:
+ raise CatalystError,myexc
+ except:
+ raise
+
+def process_exit_code(retval,throw_signals=False):
+ """process a waitpid returned exit code, returning exit code if it exit'd, or the
+ signal if it died from signalling
+ if throw_signals is on, it raises a SystemExit if the process was signaled.
+ This is intended for usage with threads, although at the moment you can't signal individual
+ threads in python, only the master thread, so it's a questionable option."""
+ if (retval & 0xff)==0:
+ return retval >> 8 # return exit code
+ else:
+ if throw_signals:
+ #use systemexit, since portage is stupid about exception catching.
+ raise SystemExit()
+ return (retval & 0xff) << 8 # interrupted by signal
+
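+# Worked examples of the waitpid status decoding above:
+#   process_exit_code(0x0100)   # exit(1)  -> returns 1 (status >> 8)
+#   process_exit_code(0x0009)   # SIGKILL  -> returns 9 << 8 == 2304
+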
+def file_locate(settings,filelist,expand=1):
+ #if expand=1, non-absolute paths will be accepted and
+ # expanded to os.getcwd()+"/"+localpath if file exists
+ for myfile in filelist:
+ if myfile not in settings:
+ #filenames such as cdtar are optional, so we don't assume the variable is defined.
+ pass
+ else:
+ if len(settings[myfile])==0:
+ raise CatalystError, "File variable \""+myfile+"\" has a length of zero (not specified.)"
+ if settings[myfile][0]=="/":
+ if not os.path.exists(settings[myfile]):
+ raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]
+ elif expand and os.path.exists(os.getcwd()+"/"+settings[myfile]):
+ settings[myfile]=os.getcwd()+"/"+settings[myfile]
+ else:
+ raise CatalystError, "Cannot locate specified "+myfile+": "+settings[myfile]+" (2nd try)"
+"""
+Spec file format:
+
+The spec file format is a very simple and easy-to-use format for storing data. Here's an example
+file:
+
+item1: value1
+item2: foo bar oni
+item3:
+ meep
+ bark
+ gleep moop
+
+This file would be interpreted as defining three items: item1, item2 and item3. item1 would contain
+the string value "value1". item2 would contain an ordered list [ "foo", "bar", "oni" ]. item3
+would contain an ordered list as well: [ "meep", "bark", "gleep", "moop" ]. It's important to note
+that the order of multiple-value items is preserved, but the order in which the items themselves
+are defined is not, as the item strings are stored in a dictionary (hash).
+"""
+
+def parse_makeconf(mylines):
+ mymakeconf={}
+ pos=0
+ pat=re.compile("([0-9a-zA-Z_]*)=(.*)")
+ while pos<len(mylines):
+ if len(mylines[pos])<=1:
+ #skip blanks
+ pos += 1
+ continue
+ if mylines[pos][0] in ["#"," ","\t"]:
+ #skip indented lines, comments
+ pos += 1
+ continue
+ else:
+ myline=mylines[pos]
+ mobj=pat.match(myline)
+ pos += 1
+ # lines without an '=' do not match; guard against mobj being None
+ if mobj and mobj.group(2):
+ clean_string = re.sub(r"\"",r"",mobj.group(2))
+ mymakeconf[mobj.group(1)]=clean_string
+ return mymakeconf
+
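+# A usage sketch: double quotes are stripped from matched values:
+#   parse_makeconf(['CFLAGS="-O2 -pipe"\n'])   # -> {"CFLAGS": "-O2 -pipe"}
+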
+def read_makeconf(mymakeconffile):
+ if os.path.exists(mymakeconffile):
+ try:
+ try:
+ import snakeoil.fileutils
+ return snakeoil.fileutils.read_bash_dict(mymakeconffile, sourcing_command="source")
+ except ImportError:
+ try:
+ import portage.util
+ return portage.util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
+ except ImportError:
+ try:
+ import portage_util
+ return portage_util.getconfig(mymakeconffile, tolerant=1, allow_sourcing=True)
+ except ImportError:
+ myf=open(mymakeconffile,"r")
+ mylines=myf.readlines()
+ myf.close()
+ return parse_makeconf(mylines)
+ except:
+ raise CatalystError, "Could not parse make.conf file "+mymakeconffile
+ else:
+ makeconf={}
+ return makeconf
+
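+# A usage sketch: tries snakeoil, then portage's parser, then the simple
+# fallback above, and always hands back a dict:
+#   read_makeconf("/etc/make.conf").get("CFLAGS")
+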
+def msg(mymsg,verblevel=1):
+ if verbosity>=verblevel:
+ print mymsg
+
+def pathcompare(path1,path2):
+ # Collapse double slashes into a single slash
+ path1 = re.sub(r"//",r"/",path1)
+ path2 = re.sub(r"//",r"/",path2)
+ # Remove any trailing slash
+ path1 = re.sub("/$","",path1)
+ path2 = re.sub("/$","",path2)
+
+ if path1 == path2:
+ return 1
+ return 0
+
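+# A usage sketch: doubled and trailing slashes are ignored in the comparison:
+#   pathcompare("/usr//local/","/usr/local")   # -> 1
+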
+def ismount(path):
+ "enhanced to handle bind mounts"
+ if os.path.ismount(path):
+ return 1
+ a=os.popen("mount")
+ mylines=a.readlines()
+ a.close()
+ for line in mylines:
+ mysplit=line.split()
+ if pathcompare(path,mysplit[2]):
+ return 1
+ return 0
+
+def addl_arg_parse(myspec,addlargs,requiredspec,validspec):
+ "helper function to help targets parse additional arguments"
+ global valid_config_file_values
+
+ messages = []
+ for x in addlargs.keys():
+ if x not in validspec and x not in valid_config_file_values and x not in requiredspec:
+ messages.append("Argument \""+x+"\" not recognized.")
+ else:
+ myspec[x]=addlargs[x]
+
+ for x in requiredspec:
+ if x not in myspec:
+ messages.append("Required argument \""+x+"\" not specified.")
+
+ if messages:
+ raise CatalystError, '\n\tAlso: '.join(messages)
+
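+# A usage sketch with hypothetical spec values:
+#   myspec={}
+#   addl_arg_parse(myspec,{"target":"stage3"},["target"],["target","version_stamp"])
+# fills in myspec["target"], or raises CatalystError listing every problem found.
+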
+def touch(myfile):
+ """create myfile if needed; note this truncates an existing file, unlike touch(1)"""
+ try:
+ myf=open(myfile,"w")
+ myf.close()
+ except IOError:
+ raise CatalystError, "Could not touch "+myfile+"."
+
+def countdown(secs=5, doing="Starting"):
+ if secs:
+ print ">>> Waiting",secs,"seconds before starting..."
+ print ">>> (Control-C to abort)...\n"+doing+" in: ",
+ ticks=range(secs)
+ ticks.reverse()
+ for sec in ticks:
+ sys.stdout.write(str(sec+1)+" ")
+ sys.stdout.flush()
+ time.sleep(1)
+ print
+
+def normpath(mypath):
+ TrailingSlash=False
+ if mypath and mypath[-1] == "/":
+ TrailingSlash=True
+ newpath = os.path.normpath(mypath)
+ if len(newpath) > 1:
+ if newpath[:2] == "//":
+ newpath = newpath[1:]
+ if TrailingSlash:
+ newpath=newpath+'/'
+ return newpath
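+
+# A usage sketch: unlike os.path.normpath alone, this collapses a leading "//"
+# and preserves any trailing slash:
+#   normpath("//usr//local/")   # -> "/usr/local/"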
--
1.8.3.2