On Fri, 2011-03-25 at 22:59 +0000, Duncan wrote:
> Simply my experience-educated opinion. YMMV, as they say. And of course,
> it applies to new installations more than your current situation, but as
> you mentioned that you are planning such a new installation...
Duncan, thanks for your very thorough discussion of current
disk/RAID/filesystem/etc. technologies. Wow! I'm going to have to read
it through several times to absorb it. I've gotten to the point at
which I'm more involved with what I can _do_ with the Linux boxes I set
up than what I can do that's cool and cutting edge with Linux in setting
them up, but playing with bleeding-edge stuff has always been tempting.
Some of the stuff you've mentioned, such as btrfs, is totally new to me
since I haven't kept up with the state of the art. Some years ago we
had EVMS, which was developed by IBM here in Austin. I was a member of
the Capital Area Central Texas UNIX Society (CACTUS) and we had the EVMS
developers come and talk about it. EVMS was great. It was a layered
technology with an API for a management client, so you could have a CLI,
a GUI, a web-based management client, whatever, and all of them using
the same API to the disk management layer. It was an umbrella
technology which covered several levels of Linux MD Raid plus LVM. You
could put fundamental storage elements together like tinker-toys and
slice and dice them any way you wanted to.
EVMS was started from an initrd, which set up the EVMS platform and then
did a pivot_root to the EVMS-supported result. I have our SOHO
firewall/gateway and file server set up with it. The root fs is on a
Linux MD RAID-1 array, and what's on top of that I've rather forgotten
but the result is a drive and partition layout that makes sense for the
purpose of the box. I set this up as a kind of proof-of-concept
exercise because I was taken with EVMS and figured it would be useful,
which it was. The down side of this was that some time after that, IBM
dropped support for the EVMS project and pulled their developers off of
it. I was impressed with the fact that IBM was actually paying people
to develop open source stuff, but when they pulled the plug on it, EVMS
became an orphaned project. The firewall/gateway box runs Gentoo, so I
proceeded with regular updates until one day the box stopped booting.
The libraries, notably glibc, embedded in the initrd system got out of
sync, version-wise, with the rest of the system, and I was getting some
severely strange errors early in the boot process followed by a kernel
panic. It took a bit of work to even _see_ the errors, since they were
emitted in the boot process before console scroll-back comes into
play, and then there was further research to determine what I needed to
do to fix the problem. I ended up having to mount and then manually
repair the initrd internal filesystem, manually reconstituting library
symlinks as required.
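For the record, the repair amounted to opening up the initrd image and
recreating the library symlinks inside it. A rough sketch of the idea,
using a fabricated stand-in archive so the steps can be shown end to end
(file names and the glibc version are illustrative; an EVMS-era initrd
was a loop-mountable filesystem image — `mount -o loop initrd.img /mnt`
— whereas modern initrds are gzipped cpio archives, handled as below):

```shell
# Fabricate a tiny stand-in "initrd" (a real one would live in /boot).
work=$(mktemp -d)
cd "$work"
mkdir -p root/lib
touch root/lib/libc-2.3.6.so          # stand-in for the real glibc file
(cd root && find . | cpio -o -H newc 2>/dev/null | gzip > ../initrd.img)

# Unpack the image into a scratch tree.
mkdir unpacked
(cd unpacked && zcat ../initrd.img | cpio -idm 2>/dev/null)

# Recreate the missing library symlink -- the actual "repair" -- and
# repack the image.
ln -sf libc-2.3.6.so unpacked/lib/libc.so.6
(cd unpacked && find . | cpio -o -H newc 2>/dev/null | gzip > ../initrd-fixed.img)

readlink unpacked/lib/libc.so.6       # -> libc-2.3.6.so
```

On the real box the repaired image then replaces the one the bootloader
points at; the fix only sticks until the next update regenerates the
initrd with mismatched libraries again.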
I've built some Linux boxes for one of my clients - 1U servers and the
like. These folks are pretty straightforward in their requirements,
mainly asking that the boxes just work. The really creative work goes
into the medical research PHP application that lives on the boxes, and
I've learned beaucoup stuff about OOP in PHP, AJAX, etc. from the main
programming fellow on the project. We've standardized on Ubuntu server
edition on SuperMicro 4 drive 1U boxes. These boxes generally come with
RAID supported by a proprietary chipset or two, which never works quite
right with Linux, so the first job I always do with these is to rip out
the SATA cabling from the back plane and replace the on-board RAID with
an LSI 3ware card. These cards don't mess around - they JUST WORK.
LSI/3ware has been very good about supporting Linux for their products.
We generally set these up as RAID-5 boxes. There's a web-based
monitoring daemon for Linux that comes with the card, and it just works,
too, although it takes a bit of dickering. The RAID has no component in
user-space (except for the monitor daemon) and shows up as a single SCSI
drive, which can be partitioned and formatted just as if it were a
single drive. The 3ware cards are nice! If you're using a redundant
array such as RAID-1 or RAID-5, you can designate a drive as a hot-spare
and if one of the drives in an array fails, the card will fail over to
the hot-spare, rebuild the array, and the monitor daemon will send you
an email telling you that it happened. Slick!
The LSI 3ware cards aren't cheap, but they're not unreasonable either,
and I've never had one fail. I'm thinking that my drive setup on my
new desktop box will probably use RAID-1 supported by a 3ware card.
I'll probably use an ext3 filesystem on the partitions. I know ext4 is
newer, but I'm not sure if it would offer me any advantages.
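Since the card presents the whole array as one drive, formatting is the
ordinary routine. A hedged sketch, using a small file-backed image in
place of the real device node (on a 3ware box the unit typically shows
up as something like /dev/sda; substitute the real partition there):

```shell
# Format a file-backed image as ext3, the same way one would format a
# partition on the single unit the RAID card exposes.
truncate -s 16M demo.img
mkfs.ext3 -q -F demo.img     # -F: proceed even though it's not a block device

# Confirm the journal is present -- the journal is what distinguishes
# ext3 from ext2.
tune2fs -l demo.img | grep has_journal
```

The same two commands against /dev/sda1 (run as root, minus the `-F`)
would do the job on the actual array.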
I used reiserfs on some of the partitions on my servers, and on some
partitions on my desktop box too. Big mistake! There was a bug in
reiserfs support in the then-current kernel when I built the first
server, and one night the kernel crapped all over the hard drive and the
box crashed! I was able to fix it and salvage customer data, but it was
pretty scary. Hans Reiser is in prison for life for murder, and there's
like one person on the Linux kernel development group who maintains
reiserfs. Ext3/4, on the other hand, is solid, maybe not quite as fast,
but supported by a dedicated group of developers.
So I'm thinking that this new box will have a couple of
professional-grade (the 5-year warranty type) 1.5 or 2 TB drives and a
3ware card. I still haven't settled on the mainboard, which will have
to support the 3ware card, a couple of sound cards and a legacy Adaptec
SCSI card for our ancient but extremely well-built HP scanner. The
chipset will have to be well supported in Linux. I'll probably build
the box myself once I decide on the hardware.
I'm gonna apply the KISS principle to the OS design for this, and stay
away from bleeding edge software technologies, although, especially
after reading your essay, it's very tempting to try some of this stuff
out to see what the _really_ smart people are coming up with! I'm
getting off of the Linux state-of-the-art train for a while and going
walking in the woods. The kernel will have to be low-latency since I
may use the box for recording work with Jack and Ardour2, and almost
certainly for audio editing, and maybe video editing at some point.
That's where my energy is going to go for this one.
Lindsay Haisley |"Windows .....
FMP Computer Services | life's too short!"
http://www.fmp.com | - Brad Johnston