List Archive: gentoo-desktop
Headers:
To: gentoo-desktop@g.o
From: Duncan <1i5t5.duncan@...>
Subject: Re: System problems - some progress
Date: Sat, 26 Mar 2011 08:40:11 +0000 (UTC)
Lindsay Haisley posted on Fri, 25 Mar 2011 21:46:32 -0500 as excerpted:

> On Fri, 2011-03-25 at 22:59 +0000, Duncan wrote:
>> Simply my experience-educated opinion.  YMMV, as they say.  And of
>> course,
>> it applies to new installations more than your current situation, but
>> as you mentioned that you are planning such a new installation...
> 
> Duncan, thanks for your very thorough discussion of current
> disk/RAID/filesystem/etc. technologies.  Wow!  I'm going to have to read
> it through several times to absorb it.  I've gotten to the point at
> which I'm more involved with what I can _do_ with the Linux boxes I set
> up than what I can do that's cool and cutting edge with Linux in setting
> them up, but playing with bleeding-edge stuff has always been tempting.

By contrast, Linux is still my hobby, tho really, a full time one in that 
I spend hours a day at it, pretty much 7 days a week.  I'm thinking I 
might switch to Linux as a job at some point, perhaps soon, but it's not a 
switch I'll make lightly, and it's not something I'll even consider 
"selling my soul for" to take -- it'll be on my terms or I might as well 
stay with Linux as a hobby -- an arrangement that works and that suits me 
fine.

Because Linux is a hobby, I go where my interest leads me.  Even tho I'm 
not a developer, I test, bisect and file bugs on the latest git kernels 
and am proud to be able to say that a number of bugs were fixed before 
release because of my work (and one, a Radeon AGP graphics bug, only after 
it hit stable, taking two further kernel releases before it was ultimately 
fixed, as reasonably new graphics cards on AGP busses aren't as common as 
they once were...).

Slowly, one at a time, I've tackled Bind DNS, NTPD, md/RAID, direct 
netfilter/iptables (which interestingly enough were *SIGNIFICANTLY* easier 
for me to wrap my mind around than the various so-called "easier" firewall 
tools that ultimately use netfilter/iptables at the back end anyway, 
perhaps because I already understood network basics and all the "simple" 
ones simply obscured the situation for me) and other tools generally 
considered "enterprise"-grade.  But while having my head around these 
normally 
enterprise technologies well enough to troubleshoot them may well help me 
with a job in the field in the future, that's not why I learned them.  As 
a Linux hobbyist, I learned them for much the same reason mountain climber 
hobbyists climb mountains, because they were there, and for the personal 
challenge.
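
For flavor, "direct" here means writing the rules yourself rather than 
letting a frontend generate them.  A minimal sketch, nowhere near a 
complete firewall:

  # default-deny inbound, allow loopback and established traffic
  iptables -P INPUT DROP
  iptables -A INPUT -i lo -j ACCEPT
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  # plus whatever services you actually run, e.g. ssh:
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT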

Meanwhile, as I alluded to earlier, I tried LVM2 (successor to both EVMS 
and the original LVM, as you likely already know) on top of md/RAID, but 
found that for me, the layering of technologies obscured my 
understanding, to the point where I was no longer comfortable with my 
ability to recover in a disaster situation in which both the RAID and LVM2 
levels were damaged.

Couple that with an experience where I had a broken LVM2 that needed to 
be rebuilt while the portage tree itself was on LVM2, and I realized that 
for what I 
was doing, especially since md/raid's partitioned-raid support was now 
quite robust, the LVM2 layer just added complexity for very little real 
added flexibility or value, particularly since I couldn't put / on it 
anyway, without an initr* (one of the technologies I've never taken time 
to understand to the point I'm comfortable using it).
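
The layering I'm describing looked roughly like this, with lvm2 stacked 
on top of an existing md array (a sketch from memory; device and volume 
names are illustrative only):

  # /dev/md0 is the md/RAID device underneath...
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  # ...and the filesystems live on logical volumes carved out of it
  lvcreate -L 10G -n home vg0
  mkfs.ext4 /dev/vg0/home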

That's why I recommended that you pick a storage layer technology that 
fits your needs as best you can, get comfortable with it, and avoid if 
possible multi-layering.  The keep-it-simple rule really /does/ help avoid 
admin-level fat-fingering, which really /is/ a threat to data and system 
integrity.  Sure, there's a bit of additional flexibility by layering, but 
it's worth the hard look at whether the benefit really does justify the 
additional complexity.  In triple-digit or even higher double-digit 
storage device situations, basically enterprise level, there are certainly 
many scenarios where the multi-level layering adds significant value, but 
part of being a good admin, ultimately, is recognizing where that's not 
the case, and with a bit of experience under my belt, I realized it wasn't 
a good tradeoff for my usage.

Here, I picked md/raid over lvm2 (and over hardware RAID) for a number of 
reasons.  First, md/raid for / can be directly configured on the kernel 
command line.  No initr* needed.  That allowed me to put off learning 
initr* tech for another day, as well as reducing complexity.  As for
md/raid over hardware RAID, there's certainly a place for both, 
particularly when md/raid may be layered on hardware raid, but for low-
budget emphasis of the R(edundant) in Redundant Array of Independent 
Devices (RAID), there's nothing like being able to plug in not just 
another drive, but any standard (SATA in my case) controller, and/or any 
mobo, and with at worst a kernel rebuild with the new drivers (since I 
choose to build-in what I need and not build what I don't, so a kernel 
rebuild would be necessary if it were different controllers), be back up 
and running.  No having to find a compatible RAID card...  Plus, the Linux 
kernel md/RAID stack has FAR FAR more testing under all sorts of corner-
case situations than any hardware RAID is **EVER** going to get.
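
Concretely, "directly configured on the kernel command line" means 
something like this in the bootloader config (array and partition numbers 
are illustrative, and it assumes md/RAID support is built into the kernel, 
not modular):

  # grub legacy kernel line, assembling /dev/md0 from its two members
  # (persistent-superblock syntax; 0xfd-type partitions can also be
  # autodetected without the md= parameter)
  kernel /boot/bzImage root=/dev/md0 md=0,/dev/sda3,/dev/sdb3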

But as I mentioned, the times they are a-changin'.  The latest desktop 
environments (kde4.6, gnome3, and I believe the latest gnome2, and perhaps 
xfce, at minimum) are leaving the deprecated hal behind in favor of udev/
udisks/upower and the like.  udisks in particular depends on device-
mapper, now part of lvm2, and the usual removable-device auto-detect/auto-
mount functionality of the desktops depends in turn on udisks.  That 
probably won't affect non-X-server-based installations, and arguably 
doesn't /heavily/ affect *ix traditionalist desktop users who aren't 
comfortable with auto-mount in the first place, beyond the non-optional 
dependencies.  (I'm one; there are known security tradeoffs involved, see 
recent stories on auto-triggered vulns due to gnome scanning for icons on 
auto-mounted thumbdrives and/or cds, for instance.)  But it's a fact that 
within the year, most new releases will require that device-mapper, and 
thus lvm2, be installed and device-mapper enabled in the kernel, to 
support their automount functionality.  As such, and because lvm2 has at 
least basic raid-0 and raid-1 support of its own (tho not the more 
advanced stuff, raid5/6/10/50/60 etc, last I checked, but I may well be 
behind), lvm2 is likely to be a better choice than md/raid for many, 
particularly on distributions already relying on prebuilt kernels, 
therefore modules, therefore initr*s, where lvm2's initr* requirement 
isn't an additional factor.
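
In kernel-config terms, the device-mapper requirement boils down to 
something like the following (a sketch; exact option names per mainline 
kernels of this era):

  # Device Drivers -> Multiple devices driver support (RAID and LVM)
  CONFIG_MD=y
  CONFIG_BLK_DEV_DM=y    # device-mapper, what udisks/lvm2 actually need

  # and the userspace side on gentoo:
  emerge -av sys-fs/lvm2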

Meanwhile, btrfs isn't /yet/ a mainstream choice unless you want something 
still clearly experimental, but by the first distro releases next year, 
it's very likely to be.  And because it's the clear successor to ext*, and 
has built-in multi-device and volume management flexibility of its own, 
come next year both lvm2 and md/raid will lose their place in the 
spotlight to a large degree.  Still, btrfs is not yet mature, and while 
tantalizing in its closeness, it remains an impractical immediate choice.  
Plus, there are likely to be some limitations to its device management 
abilities that aren't clear yet, at least not to those not intimately 
following its development, and significant questions remain about which 
filesystem-supported features will be boot-time-supported as well.

> Some of the stuff you've mentioned, such as btrfs, are totally new to me
> since I haven't kept up with the state of the art.  Some years ago we
> had EVMS, which was developed by IBM here in Austin.  I was a member of
> the Capital Area Central Texas UNIX Society (CACTUS) and we had the EVMS
> developers come and talk about it.  EVMS was great.  It was a layered
> technology with an API for a management client, so you could have a CLI,
> a GUI, a web-based management client, whatever, and all of them using
> the same API to the disk management layer.  It was an umbrella
> technology which covered several levels of Linux MD Raid plus LVM.  You
> could put fundamental storage elements together like tinker-toys and
> slice and dice them any way you wanted to.

The technology is truly wonderful.  Unfortunately, the fact that running / 
on it requires an initr* means it's significantly less useful than it 
might be.  Were that one requirement removed, the whole equation would be 
altered and I might still be running LVM instead of md/raid.  Not that it's 
much of a problem for those running a binary-based distribution already 
dependent on an initr*, but even in my early days on Linux, back on 
Mandrake, I was one of the outliers who learned kernel config 
customization and building within months of switching to Linux, and never 
looked back.  And once you're doing that, why have an initr* unless you're 
absolutely forced into it, which in turn puts a pretty strong negative on 
any technology that's going to force you into it...

> EVMS was started from an initrd, which set up the EVMS platform and then
> did a pivot_root to the EVMS-supported result.  I have our SOHO
> firewall/gateway and file server set up with it.  The root fs is on a
> Linux MD RAID-1 array, and what's on top of that I've rather forgotten
> but the result is a drive and partition layout that makes sense for the
> purpose of the box.  I set this up as a kind of proof-of-concept
> exercise because I was taken with EVMS and figured it would be useful,
> which it was.  The down side of this was that some time after that, IBM
> dropped support for the EVMS project and pulled their developers off of
> it.  I was impressed with the fact that IBM was actually paying people
> to develop open source stuff, but when they pulled the plug on it, EVMS
> became an orphaned project.  The firewall/gateway box runs Gentoo, so I
> proceeded with regular updates until one day the box stopped booting.
> The libraries, notably glibc, embedded in the initrd system got out of
> sync, version wise, with the rest of the system, and I was getting some
> severely strange errors early in the boot process followed by a kernel
> panic.  It took a bit of work to even _see_ the errors, since they were
> emitted in the boot process earlier than CLI scroll-back comes into
> play, and then there was further research to determine what I needed to
> do to fix the problem.  I ended up having to mount and then manually
> repair the initrd internal filesystem, manually reconstituting library
> symlinks as required.

That's interesting.  I thought most distributions used or recommended an 
alternative libc in their initr*, one that either was fully statically 
linked (so the libc didn't need to be included at all if the binaries were 
static), or at least was smaller and more fit for the purpose of a 
dedicated, limited-space ram-disk early-boot environment.

But if you're basing the initr* on glibc, which would certainly be easier 
and is, now that I think of it, probably the way gentoo handles it, yeah, 
I could see the glibc getting stale in the initrd.

... Because if it were an initramfs, it'd presumably be rebuilt and thus 
synced with the live system when the kernel was updated, since an 
initramfs is appended to the kernel binary file itself.  That seems to me 
to be a big benefit to the initramfs system, easier syncing with the built 
kernel and the main system, tho it would certainly take longer, and I 
expect for that reason that the initramfs rebuild could be short-
circuited, thus allowed to get stale, if desired.  But it should at least 
be easier to keep updated if desired, because it /is/ dynamically attached 
to the kernel binary at each kernel build.
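
For reference, the "appended to the kernel binary" bit is driven by a 
couple of kconfig options, and a standalone gzipped initramfs can be 
inspected with cpio (a sketch; the paths are illustrative):

  # kernel config: embed an initramfs into the kernel image at build time
  CONFIG_BLK_DEV_INITRD=y
  CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs"

  # listing the contents of an external gzipped initramfs image:
  zcat /boot/initramfs.img | cpio -itv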

But as I said I haven't really gotten into initr*s.  In fact, I don't even 
build busybox, etc, here, sticking it in package.provided.  If my working 
system gets so screwed up that I'd end up with busybox, I simply boot to 
the backup / partition instead.  That backup is actually a fully 
operational system snapshot taken at the point the backup was made, so it 
includes everything the system had at that point.  As such, unlike most 
people's limited recovery environments, I have a full-featured X, KDE, 
etc, all fully functional and working just as they were on my main system 
at the time of the backup.  So if it comes to it, I can simply switch to 
it as my main root, and go back to work, updating or building from a new 
stage-3 as necessary, at my leisure, not because I have to before I can 
get a working system again, because the backup /is/ a working system, as 
fully functional (if outdated) as it was on the day I took the backup.
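
For the curious, that arrangement amounts to roughly the following (the 
busybox version and mountpoint are examples only, not a recommendation):

  # /etc/portage/profile/package.provided -- tell portage busybox is
  # "already there" so it never gets built here:
  sys-apps/busybox-1.18.4

  # refreshing the bootable backup root, with the backup partition
  # mounted at /mnt/backuproot:
  rsync -aHAXx --delete / /mnt/backuproot/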

> I've built some Linux boxes for one of my clients - 1U servers and the
> like.  These folks are pretty straight-forward in their requirements,
> mainly asking that the boxes just work.  The really creative work goes
> into the medical research PHP application that lives on the boxes, and
> I've learned beaucoup stuff about OOP in PHP, AJAX, etc. from the main
> programming fellow on the project.  We've standardized on Ubuntu server
> edition on SuperMicro 4 drive 1U boxes.  These boxes generally come with
> RAID supported by a proprietary chipset or two, which never works quite
> right with Linux, so the first job I always do with these is to rip out
> the SATA cabling from the back plane and replace the on-board RAID with
> an LSI 3ware card.  These cards don't mess around - they JUST WORK.
> LSI/3ware has been very good about supporting Linux for their products.
> We generally set these up as RAID-5 boxes.  There's a web-based
> monitoring daemon for Linux that comes with the card, and it just works,
> too, although it takes a bit of dickering.  The RAID has no component in
> user-space (except for the monitor daemon) and shows up as a single SCSI
> drive, which can be partitioned and formatted just as if it were a
> single drive.  The 3ware cards are nice!  If you're using a redundant
> array such as RAID-1 or RAID-5, you can designate a drive as a hot-spare
> and if one of the drives in an array fails, the card will fail over to
> the hot-spare, rebuild the array, and the monitor daemon will send you
> an email telling you that it happened.  Slick!

> The LSI 3ware cards aren't cheap, but they're not unreasonable either,
> and I've never had one fail.  I'm thinking that my drive setup on my
> new desktop box will probably use RAID-1 supported by a 3ware card.

That sounds about as standard as a hardware RAID card can be.  I like my 
md/raid because I can literally use any standard SATA controller, but 
there are certainly tradeoffs.  I'll have to keep this in mind in case I 
ever do need to scale an installation to where hardware RAID is needed 
(say if I were layering md/kernel RAID-0 on top of hardware RAID-1 or 
RAID-5/6).

FWIW, experience again.  I don't believe software RAID-5/6 to be worth 
it.  md/raid 0 or 1 or 10, yes, but if I'm doing RAID-5 or 6, it'll be 
hardware, likely underneath a kernel level RAID-0 or 10, for a final 
RAID-50/60/500/600/510/610, with the left-most digit as the hardware 
implementation.  This is because software RAID-5/6 is simply slow.
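
In mdadm terms, that layering is just a stripe over the devices the 
hardware cards export (device names purely illustrative; sda and sdb here 
would each be a hardware RAID-5 presenting as a single disk):

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb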

Similar experience, Linux md/RAID-1 is /surprisingly/ efficient, MUCH more 
so than one might imagine, as the kernel I/O scheduler makes *VERY* good 
use of parallel scheduling.  In fact, in many cases it beats RAID-0 
performance, unless of course you need the additional space of RAID-0.  
Certainly, that has been my experience, at least.

> I'll
> probably use an ext3 filesystem on the partitions.  I know ext4 is under
> development, but I'm not sure if it would offer me any advantages.

FWIW, ext4 is no longer "under development", it's officially mature and 
ready for use, and has been for a number of kernels, now.  In fact, 
they're actually planning on killing separate ext2/3 driver support as the 
ext4 driver implements it anyway.

As such, I'd definitely recommend considering ext4, noting of course that 
you can specifically enable/disable various ext4 features at mkfs and/or 
mount time, if desired.  Thus there's really no reason to stick with ext3 
now.  Go with ext4, and if your situation warrants, disable one or more of 
the ext4 features, making it more ext3-like.
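
As an illustration of that kind of trimming at mkfs time (feature names 
as mke2fs knows them; pick your own set):

  # create an ext4 filesystem without extents or huge_file support,
  # bringing the on-disk format closer to ext3:
  mkfs.ext4 -O ^extent,^huge_file /dev/sdXn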

OTOH, I'd specifically recommend evaluating the journal options, 
regardless of ext3/ext4 choice.  ext3 defaulted to "ordered" for years, 
then for a few kernels, switched to "writeback" by default, then just 
recently (2.6.38?) switched back to "ordered".  AFAIK, ext4 has always 
defaulted to the faster but less corner-case crash safe "writeback".  The 
third and most conservative option is of course "journal" (journal the 
data too, as opposed to metadata only, with the other two).

Having lived thru the reiserfs "writeback" era and been OH so glad when 
they implemented and defaulted to "ordered" for it, I don't believe I'll 
/ever/ trust anything beyond what I'd trust on a RAID-0 without backups, 
to "writeback" again, regardless of /what/ the filesystem designers say or 
what the default is.

And, I know of at least one person who experienced data integrity issues 
with writeback on ext3 when the kernel was defaulting to that; the issues 
immediately disappeared when he switched back to ordered.

Bottom line, yeah I believe ext4 is safe, but ext3 or ext4, unless you 
really do /not/ care about your data integrity or are going to the extreme 
and already have data=journal, DEFINITELY specify data=ordered, both in 
your mount options, and by setting the defaults via tune2fs.
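
Concretely (device and mountpoint illustrative):

  # /etc/fstab
  /dev/md3   /home   ext4   noatime,data=ordered   0 2

  # or bake it into the filesystem's default mount options:
  tune2fs -o journal_data_ordered /dev/md3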

If there's one bit of advice in all these posts that I'd have you take 
away, it's that.  It's NOT worth the integrity of your data!  Use 
data=ordered unless you really do NOT care, to the same degree that you 
don't put data you care about on RAID-0, without at least ensuring that 
it's backed up elsewhere.  I've seen people lose data needlessly over 
this; I've lost it on reiserfs myself before they implemented data=ordered 
by default, and truly, just as with RAID-0, data=writeback is NOT worth 
whatever performance increase it might bring, unless you really do /not/ 
care about the data integrity on that filesystem!

> I used reiserfs on some of the partitions on my servers, and on some
> partitions on my desktop box too.  Big mistake!  There was a bug in
> reiserfs support in the current kernel when I built the first server and
> the kernel crapped all over the hard drive one night and the box
> crashed!

IDR the kernel version but there's one that's specifically warned about.  
That must have been the one...

But FWIW, I've had no problems, even thru a period of bad RAM resulting in 
kernel crashes and the like, since the introduction of data=ordered.  
Given the time mentioned above when ext3 defaulted to data=writeback, I'd 
even venture to say that for that period and on those kernels, reiserfs 
may well have been safer than ext3!

> I was able to fix it and salvage customer data, but it was
> pretty scary.  Hans Reiser is in prison for life for murder, and there's
> like one person on the Linux kernel development group who maintains
> reiserfs.  Ext3/4, on the other hand, is solid, maybe not quite as fast,
> but supported by a dedicated group of developers.

For many years, the kernel person doing most of the reiserfs maintenance 
and the one who introduced the previously mentioned data=ordered mode by 
default and data=journal mode as an option, was Chris Mason.  I believe he 
was employed by SuSE, for years the biggest distribution to default to 
reiserfs, even before it was in mainline, I believe.  I'm not sure if he's 
still the official reiserfs maintainer due to his current duties, but he 
DEFINITELY groks the filesystem.  Those current duties?  He's employed by 
Oracle now, and is the lead developer of btrfs.

Now reiserfs does have its warts.  It's not particularly fast any more, 
and has performance issues on multi-core systems due to its design around 
the BKL (big kernel lock, deprecated for years with users converting to 
other locking methods, no current in-tree users as of 2.6.38, and set to 
be fully removed with 2.6.39).  Tho reiserfs itself was converted to other 
locking a few kernels ago, the single-access-at-a-time assumption and 
bottleneck live on.  However, it is and has been quite stable for many 
years now, since the intro of data=ordered, to the point that as mentioned 
above, I believe it was safer than ext3 during the time ext3 defaulted to 
writeback, because reiserfs still had the saner default of data=ordered.

But to each his own.  I'd still argue that a data=writeback default is 
needlessly risking data, however, and far more dangerous regardless of 
whether it's ext3, ext4, or reiserfs, than any of the three themselves 
are, otherwise.

> So I'm thinking that this new box will have a couple of
> professional-grade (the 5-year warranty type) 1.5 or 2 TB drives and a
> 3ware card.  I still haven't settled on the mainboard, which will have
> to support the 3ware card, a couple of sound cards and a legacy Adaptec
> SCSI card for our ancient but extremely well-built HP scanner.  The
> chipset will have to be well supported in Linux.  I'll probably build
> the box myself once I decide on the hardware.

FWIW, my RAID is 4x SATA 300 gig Seagates, with 5 year warranties that I 
expect have now either expired or soon will.  Most of the system is RAID-1 
across all four, however, and I'm backed up to external as well, altho 
I'll admit that backup's a bit dated now.  I bought them after having a 
string of bad luck 
with ~1 year failures on both Maxtor (which had previously been quite 
dependable for me) and Western Digital (which I had read bad things about 
but thought I'd try after Maxtor, only to have the same ~1 year issues).  
Obviously, they've long outlasted those, so I've been satisfied.

As I said, I'll keep the 3ware RAID cards in mind.

Mainboard:  If a server board fits your budget, I'd highly recommend 
getting a Tyan board that's Linux certified.  The one I'm running in my 
main machine is now 8 years old, /long/ out of warranty and beyond further 
BIOS updates, but still running solid.  It was a $400 board back then, 
reasonable for a dual-socket Opteron.  Not only did it come with Linux 
certifications for various distributions, but they had Linux specific 
support.  Further, how many boards do /you/ know of that have a pre-
customized sensors.conf file available for download? =:^)  And when 
the dual-cores came out, a BIOS update was made available that supported 
them.  As a result, while it was $400, that then-leading-edge dual-socket 
Opteron board, no longer leading edge by any means, eight years later 
still forms the core of an acceptably decent system: dual dual-core 
Opteron 290s @ 2.8 GHz (topping out the sockets); currently 6 gig RAM 
(3x2-gig, as I had one stick die that I've not replaced, but 8 sockets so 
I could run 16 gig if I wanted); 4x SATA drives, only SATA-150, but 
they're RAIDed; a Radeon hd4650 AGP (no PCI-E, tho it does have PCI-X); 
etc.  No PCI-E, limited to SATA-150, and no hardware virtualization 
instruction support on the CPUs, so it's definitely dated, but after all, 
it's an 8-year-old system!

It's likely to be a decade old by the time I actually upgrade it.  Yes, 
it's definitely a server-class board and the $400 I paid reflected that, 
but 8 years and shooting for 10!  And with the official Linux support 
including a custom sensors.conf.  I'm satisfied that I got my money's 
worth.

But I don't believe all Tyan's boards are as completely Linux supported as 
that one was, so do your research.

> I'm gonna apply the KISS principle to the OS design for this, and stay
> away from bleeding edge software technologies, although, especially
> after reading your essay, it's very tempting to try some of this stuff
> out to see what the _really_ smart people are coming up with!  I'm
> getting off of the Linux state-of-the-art train for a while and going
> walking in the woods.  The kernel will have to be low-latency since I
> may use the box for recording work with Jack and Ardour2, and almost
> certainly for audio editing, and maybe video editing at some point.
> That's where my energy is going to go for this one.

Well, save btrfs for a project a couple years down the line, then.  But 
certainly, investigate md/raid vs lvm2 and make your choice, keeping in 
mind that while nowadays their features overlap, md/raid doesn't require an 
initr* to run / on it, while lvm2 will likely be pulled in as a dependency 
for your X desktop, at least kde/gnome/xfce, by later this year, whether 
you actually use its lvm features or not.
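
Side by side, the two options look roughly like this (a sketch; devices, 
sizes and names are illustrative only):

  # md/raid: a RAID-1 pair, usable directly as / with no initr*
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

  # lvm2: its own built-in mirroring, one logical volume per filesystem
  pvcreate /dev/sda3 /dev/sdb3
  vgcreate vg0 /dev/sda3 /dev/sdb3
  lvcreate -m1 --mirrorlog core -L 20G -n root vg0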

And do consider ext4, but regardless of ext3/4, be /sure/ you either 
choose data=ordered or can give a good reason why you didn't.  (Low-
latency writing just might be a reasonable excuse for data=writeback, but 
be sure you keep backed up if you do!)  Because /that/ one may well save 
your data, someday!

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


