On Fri, 2011-03-25 at 22:59 +0000, Duncan wrote:

> Simply my experience-educated opinion. YMMV, as they say. And of course,
> it applies to new installations more than your current situation, but as
> you mentioned that you are planning such a new installation...

Duncan, thanks for your very thorough discussion of current
disk/RAID/filesystem/etc. technologies. Wow! I'm going to have to read
it through several times to absorb it. I've gotten to the point at
which I'm more involved with what I can _do_ with the Linux boxes I set
up than with what I can do that's cool and cutting-edge with Linux in
setting them up, but playing with bleeding-edge stuff has always been
tempting. Some of the stuff you've mentioned, such as btrfs, is totally
new to me, since I haven't kept up with the state of the art. Some
years ago we had EVMS, which was developed by IBM here in Austin. I was
a member of the Capital Area Central Texas UNIX Society (CACTUS), and we
had the EVMS developers come and talk about it. EVMS was great. It was
a layered technology with an API for a management client, so you could
have a CLI, a GUI, a web-based management client, whatever, all of them
using the same API to the disk management layer. It was an umbrella
technology which covered several levels of Linux MD RAID plus LVM. You
could put fundamental storage elements together like Tinkertoys and
slice and dice them any way you wanted to.

EVMS was started from an initrd, which set up the EVMS platform and then
did a pivot_root to the EVMS-supported result. I have our SOHO
firewall/gateway and file server set up with it. The root fs is on a
Linux MD RAID-1 array; what's on top of that I've rather forgotten, but
the result is a drive and partition layout that makes sense for the
purpose of the box. I set this up as a kind of proof-of-concept
exercise because I was taken with EVMS and figured it would be useful,
which it was. The downside was that some time after that, IBM dropped
support for the EVMS project and pulled its developers off of it. I was
impressed that IBM was actually paying people to develop open source
stuff, but when they pulled the plug, EVMS became an orphaned project.
The firewall/gateway box runs Gentoo, so I proceeded with regular
updates until one day the box stopped booting. The libraries, notably
glibc, embedded in the initrd system had gotten out of sync,
version-wise, with the rest of the system, and I was getting some
severely strange errors early in the boot process, followed by a kernel
panic. It took a bit of work even to _see_ the errors, since they were
emitted earlier in the boot process than console scroll-back comes into
play, and then there was further research to determine what I needed to
do to fix the problem. I ended up having to mount and then manually
repair the initrd's internal filesystem, reconstituting library
symlinks as required.
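The repair itself is mostly a matter of re-pointing symlinks at the
libraries actually present in the image. A minimal sketch of that step
(library versions and paths here are made up for illustration; on the
real box the initrd image would first be loop-mounted read-write):

```shell
# Stand-in for the initrd's /lib after loop-mounting the image;
# the glibc version numbers are illustrative only.
root=$(mktemp -d)
mkdir -p "$root/lib"
touch "$root/lib/libc-2.11.1.so"            # the library the image actually contains
ln -s libc-2.3.6.so "$root/lib/libc.so.6"   # stale link to a version no longer there

# Reconstitute the symlink so the loader finds the real library.
ln -sf libc-2.11.1.so "$root/lib/libc.so.6"
readlink "$root/lib/libc.so.6"              # -> libc-2.11.1.so
```

With the links consistent again, the image gets re-gzipped and put back
in /boot (keeping a backup of the broken one, of course).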

I've built some Linux boxes for one of my clients - 1U servers and the
like. These folks are pretty straightforward in their requirements,
mainly asking that the boxes just work. The really creative work goes
into the medical research PHP application that lives on the boxes, and
I've learned beaucoup stuff about OOP in PHP, AJAX, etc. from the main
programming fellow on the project. We've standardized on Ubuntu Server
Edition on SuperMicro 4-drive 1U boxes. These boxes generally come with
RAID supported by a proprietary chipset or two, which never works quite
right with Linux, so the first job I always do on these is to rip out
the SATA cabling from the backplane and replace the on-board RAID with
an LSI 3ware card. These cards don't mess around - they JUST WORK.
LSI/3ware has been very good about supporting Linux for its products.
We generally set these up as RAID-5 boxes. There's a web-based
monitoring daemon for Linux that comes with the card, and it just works
too, although it takes a bit of dickering. The RAID has no component in
user space (except for the monitoring daemon) and shows up as a single
SCSI drive, which can be partitioned and formatted just as if it were a
single drive. The 3ware cards are nice! If you're using a redundant
array such as RAID-1 or RAID-5, you can designate a drive as a hot
spare, and if one of the drives in an array fails, the card will fail
over to the hot spare, rebuild the array, and the monitoring daemon will
send you an email telling you it happened. Slick!
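For the record, the cards also come with a command-line tool, tw_cli,
which covers the same ground as the web daemon. A rough sketch of the
hot-spare setup from the shell (controller, unit, and port numbers are
illustrative, and the exact syntax varies by card and firmware - check
the tw_cli documentation for your model):

```shell
tw_cli /c0 show                   # controller, unit, and drive status
tw_cli /c0/u0 show                # details for the RAID-5 unit
tw_cli /c0 add type=spare disk=3  # designate the drive on port 3 as a hot spare
```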

The LSI 3ware cards aren't cheap, but they're not unreasonable either,
and I've never had one fail. I'm thinking that the drive setup on my
new desktop box will probably use RAID-1 supported by a 3ware card.
I'll probably use an ext3 filesystem on the partitions. I know ext4 is
under development, but I'm not sure it would offer me any advantages.
I used reiserfs on some of the partitions on my servers, and on some
partitions on my desktop box too. Big mistake! There was a bug in
reiserfs support in the current kernel when I built the first server,
and one night the kernel crapped all over the hard drive and the box
crashed! I was able to fix it and salvage customer data, but it was
pretty scary. Hans Reiser is in prison for murder, and there's like one
person in the Linux kernel development community who maintains
reiserfs. Ext3/4, on the other hand, is solid - maybe not quite as
fast, but supported by a dedicated group of developers.

So I'm thinking that this new box will have a couple of
professional-grade (the 5-year-warranty type) 1.5 or 2 TB drives and a
3ware card. I still haven't settled on the mainboard, which will have
to support the 3ware card, a couple of sound cards, and a legacy Adaptec
SCSI card for our ancient but extremely well-built HP scanner. The
chipset will have to be well supported in Linux. I'll probably build
the box myself once I decide on the hardware.

I'm gonna apply the KISS principle to the OS design for this one and
stay away from bleeding-edge software technologies, although,
especially after reading your essay, it's very tempting to try some of
this stuff out to see what the _really_ smart people are coming up
with! I'm getting off of the Linux state-of-the-art train for a while
and going walking in the woods. The kernel will have to be low-latency,
since I may use the box for recording work with JACK and Ardour2, and
almost certainly for audio editing, and maybe video editing at some
point. That's where my energy is going to go for this one.
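As an aside, a few quick checks I'd run on the finished box before
trusting it for audio work - illustrative only, since paths and group
names vary by distro:

```shell
# Which preemption model was the running kernel built with?
# (CONFIG_PREEMPT or CONFIG_PREEMPT_RT is what JACK likes to see.)
grep -E '^CONFIG_PREEMPT' "/boot/config-$(uname -r)" 2>/dev/null || true

# Max realtime priority the current shell may request (0 = none).
ulimit -r

# Many distros grant realtime scheduling to an "audio" group via
# /etc/security/limits.conf (or limits.d), with lines like:
#   @audio - rtprio 95
#   @audio - memlock unlimited
grep -hs rtprio /etc/security/limits.conf /etc/security/limits.d/* || true
```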

--
Lindsay Haisley       |"Windows .....
FMP Computer Services | life's too short!"
512-259-1190          |
http://www.fmp.com    | - Brad Johnston