On Wed, 2006-02-01 at 15:19 +0100, Ramon van Alteren wrote:
> >I'm running a software raid spanned over 3 controllers ... I'm not aware
> >of a hardware-based solution that even comes close to that flexibility.
> How many disks can you put in ?
> AFAIK most motherboard interfaces support 4 SATA disks
plop in another controller and it's +4 or more :-)
The onboard controllers are often cheap (SiI SATA) or broken (VIA
KT133), so usually you'll use a "known-good" PCI/PCIe controller to
avoid problems.

Right now the case (miditower) is limiting, but if I wanted I could get
a Promise TX4 for ~60 Eur and add 4 SATA disks.
|
The really cool thing with software raid is that you can just stuff any
block device in and it'll work - mix SCSI, IDE, SATA, md - you can stack
raid1 over raid0 if you want.
Just label all disks so that you know which one just died ...
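For example, stacking raid1 over raid0 is just two mdadm calls on top of
each other - a rough sketch (device names are made up, adjust to taste,
and obviously this needs root and empty disks):

```shell
# Two raid0 stripes out of four disks (example devices):
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

# ... then mirror the two stripes - md devices are block devices too:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1

# See which physical disk sits where (helps with the labeling):
mdadm --detail /dev/md2
cat /proc/mdstat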

> >But if you need a disk array with maximum performance I'd still suggest
> >a hardware-based solution.
> Me too :-)
> But considering your comments, why ?

Looking at benchmarks I can see ~500 MB/s from good controllers - it'll be
hard to reach that level in software without dedicating a whole CPU to it.
SAN boxen have wonderful management features and belong in the "just
works(tm)" category - each linux software raid has to be consciously put
together.

A 4U box which plugs into a SCSI or FC port costs ~5k Eur; doing it
yourself costs ~1.5k, but then you don't get a fully tested solution
that is known to work. You'll have to see what happens if you yank out a
disk or two, you'll have to configure monitoring, etc. etc.
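The monitoring part at least is cheap to bolt on - mdadm has a monitor
mode that mails you on Fail/DegradedArray events. A minimal sketch (the
mail address is an example, and this runs as root):

```shell
# Either put "MAILADDR you@example.com" into /etc/mdadm.conf,
# or pass the address on the command line:
mdadm --monitor --scan --daemonise --mail you@example.com

# Quick manual health check:
cat /proc/mdstat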

Now if you are a company you'll pay a small premium just so that you
don't have to care about any technical details ... time is money :-)
-- |
Stand still, and let the rest of the universe move |