On 2/2/06, Ian P. Christian <pookey@×××××××××.uk> wrote:

> I would like to see an end to the 'my software RAID could have your hardware
> RAID' and 'my hardware RAID could eat software RAID for breakfast' argument -
> and I fear the only way to do this is with figures, not stories.

I have been using software RAID for about 4 years on many systems and
recommend it to anyone.

I use it in two scenarios:

1. On production systems, raid1 is used. It offers great flexibility because:
 * I can use it at the _partition_ level, not the drive level
 * I can easily swap drives (small differences in size do not matter,
which is good because I can use drives from different vendors to
minimize the risk of a double failure)
 * I get strong protection against disk failure
 * there is no (0%) impact on performance if done correctly
 * I can easily swap motherboards, servers - whatever - there is no
fricking problem with vendor lock-in. What's more, I can easily move
one drive to another system to copy it (see below)
 * the underlying partitions in Linux raid1 hold normal plain filesystems,
which can be mounted read-only _anywhere_
 * I can use it as a migration path (also for complex system
reinstalls) using the above tricks, keeping the second raid1 drive as
a backup during the operation
 * I can do dangerous operations (reiserfsck --rebuild-tree) in a safe way
(remove one disk/partition from the raid, then run the operation only on
the one still connected; if it works, add the second back and the
automatic rebuild will sync it)
 * I can do all of it remotely, with no need for BIOS-type configuration

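The "dangerous operation on one half of the mirror" trick can be sketched
with mdadm roughly like this (a sketch only - the device names /dev/md0 and
/dev/sdb1 and the mount point are examples, adjust them to your layout):

```shell
# Sketch, assuming /dev/md0 is a raid1 of /dev/sda1 + /dev/sdb1.
# 1. Drop one member out of the mirror so it keeps a pristine copy:
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# 2. Run the risky operation on the now-degraded array:
umount /dev/md0
reiserfsck --rebuild-tree /dev/md0
mount /dev/md0 /mnt/data

# 3. If all went well, re-add the second member; md resyncs automatically:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # watch the rebuild progress
```

If the rebuild-tree goes wrong, the removed member still holds the old data
and can be used to recover.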

2. On systems which require high performance. A real-world example:
 a. two WD 320GB disks (JB series, IMO one of the best buys: quiet,
cool, extremely fast, 3-year warranty)
 b. two 100GB partitions (one on each disk) configured as one big
raid0 (200GB) volume
 c. two 200GB partitions (one on each disk) configured as one big raid1
(200GB) volume
 d. the remaining 40GB is out of the scope of this mail
 e. every day, partition b. is copied to partition c. to provide a backup
(raid0 is twice as prone to disk failure, because if either disk fails,
the whole raidset fails)
 f. everything is kept on partition b. for performance; only
big media files and backups are on partition c.
 g. and now the numbers: constant sustained transfer from partition b
on such a server is 120MB/s (megabytes). A CD ISO image is loaded into
memory in under 6 seconds (~700MB at 120MB/s is about 5.8s).
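The layout in points b., c. and e. could be created roughly like this (a
sketch under assumptions: /dev/sda and /dev/sdb with these partition numbers
are illustrative, the filesystem and the cron line are examples):

```shell
# Sketch only - partition numbers and mount points are examples.
# b. fast raid0 from one 100GB partition on each disk:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# c. redundant raid1 from one 200GB partition on each disk:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

mkfs.reiserfs /dev/md0
mkfs.reiserfs /dev/md1
mount /dev/md0 /fast     # everything lives here (point f.)
mount /dev/md1 /safe     # media files and backups (point f.)

# e. daily copy of the fast (fragile) raid0 onto the raid1,
#    e.g. a line in /etc/crontab:
#    30 3 * * * root rsync -a --delete /fast/ /safe/backup/
```

The daily rsync is what gives the "up to 1 day" window against accidental
deletion mentioned below.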
Please note that such a configuration gives very good
performance (it outperforms most raid5 solutions, including hardware
ones, even those based on 15000rpm U320 SCSI disks), and gives me
reliability (two disk drives in raid1) plus up to 1 day of protection
against data removal (raid1 itself does not protect from a user
deleting data; the daily copy does).

Such solutions are cheap, effective, easy to maintain, fast and safe.

That being said, the exceptions to such usage are:
* more than 1TB needed
* ultra performance needed regardless of cost: in such situations
high-priced raid solutions (1+0) are preferred (but we're talking
here about a small fortune, I mean EMC etc., not the 3ware SMB
solutions)
* other ;)

Taking the above exceptions into account, I consider any standard raid
controller (including the SCSI ones in typical dell/hp/ibm servers) to be
a worse solution than what I described. That is based on data from real
ibm servers (2005 models) with raid5 and 10k SCSI drives (70MB/s) that I
checked last summer. I consider add-on cards for ide/sata (3ware, Sil,
etc.) a totally wrong solution (vendor lock-in, lower performance,
single point of failure, driver problems, more maintenance needed,
BIOS configuration often needed).

--
radoslaw.

--
gentoo-server@g.o mailing list