On 10/4/2011, at 2:50pm, Mark Knecht wrote:

> ... loses 1 drive and
> then, while in the process of fixing the RAID, loses a second drive.
> Most of us (myself included) buy identical drives all at the same time
> from the same vendor. This means all the drives were likely from the
> same manufacturing batch and, if they are drives that will fail at all
> then the group will likely experience multiple drive failures.

It doesn't make it *likely* that they'll fail simultaneously. It makes it less unlikely.

> The
> underlying idea of RAID is that the drives are not likely to fail at
> the same time giving us time to fix the array. However, if /dev/sda
> fails the chances of /dev/sdb failing is higher if they were built at
> the same time in the same plant.

^ This is a more accurate synopsis.

> Reading the mdadm list for the last couple of years it seems that many
> folks running data centers intentionally buy drives from multiple
> manufacturers, or drives of different sizes from the same manufacturer,
> hoping to lower the chances of multiple failures at the same time.

I've found it sometimes quite inconvenient to do this, and whilst I consider it good practice, I get the impression a lot of people, perhaps the majority, don't bother (or don't even know they should). I kinda think it's a nice thing to do but not essential - I don't know that the risk of simultaneous failure is increased that significantly. Many high-end servers are sold off-the-shelf by their manufacturers with consecutively-serialed drives in the RAID array - I don't think this is considered risky enough for Dell or IBM to offer non-matching drives as a purchasing option.
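
If you want to check whether your existing array members look like same-batch drives, a rough way is to compare the model, serial number and firmware revision that SMART reports. A minimal sketch, assuming smartmontools is installed and that /dev/sda-/dev/sdc are your array members (example names only):

```shell
# Print identity info for each (assumed) array member, so you can spot
# near-consecutive serial numbers or identical firmware revisions.
drive_identity() {
    smartctl -i "$1" | grep -E 'Device Model|Serial Number|Firmware Version'
}

if command -v smartctl >/dev/null 2>&1; then
    for dev in /dev/sda /dev/sdb /dev/sdc; do
        echo "== $dev =="
        drive_identity "$dev" || true    # keep going if a device is absent
    done
fi
```

Consecutive or near-consecutive serials are a decent hint that the drives came out of the same batch.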

One also has to wonder what the performance implications might be of having three drives in an array with slightly different rotational speeds, spin-up and seek times.

Ultimately, we shouldn't be fully dependent upon RAID for the integrity of our data, anyway. "RAID is not a backup" is the famous saying.

> As for hardware RAID the risk I hear about there is that if the
> controller itself fails then you need an identical backup controller
> or you risk the possibility that you won't be able to recover
> anything. I don't know how true that is or whether it's just FUD.

Generally you just need a similar one.

In the case of 3ware you can connect your drives to any other 3ware controller and it will recognise the array descriptors written at the start of the drive.

I haven't swapped drives between the PERCs (rebadged Adaptec, I think) of Dell 2650s & 2850s, but these machines are now so cheap on the secondhand market anyway, you can afford to have a spare identical one.

I think you're over-estimating the *risk* of being unable to find a RAID controller of the same model. But certainly if you buy a good PCIe SATA card on the secondhand market it will not be cheap to replace in the event of failure, and a bargain may not come up on eBay immediately. So I think you'll certainly be able to recover your data; you may just suffer the inconvenience of having to wait for a cheap enough card, or of spending a lot of money buying an obsolete card in a hurry. Ideally, you have a spare in advance or buy hardware RAID under a 5-year warranty (in which case it's replaced next-day by the manufacturer).

This is really a matter of horses-for-courses. Most people (including myself) don't really "need" hardware RAID. Hardware RAID is much more expensive, but I do consider it "better", if only because you can hot-swap. That is not assured with cheap SATA controllers.

OTOH Linux's software RAID does seem to be just as fast as hardware RAID (in my experience, at least), and has some cool features.
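
One such feature: mdadm writes its superblock onto the member drives themselves, so - much as with the 3ware array descriptors - the array can be reassembled on any Linux box, no matching controller required. A rough sketch (device names are examples, and the parsing helper is purely illustrative):

```shell
# Show which md arrays are currently active; /proc/mdstat is the kernel's
# summary of software-RAID state.
parse_mdstat() {
    # Array lines look like: "md0 : active raid1 sdb1[1] sda1[0]"
    grep -E '^md[0-9]+ :' "$1" | cut -d' ' -f1
}

if [ -r /proc/mdstat ]; then
    parse_mdstat /proc/mdstat || true    # no arrays is not an error here
fi

# On a replacement machine you'd reassemble with, e.g.:
#   mdadm --examine /dev/sda1     # inspect the superblock on one member
#   mdadm --assemble --scan       # rebuild arrays found by scanning members
```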

Stroller.