On Thu, Oct 6, 2011 at 6:14 PM, Michael Mol <mikemol@×××××.com> wrote:
>
> On Oct 6, 2011 9:06 PM, "Mark Knecht" <markknecht@×××××.com> wrote:
>>
>> On Thu, Oct 6, 2011 at 1:28 PM, Michael Mol <mikemol@×××××.com> wrote:
>> > On Thu, Oct 6, 2011 at 4:21 PM, Mark Knecht <markknecht@×××××.com>
>> > wrote:
>> >> On Thu, Oct 6, 2011 at 1:03 PM, Paul Hartman
>> >> My worry was that if the mdraid daemon saw one drive gone - either
>> >> when starting to spin down or when one spins up slowly - and if mdraid
>> >> didn't understand that all this stuff was taking place intentionally
>> >> then it might mark that drive as having failed.
>> >
>> > Does mdraid even have an awareness of timeouts, or does it leave that
>> > to lower drivers? I think the latter condition is more likely.
>> >
>> > I suspect, though, that if your disk fails to spin up reasonably
>> > quickly, it's already failed.
>> >
>>
>> In general I agree. However, drives that are designed for RAID have a
>> feature known as Time-Limited Error Recovery (TLER) which supposedly
>> guarantees that they'll get the drive back to responding fast enough
>> to not have it marked as failed in the RAID array:
>>
>> http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
>>
>> When I built my first RAID I bought some WD 1TB Green drives, built
>> the RAID and immediately had drives failing because they didn't have
>> this sort of feature. I replaced them with RAID Edition drives that
>> have the TLER feature and have never had a problem since. (Well, I
>> actually bought all new drives and kept the six 1TB drives, which I've
>> mostly used up for other things like external eSATA backup drives,
>> etc...)
>>
>> Anyway, I'm possibly over-sensitized to this sort of timing problem
>> specifically in a RAID, which is why I asked the question of Paul in
>> the first place.
>
> My first RAID was with three Seagate economy 1.5TB drives in RAID 5, shortly
> followed by three 1TB WD Black drives in RAID 0. I never had the problems
> you describe, though I rebuilt the RAID5 several times as I was figuring
> things out. (The 3TB RAID0 was for some heavy-duty scratch space.)
|
Yeah, I understand. This sort of problem, I found out after joining
the linux-raid list, is _very_ dependent on the _exact_ model of
drives chosen to build the RAID. I've had exactly ZERO problems with
any of the 2-drive RAID0's, 3- and 5-drive RAID1's, and 5-drive RAID6's
that I built using WD RAID Edition drives. They've run for 18 months and
nothing has ever gone offline or needed any attention of any type.
They just work.
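
For anyone wanting to check this on their own drives: TLER goes by the
generic name SCT Error Recovery Control (ERC), and smartmontools can
usually query and set it. A rough sketch - the device name is just an
example, not every drive supports the command, and many desktop drives
forget the setting across power cycles:

```shell
#!/bin/sh
# Example device node -- adjust for your system.
DEV=/dev/sda

# Guarded so the script is a no-op on machines without smartctl or the device.
if command -v smartctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
    # Query the drive's SCT Error Recovery Control settings. RAID Edition
    # drives report a short fixed recovery time; desktop drives often
    # report "Disabled" or reject the command entirely.
    smartctl -l scterc "$DEV"

    # Cap read and write error recovery at 7 seconds each. The values
    # are in units of 100 ms, so 70 means 70 * 100 ms = 7 seconds --
    # comfortably under the kernel's default 30-second command timeout.
    smartctl -l scterc,70,70 "$DEV"
fi
```

On drives that honor the command but don't persist it, people typically
re-run that second line from a boot script or udev rule.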
|
On the other hand, all the RAID0's & RAID1's that I built using the WD
1TB _Green_ drives simply wouldn't work reliably. They'd go offline
every day or two, and I'm talking in the very same computer - no other
differences hardware-wise. I've heard of people using the same drive
model (but possibly different firmware) having similar problems until
they got a WD app to twiddle with the firmware, and others that never
got the drives working well at all. The drives are perfectly fine
non-RAID.
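
One workaround I've seen suggested on linux-raid for drives without
working ERC is to go the other way and raise the kernel's per-device
command timeout past the drive's worst-case internal recovery time, so
the link doesn't get reset (and the member dropped from the array)
mid-recovery. A sketch, again with an example device name:

```shell
#!/bin/sh
# Example device -- adjust for your system.
DEV=sda
T=/sys/block/$DEV/device/timeout

# Guarded so this is a no-op where the sysfs file doesn't exist.
if [ -w "$T" ]; then
    # The default SCSI command timeout is typically 30 seconds, shorter
    # than the minutes a desktop drive can spend in deep error recovery.
    cat "$T"

    # Raise it to 180 seconds so the kernel waits out the drive's
    # recovery instead of resetting the link and failing the member.
    echo 180 > "$T"
fi
```

This setting isn't persistent either, so it generally lives in a boot
script or udev rule alongside the array's other tuning.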
|
I appreciate the inputs. It's an interesting subject, and hearing other
people's experiences helps put some shape around the space.

Cheers,
Mark