On Thursday 10 May 2012 19:51:14 Mark Knecht wrote:
> On Thu, May 10, 2012 at 11:13 AM, Norman Invasion
> <invasivenorman@×××××.com> wrote:
> > On 10 May 2012 14:01, Mark Knecht <markknecht@×××××.com> wrote:
> >> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
> >> <invasivenorman@×××××.com> wrote:
> >>> On 9 May 2012 04:47, Dale <rdalek1967@×××××.com> wrote:
> >>>> Hi,
> >>>>
> >>>> As some know, I'm planning to buy me a LARGE hard drive to put all my
> >>>> videos on, eventually. The prices are coming down now. I keep seeing
> >>>> these "green" drives that are made by just about every company
> >>>> nowadays. When comparing them to a non-"green" drive, do they hold up
> >>>> as well? Are they as dependable as a plain drive? I guess they are
> >>>> more efficient, and I get that, but do they break quicker, more often,
> >>>> or is there no difference?
> >>>>
> >>>> I have noticed that they tend to spin slower and are cheaper. That
> >>>> much I have figured out. Other than that, I can't see any other
> >>>> difference. Data speeds seem to be about the same.
> >>>
> >>> They have an ugly tendency to nod off at 6-second intervals.
> >>> This runs up "193 Load_Cycle_Count" unacceptably: as many
> >>> as a few hundred thousand in a year, & a million cycles is
> >>> getting close to the lifetime limit on most hard drives. I end
> >>> up running some iteration of
> >>> # hdparm -B 255 /dev/sda
> >>> every boot.
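
[One way to avoid re-running that by hand at every boot is a small start
script. This is only a sketch: it assumes an OpenRC system with
/etc/local.d enabled, and the filename is made up. -B 255 disables APM
outright; some drives reject 255, so falling back to -B 254 (the least
power-saving APM level, which still stops the aggressive parking) is a
reasonable hedge.]

```shell
# /etc/local.d/disable-head-parking.start   (hypothetical filename)
# Runs once at boot under OpenRC. Disable APM head parking on all
# SATA disks; fall back to -B 254 for drives that reject -B 255.
for disk in /dev/sd[a-z]; do
    [ -b "$disk" ] || continue
    hdparm -B 255 "$disk" 2>/dev/null || hdparm -B 254 "$disk"
done
```

[WD Green drives that ignore APM settings entirely store the park timer
in firmware instead; those need the vendor idle3 timer utilities rather
than hdparm.]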
> >>
> >> Very true about the 193 count. Here's a drive in a system that was
> >> built in Jan. 2010, so it's a bit over 2 years old at this point. It's
> >> on 24/7 and not rebooted except for major updates, etc. My tests
> >> say the drive spins down and starts back up every 2 minutes and has
> >> been doing so for about 28 months. IIRC the 193 spec on this drive was
> >> something like 300000 max, with the drive currently clocking in at
> >> 700488. I don't see any evidence that it's going to fail, but I am
> >> trying to make sure it's backed up often. Given that it's gone >2x the
> >> spec at this point, I will swap the drive out in the early summer no
> >> matter what. This week I'll be visiting where the machine is, so I'm
> >> going to put a backup drive in the box to get ready.
> >
> > Yes, I just learned about this problem in 2009 or so, &
> > checked on my FreeBSD laptop, which turned out to be
> > at >400000. It only made it another month or so before
> > having unrecoverable errors.
> >
> > Now, I can't conclusively demonstrate that the 193
> > Load_Cycle_Count was somehow causative, but I
> > gots my suspicions. Many of 'em highly suspectable.
>
> It's part of the 'Wear Out Failure' part of the Bathtub Curve posted
> in the last few days. That said, some Toyotas go 100K miles, and
> others go 500K miles. Same car, same spec, same production line,
> different owners, different roads, different climates, etc.
>
> It's not possible to know absolutely when any drive will fail. I
> suspect that the 300K spec is just that, a spec. They'd replace the
> drive if it failed at 299,999 and wouldn't replace it at 300,001. That
> said, they don't want to spec things too tightly, and I doubt many
> people make a purchasing decision on a spec like this, so the vast
> majority of drives will most likely do far more than 300K.
>
> At 2 minutes per count on that specific WD Green drive, if a home
> machine is turned on for, say, 5 hours a day (6PM to 11PM), then a
> 300K count equates to around 6 years. To me that seems pretty generous
> for a low-cost home machine. However, for a 24/7 production server it's
> a pretty fast replacement schedule.
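
[A quick sanity check of that estimate, using only the numbers quoted
above (one load cycle every 2 minutes, a 300K cycle spec, 5 hours of use
per day):]

```shell
# Lifetime estimate: 300,000 load cycles at one cycle per 2 minutes.
minutes=$((300000 * 2))   # 600000 powered-on minutes to reach the spec
hours=$((minutes / 60))   # 10000 powered-on hours
days=$((hours / 5))       # 2000 calendar days at 5 hours/day
awk -v d="$days" 'BEGIN { printf "about %.1f years\n", d / 365 }'
```

[That prints "about 5.5 years", i.e. the rough 6-year figure above.]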
>
> Here's data for my 500GB WD RAID Edition drives in my compute server
> here. It's powered down almost every night but doesn't suffer from the
> same firmware issues. The machine was built in April 2010, so it's a
> bit over 2 years old. Note that it's been powered on less than 1/2 the
> number of hours but only has a 193 count of 907 vs. >700000!
>
> Cheers,
> Mark
>
>
> c2stable ~ # smartctl -a /dev/sda
> smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.12-gentoo] (local build)
> Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
>
> === START OF INFORMATION SECTION ===
> Model Family:     Western Digital RE3 Serial ATA
> Device Model:     WDC WD5002ABYS-02B1B0
> Serial Number:    WD-WCASYA846988
> LU WWN Device Id: 5 0014ee 2042c3477
> Firmware Version: 02.03B03
> User Capacity:    500,107,862,016 bytes [500 GB]
> Sector Size:      512 bytes logical/physical
> Device is:        In smartctl database [for details use: -P show]
> ATA Version is:   8
> ATA Standard is:  Exact ATA specification draft version not indicated
> Local Time is:    Thu May 10 11:45:45 2012 PDT
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> === START OF READ SMART DATA SECTION ===
> SMART overall-health self-assessment test result: PASSED
>
> General SMART Values:
> Offline data collection status:  (0x84) Offline data collection activity
>                                         was suspended by an interrupting command from host.
>                                         Auto Offline Data Collection: Enabled.
> Self-test execution status:      (   0) The previous self-test routine completed
>                                         without error or no self-test has ever been run.
> Total time to complete Offline
> data collection:                 (9480) seconds.
> Offline data collection
> capabilities:                    (0x7b) SMART execute Offline immediate.
>                                         Auto Offline data collection on/off support.
>                                         Suspend Offline collection upon new command.
>                                         Offline surface scan supported.
>                                         Self-test supported.
>                                         Conveyance Self-test supported.
>                                         Selective Self-test supported.
> SMART capabilities:            (0x0003) Saves SMART data before entering
>                                         power-saving mode.
>                                         Supports SMART auto save timer.
> Error logging capability:        (0x01) Error logging supported.
>                                         General Purpose Logging supported.
> Short self-test routine
> recommended polling time:        (   2) minutes.
> Extended self-test routine
> recommended polling time:        ( 112) minutes.
> Conveyance self-test routine
> recommended polling time:        (   5) minutes.
> SCT capabilities:              (0x303f) SCT Status supported.
>                                         SCT Error Recovery Control supported.
>                                         SCT Feature Control supported.
>                                         SCT Data Table supported.
>
> SMART Attributes Data Structure revision number: 16
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail Always      -       0
>   3 Spin_Up_Time            0x0027   239   235   021    Pre-fail Always      -       1050
>   4 Start_Stop_Count        0x0032   100   100   000    Old_age  Always      -       935
>   5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail Always      -       0
>   7 Seek_Error_Rate         0x002e   200   200   000    Old_age  Always      -       0
>   9 Power_On_Hours          0x0032   091   091   000    Old_age  Always      -       7281
>  10 Spin_Retry_Count        0x0032   100   100   000    Old_age  Always      -       0
>  11 Calibration_Retry_Count 0x0032   100   100   000    Old_age  Always      -       0
>  12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always      -       933
> 192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age  Always      -       27
> 193 Load_Cycle_Count        0x0032   200   200   000    Old_age  Always      -       907
> 194 Temperature_Celsius     0x0022   106   086   000    Old_age  Always      -       41
> 196 Reallocated_Event_Count 0x0032   200   200   000    Old_age  Always      -       0
> 197 Current_Pending_Sector  0x0032   200   200   000    Old_age  Always      -       0
> 198 Offline_Uncorrectable   0x0030   200   200   000    Old_age  Offline     -       0
> 199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age  Always      -       0
> 200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age  Offline     -       0

Is this 193 Load_Cycle_Count an issue only on the green drives?

I have a very old Compaq laptop here that shows:

# smartctl -A /dev/sda | egrep "Power_On|Load_Cycle"
  9 Power_On_Hours          0x0012   055   055   000    Old_age  Always      -       19830
193 Load_Cycle_Count        0x0012   001   001   000    Old_age  Always      -       1739734
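
[A one-liner along these lines turns those two numbers into an average
parking rate. It is only a sketch: it assumes each attribute fits on a
single line of `smartctl -A` output, with the raw value in the tenth
whitespace-separated column.]

```shell
# Average head-parking rate: Load_Cycle_Count / Power_On_Hours.
smartctl -A /dev/sda | awk '
    /Power_On_Hours/   { hours  = $10 }
    /Load_Cycle_Count/ { cycles = $10 }
    END { if (hours) printf "%.1f load cycles per powered-on hour\n", cycles / hours }'
```

[For the drive above that works out to about 87.7 cycles per powered-on
hour, i.e. more than one park a minute, which is how the count got past
1.7 million.]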

Admittedly, there are some 60 errors on it (having been used extensively on
bouncy trains, buses, aeroplanes, etc.), but it is still refusing to die ...
O_O

It is a Hitachi 20G:

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Travelstar 80GN
Device Model:     IC25N020ATMR04-0
Serial Number:    MRX107K1DS623H
Firmware Version: MO1OAD5A
User Capacity:    20,003,880,960 bytes [20.0 GB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   6
ATA Standard is:  ATA/ATAPI-6 T13 1410D revision 3a
Local Time is:    Sat May 12 10:30:13 2012 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===

--
Regards,
Mick