napalm@××××××××××.org wrote:
> On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
>> Mark Knecht wrote:
>>> On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1967@×××××.com> wrote:
>>>> Alan McKinnon wrote:
>>> <SNIP>
>>>>> My thoughts these days is that nobody really makes a bad drive anymore.
>>>>> Like cars[1], they're all good and do what it says on the box. Same
>>>>> with bikes[2].
>>>>>
>>>>> A manufacturer may have some bad luck and a product range is less than
>>>>> perfect, but even that is quite rare and most stuff ups can be fixed
>>>>> with new firmware. So it's all good.
>>>>
>>>>
>>>> That's my thoughts too. It doesn't matter what brand you go with, they
>>>> all have some sort of failure at some point. They are not built to last
>>>> forever and there is always the random failure, even when a week old.
>>>> It's usually the loss of important data and not having a backup that
>>>> makes it sooooo bad. I'm not real picky on brand as long as it is a
>>>> company I have heard of.
>>>>
>>>
>>> One thing to keep in mind is statistics. For a single drive by itself
>>> it hardly matters anymore what you buy. You cannot predict the
>>> failure. However if you buy multiple identical drives at the same time
>>> then most likely you will either get all good drives or (possibly) a
>>> bunch of drives that suffer from similar defects and all start failing
>>> at the same point in their life cycle. For RAID arrays it's
>>> measurably best to buy drives that come from different manufacturing
>>> lots, better from different factories, and maybe even from different
>>> companies. Then, if a drive fails, assuming the failure is really the
>>> fault of the drive and not some local issue like power sources or ESD
>>> events, etc., it's less likely other drives in the box will fail at
>>> the same time.
>>>
>>> Cheers,
>>> Mark
>>>
>>
>> You make a good point too. I had a headlight to go out on my car once
>> long ago. I, not thinking, replaced them both since the new ones were
>> brighter. Guess what, when one of the bulbs blew out, the other was out
>> VERY soon after. Now, I replace them but NOT at the same time. Keep in
>> mind, just like a hard drive, when one headlight is on, so is the other
>> one. When we turn our computers on, all the drives spin up together so
>> they are basically all getting the same wear and tear effect.
>>
>> I don't use RAID, except to kill bugs, but that is good advice. People
>> who do use RAID would be wise to use it.
>>
>> Dale
>>
>> :-) :-)
>>
>
> hum hum!
> I know that Windows does this by default (it annoys me so I disable it)
> but does linux disable or stop running the disks if they're inactive?
> I'm assuming there's an option somewhere - maybe just `unmount`!
>

The default is to keep them all running and not spin them down. I
have never had a Linux OS spin down a drive unless I told it to. You
can do this tho. The command and option is:

hdparm -S <timeout> /dev/sdX

X is the drive letter, and -S needs a timeout value: values from 1 to
240 are multiples of 5 seconds, so 120 means 10 minutes of idle time
before the drive spins down. There is also the -s option (power-up in
standby) but it is not recommended.
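
For example, something like this (just a sketch; /dev/sdb is only an
example device, so check which drive is which before you run it):

# show the drive's current power state (active/idle, standby or sleeping)
hdparm -C /dev/sdb

# spin the drive down after 10 minutes idle (120 x 5 seconds)
hdparm -S 120 /dev/sdb

# a timeout of 0 disables the automatic spindown again
hdparm -S 0 /dev/sdb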

There are also the -y and -Y options. Before using ANY of these, read
the man page. Each one has its uses and you need to know for sure which
one does what you want.
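
To show the difference (same disclaimer as above, substitute your own
device):

# -y puts the drive into the low power standby mode (spun down) right away
hdparm -y /dev/sdb

# -Y puts it into the lowest power sleep mode; the drive needs a reset
# before it can be accessed again
hdparm -Y /dev/sdb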

Dale

:-) :-)

--
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!

Miss the compile output? Hint:
EMERGE_DEFAULT_OPTS="--quiet-build=n"