On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
> Mark Knecht wrote:
> > On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1967@×××××.com> wrote:
> >> Alan McKinnon wrote:
> > <SNIP>
> >>> My thoughts these days is that nobody really makes a bad drive anymore.
> >>> Like cars[1], they're all good and do what it says on the box. Same
> >>> with bikes[2].
> >>>
> >>> A manufacturer may have some bad luck and a product range is less than
> >>> perfect, but even that is quite rare and most stuff ups can be fixed
> >>> with new firmware. So it's all good.
> >>
> >>
> >> That's my thoughts too. It doesn't matter what brand you go with, they
> >> all have some sort of failure at some point. They are not built to last
> >> forever and there is always the random failure, even when a week old.
> >> It's usually the loss of important data and not having a backup that
> >> makes it sooooo bad. I'm not real picky on brand as long as it is a
> >> company I have heard of.
> >>
> >
> > One thing to keep in mind is statistics. For a single drive by itself
> > it hardly matters anymore what you buy. You cannot predict the
> > failure. However if you buy multiple identical drives at the same time
> > then most likely you will either get all good drives or (possibly) a
> > bunch of drives that suffer from similar defects and all start failing
> > at the same point in their life cycle. For RAID arrays it's
> > measurably best to buy drives that come from different manufacturing
> > lots, better from different factories, and maybe even from different
> > companies. Then, if a drive fails, assuming the failure is really the
> > fault of the drive and not some local issue like power sources or ESD
> > events, etc., it's less likely other drives in the box will fail at
> > the same time.
> >
> > Cheers,
> > Mark
> >
>
> You make a good point too. I had a headlight to go out on my car once
> long ago. I, not thinking, replaced them both since the new ones were
> brighter. Guess what, when one of the bulbs blew out, the other was out
> VERY soon after. Now, I replace them but NOT at the same time. Keep in
> mind, just like a hard drive, when one headlight is on, so is the other
> one. When we turn our computers on, all the drives spin up together so
> they are basically all getting the same wear and tear effect.
>
> I don't use RAID, except to kill bugs, but that is good advice. People
> who do use RAID would be wise to use it.
>
> Dale
>
> :-) :-)
>

Hum hum!
I know that Windows does this by default (it annoys me, so I disable it),
but does Linux spin down or power off the disks when they're inactive?
I'm assuming there's an option somewhere - maybe just `unmount`!
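
As far as I can tell, unmounting by itself doesn't stop the platters
spinning; Linux generally won't spin a drive down unless you tell it to.
The usual knob seems to be hdparm's standby settings. A rough sketch of
what I mean, assuming the drive is /dev/sda (untested here, values taken
from the hdparm man page):

    # spin the drive down after ~10 minutes of inactivity
    # (-S 120 means 120 * 5 seconds = 600 s)
    hdparm -S 120 /dev/sda

    # or put it into standby right now
    hdparm -y /dev/sda

    # check the drive's current power state
    hdparm -C /dev/sda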