Mark Knecht posted on Tue, 30 Mar 2010 13:26:59 -0700 as excerpted:

> I've set up a duplicate boot partition on sdb and it boots. However,
> one thing I'm unsure of: when I change the boot hard drive, does the
> old sdb become the new sda because it's what got booted? Or is the
> order still as it was? The answer determines what I do in grub.conf as
> to which drive I'm trying to use. I can figure this out later by
> putting something different on each drive and looking. Might be
> system/BIOS dependent.

That depends on your BIOS. My current system (the workstation, now 6+
years old but still going strong, as it was a $400+ server-grade mobo)
will boot from whatever disk I tell it to, but keeps the same BIOS disk
order regardless -- unless I physically turn one or more of them off,
of course. My previous system would always switch the chosen boot
drive to be the first one. (I suppose it could be IDE vs. SATA as
well, as the switcher was IDE while the stable one is SATA-1.)

So that's something I guess you'll have to figure out for yourself.
But it sounds like you're already well on your way...
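
If you want to settle it without guessing, one low-tech way (just a
sketch -- the marker names and mount points here are made up, use
whatever you like) is to drop a uniquely named file on each drive's
boot partition, then ask grub legacy's own shell where it sees each
one, from the command line you get by hitting 'c' at the boot menu:

  # as root, with both boot partitions mounted:
  touch /mnt/boot-sda/marker-sda
  touch /mnt/boot-sdb/marker-sdb

  # then at the grub prompt, after booting from each drive in turn:
  grub> find /marker-sda
  grub> find /marker-sdb

find searches every disk the BIOS handed grub and prints the (hdX,Y)
each file is on, so whether the answers swap when you boot from the
other drive tells you directly what root (hdX,Y) each drive's
grub.conf should use.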

>> 100% waits for long periods...

>> 1a) Dying disk.
>> 1b) hard to read data sectors.
>
> All new drives, smartctl says no problems reading anything and no
> registered error correction has taken place.

That's good. =:^) Tho of course there's an infant-mortality period
covering the first few (1-3) months, too, before the statistics settle
down. So just because they're new doesn't necessarily mean they're not
bad.
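
If you want to keep half an eye on them during that period, the raw
attribute table is more informative than smartctl's one-line verdict.
A quick sketch (device name is just an example):

  smartctl -A /dev/sda

  # the attributes that matter for dying-disk worries:
  #   Reallocated_Sector_Ct  - sectors the drive already remapped
  #   Current_Pending_Sector - sectors it can't currently read reliably
  #   Offline_Uncorrectable  - sectors that failed its offline scans
  # all three staying at zero is what you want to see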

FWIW, when I switched to RAID, it was after having two drives go out
at almost exactly the one-year point. Needless to say I was a bit
paranoid. So when I got the new set to set up as RAID, the first thing
I did (before I partitioned or otherwise put any data of value on
them) was run badblocks -w on all of them. It took well over a day,
actually ~3 days IIRC, but don't hold me to the three. Luckily, doing
them in parallel didn't slow things down any, as it was the physical
disk speed that was the bottleneck. But I let the thing finish on all
four drives, and none of them came up with a single bad block in any
of the four patterns. Additionally, after writing and reading back the
entire drive four times, SMART still said nothing relocated or
anything. So I was happy. And the drives have served me well, tho
they're probably about at the end of their five-year warranty right
about now.

The point being... it /is/ actually possible to verify that they're
working well before you fdisk/mkfs and load data. Tho it does take
awhile... days... on drives of modern size.
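
For the record, that destructive test looks something like this
(device names are examples -- and -w WIPES the disk, so only on drives
with nothing of value on them yet):

  # four-pattern write-then-verify pass (0xaa, 0x55, 0xff, 0x00),
  # with progress (-s) and verbose (-v) output:
  badblocks -wsv /dev/sdb

  # one per terminal for each drive; run in parallel they finish in
  # roughly the same wall-clock time, since each disk is its own
  # bottleneck

  # afterward, confirm SMART didn't quietly remap anything:
  smartctl -A /dev/sdb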

>> 3) suspend the disks after a period of inactivity
>
> This could be part of what's going on, but I don't think it's the
> whole story. My drives (WD Green 1TB drives) apparently park the
> heads after 8 seconds (yes 8 seconds!) of inactivity to save power.
> Each time it parks it increments the Load_Cycle_Count SMART
> parameter. I've been tracking this on the three drives in the system.
> The one I'm currently using is incrementing while the 2 that sit
> unused until I get RAID going again are not. Possibly there is
> something about how these drives come out of park that creates large
> delays once in awhile.

You may wish to take a second look at that, for an entirely
/different/ reason. If those are the ones I just googled on the WD
site, they're rated for 300K load/unload cycles. Take a look at your
BIOS spin-down settings, and use hdparm to get a look at the disk's
power-saving and spin-down settings. You may wish to set the disks to
something more reasonable, as with 8-second timeouts, that 300K cycles
isn't going to last so long...
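
Roughly like this (device name's an example, and whether a Green
actually honors the APM knob I can't promise -- IIRC that 8-second
idle timer lives in firmware, and WD's DOS utility wdidle3 is what
changes it if hdparm doesn't stick):

  # query the drive's current APM (power management) level:
  hdparm -B /dev/sda      # 1 = most aggressive, 254 = least, 255 = off

  # tell it to quit parking the heads so eagerly:
  hdparm -B 254 /dev/sda

  # and set a saner standby (spin-down) timeout, say 30 minutes:
  hdparm -S 241 /dev/sda  # 241-251 = 1-11 units of 30 minutes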

You may recall a couple years ago when Ubuntu accidentally shipped
with laptop mode (or something, IDR the details) turned on by default,
and people were watching their drives wear out before their eyes.
That's effectively what you're doing with an 8-second idle timeout.
Most laptop drives (2.5" and 1.8") are designed for it. Most 3.5"
desktop/server drives are NOT designed for that tight an idle-timeout
spec, and in fact may well last longer spinning at idle overnight than
being shut down even once a day.

I'd at least look into it, as there's no use wearing the things out
unnecessarily. Maybe you'll decide to let them run that way and save
the power, but at least then you'll know what the available choices
are.

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman