On Fri, Sep 20, 2013 at 6:20 AM, Tanstaafl <tanstaafl@×××××××××××.org> wrote:
> Hi all,
>
> Being that one of the big reasons I stopped using RAID5/6 was the rebuild
> times - can be DAYS for a large array - I am very curious if anyone has
> done, or knows of anyone who has done any tests comparing rebuild times when
> using slow SATA, faster SAS and fastest SSD drives.
>
> Of course, this question is moot if using ZFS RAID, but not every situation
> or circumstance will allow it...

I don't have an all-out comparison, but at least a data point for you
with somewhat cheap and recent hardware. I have a new (2 months old)
home RAID6 made out of:

6 Western Digital Red 3TB SATA drives
LSI 9200-8e SAS JBOD controller
Sans Digital TR8X+B SAS/SATA enclosure w/ SFF-8088 cables

I created a standard linux software RAID6 using mdadm, resulting in
11TB of usable space (4 data drives, 2 parity).

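For reference, creating an array like this with mdadm looks roughly like the following; the device names (/dev/md0, /dev/sd[b-g]) are placeholders, not the actual devices on this system:

```shell
# Sketch of the array creation; /dev/md0 and /dev/sdb..sdg are assumed
# placeholder names, not the poster's real devices.
mdadm --create /dev/md0 --level=6 --raid-devices=6 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Record the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

The capacity works out as (6 drives − 2 parity) × 3 TB = 12 TB raw, which is roughly 10.9 TiB — hence the "11TB" of usable space.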
A couple weeks ago one of the drives died. I hot-swap replaced it with
a new one (with no down-time) and the rebuild took exactly 10 hours.

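The hot-swap replacement itself is just a couple of mdadm commands; again, the device names here are placeholders:

```shell
# Hypothetical names (md0, sdc); a dead drive is usually already marked
# faulty, in which case --fail is a no-op.
mdadm /dev/md0 --fail /dev/sdc
mdadm /dev/md0 --remove /dev/sdc

# After physically swapping in the new drive:
mdadm /dev/md0 --add /dev/sdc

# Watch rebuild progress and the estimated finish time:
cat /proc/mdstat
```

Worth noting when comparing rebuild times across drive types: md's rebuild rate is also throttled by the dev.raid.speed_limit_min and dev.raid.speed_limit_max sysctls, not just by raw drive speed.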
Under normal operation, the speed of the array for contiguous
read/writes is about 600MB/sec, which is faster than my SSD (single
drive, not RAIDed).

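If anyone wants a comparable contiguous read/write number from their own array, one simple way is dd with O_DIRECT so the page cache isn't measured instead of the disks; the mount point here is an assumed placeholder:

```shell
# /mnt/raid is an assumed mount point for the array.
# Write, then read back, a 4 GiB file, bypassing the page cache.
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct
rm /mnt/raid/testfile
```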
FWIW