On 04.05.2011 11:54, Joost Roeleveld wrote:
> On Wednesday 04 May 2011 10:07:58 Evgeny Bushkov wrote:
>> On 04.05.2011 01:49, Florian Philipp wrote:
>>> On 03.05.2011 19:54, Evgeny Bushkov wrote:
>>>> Hi.
>>>> How can I find out which is the parity disk in a RAID-4 soft array? I
>>>> couldn't find that in the mdadm manual. I know that RAID-4 features a
>>>> dedicated parity disk that is usually the bottleneck of the array, so
>>>> that disk must be as fast as possible. It seems useful to employ a few
>>>> slow disks with a relatively fast disk in such a RAID-4 array.
>>>>
>>>> Best regards,
>>>> Bushkov E.
>>> You are seriously considering RAID-4? You know, there is a reason why
>>> it was superseded by RAID-5. Given the way RAID-4 operates, a first
>>> guess for finding the parity disk in a running array would be the one
>>> with the worst SMART data: it is the parity disk that dies soonest.
>>>
>>> From looking at the source code, it seems the last specified disk is
>>> the parity disk. Disclaimer: I'm no kernel hacker; I have only
>>> inspected the code, not tried to understand the whole MD subsystem.
>>>
>>> Regards,
>>> Florian Philipp
>> Thank you for answering... The reason I consider RAID-4 is that I've
>> got a few SATA/150 drives and a pair of SATA II drives. Let's look at
>> the problem from the other side: I can create a RAID-0 (from the SATA
>> II drives) and then add it to the RAID-4 as the parity disk. It
>> wouldn't bother me if a disk from the RAID-0 failed, since that
>> wouldn't disrupt my RAID-4 array. For example:
>>
>> mdadm --create /dev/md1 --level=4 -n 3 -c 128 /dev/sdb1 /dev/sdc1 missing
>> mdadm --create /dev/md2 --level=0 -n 2 -c 128 /dev/sda1 /dev/sdd1
>> mdadm /dev/md1 --add /dev/md2
>>
>> livecd ~ # cat /proc/mdstat
>> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
>> md2 : active raid0 sdd1[1] sda1[0]
>>       20969472 blocks super 1.2 128k chunks
>>
>> md1 : active raid4 md2[3] sdc1[1] sdb1[0]
>>       20969216 blocks super 1.2 level 4, 128k chunk, algorithm 0 [3/2] [UU_]
>>       [========>............] recovery = 43.7% (4590464/10484608) finish=1.4min
>>       speed=69615K/sec
>>
>> That configuration works well, but I'm not sure whether md2 really acts
>> as the parity disk here; that's why I asked. Maybe I'm wrong and RAID-5
>> is the only array worth using; I'm just trying to weigh all the pros
>> and cons here.
>>
>> Best regards,
>> Bushkov E.
> I only use RAID-0 (when I want performance and don't care about the data),
> RAID-1 (for data I can't afford to lose) and RAID-5 (for data I would
> like to keep). I have never bothered with RAID-4.
>
> What do you see in "dmesg" after the mdadm commands?
> It might actually mention which disk is the parity disk.
>
> --
> Joost
>
There's nothing special in dmesg:

md: bind<md2>
RAID conf printout:
 --- level:4 rd:3 wd:2
 disk 0, o:1, dev:sdb1
 disk 1, o:1, dev:sdc1
 disk 2, o:1, dev:md2
md: recovery of RAID array md1
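For what it's worth, that printout does answer the original question: in the md driver, RAID-4 keeps parity on the last slot (raid_disks - 1), which matches Florian's reading of the source, so "disk 2, dev:md2" is the parity device here. A throwaway sketch that pulls the highest-numbered slot out of such a printout (the here-doc is just the lines above; on a live system you'd feed it the dmesg output instead):

```shell
# Sketch: recover the parity slot from a saved "RAID conf printout".
# For RAID-4, md puts parity on the last slot (raid_disks - 1), so the
# highest "disk N" entry names the parity device.
parity_dev=$(awk '
    BEGIN { max = -1 }
    /^ *disk [0-9]+,/ {
        split($2, n, ",")                  # "2," -> slot number 2
        if (n[1] + 0 > max) { max = n[1] + 0; dev = $NF }
    }
    END { sub("dev:", "", dev); print dev }
' <<'EOF'
 --- level:4 rd:3 wd:2
 disk 0, o:1, dev:sdb1
 disk 1, o:1, dev:sdc1
 disk 2, o:1, dev:md2
EOF
)
echo "parity device: $parity_dev"    # prints "parity device: md2"
```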

I've run some tests with different chunk sizes: the fastest was RAID-10
(4 disks), with RAID-5 (3 disks) close behind. RAID-4 (4 disks) was
almost as fast as RAID-5, so I don't see any point in using it.
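Numbers like these typically come from a simple streaming-write run; a minimal sketch of that kind of test (GNU dd assumed; the target here is a scratch file so the example is harmless — point it at the array, e.g. /dev/md1, and add oflag=direct to measure the real thing):

```shell
# Rough sequential-write throughput check (a sketch only). conv=fdatasync
# forces the data to disk before dd reports its timing, so the MB/s figure
# on the last line reflects actual writes rather than page-cache fills.
TARGET=$(mktemp)
result=$(dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "$result"
rm -f "$TARGET"
```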

Best regards,
Bushkov E.