Gentoo Archives: gentoo-user

From: Evgeny Bushkov <zhen@×××××××××.ru>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] mdadm and raid4
Date: Wed, 04 May 2011 10:01:58
Message-Id: 4DC1239C.3030202@dotcomltd.ru
In Reply to: Re: [gentoo-user] mdadm and raid4 by Joost Roeleveld
On 04.05.2011 13:38, Joost Roeleveld wrote:
> On Wednesday 04 May 2011 13:08:34 Evgeny Bushkov wrote:
>> On 04.05.2011 11:54, Joost Roeleveld wrote:
>>> On Wednesday 04 May 2011 10:07:58 Evgeny Bushkov wrote:
>>>> On 04.05.2011 01:49, Florian Philipp wrote:
>>>>> On 03.05.2011 19:54, Evgeny Bushkov wrote:
>>>>>> Hi.
>>>>>> How can I find out which is the parity disk in a RAID-4 soft
>>>>>> array? I couldn't find that in the mdadm manual. I know that
>>>>>> RAID-4 features a dedicated parity disk that is usually the
>>>>>> bottleneck of the array, so that disk must be as fast as possible.
>>>>>> It seems useful to employ a few slow disks with a relatively fast
>>>>>> disk in such a RAID-4 array.
>>>>>>
>>>>>> Best regards,
>>>>>> Bushkov E.
>>>>> You are seriously considering a RAID4? You know, there is a reason
>>>>> why it was superseded by RAID5. Given the way RAID4 operates, a
>>>>> first guess for finding the parity disk in a running array would be
>>>>> the one with the worst SMART data. It is the parity disk that dies
>>>>> the soonest.
>>>>>
>>>>> From looking at the source code it seems like the last specified
>>>>> disk is parity. Disclaimer: I'm no kernel hacker and I have only
>>>>> inspected the code, not tried to understand the whole MD subsystem.
>>>>>
>>>>> Regards,
>>>>> Florian Philipp
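(For anyone who wants to check that reading of the source: the raid5
driver also implements the raid4 personality, so the parity-slot logic
should live in drivers/md/raid5.c. A rough way to find the relevant
code, assuming a kernel tree under /usr/src/linux:

grep -n 'pd_idx' /usr/src/linux/drivers/md/raid5.c | head

pd_idx is the parity-device index that driver uses; I haven't dug into
it beyond that.)
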
>>>> Thank you for answering... The reason I consider RAID-4 is a few
>>>> SATA/150 drives and a pair of SATA II drives I've got. Let's look at
>>>> the problem from the other side: I can create a RAID-0 (from the
>>>> SATA II drives) and then add it to the RAID-4 as the parity disk. It
>>>> doesn't bother me if any disk in the RAID-0 fails; that wouldn't
>>>> disrupt my RAID-4 array. For example:
>>>>
>>>> mdadm --create /dev/md1 --level=4 -n 3 -c 128 /dev/sdb1 /dev/sdc1 missing
>>>> mdadm --create /dev/md2 --level=0 -n 2 -c 128 /dev/sda1 /dev/sdd1
>>>> mdadm /dev/md1 --add /dev/md2
>>>>
>>>> livecd ~ # cat /proc/mdstat
>>>> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
>>>> md2 : active raid0 sdd1[1] sda1[0]
>>>>       20969472 blocks super 1.2 128k chunks
>>>>
>>>> md1 : active raid4 md2[3] sdc1[1] sdb1[0]
>>>>       20969216 blocks super 1.2 level 4, 128k chunk, algorithm 0 [3/2] [UU_]
>>>>       [========>............]  recovery = 43.7% (4590464/10484608) finish=1.4min speed=69615K/sec
>>>>
>>>> That configuration works well, but I'm not sure whether md2 is the
>>>> parity disk here; that's why I asked. Maybe I'm wrong and RAID-5 is
>>>> the only array worth using, I'm just trying to consider all the pros
>>>> and cons here.
>>>>
>>>> Best regards,
>>>> Bushkov E.
>>> I only use RAID-0 (when I want performance and don't care about the
>>> data), RAID-1 (for data I can't afford to lose) and RAID-5 (for data
>>> I would like to keep). I have never bothered with RAID-4.
>>>
>>> What do you see in "dmesg" after the mdadm commands?
>>> It might actually mention which is the parity disk in there.
>>>
>>> --
>>> Joost
>> There's nothing special in dmesg:
>>
>> md: bind<md2>
>> RAID conf printout:
>>  --- level:4 rd:3 wd:2
>>  disk 0, o:1, dev:sdb1
>>  disk 1, o:1, dev:sdc1
>>  disk 2, o:1, dev:md2
>> md: recovery of RAID array md1
>>
>> I've run some tests with different chunk sizes: the fastest was
>> RAID-10 (4 disks), with RAID-5 (3 disks) close behind. RAID-4 (4
>> disks) was almost as fast as RAID-5, so I don't see any point in
>> using it.
>>
>> Best regards,
>> Bushkov E.
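(For context, the kind of quick sequential check I have in mind for such
a comparison looks roughly like the following; the mount point and file
size are only an illustration, not the exact commands I used:

mount /dev/md1 /mnt/test
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=2048 oflag=direct
dd if=/mnt/test/ddtest of=/dev/null bs=1M iflag=direct

oflag=direct/iflag=direct bypass the page cache, so it's the array
itself that gets measured.)
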
> What's the result of:
> mdadm --misc --detail /dev/md1
> ?
>
> Not sure what info this command will provide with a RAID-4...
>
> --
> Joost
livecd ~ # mdadm --misc --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Wed May  4 13:54:33 2011
     Raid Level : raid4
     Array Size : 122624 (119.77 MiB 125.57 MB)
  Used Dev Size : 61312 (59.89 MiB 62.78 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed May  4 13:55:14 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 128K

           Name : livecd:1  (local to host livecd)
           UUID : 654218f0:9f0b88d5:d82f39bc:ae08aa1e
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       9        2        2      active sync   /dev/md2

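If Florian's reading of the source is right (parity on the last slot),
then the RaidDevice 2 entry above, /dev/md2, would be the parity member.
A way to double-check from the member side, assuming v1.x metadata,
should be something like:

livecd ~ # mdadm --examine /dev/md2 | grep 'Device Role'

which ought to report it as active device 2, the last slot of the
three-device array. I haven't verified the parity placement beyond that.
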
Best regards,
Bushkov E.