On Monday 25 August 2014 18:46:23 Kerin Millar wrote:
> On 25/08/2014 17:51, Peter Humphrey wrote:
> > On Monday 25 August 2014 13:35:11 Kerin Millar wrote:
> >> I now wonder if this is a race condition between the init script running
> >> `mdadm -As` and the fact that the mdadm package installs udev rules that
> >> allow for automatic incremental assembly?
> >
> > Isn't it just that, with the kernel auto-assembly of the root partition,
> > and udev rules having assembled the rest, all the work's been done by the
> > time the mdraid init script is called? I had wondered about the time that
> > udev startup takes; assembling the raids would account for it.
>
> Yes, it's a possibility and would constitute a race condition - even
> though it might ultimately be a harmless one.

I thought a race involved the competitors setting off at more-or-less the same
time, not one waiting until the other had finished. No matter.

> As touched upon in the preceding post, I'd really like to know why mdadm
> sees fit to return a non-zero exit code given that the arrays are actually
> assembled successfully.

I can see why a dev might think "I haven't managed to do my job" here.

> After all, even if the arrays are assembled at the point that mdadm is
> executed by the mdraid init script, partially or fully, it surely ought
> not to matter. As long as the arrays are fully assembled by the time
> mdadm exits, it should return 0 to signify success. Nothing else makes
> sense, in my opinion. It's absurd that the mdraid script is drawn into
> printing a blank error message where nothing has gone wrong.

I agree, that is absurd.
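The init script could even sidestep it itself: treat a non-zero exit from
`mdadm -As` as fatal only if no arrays are actually up afterwards. An untested
sketch - the function names and the grep against /proc/mdstat are just my own
illustration, not anything the mdraid script currently does:

```shell
#!/bin/sh
# Return 0 if at least one md array is active, according to mdstat.
# $1 lets a caller point at a copy of mdstat for testing; defaults
# to the real /proc/mdstat.
arrays_active() {
    mdstat="${1:-/proc/mdstat}"
    grep -q '^md[0-9][0-9]* : active' "$mdstat"
}

start_raid() {
    mdadm -As
    rc=$?
    # A non-zero exit is only an error if nothing got assembled --
    # udev may already have done all the work before we ran.
    if [ "$rc" -ne 0 ] && ! arrays_active; then
        echo "mdadm -As failed (exit ${rc})" >&2
        return "$rc"
    fi
    return 0
}
```

That way a "nothing left for me to do" exit from mdadm stops being treated as
a failure.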

> Further, the mdadm ebuild still prints elog messages stating that mdraid
> is a requirement for the boot runlevel but, with udev rules, I don't see
> how that can be true. With udev being event-driven and calling mdadm
> upon the introduction of a new device, the array should be up and
> running as of the very moment that all the disks are seen, no matter
> whether the mdraid init script is executed or not.

We agree again. The question is what to do about it. Maybe a bug report
against mdadm?

--->8

> > Right. Here's the position:
> > 1. I've left /etc/init.d/mdraid out of all run levels. I have nothing but
> > comments in mdadm.conf, but then it's not likely to be read anyway if the
> > init script isn't running.
> > 2. I have empty /etc/udev rules files as above.
> > 3. I have kernel auto-assembly of raid enabled.
> > 4. I don't use an init ram disk.
> > 5. The root partition is on /dev/md5 (0.90 metadata).
> > 6. All other partitions except /boot are under /dev/vg7 which is built on
> > top of /dev/md7 (1.x metadata).
> > 7. The system boots normally.
>
> I must confess that this boggles my mind. Under these circumstances, I
> cannot fathom how - or when - the 1.x arrays are being assembled.
> Something has to be executing mdadm at some point.

I think it's udev. I had a look at the rules, but I don't grok them. I do see
references to mdadm though.
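From memory, the relevant hook boils down to something like this - paraphrased,
not the exact text of the installed rules file, so check the mdadm rules under
/lib/udev/rules.d/ for the real thing:

```
# Paraphrased: when a block device carrying RAID member metadata
# appears, hand it to mdadm for incremental assembly.
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```

If that's roughly what's installed, it would explain the 1.x arrays coming up
with no init script involved.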

> > Do I even need sys-fs/mdadm installed? Maybe
> > I'll try removing it. I have a little rescue system in the same box, so
> > it'd be easy to put it back if necessary.
>
> Yes, you need mdadm because 1.x metadata arrays must be assembled in
> userspace.

After writing that, I realised I may well need it for maintenance. I'd do that
from my rescue system though, which does have it installed, so I think I can
ditch it from the main system.

--
Regards
Peter