On Tue, 2011-03-22 at 00:37 -0100, Jorge Manuel B. S. Vicetto wrote:
> I haven't seen anyone mention that the
> latest udev versions react *very* badly to CONFIG_SYSFS_DEPRECATED_V2.
> So be sure to check if you disable that as iirc udev will refuse to
> create the proper device nodes if that kernel option is active.
Jorge, thanks VERY MUCH for the heads-up on CONFIG_SYSFS_DEPRECATED_V2.
Making sure that this, and CONFIG_SYSFS_DEPRECATED are off in my kernel
config has at least gotten the system to a terminal prompt - no single
user mode and no kernel panic. Nvidia support is broken, so no X, but
that's always a hassle with new kernels and can probably be fixed,
assuming that support for my ancient Nvidia card is still available in
the portage tree and compatible with kernel 2.6.36. I've done this many
times before and I'll deal with it later for this upgrade.
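For anyone else chasing this, here's a quick sketch of a check that both deprecated-sysfs options are off in a kernel config. The /usr/src/linux/.config path is just the conventional symlink and is an assumption on my part - point it at your real config:

```shell
# Sketch: check a kernel .config for the two deprecated-sysfs options.
cfg=/usr/src/linux/.config
bad=0
for opt in CONFIG_SYSFS_DEPRECATED CONFIG_SYSFS_DEPRECATED_V2; do
    # grep -qs stays quiet even if the file is missing
    if grep -qs "^${opt}=y" "$cfg"; then
        echo "$opt=y -- recent udev may refuse to create proper device nodes"
        bad=1
    fi
done
[ "$bad" -eq 0 ] && echo "deprecated sysfs options are off"
```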
Device mapper configuration seems to be broken, which is where I'm at
now. Kernel config settings for RAID autodetect and device mapper
support are exactly as before, but the RAID subsystem appears to be
stumbling on a device name conflict, which has prevented the LVM VG from
being set up.
Looking into the kernel log file for the 2.6.36 boot up ...
Mar 24 12:18:53 [kernel] [ 1.229464] md: Autodetecting RAID arrays.
Mar 24 12:18:53 [kernel] [ 1.281388] md: Scanned 2 and added 2 devices.
Mar 24 12:18:53 [kernel] [ 1.281566] md: autorun ...
Mar 24 12:18:53 [kernel] [ 1.281739] md: considering sdc1 ...
Mar 24 12:18:53 [kernel] [ 1.281918] md: adding sdc1 ...
Mar 24 12:18:53 [kernel] [ 1.282096] md: adding sdb1 ...
Mar 24 12:18:53 [kernel] [ 1.282273] md: created md0
Mar 24 12:18:53 [kernel] [ 1.282453] md: bind<sdb1>
Mar 24 12:18:53 [kernel] [ 1.282639] md: bind<sdc1>
Mar 24 12:18:53 [kernel] [ 1.282826] md: running: <sdc1><sdb1>
Mar 24 12:18:53 [kernel] [ 1.283219] md: personality for level 1 is not loaded!
Mar 24 12:18:53 [kernel] [ 1.283401] md: do_md_run() returned -22
Mar 24 12:18:53 [kernel] [ 1.283581] md: md0 still in use.
Mar 24 12:18:53 [kernel] [ 1.283756] md: ... autorun DONE.
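That "personality for level 1 is not loaded!" line also suggests the raid1 driver wasn't available when autodetect ran - for in-kernel autodetect it has to be built in rather than modular. A hypothetical check (same assumed config path as above):

```shell
# Sketch: see whether RAID-1 support is built in, modular, or absent.
cfg=/usr/src/linux/.config
line=$(grep -s '^CONFIG_MD_RAID1' "$cfg")
case "$line" in
    CONFIG_MD_RAID1=y) echo "raid1 built in -- autodetect can assemble arrays" ;;
    CONFIG_MD_RAID1=m) echo "raid1 is a module -- it must be loaded (e.g. from an initramfs) before autodetect runs" ;;
    *)                 echo "no CONFIG_MD_RAID1 found in $cfg" ;;
esac
```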
This is fubar! There are two RAID arrays which support the missing LVM
volume group, and they're seriously out of kilter here. The correct
configuration, booting from my old kernel, is:
$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1 sdd1
36146112 blocks [2/2] [UU]
md0 : active raid1 sdb1 sda1
117218176 blocks [2/2] [UU]
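Incidentally, a way to see which physical partition belongs to which array, independent of whatever sdX letter the kernel hands out, is to read the array UUID out of each member's RAID superblock - a sketch (needs root, and assumes these four device names):

```shell
# Sketch: each RAID member's superblock records the UUID of its array,
# so membership can be checked even when device letters shuffle.
for dev in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    # print "<device> UUID : <array uuid>" for each member found
    mdadm --examine "$dev" 2>/dev/null |
        awk -v d="$dev" '/^ *(Array )?UUID/ { print d, $0 }'
done
```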
The Linux RAID subsystem is trying to pair drives from two
different RAID arrays into a single md! I see the possibility for
serious data corruption here :-( Although it looks as if the RAID
subsystem may have bailed on the array, sensing that something was
wrong.
The root of this problem is that on the old kernel, there are both
a /dev/hda1 and a /dev/sda1. The former is a partition on an old PATA
drive, while the latter is a proper component of md0. When
everything becomes /dev/sdNx, there's an obvious conflict, and the RAID
subsystem is getting confused and obviously isn't seeing its sda1.
I suspect that this is why the main VG isn't getting set up. What can
be done about this?
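One possibility I'm considering (hedged - I haven't tested this yet): take assembly away from the kernel's letter-order autodetect and pin each array by UUID in /etc/mdadm.conf, letting mdadm assemble them from an init script or initramfs. The UUIDs below are placeholders; the real ones come from "mdadm --detail --scan" run under the old, working kernel:

```
# /etc/mdadm.conf -- sketch only; xxxx/yyyy are placeholder UUIDs
DEVICE /dev/sd*[0-9]
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy
```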
Lindsay Haisley | "Fighting against human creativity is like
FMP Computer Services | trying to eradicate dandelions"
512-259-1190 | (Pamela Jones)