After further investigation I can say that, although I'm not using LVM,
the following command is able to populate /dev/mapper/*:

/sbin/vgscan --mknodes --config "${config}" >/dev/null

The command is located at /lib64/rcscripts/addons/lvm-start.sh,
which is called by /etc/init.d/lvm start.

Really strange, as I'm not using LVM, so the command just complains "No
volume groups found".

Also noticed that whether LVM communicates with device-mapper is
controlled through /etc/lvm.conf:

# Whether or not to communicate with the kernel device-mapper.
# Set to 0 if you want to use the tools to manipulate LVM metadata
# without activating any logical volumes.
# If the device-mapper kernel driver is not present in your kernel
# setting this to 0 should suppress the error messages.
activation = 1

Don't know if that's why it's able to create the nodes. Any help?
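
One way to check that theory might be to flip the setting off and re-run
the command. This is only a sketch: I'm assuming the stock lvm.conf
layout, where activation sits in the global section, and that vgscan
honors the same setting.

global {
    # Set to 0 so the tools stop talking to the kernel device-mapper;
    # if the theory is right, vgscan --mknodes should then stop
    # creating the /dev/mapper nodes.
    activation = 0
}

If /dev/mapper stays unpopulated after that, the activation setting
would seem to be the answer.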

2010/12/14 Pau Peris <sibok1981@×××××.com>:
> Hi, I'm currently running Gentoo on a RAID0 setup on some SATA disks
> using a JMicron chip on an Asus P6T board. I'm using a fakeraid due
> to dual-boot restrictions. My whole Gentoo system is on the RAID0
> device, so I use an initramfs to boot up. I've been running with this
> setup for some time, but since I migrated to baselayout2+openrc I
> didn't understand why I need /etc/init.d/lvm to start at boot, as I
> have no LVM setup. Today I was doing some research and some questions
> appeared:
>
> * Everywhere it says I need "<*> RAID support -> <*> RAID-0
> (striping) mode" in the kernel for fakeraid to work, but my system
> still boots with those options disabled. Are they really needed? I
> don't understand why they are supposed to be needed. (Are they only
> for mdadm usage?)
>
> * Is the "SCSI device support -> <*> RAID Transport Class" option
> needed? What is it supposed to do? I think RAID features are provided
> by the JMicron driver, and the kernel understands how RAID works due
> to "Multiple devices driver support (RAID and LVM) -> <*> Device
> mapper support", doesn't it?
>
> * The last question is: after migrating to openrc I noticed that the
> lvm2 package provides the device-mapper tools to manage the array,
> but I do not want /etc/init.d/lvm to start at boot, as I do not use
> any LVM setup; I would just like to get /dev/mapper/ correctly
> populated using something like dmraid -ay. I've tried removing lvm
> from the boot runlevel and adding device-mapper instead, but
> /dev/mapper is not populated. How should I proceed to get rid of the
> lvm script while keeping /dev/mapper populated? Do I need /etc/dmtab
> for that to work? Then why would one want to use mdadm instead of
> device-mapper, and what are the advantages/disadvantages of each? I
> know mdadm needs an optional /etc/mdadm.conf file to work, and using
> mdadm one could stop/start the array, add more disks to the array,
> etc. But does mdadm need a boot-up script to work?
>
> As you can see, I'm a bit confused about which suits best, mdadm or
> device-mapper, and why those kernel settings are needed. Thanks a lot
> in advance :)
>