On Wednesday 14 August 2019 14:17:23 CEST Stefan G. Weichinger wrote:
> On 14.08.19 at 13:20, J. Roeleveld wrote:

> > See next item, make sure you do NOT mount both at the same time.
>
> I understand and agree ;-)

good :)

> >> # /usr/bin/sg_vpd --page=di /dev/sdb
> >>
> >> Device Identification VPD page:
> >> Addressed logical unit:
> >> designator type: NAA, code set: Binary
> >>
> >> 0x600605b00d0ce810217ccffe19f851e8
> >
> > Yes, this one is different.
> >
> > I checked the above ID and it looks like it is already correctly
> > configured. Is "multipathd" actually running?
>
> no!

Then "multipath -l" will not show anything either. When you have a chance for
downtime (and that disk can be unmounted) you could try the following:
1) stop all services requiring that "disk" to be mounted
2) umount that "disk"
3) start the "multipath" service
4) run "multipath -ll" to see if there is any output
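
As a rough sketch, on an OpenRC system like this Gentoo box the sequence could
look like the following; the service name in step 1 and the mountpoint are
assumptions, so adjust them to your setup:

```shell
# 1) stop all services using the disk (example service; adjust to yours)
rc-service nfs stop

# 2) unmount the filesystem (mountpoint is an assumption)
umount /mnt/storage

# 3) start the multipath service
rc-service multipath start

# 4) list discovered multipath maps and their paths
multipath -ll
```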

If yes, you can access the "disk" via the newly added entry under
"/dev/mapper/".
If you modify "/etc/fstab" for this at this point, ensure multipath is started
BEFORE the OS tries to mount it during boot.
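
For example, an /etc/fstab entry could then reference the map device instead
of an sd* letter; the map name, mountpoint and filesystem below are only
illustrative:

```
/dev/mapper/3600605b00d0ce810217ccffe19f851e8   /mnt/storage   ext4   defaults   0 2
```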

The other option (and the only option if "multipath -ll" still doesn't show
anything) is to stop the "multipath" service and leave it all as-is.

> > If it were running correctly, you would mount "/dev/mapper/...." instead
> > of "/dev/sdc" or "/dev/sdd".
> >
> >> In the first week of September I travel there and I have the job to
> >> reinstall that server using Debian Linux (yes, gentoo-users, I am
> >> getting OT here ;-)).
> >
> > For something that doesn't get updated/managed often, Gentoo might not be
> > the best choice, I agree.
> > I would prefer CentOS for this one though, as there is far more info on
> > multipath from Red Hat.
>
> I will consider this ...

The choice is yours. I just haven't found much info about multipath for other
distributions. (And I could still use a decent document/guide describing all
the different options.)

> As I understand things here:
>
> the former admin *tried to* set up multipath and somehow got stuck.

My guess: multipath wasn't enabled before the boot process tried to mount the
disk. The following needs to be done (and finished) in sequence for it to work:

1) The OS needs to detect the disks (/dev/sdc + /dev/sdd). This requires the
modules to be loaded and the fibrechannel disks to be detected.

2) multipathd needs to be running and to have correctly identified the
fibrechannel disk and the paths.

3) The OS needs to mount the fibrechannel disk using the "/dev/mapper/..."
entry created by multipath.
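
On a systemd-based system (such as the planned Debian install), the ordering
of steps 2 and 3 can be made explicit with a mount unit; the mountpoint, map
name and filesystem below are illustrative:

```ini
# /etc/systemd/system/mnt-storage.mount
[Unit]
Requires=multipathd.service
After=multipathd.service

[Mount]
What=/dev/mapper/3600605b00d0ce810217ccffe19f851e8
Where=/mnt/storage
Type=ext4

[Install]
WantedBy=multi-user.target
```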

I run ZFS on top of the multipath entries, which makes it all a bit "simpler",
as the HBA module is built-in and the "zfs" services depend on "multipath".
All the mounting is done by the zfs services.

> That's why it isn't running and not used at all. He somehow mentioned
> this in an email back then when he was still working there.
>
> So currently it seems to me that the storage is attached via "single
> path" (is that the term here?) only. "directly" = no redundancy

Exactly, and using non-guaranteed drive letters. (I know for a fact that they
can change, as I've had disks move to different letters during subsequent
boots.) I do have 12 disks getting 4 entries each, which means 48 entries ;)
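
To avoid depending on sd* lettering at all, multipath can be told to give the
map a stable alias keyed on the WWID in /etc/multipath.conf; the WWID and alias
below are illustrative:

```
multipaths {
    multipath {
        wwid  3600605b00d0ce810217ccffe19f851e8
        alias fcstorage
    }
}
```

The map then shows up as /dev/mapper/fcstorage on every boot.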

> That means using the lpfc-kernel-module to run the FibreChannel-adapters
> ... which failed to come up / sync with a more recent gentoo kernel, as
> initially mentioned.

Are these modules not included in the main kernel?
And maybe they require firmware which sometimes needs to match specific
module/kernel versions.

> (right now: 4.1.15-gentoo-r1 ... )

Old, but if it works, don't fix it. (Just don't expose it to the internet.)

> I consider sending a Debian-OS on an SSD there and let the (low
> expertise) guy there boot from it (or a stick). Which in fact is risky
> as he doesn't know anything about linux.

I wouldn't take that risk on a production server.

> Or I simply wait for my on-site appointment and start testing when I am
> there.

Safest option.

> Maybe I am lucky and the debian lpfc stuff works from the start. And
> then I could test multipath as well.

You could test quickly with the gentoo install present, as described above.
The config should be the same regardless.

> I assume that maybe the adapters need a firmware update or so.

When I added a 2nd HBA to my server, I ended up patching the firmware on both
to ensure they were identical.

> The current gentoo installation was done with "hardened" profile, not
> touched for years, no docs .... so it somehow seems way too much hassle
> to get it up to date again.

I migrated a few "hardened" profile installations to non-hardened, but it
required preparing binary packages on a VM and reinstalling the whole lot with
a lot of effort (empty /var/lib/portage/world, run emerge --depclean, rebuild
@system with --empty, then re-populate /var/lib/portage/world and let that be
installed using the before-mentioned binaries).
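
As a sketch, that sequence could look like the following on the target box,
assuming the non-hardened binary packages are already available in PKGDIR
(the profile name and paths are assumptions):

```shell
eselect profile set default/linux/amd64/17.1   # switch to a non-hardened profile
cp /var/lib/portage/world /root/world.backup   # keep a copy of the world file
: > /var/lib/portage/world                     # empty the world file
emerge --depclean                              # strip everything outside @system
emerge --emptytree --usepkgonly @system        # rebuild @system from the binaries
cp /root/world.backup /var/lib/portage/world   # re-populate the world file
emerge --usepkgonly --update --deep @world     # reinstall @world from binaries
```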

A clean install is quicker and simpler.

> Additionally no experts on site there, so it
> should be low maintenance anyway.

A binary distro would be a better choice then. How far is this from your
location?

--
Joost