Hanni Ali wrote:
>
> I would be interested to hear anyone else's experiences with IPMI.
>
> Regards,
>
> Hanni

I've recently started using IPMI software (ipmitool, ipmievd) where I
can, mostly for monitoring disk arrays, temperatures, and other readings
that aren't available via lm_sensors, ACPI, SMART, etc. on some systems
(nearly all in my case). Support for the various chips is not great, but
it's nice to get an alert that the BIOS thinks a disk is dead before the
other one dies, or before someone finally arrives in the room to hear a
server beeping. That can be a pain to track down.
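
If you haven't tried them, the basic commands look something like this
(needs a BMC and the OpenIPMI kernel drivers loaded; sensor names vary
wildly between boards):

```
# Read current sensor values (temps, fans, voltages) from the local BMC
ipmitool sensor list

# Dump the System Event Log, where things like drive failures show up
ipmitool sel list

# Run ipmievd as a daemon so new SEL events get forwarded to syslog
ipmievd sel daemon
```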

As for hardware-level interaction, security and IP address usage are
often a problem. Ideally you could use the second NIC that's generally
available on most servers to set up a separate network for IPMI
management, but again that's not especially easy to do in my case. So
far my interactions have been limited to power cycling a machine from
home; I haven't gotten console access to work, so if a kernel upgrade
goes bad you might be SOL unless you can get that working. There are
probably fancy tricks you could do with grub, but I haven't gotten that
far into it.
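
For the remote side, the commands I've been using are roughly this (the
host, user, and password are placeholders):

```
# Power-cycle a machine via its BMC over the network
ipmitool -I lanplus -H bmc-host -U admin -P password chassis power cycle

# Serial Over LAN console, if the BMC and BIOS are configured for it
ipmitool -I lanplus -H bmc-host -U admin -P password sol activate
```

On the grub side, grub legacy's `serial` and `terminal serial console`
directives in grub.conf should get the boot menu onto the redirected
serial port, but I haven't tested that myself.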

I think network booting would probably be your best bet. In a cluster
setup, machines should only take on one of a couple of roles, and it
would be easier to manage a handful of images than individual machines.
Of course, you then also need to figure out redundancy for your network
image server.
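
As a sketch of what the boot server side might look like (ISC dhcpd
pointing PXE clients at a TFTP server; all addresses and filenames below
are made up):

```
# dhcpd.conf fragment for PXE booting
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.2;       # TFTP server holding the boot images
    filename "pxelinux.0";         # PXELINUX loader from syslinux
}
```

Running a second DHCP/TFTP box is one way to handle the redundancy
question, though keeping the two in sync becomes its own chore.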

VMware ESX w/ VMotion might also be a nice way to handle all of this,
but it doesn't totally get around the issue: a problem on one physical
server could take all of your virtual servers down, and you'd still need
IPMI or a serial console to reach that machine. It also doesn't help the
initial cost problem.

Curious to see what you go with.

Brian
--
gentoo-cluster@g.o mailing list