That's cool!

I'll test it on my cluster =)
It uses InfiniBand and MVAPICH2, but some people want to use ordinary
Ethernet as the interconnect, so this will be very useful =)

2008/2/26, Justin Bronder <jsbronder@g.o>:
> I've been spending the majority of my Gentoo-related time working on a
> solution to bug 44132 [1], basically, trying to find a way to gracefully
> handle multiple installs of various MPI implementations at the same time in
> Gentoo. There's more information about the solution in my devspace [2], but
> a quick summary is that there is a new package (empi) that is much like
> crossdev, a new eselect module for empi, and a new eclass that handles both
> MPI implementations and packages depending on MPI.
>
> So, I think I have pushed this work far enough along for it to actually be
> somewhat officially offered. My question, then, is where should this be
> located? There are several MPI packages in the science overlay already, so
> should I push this work there, or would it be more appropriate to make a
> new overlay specifically for hp-cluster?
>
> Future work related to this project will be getting all MPI implementations
> and dependent packages converted in the same overlay before bringing it up on
> -dev for discussion about inclusion into the main tree.
>
> I have no real preference either way, but the science team does already have
> an overlay :) Let me know what you think.
>
> [1] https://bugs.gentoo.org/show_bug.cgi?id=44132
> [2] http://dev.gentoo.org/~jsbronder/README.empi.txt
>
> --
>
> Justin Bronder
>
>
|

--
Gentoo GNU/Linux 2.6.23 Dual Xeon

Mail to
alexxyum@×××××.com
alexxy@××××××.ru
--
gentoo-science@l.g.o mailing list