On 09/02/08 22:56 -0800, Donnie Berkholz wrote:
...
> > sys-cluster/empi: Does the same stuff that crossdev does. You create a new
> > implementation root by specifying a name and mpi implementation package to
> > build it with. [2] Empi adds these to an overlay under a new category with
> > the name you gave. The ebuilds inherit mpi.eclass which handles pushing all
> > the files to /usr/lib/mpi/<name>, and providing files for eselect-mpi.
>
> lib == $(get_libdir) ?
>
|
Yup, the libdir is actually grabbed when the implementation is added to
eselect-mpi.
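
To illustrate the idea (this is only a sketch; the function name,
MPI_IMP_ROOT and the env.d path are made up for the example rather than
the real eclass or eselect-mpi layout), the libdir gets recorded at the
point where the implementation's environment data for eselect-mpi is
written out, roughly like so:

    # Hypothetical registration helper; MPI_IMP_ROOT would be something
    # like /usr/$(get_libdir)/mpi/<name>.
    _mpi_register_eselect() {
        local libdir=$(get_libdir)
        local envfile="${D}/etc/env.d/mpi/${PN}"
        dodir /etc/env.d/mpi
        echo "PATH=\"${MPI_IMP_ROOT}/usr/bin\"" > "${envfile}"
        echo "LD_LIBRARY_PATH=\"${MPI_IMP_ROOT}/usr/${libdir}\"" >> "${envfile}"
        echo "MANPATH=\"${MPI_IMP_ROOT}/usr/share/man\"" >> "${envfile}"
    }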
|
> > A couple of final words, hpl and mpi-examples currently wouldn't work without
> > empi, mainly because I'm lazy :) Also I still haven't figured out a good way
> > to handle do{man,doc,www} etcetera, ideas are welcome.
>
> Do the same thing as gcc. Install to a path under
> /usr/libdir/mpi/..../{man,doc} and use environment variables (MANPATH
> etc) and/or symlinks.
>
That's the general idea; however, it's not as simple as just using the
*into functions unless I mess with $D first. The plan is to have something
like mpi_doX, which would set the correct install path and restore the
global variables (like $D) on exit. I'm not sure if this is the best
method, so comments are appreciated :)
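
For example, something along these lines (the function name mpi_doman and
the variable MPI_IMP_ROOT below are just placeholders for the sketch, and
the exact destination layout is still up in the air):

    # Sketch only: temporarily point the install root into the
    # implementation's prefix (e.g. /usr/$(get_libdir)/mpi/<name>),
    # run the normal helper, then restore $D afterwards.
    mpi_doman() {
        local oldD="${D}"
        D="${D}${MPI_IMP_ROOT}"
        doman "$@"
        local ret=$?
        D="${oldD}"
        return ${ret}
    }

The other do* helpers (dodoc and friends) would get the same treatment.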
|
> > There's still a fair amount of work to be done, but I wanted to see if
> > anyone had any feedback regarding the general concept first before
> > pushing on.
> >
> > You can pull the overlay from rsync://cold-front.ath.cx/mpi (for now...)
>
> Is this in layman? The file's at
> gentoo/xml/htdocs/proj/en/overlays/layman-global.txt.
>
|
Not yet; it's being hosted off my home machine, which doesn't always have
the best connection. If a consensus can be reached that this is a worthy
solution, I presume I can talk to the overlay/infra guys then, or maybe
even the sci team, which already has an overlay with some MPI applications
in it.
|
|
Thanks, |
|
-- |
Justin Bronder |