MPICH2 v1.2.1 has been installed into /usr/local/mpich2.
Note: The majority of the following information is also available in /usr/local/mpich2/README.
Please use MPICH2 only on machines you are authorized to use.
In order to use MPICH2 you will need to place /usr/local/mpich2/bin early in your path. tcsh users can do the following:
set path = ( /usr/local/mpich2/bin $path )
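If you use bash or another Bourne-style shell instead (an assumption; adjust for your own shell startup files), the equivalent would be:
export PATH=/usr/local/mpich2/bin:$PATH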
You can verify the commands are found with:
which mpd mpiexec mpirun
Next, .mpd.conf should be created in your home directory:
touch .mpd.conf && chmod 600 .mpd.conf
Once that is done you will need to create a password for MPD. This should be something random and should not be a password you use for anything else. Replace 'random-password' with your chosen password:
echo secretword=random-password >> .mpd.conf
To verify that all of the above steps worked, do the following:
mpd &
Wait a few seconds for MPD to start, then:
mpdtrace
mpdallexit
The output should be the hostname of the machine you are running on. Now create a file to store the list of hostnames to use; for example (tcsh):
touch mpd.hosts
foreach i ( node1 node2 node3 node4 node5 node6 node7 node8 )
echo $i >> mpd.hosts
end
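Under bash the same file could be produced with (a sketch using the same placeholder node names):
for i in node1 node2 node3 node4 node5 node6 node7 node8; do echo $i >> mpd.hosts; done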
For multiprocessor/multicore systems, append :<number-of-cores> to each hostname in mpd.hosts (for example, node1:4 for a four-core node).
Then you can start the daemons on <n> of the hosts (the local machine plus hosts taken from mpd.hosts) with:
mpdboot -n <n> &
For a compute cluster with a head node, run mpdboot from the head node and count it toward <n> (see the cluster-specific instructions below).
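For example, to bring up daemons on the machine you are logged into plus four of the hosts listed in mpd.hosts (the count of 5 is only illustrative):
mpdboot -n 5 &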
Make sure you kill off your mpd processes with mpdallexit when finished!!
On systems that have more than one processor (SMP systems), mpd and mpdboot take a --ncpus=n option to specify how many CPUs the system(s) have. This changes the order in which processes are started across the systems. See section 5.1.7 of the install guide for more info.
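For instance, to start a single local daemon on a machine with four cores (the core count is an assumption about your hardware):
mpd --ncpus=4 &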
To test the ring of mpds:
mpdtrace
Each host with an mpd running should be listed. The command mpdringtest can be used to test how long it takes a message to go around the ring. To test that the ring is working:
mpiexec -1 -n <number> hostname
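As a sketch, the following sends a message around the ring 100 times and then runs hostname on eight processes (both counts are illustrative):
mpdringtest 100
mpiexec -1 -n 8 hostname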
You will run your MPI code this way:
mpiexec -1 -n <number> mpi-program
The -1 option is so the job will not run on the head node. For more help with mpiexec:
mpiexec --help
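Putting it together, compiling and running a program might look like this (hello.c is a hypothetical source file; mpicc lives alongside mpiexec in /usr/local/mpich2/bin):
mpicc -o hello hello.c
mpiexec -1 -n 8 ./hello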
An ECE/CIS Research Linux compute cluster has been constructed using existing hardware. The cluster consists of four compute nodes and a head node. The four nodes are hoek{1-4}.eecis.udel.edu, with the head node being mpi1.eecis.udel.edu. To use the cluster, log onto the head node and launch jobs to run on the compute nodes.
Assuming you follow the generic instructions above, you should be fine. Here are some specifics on how things should be done for this cluster. For the mpd.hosts file, place the following four lines in it:
hoek1:8
hoek2:8
hoek3:8
hoek4:8
The 8 means that each node has 8 cores; the first 8 processes will go to the first host listed, the second 8 to the next host, and so on.
For mpdboot, run the following on mpi1.eecis.udel.edu:
mpdboot -n 5 --ncpus=8
Since there are four compute nodes with eight cores each, the argument to the -n option of mpiexec should not exceed 32. Be sure to run with the -1 option so jobs will not run on the head node.
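A complete session on the head node might therefore look like this (mpi-program is a placeholder for your own executable):
mpdboot -n 5 --ncpus=8
mpdtrace
mpiexec -1 -n 32 mpi-program
mpdallexit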