The contents of this TWiki page are outdated and are kept for historical purposes. Please exercise caution before starting a new project based on this information; consider instead whether High-Throughput Parallel Computing might work for you.
Picking A Site
The first step is to pick an OSG site that supports your VO and has the MPI implementation you need installed. The easiest way to do this is to decide on an MPI implementation (MPICH, MPICH2, OpenMPI, etc.) and perform an LDAP query against the OSG information system. For example, to see all of the MPICH versions installed on Purdue's Steele cluster, the following command can be used:
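A query along these lines should work; note that the BDII hostname (is.grid.iu.edu) and the exact search filter are assumptions based on the OSG information system of that era, and this infrastructure has since been decommissioned:

```shell
# Query the OSG BDII for GlueSoftware entries advertising MPICH on Steele.
# The attribute names come from the Glue 1.3 schema; the host and the
# SubCluster filter below are assumptions -- adjust for your target site.
ldapsearch -x -LLL -h is.grid.iu.edu -p 2170 -b mds-vo-name=local,o=grid \
    '(&(objectClass=GlueSoftware)(GlueSoftwareName=MPICH*))' \
    GlueSoftwareName GlueSoftwareVersion GlueSoftwareEnvironmentSetup
```

Each matching entry lists the software name, its version, and the GlueSoftwareEnvironmentSetup command discussed below.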
The "GlueSoftwareEnvironmentSetup" field shows the command to use to load the MPI version into your environment. This is important for both compiling and running the application.
Compiling MPI Jobs
Compiling the application can be done in one of two ways. If the site allows local logins, you can log in directly to the cluster, run the command to load MPI into your environment, and compile your application. This is essentially the same workflow you would use to build and run the application on your own machine.
Since most sites don't allow local logins, however, the other way to compile your application is to submit a batch job that compiles it for you. This can be accomplished by writing a short script that sources the module setup and compiles the application. The following example script compiles the cpi program.
A Sample Compile Script
#!/bin/sh
# Right now on Purdue's Steele cluster, the modules program is not in the user's
# path when a job is run. In order to ensure the module command works, we need
# to source the module setup script (the exact path may differ per site). For
# sites using softenv, sourcing /etc/profile.d/softenv.sh should work instead.
source /etc/profile.d/modules.sh

# This is where the command from GlueSoftwareEnvironmentSetup goes
module load mpich-gcc

mpicc -o cpi cpi.c
A Sample Submit Script
Once you have a script that compiles your application, you can submit it as a Condor-G job. For example, the following submit script will run the above compile script on Purdue's Steele cluster and return the resulting executable:
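A submit file along these lines should work; the gatekeeper hostname (osg.rcac.purdue.edu) and the jobmanager name are assumptions from the Globus GRAM era, so substitute the values advertised by your chosen site:

```text
# Hypothetical Condor-G submit file for the compile job. The grid_resource
# line is an assumption -- replace the gatekeeper and jobmanager with those
# of your target site.
universe                = grid
grid_resource           = gt2 osg.rcac.purdue.edu/jobmanager-pbs
executable              = compile.sh
transfer_executable     = true
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = cpi.c
transfer_output_files   = cpi
output                  = compile.out
error                   = compile.err
log                     = compile.log
queue
```

When the job completes, the compiled cpi binary is transferred back to the submit directory, ready to be used as the executable for a subsequent MPI run on the same site.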