Running Phenomenology Codes on OSG
The SLAC Theory Group
has started working with OSG User
Support to try running some phenomenology codes on OSG
as a proof-of-principle.
The theory group works with experiments within SLAC,
including Super-B, and collaborates more broadly as well.
We will initially be running a pair of applications
that do multiparticle QCD calculations.
In actual use they will produce about 2 to 3 GB of data
stored in a ROOT ntuple, and take about 8 to 12 hours to run.
They are independent Monte Carlo jobs whose output
files will record the random number seed, so no special
parallelization is needed. If some jobs fail, we don't
have to resubmit them.
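Because each output file records its random number seed, any individual job can be reproduced without coordinating with the others. A minimal sketch of deriving a reproducible per-job seed from the Condor process number (the base seed and the file naming here are illustrative assumptions, not part of the production setup):

```shell
# The submit file can pass the process number in via "arguments = $(Process)".
BASE_SEED=12345

seed_for_process() {
    # unique, reproducible seed for job number $1
    echo $((BASE_SEED + $1))
}

# e.g. job number 7 records its seed alongside its output:
seed_for_process 7 > job_7_seed.txt
```

Since the seed is a pure function of the process number, rerunning job 7 later reproduces the same Monte Carlo stream.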
There will be very roughly 500 jobs submitted at
one time for the proof-of-principle phase.
The executables will also be 2 or 3 GB, and should
be prestaged at the sites. The executable
shouldn't change often.
For testing, the output data will be more like
160 KB, and the executable and input data are in a
single tar file.
Ideally we should be able to do a single
exploration ("pilot") run to figure out how many
jobs will be needed, and then submit all the jobs.
The variable number of jobs may be difficult to
handle with Condor DAGMan.
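One way around a fixed-size DAG is to let the pilot run decide the job count and then generate the real submit file from it. A sketch under assumed names (the wrapper script and submit file names are hypothetical; 500 is the proof-of-principle estimate that the pilot would replace):

```shell
# NJOBS would be parsed from the pilot run's output;
# hard-coded here for illustration.
NJOBS=500

cat > pheno.submit <<EOF
executable = test_application_wrapper.sh
arguments  = \$(Process)
queue $NJOBS
EOF
```

A single `condor_submit pheno.submit` then queues exactly the number of jobs the pilot called for.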
We would direct jobs to sites that have a hadoop
file system available. Each job can copy its output
data to its site's file system, usually via a POSIX
interface, before exiting. Later, the user can
retrieve and delete the data from the site using SRM client commands.
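For example, retrieval and cleanup with the standard SRM client tools might look like this (the endpoint host and storage path are placeholders for the site's actual SRM service):

```shell
# copy one job's output from the site's storage element to the submit host
srmcp "srm://se.example.org:8443/srm/v2/server?SFN=/pheno/tp_outputA_863567_9.txt" \
      "file:///home/pheno/tp_outputA_863567_9.txt"

# remove it from the site once the copy is verified
srmrm "srm://se.example.org:8443/srm/v2/server?SFN=/pheno/tp_outputA_863567_9.txt"
```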
The OSG Discovery Tools indicate which sites have storage elements and (at least nominally) how much room
is available. As an example:
get_srm_storage_element_id --vo engage --free_online_size 100000 --show_site_name
The User Support team wrote custom scripts to implement a workflow
that uses SRM to store output data. It was mostly serviceable but
had limited functionality and could lose track of data.
The User Support group has since implemented a workflow that
uses iRODS to move the application to $OSG_APP and to store the output data
from the application. This replaces the "basic" SRM handling
mentioned above. We are also trying to run this workflow
on sites that support multi-core jobs.
Here's a draft of the instructions:
Running Multi-Core Jobs Using iRODS for Data Handling
0. Setup the environment:
voms-proxy-init -valid 48:00 -voms osg:/osg/Pheno
It's important that the proxy not expire before the
jobs are done; otherwise iRODS won't be able to save
the output data.
1. Create a tar file with the application, and move it
to the iRODS store at each site:
tar cfh test_application.tar test_application
iput -R osgAppGridFtpGroup test_application.tar
irepl-osg -f /osg/home/pheno/test_application.tar -G osgAppGridFtpGroup
for site in $(ilsresc-osg -G osgAppGridFtpGroup | perl -ne '/Group: osg Resource: (\S+)/ and print "$1 "'); do
    ibun-osg -f test_application.tar -R $site -G osgAppGridFtpGroup
done
When this step is done, you can use
ils -l /osg/home/pheno
to see the untarred files on the submit host. From
the jobs, the files are accessible using normal file
system operations under $OSG_APP.
Use -f with iput to overwrite a file that is already there.
You can use -a with ibun-osg to do the operations
asynchronously, with email notification.
The tar file shouldn't be compressed.
Files do not get uploaded directly to Tusker. iRODS
puts them at GLOW, and then relies on cvmfs to move
them to Tusker, which should take about an hour.
2. Create the submit file. It should have lines like
x509userproxy = /tmp/x509up_uZZZZ
+SiteList = "GLOW,UCSDT2,Firefly,Tusker,Nebraska,prariefire"
Requirements = (stringListMember(GLIDEIN_ResourceName,SiteList) == True)
+RequiresWholeMachine = True
+ProjectName = "Pheno"
The ZZZZ in the x509userproxy is your Unix user id,
available with the 'id' command.
The listed sites are all set up for iRODS and
should work for multicore jobs, although so far our
fully-featured test jobs have only run at Tusker.
We also have an example of a run available.
We are in the process of testing a
DESIRED_HTPC_Resources attribute. It's similar
to SiteList above, but should direct glideins
to only the listed sites.
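Putting the required lines together with standard Condor boilerplate, a complete submit file might look like the following (the executable, argument, and log names are illustrative; only the site, proxy, whole-machine, and project lines come from the text above):

```
universe              = vanilla
executable            = test_application_wrapper.sh
arguments             = $(Process)
output                = job_$(Process).out
error                 = job_$(Process).err
log                   = pheno.log
x509userproxy         = /tmp/x509up_uZZZZ
+SiteList             = "GLOW,UCSDT2,Firefly,Tusker,Nebraska,prariefire"
Requirements          = (stringListMember(GLIDEIN_ResourceName,SiteList) == True)
+RequiresWholeMachine = True
+ProjectName          = "Pheno"
queue 500
```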
3. Create a wrapper that will run the job. The condor
jobs should no longer use "transfer_output_files"
for anything big. The necessary lines for iRODS on
the worker node could look something like this:
$IRODS_PLUGIN_DIR/icp $OUTPUT irodse://firstname.lastname@example.org:1247?/osg/home/pheno/$OUTPUT
if [ $? -eq 0 ]; then
    echo "`date` icp success"
    rm -f $OUTPUT
else
    echo "`date` icp failure"
fi
This "if statement" doesn't delete the output file
if the icp fails, which allows condor to bring
that file back.
It might be best for the worker node to make a
single tar file to hold all the files to put into iRODS.
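The single-tar approach might be sketched like this (the directory and file names are placeholders; the real wrapper would tar the job's actual outputs and then hand the one tar file to icp):

```shell
# gather the job's outputs in one place; the demo files stand in
# for the real ROOT ntuples and logs
mkdir -p outdir
echo demo1 > outdir/tp_outputA.txt
echo demo2 > outdir/tp_outputB.txt

# one tar file means one icp transfer and one iRODS object per job
tar cf job_output.tar -C outdir .
```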
4. Submit the jobs
and wait for them to run and finish.
5. Look at and possibly download the output and remove it:
ils -l /osg/home/pheno
irm -f /osg/home/pheno/tp_outputA_863567_9.txt
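To actually download a file to the current directory before removing it, the standard iRODS get command can be used, e.g.:

```shell
iget /osg/home/pheno/tp_outputA_863567_9.txt
```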
We would like to try running MPI jobs. Ideally there would be
about 128 jobs of 32 cores that run for 23 hours each, or
approximately 100K core-hours. It would also be acceptable to have 500
jobs of 8 cores for 23 hours each. It may be acceptable to spread
the runs out over about 3 days.
-- OSG User Support