Testing the OSG Client

Install the Test Machine

The OSG RPMs currently support Red Hat Enterprise Linux 5 and its variants (Scientific Linux 5 and CentOS 5). Most of our testing has been done on Scientific Linux 5, so your mileage may vary on other variants.

OSG RPMs are distributed via the OSG yum repositories. Some packages depend on packages distributed via the EPEL repository, so both repositories must be enabled.

  1. If not already present, install EPEL:
    [root@client ~]$ wget http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
    [root@client ~]$ rpm -i epel-release-5-4.noarch.rpm 
  2. Install the yum-priorities plugin:
    [root@client ~]$ yum install yum-priorities
  3. Install the OSG repositories:
    [root@client ~]$ rpm -Uvh http://repo.grid.iu.edu/osg-release-latest.rpm
    WARNING: if you have your own mirror or configuration of the EPEL repository, you MUST verify that the OSG repository has a better (numerically lower) yum priority than EPEL. Otherwise, you will run into strange dependency-resolution problems; one way to set this is sketched below.
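    For example (a sketch only; the repository file names, section names, and priority values below are typical but may differ on your system), make sure the priorities plugin is enabled in /etc/yum/pluginconf.d/priorities.conf (enabled = 1), then add a priority line to each repository definition so that the OSG repository has a lower (higher-precedence) value than EPEL:
    # /etc/yum.repos.d/osg.repo, in the [osg] section
    priority=98
    # /etc/yum.repos.d/epel.repo, in the [epel] section
    priority=99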

Install the OSG-Client

[root@client ~]$ yum install --enablerepo=osg-testing --nogpgcheck osg-client 
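
To sanity-check the install, you can confirm the metapackage is present and that a few of the commands used below are on the PATH (a quick check only; the exact contents of osg-client vary between releases):

[root@client ~]$ rpm -q osg-client
[root@client ~]$ which grid-proxy-init voms-proxy-init globus-url-copy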

Basic Commands

For most of these commands, you will need a valid user certificate. Place the certificate and private key in ~/.globus on the machine.
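
By default the Globus proxy tools look for ~/.globus/usercert.pem and ~/.globus/userkey.pem, and they will refuse a private key that other users can read, so keep the permissions restrictive, for example:

[user@client ~]$ chmod 644 ~/.globus/usercert.pem
[user@client ~]$ chmod 600 ~/.globus/userkey.pem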

Test Proxy Initialization

[user@client ~]$ grid-proxy-init 
Your identity: /DC=org/DC=doegrids/OU=People/CN=Derek Weitzel 285345
Enter GRID pass phrase for this identity:
Creating proxy ......................................................... Done
Your proxy is valid until: Tue Jul 26 05:39:46 2011

[user@client ~]$ voms-proxy-init -voms hcc
Enter GRID pass phrase:
Your identity: /DC=org/DC=doegrids/OU=People/CN=Derek Weitzel 285345
Creating temporary proxy ......................... Done
Contacting  hcc-voms.unl.edu:15000 [/DC=org/DC=doegrids/OU=Services/CN=http/hcc-voms.unl.edu] "hcc" Done
Creating proxy ........................................... Done

Your proxy is valid until Tue Jul 26 23:49:24 2011

[user@client ~]$ voms-proxy-info -all
subject   : /DC=org/DC=doegrids/OU=People/CN=Derek Weitzel 285345/CN=proxy
issuer    : /DC=org/DC=doegrids/OU=People/CN=Derek Weitzel 285345
identity  : /DC=org/DC=doegrids/OU=People/CN=Derek Weitzel 285345
type      : proxy
strength  : 1024 bits
path      : /tmp/x509up_u500
timeleft  : 11:55:36
key usage : Digital Signature, Key Encipherment
=== VO hcc extension information ===
VO        : hcc
subject   : /DC=org/DC=doegrids/OU=People/CN=Derek Weitzel 285345
issuer    : /DC=org/DC=doegrids/OU=Services/CN=http/hcc-voms.unl.edu
attribute : /hcc/Role=NULL/Capability=NULL
attribute : /hcc/testgroup/Role=NULL/Capability=NULL
timeleft  : 11:55:36
uri       : hcc-voms.unl.edu:15000

Test Simple Job Submission

[user@client ~]$ globus-job-run pf-grid.unl.edu/jobmanager-fork /bin/sh -c "id" 
uid=1761(hcc) gid=4001(grid) groups=4001(grid)
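
If the job fails, it can help to test just the GRAM authentication step on its own (this reuses the example gatekeeper above; substitute your own resource). A successful test should report something like:

[user@client ~]$ globusrun -a -r pf-grid.unl.edu/jobmanager-fork
GRAM Authentication test successful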

NOTE: some of this testing will not work if your client is on a home network behind NAT; you will likely get Globus error 74 or 43. Dropping iptables helps somewhat, but not completely, unless you also configure a GLOBUS_TCP_SOURCE_RANGE and forward the corresponding ports on your home router.
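
One rough workaround, assuming your router can forward a port range to the client: pin the ports Globus uses to a fixed range with the standard Globus environment variables, then forward that same range on the router. The 40000,41000 range below is arbitrary:

[user@client ~]$ export GLOBUS_TCP_PORT_RANGE=40000,41000
[user@client ~]$ export GLOBUS_TCP_SOURCE_RANGE=40000,41000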

Testing the Condor Install

  1. Start Condor
    [root@client ~]$ /sbin/service condor start 
  2. Create a job
    [user@client ~]$ cat << JOB > submit.job
    universe = grid
    grid_resource = gt2 pf-grid.unl.edu/jobmanager-condor
    executable = /bin/hostname
    log = log
    output = output
    error = error
    queue
    JOB
    
  3. Submit job
    [user@client ~]$ condor_submit submit.job 
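  4. Check the job status and output
    Once the job leaves the queue, the output file named in the submit description should contain the worker node's hostname (a quick verification, assuming the submit file above):
    [user@client ~]$ condor_q
    [user@client ~]$ cat output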

File Transfer

UberFTP

[user@client ~]$ uberftp fndca1.fnal.gov ls
220 GSI FTP door ready
200 User fnalgrid logged in
drwx------  1 fnalgrid   fnalgrid            512 Jan 26  2010 some_dir
[user@client ~]$ uberftp fndca1.fnal.gov "ls some_dir"
220 GSI FTP door ready
200 User fnalgrid logged in
-r--------  1 fnalgrid   fnalgrid             15 Jan 26  2010 some_file
[user@client ~]$ uberftp fndca1.fnal.gov "cat some_dir/some_file"
220 GSI FTP door ready
200 User fnalgrid logged in
Some text

Globus GridFTP

[user@client ~]$ globus-url-copy gsiftp://pf-grid.unl.edu/etc/hosts ./
[user@client ~]$ cat hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1		localhost.localdomain localhost
::1		localhost6.localdomain6 localhost6
129.93.229.226		pf-grid
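
globus-url-copy works in the other direction as well; for example, pushing a local file to the remote server (the destination path /tmp/hosts-copy is only an illustration and assumes you can write there):

[user@client ~]$ globus-url-copy file:///etc/hosts gsiftp://pf-grid.unl.edu/tmp/hosts-copy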

LCG-Utils

  1. ls
    [user@client ~]$ lcg-ls -D srmv2 -b srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/
    /mnt/hadoop/Trash
    /mnt/hadoop/backups
    /mnt/hadoop/chukwa
    /mnt/hadoop/dropfiles
    /mnt/hadoop/hello_world
    /mnt/hadoop/images
    /mnt/hadoop/logs
    /mnt/hadoop/lost+found
    /mnt/hadoop/public
    /mnt/hadoop/scratch
    /mnt/hadoop/system
    /mnt/hadoop/tmp
    /mnt/hadoop/user
    
  2. Transfer
    [user@client ~]$ lcg-cp -D srmv2 -b srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/dropfiles/hello_world file://`pwd`/hello_world 
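  3. Verify
    The transferred file should contain the same text as in the SRM-Fermi example below:
    [user@client ~]$ cat hello_world
    hello world!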

SRM-Fermi

  1. ls
    [user@client ~]$ srmls srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/
      /mnt/hadoop//
          /mnt/hadoop/Trash/
          /mnt/hadoop/backups/
          /mnt/hadoop/chukwa/
          /mnt/hadoop/dropfiles/
          13 /mnt/hadoop/hello_world
          /mnt/hadoop/images/
          /mnt/hadoop/logs/
          /mnt/hadoop/lost+found/
          /mnt/hadoop/public/
          /mnt/hadoop/scratch/
          /mnt/hadoop/system/
          /mnt/hadoop/tmp/
          /mnt/hadoop/user/ 
  2. Transfer
    [user@client ~]$ srmcp srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/dropfiles/hello_world file:///`pwd`/hello_world
    [user@client ~]$ cat hello_world 
    hello world! 

SRM LBNL

  1. ls (really long output)
    [user@client ~]$ srm-ls srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/ 
srm-ls   2.2.2.1.0   Mon Jul 11 10:31:14 PDT 2011
BeStMan and SRM-Clients Copyright(c) 2007-2011,
Lawrence Berkeley National Laboratory. All rights reserved.
Support at SRM@LBL.GOV and documents at http://sdm.lbl.gov/bestman

 
BUILT  Fri Jul 15 13:24:22 EDT 2011  on  glidein.unl.edu
SRM-CLIENT: Connecting to serviceurl httpg://red-srm1.unl.edu:8443/srm/v2/server

SRM-DIR: Mon Jul 25 18:59:23 CDT 2011 Calling srmLsRequest

SRM-DIR: ..........................
	Status    : SRM_SUCCESS
	Explanation : null
	Request token=null

	SURL=/mnt/hadoop/
	Bytes=null
	FileType=DIRECTORY
	StorageType=null
	Status=SRM_SUCCESS
	Explanation=Read from disk..
	FileLocality=ONLINE

	   SURL=/mnt/hadoop/Trash
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/backups
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/chukwa
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/dropfiles
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/hello_world
	   Bytes=13
	   FileType=FILE
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/images
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/logs
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/lost+found
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/public
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/scratch
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/system
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/tmp
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

	   SURL=/mnt/hadoop/user
	   Bytes=null
	   FileType=DIRECTORY
	   StorageType=null
	   Status=SRM_SUCCESS
	   Explanation=Read from disk..

SRM-DIR: Printing text report now ...
SRM-CLIENT*REQUEST_STATUS=SRM_SUCCESS
SRM-CLIENT*SURL=/mnt/hadoop/
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*FILELOCALITY=ONLINE
SRM-CLIENT*SURL=/mnt/hadoop/Trash
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/backups
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/chukwa
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/dropfiles
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/hello_world
SRM-CLIENT*BYTES=13
SRM-CLIENT*FILETYPE=FILE
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/images
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/logs
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/lost+found
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/public
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/scratch
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/system
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/tmp
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..
SRM-CLIENT*SURL=/mnt/hadoop/user
SRM-CLIENT*FILETYPE=DIRECTORY
SRM-CLIENT*FILE_STATUS=SRM_SUCCESS
SRM-CLIENT*FILE_EXPLANATION=Read from disk..

  2. Transfer
    [user@client ~]$ srm-copy srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/dropfiles/hello_world file:///`pwd`/hello_world 
srm-copy   2.2.2.1.0   Mon Jul 11 10:31:14 PDT 2011
BeStMan and SRM-Clients Copyright(c) 2007-2011,
Lawrence Berkeley National Laboratory. All rights reserved.
Support at SRM@LBL.GOV and documents at http://sdm.lbl.gov/bestman

 
BUILT  Fri Jul 15 13:24:22 EDT 2011  on  glidein.unl.edu
SRM-CLIENT: Tue Jul 26 12:01:28 CDT 2011 Connecting to httpg://red-srm1.unl.edu:8443/srm/v2/server

SRM-CLIENT: Tue Jul 26 12:01:29 CDT 2011 Calling SrmPrepareToGet Request now ...
request.token= get:1034871

Request.status=SRM_SUCCESS
Request.explanation=null

SRM-CLIENT: RequestFileStatus for SURL=srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/dropfiles/hello_world is Ready.
SRM-CLIENT: received TURL=gsiftp://red-gridftp6.unl.edu:2811//mnt/hadoop/dropfiles/hello_world

SRM-CLIENT: Tue Jul 26 12:01:33 CDT 2011 start file transfer
SRM-CLIENT:Source=gsiftp://red-gridftp6.unl.edu:2811//mnt/hadoop/dropfiles/hello_world
SRM-CLIENT:Target=file:////home/dweitzel/hello_world

SRM-CLIENT: Tue Jul 26 12:01:36 CDT 2011 end file transfer for srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/dropfiles/hello_world

SRM-CLIENT: Tue Jul 26 12:01:36 CDT 2011 Calling releaseFile

SRM-CLIENT:  ...Calling srmReleaseFiles...
	status=SRM_SUCCESS
	explanation=null
	status=SRM_SUCCESS
	explanation=null

SRM-CLIENT: Request completed with success

SRM-CLIENT: Printing text report now ...

SRM-CLIENT*REQUESTTYPE=get
SRM-CLIENT*TOTALFILES=1
SRM-CLIENT*TOTAL_SUCCESS=1
SRM-CLIENT*TOTAL_FAILED=0
SRM-CLIENT*REQUEST_TOKEN=get:1034871
SRM-CLIENT*REQUEST_STATUS=SRM_SUCCESS
SRM-CLIENT*SOURCEURL[0]=srm://red-srm1.unl.edu:8443/srm/v2/server?SFN=/mnt/hadoop/dropfiles/hello_world
SRM-CLIENT*TARGETURL[0]=file:////home/dweitzel/hello_world
SRM-CLIENT*TRANSFERURL[0]=gsiftp://red-gridftp6.unl.edu:2811//mnt/hadoop/dropfiles/hello_world
SRM-CLIENT*ACTUALSIZE[0]=13
SRM-CLIENT*FILE_STATUS[0]=SRM_FILE_PINNED

OSG Discovery Tools

[user@client ~]$ get_os_versions --vo Engage --match "ScientificSLF Lederman"

Site Name           Compute Element ID                                          OS Version                    
FNAL_FERMIGRID      fermigridosg1.fnal.gov:2119/jobmanager-condor-default       ScientificSLF Lederman 5.5    
FNAL_GPGRID_1       fnpcosg1.fnal.gov:2119/jobmanager-condor-default            ScientificSLF Lederman 5.3    
FNAL_GPGRID_1       fnpcosg1.fnal.gov:2119/jobmanager-condor-group_engage       ScientificSLF Lederman 5.3    
Found OS ScientificSLF Lederman 5.5 for Engage VO at site FNAL_FERMIGRID 
[user@client ~]$ get_surl --vo Engage --show_site_name

SITE NAME                     SURL                                                                                                
UCSDT2                        srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/engage/TESTFILE                             
UCR-HEP                       srm://charm.ucr.edu:10443/srm/v2/server?SFN=/data/bottom/cms/TESTFILE             
CIT_CMS_T2                    srm://cit-se.ultralight.org:8443/srm/v2/server?SFN=/mnt/hadoop/osg/engage/TESTFILE                  
GLOW                          srm://cmssrm.hep.wisc.edu:8443/srm/v2/server?SFN=/osg/vo/engage/TESTFILE                            
..
-- DerekWeitzel - 25 Jul 2011