# EPEL 5 (For RHEL 5, CentOS 5, and SL 5)
[root@dCache-admin ~]$ curl -O https://dl.fedoraproject.org/pub/epel/epel-release-latest-5.noarch.rpm
[root@dCache-admin ~]$ rpm -Uvh epel-release-latest-5.noarch.rpm

# EPEL 6 (For RHEL 6, CentOS 6, and SL 6)
[root@dCache-admin ~]$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

# EPEL 7 (For RHEL 7, CentOS 7, and SL 7)
[root@dCache-admin ~]$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

WARNING: If you have your own mirror or configuration of the EPEL repository, you MUST verify that the OSG repository has a better yum priority than EPEL (details). Otherwise, you will have strange dependency resolution (depsolving) issues.
Choose the correct package name based on your operating system's major version:
Install the Yum priorities package:
[root@dCache-admin ~]$ yum install PACKAGE
replacing PACKAGE with the package name from the previous step.
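For illustration, a minimal sketch assuming the usual EPEL package names (yum-priorities on EL5, yum-plugin-priorities on EL6 and EL7; verify the name against your repository before running):

# EL5 (assumed package name)
[root@dCache-admin ~]$ yum install yum-priorities
# EL6 and EL7 (assumed package name)
[root@dCache-admin ~]$ yum install yum-plugin-priorities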
Ensure that /etc/yum.conf has the following line in the [main] section (particularly when using ROCKS), thereby enabling Yum plugins, including the priorities one:
plugins=1

NOTE: If you do not have a required key, you can force the installation using yum install --nogpgcheck yum-priorities.
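As a quick sanity check (a sketch; adjust if your site keeps the setting elsewhere), you can confirm that plugins are enabled:

[root@dCache-admin ~]$ grep '^plugins' /etc/yum.conf
plugins=1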
If you are upgrading from OSG 3.1 (or 3.2) to OSG 3.2 (or 3.3), remove the old OSG repository definition files and clean the Yum cache:
[root@dCache-admin ~]$ yum clean all
[root@dCache-admin ~]$ rpm -e osg-release
This step ensures that local changes to *.repo files will not block the installation of the new OSG repositories. After this step, *.repo files that have been changed will exist in /etc/yum.repos.d/ with the *.rpmsave extension. After installing the new OSG repositories (the next step), you may want to apply any changes made in the *.rpmsave files to the new *.repo files.
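For example, a minimal sketch for locating the saved files and reviewing your local changes (the osg.repo file name is only an assumption for illustration):

[root@dCache-admin ~]$ ls /etc/yum.repos.d/*.rpmsave
[root@dCache-admin ~]$ diff /etc/yum.repos.d/osg.repo.rpmsave /etc/yum.repos.d/osg.repo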
Install the OSG repositories using one of the following methods depending on your EL version:
For EL versions greater than EL5, install the repository file directly from repo.grid.iu.edu:
[root@dCache-admin ~]$ rpm -Uvh URL
where URL is one of the following:
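For example (a sketch only; the exact URL is an assumption, patterned after the EL5 package shown below), an EL6 installation might look like:

[root@dCache-admin ~]$ rpm -Uvh https://repo.grid.iu.edu/osg/3.2/osg-3.2-el6-release-latest.rpm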
For EL5, download the repo file and install it using the following:
[root@dCache-admin ~]$ curl -O https://repo.grid.iu.edu/osg/3.2/osg-3.2-el5-release-latest.rpm
[root@dCache-admin ~]$ rpm -Uvh osg-3.2-el5-release-latest.rpm
Use one of the yum commands below to select this host's CA certificates.
The empty-ca-certs RPM indicates that you will be manually installing the CA certificates on the node.
The osg-ca-scripts RPM provides a cron script that automatically downloads CA updates; it requires further configuration.
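For example, to use the automatically updating scripts (a sketch; installing empty-ca-certs instead follows the same pattern and may require additional repository options):

[root@dCache-admin ~]$ yum install osg-ca-scripts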
Install the dCache Gratia probe package:

[root@dCache-admin ~]$ yum install dcache-gratia-probe
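To confirm the package was pulled in, a quick check (a sketch, not an exhaustive verification):

[root@dCache-admin ~]$ rpm -q dcache-gratia-probe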
CollectorHost="gratia-osg-itb.opensciencegrid.org:80"
SSLHost="gratia-osg-itb.opensciencegrid.org:443"
SSLRegistrationHost="gratia-osg-itb.opensciencegrid.org:80"
UserVOMapFile="/var/lib/osg/user-vo-map"
SiteName="YOUR SITE NAME"
Grid="OSG-ITB"
EnableProbe="1"
DBHostName="ADMIN NODE"

If you are installing on the dCache admin node, you do not need to change DBHostName (localhost is the default). To configure the dCache storage probe, you will need to modify the configuration file:
CollectorHost="gratia-osg-itb.opensciencegrid.org:80"
SSLHost="gratia-osg-itb.opensciencegrid.org:443"
SSLRegistrationHost="gratia-osg-itb.opensciencegrid.org:80"
UserVOMapFile="/var/lib/osg/user-vo-map"
SiteName="YOUR SITE NAME"
Grid="OSG-ITB"
EnableProbe="1"
InfoProviderUrl="http://ADMIN NODE:2288/info"

In the above configuration, please use ITB for testing. You will also need to configure
/etc/gums/gums-client.properties in order to accurately collect grid resource usage and metrics by VO for transfers submitted using grid proxies or where VOMS proxy information is not available:
gums.location=https://GUMS_HOST:8443/gums/services/GUMSAdmin
gums.authz=https://GUMS_HOST:8443/gums/services/GUMSXACMLAuthorizationServicePort

If you are not using the default port (8443), you have to change it as well.
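To double-check the values after editing, a simple sketch (it only prints the two settings shown above so you can confirm the host and port):

[root@dCache-admin ~]$ grep -E '^gums\.(location|authz)' /etc/gums/gums-client.properties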
[root@dCache-admin ~]$ service gratia-dcache-transfer start
Starting gratia-dcache-transfer:                           [  OK  ]

The Gratia storage probe for dCache is a cron job. The cron file is located at /etc/cron.d/gratia-probe-dCache-storage.cron. If you want to test it immediately (instead of waiting for cron to run), run the storage probe manually as follows:
[root@dCache-admin ~]$ /usr/share/gratia/dCache-storage/dCache-storage_meter.cron.sh
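If the transfer probe service started above should also survive a reboot, a sketch assuming the init script is registered with chkconfig on EL5/EL6 (verify the service name on your node):

[root@dCache-admin ~]$ chkconfig gratia-dcache-transfer on
[root@dCache-admin ~]$ chkconfig --list gratia-dcache-transfer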
[root@dCache-admin ~]$ service gratia-dcache-transfer stop
Stopping gratia-dcache-transfer:                           [  OK  ]

To stop the Gratia storage probe for dCache, comment out the cron job in the file /etc/cron.d/gratia-probe-dCache-storage.cron:
#0 * * * * root "perl -e 'sleep rand 3600' && /usr/share/gratia/dCache-storage/dCache-storage_meter.cron.sh"
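Similarly, if you also want the transfer probe to stay off after a reboot (a sketch assuming the same SysV init setup as above):

[root@dCache-admin ~]$ chkconfig gratia-dcache-transfer off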
[root@dCache-admin ~]$ ps auxww | grep gratia-dcache-transfer
root 2680 2.3 0.2 174584 11080 ? S 14:32 0:19 /usr/bin/python /usr/share/gratia/dCache-transfer/gratia-dcache-transfer

Check the log files, located under /var/log/gratia.
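For example, a sketch for skimming recent log activity (the exact file names under /var/log/gratia are an assumption and may differ on your installation):

[root@dCache-admin ~]$ ls -lt /var/log/gratia/
[root@dCache-admin ~]$ tail -n 50 /var/log/gratia/*.log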
Verify that you can see the reports in the Gratia collector you specified in the configuration. Be aware that there may be some delay if the collector is under heavy load, so be patient. To access the information from the Gratia transfer probe for dCache, go to http://[gratia_host]:[gratia_port]/gratia-reporting/, click on "Custom SQL Query" in the left-side menu frame, and enter the following query into the provided text box:
select * from MasterTransferSummary where ProbeName like 'dCache-transfer:dcache_adminhost';

Click on "Execute Query" and you will see the total number of transfers per user. To access the information from the Gratia storage probe for dCache, go to http://[gratia_host]:[gratia_port]/gratia-reporting/, click on "Custom SQL Query" in the left-side menu frame, and enter the following query into the provided text box:
select * from StorageElement where ProbeName like 'dCache-storage:dcache_adminhost';

Click on "Execute Query" and you will see the storage information. To check the ITB Gratia collector, see ITB Gratia. To check the OSG Gratia collector, see OSG Gratia.