Installing and Maintaining XRootD

About This Guide

XRootD is a hierarchical storage system that can be used in a variety of ways to access data, typically distributed among multiple storage resources. One way to use XRootD is to have it refer to many data resources at a single site; another is to have it refer to many storage systems, most likely distributed among sites. An XRootD system includes a redirector, which accepts requests for data and finds a storage repository, local or otherwise, that can provide the data to the requestor.

Use this page to learn how to install, configure, and use an XRootD redirector as part of a Storage Element (SE) or as part of a global namespace.

Before Starting

Before starting the installation process, consider the following points:

  • User IDs: If it does not exist already, the installation will create the Linux user ID xrootd
  • Service certificate: The XRootD service uses a host certificate at /etc/grid-security/host*.pem
  • Networking: The XRootD service uses port 1094 by default; for more information, see the firewall information page
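
For example, on an EL 7 host running firewalld, the default port could be opened as follows (a minimal sketch; adapt it to your site's firewall tooling and policies):

[root@server ~]# firewall-cmd --permanent --add-port=1094/tcp
[root@server ~]# firewall-cmd --reload

For a clustered setup (described below), the cmsd port used by all.manager (3121 in the examples below) must also be reachable between the cluster nodes.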

As with all OSG software installations, there are some one-time (per host) steps to prepare in advance, such as obtaining root access to the host and installing the OSG Yum repositories.

Installing an XRootD Server

An installation of the XRootD server consists of the server itself and its dependencies. Install these with Yum:

[root@server ~]# yum install xrootd

Configuring an XRootD Server

Minimal configuration

A new installation of XRootD is already configured to run a standalone server that serves files from /tmp on the local file system. This configuration is useful to verify basic connectivity between your clients and your server. To do this, start the xrootd service with standalone config as described in the managing services section.
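
For example (see the managing services section below for the details of service management on EL 6 versus EL 7):

On EL 6 (with the standalone config selected in /etc/sysconfig/xrootd):

[root@server ~]# service xrootd start

On EL 7:

[root@server ~]# systemctl start xrootd@standalone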

You should now be able to copy a file, such as /bin/sh, into /tmp using the xrdcp command. To test:

[root@server ~]# yum install xrootd-client
[root@server ~]# xrdcp /bin/sh root://localhost:1094//tmp/first_test
[xrootd] Total 0.76 MB  |====================| 100.00 % [inf MB/s]
[root@server ~]# ls -l /tmp/first_test
-rw-r--r-- 1 xrootd xrootd 801512 Apr 11 10:48 /tmp/first_test

Other than for testing, a standalone server is useful when you want to serve files from a single host with many large disks. If your storage capacity is spread out over multiple hosts, you will need to set up an XRootD cluster.

Advanced configuration

An advanced XRootD setup has multiple components; it is important to validate that each additional component that you set up is working before moving on to the next component. We have included validation instructions after each component below.

Creating an XRootD cluster


If your storage is spread out over multiple hosts, you will need to set up an XRootD cluster. The cluster uses one "redirector" node as a frontend for user accesses, and multiple data nodes that have the data that users request. Two daemons will run on each node:

xrootd
The eXtended Root Daemon controls file access and storage.
cmsd
The Cluster Management Services Daemon controls communication between nodes.

Note that for large virtual organizations, a site-level redirector may actually also communicate upwards to a regional or global redirector that handles access to a multi-level hierarchy. This section will only cover handling one level of XRootD hierarchy.

In the instructions below, RDRNODE will refer to the redirector host and DATANODE will refer to the data node host. These should be replaced with the fully-qualified domain name of the host in question.

Modify /etc/xrootd/xrootd-clustered.cfg

You will need to modify the xrootd-clustered.cfg on the redirector node and each data node. The following example should serve as a base configuration for clustering. Further customizations are detailed below.

all.export /tmp stage
set xrdr = RDRNODE
all.manager $(xrdr):3121

if $(xrdr)
  # Lines in this block are only executed on the redirector node
  all.role manager
else
  # Lines in this block are executed on all nodes but the redirector node
  all.role server
  cms.space min 2g 5g
fi

You will need to customize the following lines:

  • all.export /tmp stage: Change /tmp to the directory that XRootD should serve
  • set xrdr = RDRNODE: Change RDRNODE to the fully-qualified hostname of the redirector
  • cms.space min 2g 5g: Reserve this amount of free space on the node. In this example, if free space falls below 2 GB, xrootd will not store further files on this node until free space climbs above 5 GB. You can use k, m, g, or t to indicate kilobytes, megabytes, gigabytes, or terabytes, respectively.

Further information can be found at http://xrootd.slac.stanford.edu/doc

Verifying the clustered config

Start both xrootd and cmsd on all nodes according to the instructions in the managing services section.

Verify that you can copy a file such as /bin/sh to /tmp on the data server via the redirector:

[root@server ~]# xrdcp /bin/sh  root://RDRNODE:1094//tmp/second_test
[xrootd] Total 0.76 MB  |====================| 100.00 % [inf MB/s]

Check that the file /tmp/second_test is located on the data server DATANODE.
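
For example, log in to the data server and list the file (the xrootd daemon user will own it, as in the standalone test above):

[root@DATANODE ~]# ls -l /tmp/second_test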

(Optional) Adding Simple Server Inventory to your cluster

The Simple Server Inventory (SSI) provides a means to maintain an inventory for each data server (see details in the XRootD CMS config documentation). SSI requires:

  • A second instance of the xrootd daemon on the redirector
  • A "composite name space daemon" (XrdCnsd) on each data server; this daemon handles the inventory

As an example, we will set up a two-node XRootD cluster with SSI.

Host A is a redirector node that is running the following daemons:

  1. xrootd redirector
  2. cmsd
  3. xrootd - a second instance, required for SSI

Host B is a data server that is running the following daemons:

  1. xrootd data server
  2. cmsd
  3. XrdCnsd - started automatically by xrootd

We will need to create a directory on the redirector node for inventory files:

[root@server ~]# mkdir -p /data/inventory
[root@server ~]# chown xrootd:xrootd /data/inventory

On the data server (host B), create a storage area that is different from /tmp:

[root@server ~]# mkdir -p  /local/xrootd
[root@server ~]# chown xrootd:xrootd /local/xrootd

We will be running two instances of XRootD on hostA. Modify /etc/xrootd/xrootd-clustered.cfg to give the two instances different behavior, as follows:

all.export /data/xrootdfs
set xrdr=hostA
all.manager $(xrdr):3121
if $(xrdr) && named cns
      all.export /data/inventory
      xrd.port 1095
else if $(xrdr)
      all.role manager
      xrd.port 1094
else
      all.role server
      oss.localroot /local/xrootd
      ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory
      # add cms.space if you have less than 11GB
      # cms.space options http://xrootd.slac.stanford.edu/doc/dev/cms_config.htm
      cms.space min 2g 5g
fi

Starting a second instance of XRootD on EL 6

The procedure for starting a second instance differs between EL 6 and EL 7. This section is the procedure for EL 6.

Now, we have to change /etc/sysconfig/xrootd on the redirector node (hostA) to run multiple instances of xrootd. The second instance of xrootd will be named "cns" and will be used for SSI.

XROOTD_USER=xrootd
XROOTD_GROUP=xrootd
XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
XROOTD_CNS_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
CMSD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
FRMD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"
XROOTD_INSTANCES="default cns"
CMSD_INSTANCES="default"
FRMD_INSTANCES="default"

Now we can start the XRootD cluster by executing the following commands. On the redirector you will see:

[root@server ~]# service xrootd start
Starting xrootd (xrootd, default):                        [  OK  ]
Starting xrootd (xrootd, cns):                             [  OK  ]
[root@server ~]# service cmsd start
Starting xrootd (cmsd, default):                          [  OK  ]

On the redirector node, you should see two instances of xrootd running:

[root@server ~]# ps auxww|grep xrootd
xrootd   29036  0.0  0.0  44008  3172 ?        Sl   Apr11   0:00 /usr/bin/xrootd -k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-default.pid -n default
xrootd   29108  0.0  0.0  43868  3016 ?        Sl   Apr11   0:00 /usr/bin/xrootd -k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-cns.pid -n cns
xrootd   29196  0.0  0.0  51420  3692 ?        Sl   Apr11   0:00 /usr/bin/cmsd -k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/cmsd-default.pid -n default

Note: the log file for the second named instance of xrootd will be placed in /var/log/xrootd/cns/xrootd.log.

On the data server node, you should see that the XrdCnsd process has been started:

[root@server ~]# ps auxww|grep xrootd
xrootd   19156  0.0  0.0  48096  3256 ?        Sl   07:37   0:00 /usr/bin/cmsd -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/cmsd-default.pid -n default
xrootd   19880  0.0  0.0  46124  2916 ?        Sl   08:33   0:00 /usr/bin/xrootd -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-default.pid -n default
xrootd   19894  0.0  0.1  71164  6960 ?        Sl   08:33   0:00 /usr/bin/XrdCnsd -d -D 2 -i 90 -b fermicloud053.fnal.gov:1095:/data/inventory

Starting a second instance of XRootD on EL 7

The procedure for starting a second instance differs between EL 6 and EL 7. This section is the procedure for EL 7.

  1. Create a symlink pointing to /etc/xrootd/xrootd-clustered.cfg at /etc/xrootd/xrootd-cns.cfg:
    [root@server ~]# ln -s /etc/xrootd/xrootd-clustered.cfg /etc/xrootd/xrootd-cns.cfg
    
  2. Start an instance of the xrootd service named cns using the syntax in the managing services section:
    [root@server ~]# systemctl start xrootd@cns
    
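You can verify that the second instance is running, for example with systemctl:

[root@server ~]# systemctl status xrootd@cns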

Testing an XRootD cluster with SSI

  1. Copy a file to the redirector node, specifying the storage path (/data/xrootdfs instead of /tmp):
    [root@server ~]# xrdcp /bin/sh root://localhost:1094//data/xrootdfs/test1
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    
  2. To verify that SSI is working, execute the cns_ssi command on the redirector node:
    [root@server ~]# cns_ssi list /data/inventory
    fermicloud054.fnal.gov incomplete inventory as of Mon Apr 11 17:28:11 2011
    [root@server ~]# cns_ssi updt /data/inventory
    cns_ssi: fermicloud054.fnal.gov inventory with 1 directory and 1 file updated with 0 errors.
    [root@server ~]# cns_ssi list /data/inventory
    fermicloud054.fnal.gov complete inventory as of Tue Apr 12 07:38:29 2011
    /data/xrootdfs/test1
    
Note: In this example, fermicloud053.fnal.gov is the redirector node and fermicloud054.fnal.gov is a data node.

(Optional) Authorization

There are several authorization options in XRootD available through the security plugins. In this document, we will cover two options for security:

  • Simple Unix Security: Based on the user account that the client is logged in as.
  • xrootd-lcmaps: Uses the LCMAPS callout to perform GUMS-based authorization

Note: On the data nodes, the files will actually be owned by the unix user xrootd (or another daemon user) under most circumstances, not by the user you authenticate as. xrootd will verify permissions and authorization based on the user that the security plugin authenticates you as (for instance, your unix user for option 1 or your GUMS identity for option 2), but internally the files on the data nodes are owned by the xrootd user.

Authorization file

In order to add security to your cluster, you will need to add an authorization file on your data server node. Create /etc/xrootd/auth_file:

# This means that all the users have read access to the datasets
u * /data/xrootdfs lr

# This means that all the users have full access to their private dirs
u = /data/xrootdfs/@=/ a

# This means that this privileged user can do everything
# You need at least one user like that, in order to create the
# private dir for each user willing to store his data in the facility
u xrootd /data/xrootdfs a

Here we assume that your storage path is "/data/xrootdfs" (same as in the previous example).

Change the file ownership (if you created the file as root):

[root@server ~]# chown xrootd:xrootd /etc/xrootd/auth_file

This file is a flat file of the following form:

idtype id path privs

The fields are as follows; for more details or examples on how to use templated user options, see XRootd Authorization Database File.

  • idtype: Type of id. Use u for username, g for group, etc.
  • id: Username (or group name). Use * for all users or = for user-specific capabilities, like home directories.
  • path: The path prefix to be used for matching purposes.
  • privs: Letter list of privileges: a (all), l (lookup), d (delete), n (rename), i (insert), r (read), k (lock; not used), w (write).
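
For example, a hypothetical entry granting members of a group read access to a subdirectory (the group name cms and the path are illustrative):

g cms /data/xrootdfs/cms rl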

Security option 1: adding simple (Unix) security

The first step in adding simple Unix security to validate based on username is to create the auth_file as in the previous section.

The next step is to modify /etc/xrootd/xrootd-clustered.cfg on both nodes:

all.export /data/xrootdfs
set xrdr=hostA
all.manager $(xrdr):3121
if $(xrdr) && named cns
      all.export /data/inventory
      xrd.port 1095
else if $(xrdr)
      all.role manager
      xrd.port 1094
else
      all.role server
      oss.localroot /local/xrootd
      ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory
      cms.space min 2g 5g

        # ENABLE_SECURITY_BEGIN
        xrootd.seclib /usr/lib64/libXrdSec.so
        # this specifies that we use the 'unix' authentication module; additional ones can be specified
        sec.protocol /usr/lib64 unix
        # this is the authorization file
        acc.authdb /etc/xrootd/auth_file
        ofs.authorize
        # ENABLE_SECURITY_END

fi

Note that, to access users' directories, you will have to create them under oss.localroot (for instance, /local/xrootd/data/xrootdfs/username) and make sure they are writable by the xrootd user (or the daemon user, if you have changed it). Files under localroot on the data nodes are normally owned by xrootd, not by the authenticated username.
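
For example, for a hypothetical user "alice" (assuming the localroot and storage path from the configuration above):

[root@server ~]# mkdir -p /local/xrootd/data/xrootdfs/alice
[root@server ~]# chown xrootd:xrootd /local/xrootd/data/xrootdfs/alice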

After making all the changes, restart the xrootd and cmsd daemons on all nodes.
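
For example, on EL 6:

[root@server ~]# service xrootd restart
[root@server ~]# service cmsd restart

Or on EL 7:

[root@server ~]# systemctl restart xrootd@clustered
[root@server ~]# systemctl restart cmsd@clustered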

Testing an XRootD cluster with simple security enabled

  1. Log in to the redirector node as root
  2. Check that user "root" can still read files:
    [root@server ~]# xrdcp  root://localhost:1094//data/xrootdfs/test1 /tmp/b
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    
  3. Check that user "root" cannot write files under /data/xrootdfs:
    [root@server ~]#  xrdcp /tmp/b root://localhost:1094//data/xrootdfs/test2
    Last server error 3010 ('Unable to create /data/xrootdfs/test2; Permission denied')
    Error accessing path/file for root://localhost:1094//data/xrootdfs/test3
    
    or you may get this error:
    [root@server ~]# xrdcp /tmp/b root://localhost:1094//data/xrootdfs/test2
    Last server error 3011 ('No servers are available to write the file.')
    Error accessing path/file for root://localhost:1094//data/xrootdfs/test2
    
  4. Check that a regular user can copy/retrieve files to/from /data/xrootdfs/~/...:
    [root@server ~]# su - user
    -bash-3.2$   xrdcp  /tmp/a  root://localhost:1094//data/xrootdfs/user/test1
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    -bash-3.2$  xrdcp    root://localhost:1094//data/xrootdfs/user/test1 /tmp/c
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    

Security option 2: xrootd-lcmaps authorization

The xrootd-lcmaps security plugin uses the lcmaps package to contact the GUMS server, authenticating and authorizing users based on their X.509 certificates.

Certificate Installation

In order to use lcmaps, you will need CA certificates and certificate revocation lists (CRLs) installed on the host; see the OSG documentation on installing CA certificates and fetch-crl for instructions.
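
For example, one common option is to install the OSG CA certificate bundle and the fetch-crl CRL updater with Yum (an assumption; your site may use a different bundle or installation method):

[root@server ~]# yum install osg-ca-certs fetch-crl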

Install xrootd-lcmaps

[root@server ~]# yum install xrootd-lcmaps

Note that xrootd-lcmaps is usually coupled to a specific version of xrootd, so make sure that you install xrootd-lcmaps from the same repository that you install xrootd from. Otherwise, you may have dependency issues due to differing versions of shared libraries.

Create host certificates

You will need an X.509 certificate to talk to GUMS. If you already have a host certificate, you can use a copy of that:

 mkdir /etc/grid-security/xrd
 cp /etc/grid-security/hostkey.pem /etc/grid-security/xrd/xrdkey.pem
 cp /etc/grid-security/hostcert.pem /etc/grid-security/xrd/xrdcert.pem
 chown -R xrootd:xrootd /etc/grid-security/xrd/
 chmod 400 /etc/grid-security/xrd/xrdkey.pem

This certificate should be owned by xrootd and located in /etc/grid-security/xrd.

Authorization File

Next, create /etc/xrootd/auth_file using the example in the above section.

Modify /etc/xrootd/lcmaps.cfg

Edit /etc/xrootd/lcmaps.cfg to point to your authorization server (replace the GUMS hostname in the endpoint URL with your own):

scasclient = "lcmaps_scas_client.mod"
             "-resourcetype wn"
             "-actiontype execute-now"
             "-capath /etc/grid-security/certificates"
             "-cert   /etc/grid-security/xrd/xrdcert.pem"
             "-key    /etc/grid-security/xrd/xrdkey.pem"
             "--endpoint https://gums.fnal.gov:8443/gums/services/GUMSXACMLAuthorizationServicePort"

Modify /etc/xrootd/xrootd-clustered.cfg

You will need to add the security plugins to /etc/xrootd/xrootd-clustered.cfg on all data nodes, in the section relevant to the data server. (For instance, in the example(s) above, the following lines should be placed in the last "else" clause after "all.role server"; see the section above on Unix security for an example of where the security commands go.)

xrootd.seclib /usr/lib64/libXrdSec.so

set CERTDIR=-certdir:/etc/grid-security/certificates
set CERT=-cert:/etc/grid-security/xrd/xrdcert.pem
set KEY=-key:/etc/grid-security/xrd/xrdkey.pem
set AUTHZFUN=-authzfun:libXrdLcmaps.so
set AUTHZFUNPARMS=-authzfunparms:--osg,--lcmapscfg,/etc/xrootd/lcmaps.cfg,--loglevel,0|useglobals

sec.protocol /usr/lib64 gsi $CERTDIR $CERT $KEY -crl:3 $AUTHZFUN $AUTHZFUNPARMS --gmapopt:2 --gmapto:0
acc.authdb /etc/xrootd/auth_file
ofs.authorize

Restart xrootd and cmsd

[root@server ~]# service xrootd restart
[root@server ~]# service cmsd restart

lcmaps notes

For recent versions of lcmaps (1.5+), VOMS signature verification is enabled by default. You may need to either disable it or install the vo-client.

Installing vo-client:

[root@server ~]# yum install vo-client

Disabling VOMS verification can be done by adding a line to /etc/sysconfig/xrootd:

export LLGT_VOMS_DISABLE_CREDENTIAL_CHECK=1

This feature is evolving, so stay tuned for updates.

Testing an XRootd Cluster with LCMAPS security enabled

From any machine with the xrootd-client installed, you can test with xrdcp. With a user that has no grid certificate installed, you should get an error:

[user@client ~]$ xrdcp /bin/bash root://HOSTNAME//tmp/lcmaps_test
120327 14:31:52 10509 secgsi_InitProxy: cannot access private key file: /home/dstrain/.globus/userkey.pem
XrdSec: No authentication protocols are available.
Last server error 3010 ('cannot obtain credentials for protocol: Secgsi: ErrParseBuffer: error getting user proxies: kXGS_init: unable to get protocol object.')
Error accessing path/file for root://fermicloud121.fnal.gov/tmp/test2

After running voms-proxy-init or grid-proxy-init to initialize your X.509 proxy certificate (usually found in /tmp/x509up_uUID, where UID is your numeric user ID), the xrdcp command should execute cleanly. For instance, the following shows a user creating a VOMS proxy, copying it to the xrootd client host, and then re-executing the xrdcp command.

[user@client ~]$ voms-proxy-init -voms Engage -valid 999:0
Your identity: /DC=org/DC=doegrids/OU=People/CN=Doug Strain 834323
Creating temporary proxy ..................................................... Done
Contacting  osg-engage.renci.org:15001 [/DC=org/DC=doegrids/OU=Services/CN=osg-engage.renci.org] "Engage" Done
Creating proxy .......................... Done

Your proxy is valid until Tue May  8 05:34:40 2012
[user@client ~]$ scp /tmp/x509up_u44678 CLIENT_HOSTNAME:/tmp/x509up_u44678

On the xrootd-client node:

[user@client ~]$ xrdcp /bin/bash root://fermicloud121.fnal.gov//tmp/lcmaps_test
[xrootd] Total 0.73 MB  |====================| 100.00 % [inf MB/s]

In the above examples, make sure to change "/tmp" to a directory allowed by the /etc/xrootd/auth_file created in a previous section.

(Optional) Adding CMS TFC support to XRootD (CMS sites only)

For CMS users, there is a package available to integrate rule-based name lookup using a storage.xml file. If you are not setting up a CMS site, you can skip this section.

[root@server ~]# yum install --enablerepo=osg-contrib xrootd-cmstfc

You will need to add your storage.xml to /etc/xrootd/storage.xml and then add the following line to your xrootd configuration:

# Integrate with CMS TFC, placed in /etc/xrootd/storage.xml
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=hadoop

Add the "?protocol=hadoop" suffix only if you are running Hadoop (see below).

See the CMS TWiki for more information.

(Optional) Adding Hadoop support to XRootD

Note: this section applies to OSG 3.3 only.

Users with Hadoop-based storage under their XRootD installation must take extra steps to configure more efficient access to Hadoop storage:

[root@server ~]# yum install xrootd-hdfs

You will then need to add the following lines to your /etc/xrootd/xrootd-clustered.cfg:

xrootd.fslib /usr/lib64/libXrdOfs.so
ofs.osslib /usr/lib64/libXrdHdfs.so

For more information, see Storage.HadoopXrootd.

(Optional) Adding File Residency Manager (FRM) to an XRootd cluster

If you have a multi-tiered storage system (e.g. some data is stored on SSDs and some on disks or tapes), then install the File Residency Manager (FRM), so you can move data between tiers more easily. If you do not have a multi-tiered storage system, then you do not need FRM and you can skip this section.

The FRM deals with two major mechanisms:

  • local disk
  • remote servers

A description of a fully functional system of multiple XRootD clusters is beyond the scope of this document. In order to have such a fully functional system, you will need a global redirector and at least one remote xrootd cluster from which files can be moved to the local cluster.

Below are the modifications you should make in order to enable FRM on your local cluster:

  1. Make sure that FRM is enabled in /etc/sysconfig/xrootd on your data server:
    XROOTD_USER=xrootd
    XROOTD_GROUP=xrootd
    XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
    CMSD_DEFAULT_OPTIONS="-l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
    FRMD_DEFAULT_OPTIONS="-l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"
    XROOTD_INSTANCES="default"
    CMSD_INSTANCES="default"
    FRMD_INSTANCES="default"
    
  2. Modify /etc/xrootd/xrootd-clustered.cfg on both nodes to specify options for frm_xfrd (the File Transfer Daemon) and frm_purged (the File Purging Daemon). For more information, see the frm_xfrd and frm_purged sections of the FRM documentation.
  3. Start frm daemons on data server:
    [root@server ~]# service frm_xfrd start
    [root@server ~]# service frm_purged start
    
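You can check that the daemons started successfully, for example:

[root@server ~]# service frm_xfrd status
[root@server ~]# service frm_purged status

The FRM daemons log to /var/log/xrootd/frmd.log (see the file locations reference below).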

Using XRootD

Managing XRootD services

Start services on the redirector node before starting any services on the data nodes. If you installed only XRootD itself, you will only need to start the xrootd service. However, if you installed cluster management services, you will need to start cmsd as well.

The instructions for starting and stopping an XRootD service depend on whether the service is installed on an EL 6 or EL 7 machine, and whether you are using a standalone or clustered configuration.

On EL 6, which config to use is set in the file /etc/sysconfig/xrootd. For example, to have xrootd use the clustered config, you would have a line such as this:

XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -k fifo"

To use the standalone config instead, you would use:

XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-standalone.cfg -k fifo"

On EL 7, which config to use is determined by the service name given to systemctl. For example, to have xrootd use the clustered config, you would start up xrootd with this line:

[root@server ~]# systemctl start xrootd@clustered

To use the standalone config instead, you would use:

[root@server ~]# systemctl start xrootd@standalone

The services are:

Service                      EL 6 service name    EL 7 service name
XRootD (standalone config)   xrootd               xrootd@standalone
XRootD (clustered config)    xrootd               xrootd@clustered
CMSD (clustered config)      cmsd                 cmsd@clustered

As a reminder, here are common service commands (all run as root):

To …                                          On EL 6, run the command…      On EL 7, run the command…
Start a service                               service SERVICE-NAME start     systemctl start SERVICE-NAME
Stop a service                                service SERVICE-NAME stop      systemctl stop SERVICE-NAME
Enable a service to start during boot         chkconfig SERVICE-NAME on      systemctl enable SERVICE-NAME
Disable a service from starting during boot   chkconfig SERVICE-NAME off     systemctl disable SERVICE-NAME
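
For example, to have the clustered XRootD services start at boot (service names taken from the table above):

On EL 6:

[root@server ~]# chkconfig xrootd on
[root@server ~]# chkconfig cmsd on

On EL 7:

[root@server ~]# systemctl enable xrootd@clustered
[root@server ~]# systemctl enable cmsd@clustered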

Getting Help

To get assistance, please use the Help Procedure page.

Reference

File locations

Service/Process   Configuration File                 Description
xrootd            /etc/xrootd/xrootd-clustered.cfg   Main clustered-mode XRootD configuration
                  /etc/xrootd/auth_file              Authorized users file

Service/Process        Log File                         Description
xrootd                 /var/log/xrootd/xrootd.log       XRootD server daemon log
cmsd                   /var/log/xrootd/cmsd.log         Cluster management log
cns                    /var/log/xrootd/cns/xrootd.log   Server inventory (composite name space) log
frm_xfrd, frm_purged   /var/log/xrootd/frmd.log         File Residency Manager log
