How to install and test Xrootd-server rpm

  1. Login as root on the host where you want to install the xrootd server.
  2. create /etc/yum.repos.d/vdt.repo :
    [vdt-development]
    name = VDT RPM repository - development versions for Redhat Enterprise Linux 5 and compatible
    baseurl = http://vdt.cs.wisc.edu/native/rpm/development/rh5/$basearch
    gpgcheck = 0
    enabled = 1
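
    Optionally, verify that yum now sees the repository (a quick check; vdt-development is the repo id defined above):
    yum repolist | grep vdt-development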
    
  3. Install xrootd-server and dependent rpms:
    yum install xrootd-server
    
    The following rpms will be installed:
    Installing:
     xrootd-server                  x86_64                  1:3.0.4-X.xu                  vdt-development                  3.2 M
    Installing for dependencies:
     xrootd-client                  x86_64                  1:3.0.4-X.xu                  vdt-development                  1.2 M
     xrootd-libs                    x86_64                  1:3.0.4-X.xu                  vdt-development                  2.0 M
    
    Skip this step if you are installing software on a data server or do not want to install xrootdfs on the redirector node.

  1. Run xrootd setup:
    $ service xrootd setup     # creates the appropriate directories for xrootd, creates the user and group "xrootd" if needed, and sets permissions
    
    All xrootd-related configuration files and directories now belong to the user and group defined in /etc/sysconfig/xrootd (default: user "xrootd", group "xrootd"), and all xrootd-related daemons run as this user. The user and group are created if they don't exist. If you want to change the user and group, modify /etc/sysconfig/xrootd. If you started the daemons as one user and then decide to change the user, you have to modify /etc/sysconfig/xrootd and rerun "service xrootd setup". Warning: there are still some files in /tmp that have to be deleted manually before the daemons can start successfully. We are trying to resolve this issue with the developers.
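    For example, to run the daemons under a different account (the account names below are placeholders, not shipped defaults), set in /etc/sysconfig/xrootd:
    XROOTD_USER=xrootdusr
    XROOTD_GROUP=xrootdgrp
    and then rerun:
    $ service xrootd setup
    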
  2. In order to test that xrootd is working as a stand-alone data server, do the following:
    $ service xrootd start                   # starts xrootd daemon 
    Starting xrootd (xrootd, default):                         [  OK  ]
    
    You should now be able to copy files into /tmp using the xrdcp command. To test, do:
    $ xrdcp /bin/sh root://localhost:1094//tmp/first_test
    [xrootd] Total 0.76 MB  |====================| 100.00 % [inf MB/s]
    $ ls -l /tmp/first_test 
    -rw-r--r-- 1 xrootd xrootd 801512 Apr 11 10:48 /tmp/first_test
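
    As an optional extra check, you can also read the file back through the server (the destination file name is just an example):
    $ xrdcp root://localhost:1094//tmp/first_test /tmp/first_test.copy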
    
  3. To stop xrootd server:
    $ service xrootd stop
    

Creating Xrootd Cluster

  1. You need at least two nodes in order to create a cluster. One node serves as the "redirector" node, the other as a data server. After you finish the installation and configuration you should be able to start two daemons on each node: xrootd and cmsd.
  2. Install the rpms on the second node (see the installation steps in the previous section).
  3. Select a redirector host (A) and a data server host (B). Warning: hosts A and B must be referred to by their FQDNs, i.e. the output of 'hostname'.
  4. You will be able to copy files into the /tmp area on host B. By default the cmsd daemon requires at least 11GB of free space in the storage area. Please check how much space you have available (df -h; see the example below) and then make the appropriate modification in /etc/xrootd/xrootd-clustered.cfg.
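    For example, on host B:
    $ df -h /tmp
    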
  5. Modify /etc/xrootd/xrootd-clustered.cfg on both nodes
    all.export /tmp stage
    
    set xrdr = hostA
    all.manager $(xrdr):3121
    if $(xrdr)
      all.role manager
    else
      all.role server
    # add cms.space if you have less than 11GB
    # cms.space options http://xrootd.slac.stanford.edu/doc/dev/cms_config.htm
      cms.space min 2g 5g
    fi
    #all.manager localhost 3121
    #all.role server
    
    
    The lines above show the changes made to the default configuration file. Replace hostA with the FQDN of the redirector host.
  6. Start services:
    $ service xrootd start
    Starting xrootd (xrootd, default):                         [  OK  ]
    $ service cmsd start
    Starting xrootd (cmsd, default):                           [  OK  ]
    
  7. Verify that you can copy a file to /tmp on the data server via the redirector:
    $ xrdcp /bin/sh  root://hostA:1094///tmp/second_test
    [xrootd] Total 0.76 MB  |====================| 100.00 % [inf MB/s]
    
    Check that /tmp/second_test is located on the data server, host B, as shown below.
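    For example, log in to host B and check (the file is written under /tmp, since that is the exported path in this configuration):
    $ ls -l /tmp/second_test
    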
  8. Stop services:
    $ service cmsd stop
    Shutting down xrootd (cmsd, default):                      [  OK  ]
    $ service xrootd stop
    Shutting down xrootd (xrootd, default):                    [  OK  ]
    

Adding Simple Server Inventory to your cluster

The Simple Server Inventory (SSI) provides a means to keep an inventory for each data server (see details in xrootd.org/doc/dev/cms_config.pdf). In order to configure it you will need to run a second instance of the xrootd daemon, as well as an XrdCnsd process that should run on every data server. We will configure an xrootd cluster that consists of two nodes. Host A is the redirector node and runs the following daemons:
  1. xrootd redirector
  2. cmsd
  3. xrootd - a second instance, required for SSI

Host B is a data server that is running the following daemons:

  1. xrootd data server
  2. cmsd
  3. XrdCnsd - started automatically by xrootd

We will need to create a directory on the redirector node for Inventory files.

$ mkdir -p /data/inventory
$ chown xrootd.xrootd /data/inventory

On the data server (host B), let's create a storage cache directory that is different from /tmp.

$ mkdir -p  /local/xrootd
$ chown xrootd.xrootd /local/xrootd

Now we have to change /etc/sysconfig/xrootd on the redirector node (host A) to run multiple instances of xrootd. The second instance of xrootd will be named "cns" and will be used for SSI:

XROOTD_USER=xrootd
XROOTD_GROUP=xrootd
XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
XROOTD_CNS_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
CMSD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
FRMD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"
XROOTD_INSTANCES="default cns"
CMSD_INSTANCES="default"
FRMD_INSTANCES="default"

Now we have to modify /etc/xrootd/xrootd-clustered.cfg on both nodes so it looks like this:

all.export /data/xrootdfs
set xrdr=hostA
all.manager $(xrdr):3121
if $(xrdr) && named cns
      all.export /data/inventory
      xrd.port 1095
else if $(xrdr)
      all.role manager
      xrd.port 1094
else
      all.role server
      oss.localroot /local/xrootd
      ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory
      # add cms.space if you have less than 11GB
      # cms.space options http://xrootd.slac.stanford.edu/doc/dev/cms_config.htm
      cms.space min 2g 5g
fi

Now we can start the xrootd cluster by executing the following commands. On the redirector you will see:

$ service xrootd start
Starting xrootd (xrootd, default):                        [  OK  ]
Starting xrootd (xrootd, cns):                             [  OK  ]
$ service cmsd start
Starting xrootd (cmsd, default):                          [  OK  ]

On the redirector node you should see two instances of xrootd running:

$ ps auxww|grep xrootd
xrootd   29036  0.0  0.0  44008  3172 ?        Sl   Apr11   0:00 /usr/bin/xrootd -k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-default.pid -n default
xrootd   29108  0.0  0.0  43868  3016 ?        Sl   Apr11   0:00 /usr/bin/xrootd -k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-cns.pid -n cns
xrootd   29196  0.0  0.0  51420  3692 ?        Sl   Apr11   0:00 /usr/bin/cmsd -k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/cmsd-default.pid -n default
Warning: the log file for the second named instance of xrootd will be placed in /var/log/xrootd/cns/xrootd.log.
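
On the data server (host B), start the services the same way; the XrdCnsd process will be launched automatically through the ofs.notify directive in the configuration:

$ service xrootd start
$ service cmsd start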

On the data server node you should see that the XrdCnsd process has been started:

$ ps auxww|grep xrootd
xrootd   19156  0.0  0.0  48096  3256 ?        Sl   07:37   0:00 /usr/bin/cmsd -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/cmsd-default.pid -n default
xrootd   19880  0.0  0.0  46124  2916 ?        Sl   08:33   0:00 /usr/bin/xrootd -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-default.pid -n default
xrootd   19894  0.0  0.1  71164  6960 ?        Sl   08:33   0:00 /usr/bin/XrdCnsd -d -D 2 -i 90 -b fermicloud053.fnal.gov:1095:/data/inventory

Testing Xrootd Cluster with SSI

  1. Copy a file to the redirector node, specifying the storage path (/data/xrootdfs instead of /tmp):
    $ xrdcp /bin/sh root://localhost:1094//data/xrootdfs/test1
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    
  2. To verify that SSI is working execute cns_ssi command on the redirector node:
    $ cns_ssi list /data/inventory
    fermicloud054.fnal.gov incomplete inventory as of Mon Apr 11 17:28:11 2011
    $ cns_ssi updt /data/inventory
    cns_ssi: fermicloud054.fnal.gov inventory with 1 directory and 1 file updated with 0 errors.
    $ cns_ssi list /data/inventory
    fermicloud054.fnal.gov complete inventory as of Tue Apr 12 07:38:29 2011
    /data/xrootdfs/test1
    
    Note: in this example fermicloud053.fnal.gov is the redirector node and fermicloud054.fnal.gov is the data server.
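
    Given oss.localroot /local/xrootd in the configuration above, you can also check where the file is physically stored on the data server (host B):
    $ ls -l /local/xrootd/data/xrootdfs/test1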

Adding Simple (Unix) Security

In order to add simple security to your cluster you will need to add an "auth_file" on your data server node. Create /etc/xrootd/auth_file:
# This means that all the users have read access to the datasets
u * /data/xrootdfs lr

# This means that all the users have full access to their private dirs
u = /data/xrootdfs/@=/ a

# This means that this privileged user can do everything
# You need at least one user like that, in order to create the
# private dir for each user willing to store his data in the facility
u xrootd /data/xrootdfs a
Here we assume that your storage path is "/data/xrootdfs" (same as in the previous example).
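
For example, for the account "user" used in the tests below, the second (template) rule behaves, per the comment above, as if it were written explicitly as:

u user /data/xrootdfs/user/ a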

Change file ownership (if you have created file as root):

 $ chown xrootd.xrootd /etc/xrootd/auth_file

The next step is to modify /etc/xrootd/xrootd-clustered.cfg on both nodes:

all.export /data/xrootdfs
set xrdr=hostA
all.manager $(xrdr):3121
if $(xrdr) && named cns
      all.export /data/inventory
      xrd.port 1095
else if $(xrdr)
      all.role manager
      xrd.port 1094
else
      all.role server
      oss.localroot /local/xrootd
      ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory
 
     # ENABLE_SECURITY_BEGIN
        xrootd.seclib /usr/lib64/libXrdSec.so
        # this specifies that we use the 'unix' authentication module; additional ones can be specified
        sec.protocol /usr/lib64 unix
        # this is the authorization file
        acc.authdb /etc/xrootd/auth_file
        ofs.authorize
        # ENABLE_SECURITY_END

fi

Note: replace *.fnal.gov with the appropriate domain name.

After making all the changes, please restart the xrootd and cmsd daemons on all nodes.
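
For example, on each node, using the same service commands shown earlier (stop cmsd before xrootd, start xrootd before cmsd):

$ service cmsd stop
$ service xrootd stop
$ service xrootd start
$ service cmsd start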

Testing Xrootd Cluster with simple security enabled

  1. Login on redirector node as root
  2. Check that user "root" can still read files:
      
    $ xrdcp  root://localhost:1094//data/xrootdfs/test1 /tmp/b
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    
  3. Check that user "root" cannot write files under /data/xrootdfs:
     
    $  xrdcp /tmp/b root://localhost:1094//data/xrootdfs/test2
    Last server error 3010 ('Unable to create /data/xrootdfs/test2; Permission denied')
    Error accessing path/file for root://localhost:1094//data/xrootdfs/test3
    
  4. Check that a regular user can copy/retrieve files to/from their private directory /data/xrootdfs/<username>/...:
    $ su - user
    -bash-3.2$   xrdcp  /tmp/a  root://localhost:1094//data/xrootdfs/user/test1
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    -bash-3.2$  xrdcp    root://localhost:1094//data/xrootdfs/user/test1 /tmp/c
    [xrootd] Total 0.00 MB  |====================| 100.00 % [inf MB/s]
    

Adding File Residency Manager (FRM) to Xrootd Cluster

The FRM deals with two major mechanisms:
  • the local disk: frm_purged purges files to keep free space available
  • remote servers: frm_xfrd transfers (stages) files from remote locations into the local cluster

The description of fully functional multiple xrootd clusters is beyond the scope of this document. In order to have this fully functional system you will need a global redirector and at least one remote xrootd cluster from where files could be moved to the local cluster.

Below are the modifications you should make in order to enable FRM on your local cluster:

  1. Make sure that FRM is enabled in /etc/sysconfig/xrootd on your data server:
    XROOTD_USER=xrootd
    XROOTD_GROUP=xrootd
    XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
    CMSD_DEFAULT_OPTIONS="-l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
    FRMD_DEFAULT_OPTIONS="-l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"
    XROOTD_INSTANCES="default"
    CMSD_INSTANCES="default"
    FRMD_INSTANCES="default"
    
  2. Modify /etc/xrootd/xrootd-clustered.cfg on both nodes to specify options for frm_xfrd (the File Transfer Daemon) and frm_purged (the File Purging Daemon); an illustrative sketch follows below.
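    The exact directives depend on your remote data source and purge policy, so the lines below are only an illustrative sketch (the space thresholds are placeholders, not values from this guide), to be added to the data-server branch of the configuration:
    # illustrative only - adapt the thresholds to your site
    # purge files when free space falls below 2g, until 5g is free again
    frm.purge.policy * 2g 5g
    # the command frm_xfrd uses to fetch files from a remote cluster is configured
    # with frm.xfr.copycmd; see the FRM configuration reference on xrootd.slac.stanford.edu
    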
  3. Start frm daemons on data server:
    $ service frm_xfrd start
    $ service frm_purged start
    
Warning: both daemons will use /var/log/xrootd/frmd.log for logging.

Installing xrootdfs on the redirector node.

If you want to install xrootdfs on the same node where the redirector is running, please follow these instructions.

-- TanyaLevshina - 05 Apr 2011
