
Reviewed: Passed by RobGardner
Test: Passed by SuchandraThapa and MarcoMambelli

WARNING! This page is for an older version of Xrootd. For newer versions, please visit Xrootd Install

Xrootd Install


Xrootd is a high performance network storage system widely used in high energy physics experiments such as ATLAS and ALICE. The underlying Xroot data transfer protocol provides highly efficient access to ROOT based data files. This page provides instructions for creating a simple Xrootd storage system consisting of one redirector node and one or more data server nodes.

For usage instructions check XrootdClient.

Getting Started

Hosts: Roles & Firewalls (Cluster Administrator)

Identify hosts for the redirector, data server(s) and an interactive-user node which will mount the Xrootd file system (using XrootdFS). We recommend not running a data server on the same host as the redirector. Check the firewall configuration on these hosts following guidelines below. If all hosts are on the public network that is easiest.

Create the Xrootd administrative account (Cluster Administrator)

You need a non-privileged Xrootd administrative Unix account (e.g. xrdadmin) on the redirector and all data servers, and that Unix account must have a login shell. If you manage your accounts by hand (rather than using a service such as LDAP) you would, on your management node:
/usr/sbin/groupadd xrdadmin 
/usr/sbin/useradd --gid xrdadmin xrdadmin
and copy /etc/passwd, /etc/shadow, /etc/group, and /etc/gshadow to each of the xrootd nodes.
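The copy step above can be scripted. A minimal sketch with hypothetical host names (substitute your own redirector and data servers); as written it is a dry run that only prints the scp commands:

```shell
# Hypothetical Xrootd hosts -- substitute your own redirector and data servers.
NODES="xrdr.example.com ds1.example.com ds2.example.com"
FILES="/etc/passwd /etc/shadow /etc/group /etc/gshadow"

for node in $NODES; do
  for f in $FILES; do
    # Dry run: prints each copy command; remove 'echo' to perform the copies.
    echo scp -p "$f" "root@${node}:${f}"
  done
done
```

Remember to rerun the copy whenever accounts change on the management node.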

Prepare the storage directories (Cluster Administrator)

We recommend using the same pathnames for all data server nodes. For larger data servers we recommend creating a separate partition (from the system partition) for these directories (see below). The directories have the following roles:
  • /storage/path : a directory containing internal Xrootd references to files in the system.
  • /storage/cache: where physical files are stored
On each data server, prepare the directories and give ownership to the xrdadmin account which will be used by the Xrootd administrator.
mkdir -p /storage/path
mkdir -p /storage/cache
chown xrdadmin:xrdadmin /storage/path
chown xrdadmin:xrdadmin /storage/cache

Install Pacman

Pacman is a package management program used to install OSG software. PacmanInstall describes the installation procedure, which can be carried out by either the cluster administrator or the (non-privileged) Xrootd administrative account. For example, the cluster administrator might install it in /opt/pacman as follows:
cd /opt
tar --no-same-owner -xzvf pacman-3.29.tar.gz
ln -s pacman-3.29 pacman
Once installed, set up its environment, for example with: source /opt/pacman/

Installing and configuring the redirector

From the Xrootd administrative account (xrdadmin), create an installation directory and assign its path to $INSTALL_DIR.
export INSTALL_DIR=/path_to_xrootd_installation_directory
mkdir -p $INSTALL_DIR

Install the Xrootd package from a VDT software cache. Pacman will ask whether you want to trust the cache (answer yall).

pacman -get  

Update the environment and run the post installation script:


You can verify that the version installed is the one you expected by invoking vdt-version:


The next step is to configure the Xrootd redirector. A full list of configuration options is documented in man configure_xrootd.

$VDT_LOCATION/vdt/setup/configure_xrootd --server y --this-is-xrdr  --storage-path /storage/path --storage-cache /storage/cache  --enable-security --set-data-server-xrd-port 1093 --user xrdadmin 
  • $VDT_LOCATION is the installation directory (same as $INSTALL_DIR, a standard variable put into the shell environment by the VDT setup script)
  • /storage/cache and /storage/path are the directories discussed above
  • --enable-security is a configuration option necessary to enable security (user ownership on directories)
  • By default client access is permitted to any host in your domain. To restrict access or to allow client access from hosts outside your local domain, use the option --set-security-binding followed by the domain from which the clients can contact the data server nodes. A wildcard is accepted. Multiple domains can be specified by supplying a comma-separated list. The default is your domain (same as --set-security-binding *).
  • See the Access Control Lists section in Advanced Topics below for information about file access defaults and customization.
  • --set-data-server-xrd-port 1093 sets the port used by the xrd daemon on the data servers. If 1093 is busy, choose a different port and change the firewall configuration accordingly (see the advanced section below). If you do not specify this option, Xrootd uses a random available (ephemeral) port, which may conflict with firewall restrictions on the data servers.
  • --user xrdadmin sets the account used to run Xrootd. It is needed only if you install as root. If you install as xrdadmin and omit the option, you will see a message like: No user provided.  Defaulting to 'xrdadmin'. If you install as root and specify no user, it defaults to daemon, which may not be what you want.
  • --logdir /var/log/xrootd (new in OSG 1.2.12) sets the log directory. The default is $VDT_LOCATION/xrootd/var/logs. If you change it, e.g. to /var/log/xrootd, you must make sure that the xrootd user (xrdadmin) can write into it.

Now vdt-control --list shows the Xrootd service is installed:

vdt-control --list
Service                 | Type   | Desired State
xrootd                  | init   | enable

Start up the redirector:

vdt-control --non-root --on

Here is an example of running Xrootd redirector services (xrdadmin is the Xrootd administrator):

ps -fu xrdadmin
xrdadmin    1510     1  0 17:09 pts/1    00:00:00 /opt/xrd-install-dir/xrootd/bin//xrootd -l /opt/xrd-install-dir/xrootd/var/logs/xrdlog -c /opt/xro
xrdadmin    1551     1  0 17:09 pts/1    00:00:00 /opt/xrd-install-dir/xrootd/bin//cmsd -l /opt/xrd-install-dir/xrootd/var/logs/cmslog -c /opt/xroot

Installing and configuring a data server

From the Xrootd administrative account (xrdadmin), install Xrootd following the same instructions as for the redirector host. Then configure the data server with the same arguments as the redirector, but replacing the option --this-is-xrdr with --xrdr-host followed by the redirector's host name:
$VDT_LOCATION/vdt/setup/configure_xrootd --server y --xrdr-host --storage-path  /storage/path --storage-cache /storage/cache --enable-security  --set-data-server-xrd-port 1093  --user xrdadmin

Now vdt-control --list and vdt-control --non-root --on work as before. Here is an example showing running Xrootd data services (xrdadmin is the Xrootd user):

ps -fu xrdadmin
xrdadmin    2774     1  0 12:44 pts/0    00:00:00 /opt/xrd-install-dir/xrootd/bin//xrootd -l /opt/xrd-install-dir/xrootd/var/logs/xrdlog -c /opt/xro
xrdadmin    2819     1  0 12:44 pts/0    00:00:00 /opt/xrd-install-dir/xrootd/bin//cmsd -l /opt/xrd-install-dir/xrootd/var/logs/cmslog -c /opt/xroot

Installing the XrootdFS file system

XrootdFS is a POSIX file system for an Xrootd storage cluster based on FUSE (Filesystem in Userspace). FUSE is a kernel module that intercepts and services requests to non-privileged user space file systems like XrootdFS. Install XrootdFS on nodes where you want a single Xrootd file system to appear, e.g. on an interactive user node.

For releases older than xrootdfs 3.0rcX, XrootdFS requires CNS.
FUSE installation requires root privileges but FUSE is normally already installed. XrootdFS can be installed by the Xrootd Administrator (xrdadmin).

Install FUSE (Cluster Administrator)

Three rpm packages must be installed:
  • fuse
  • fuse-libs
  • kernel-module-fuse (patched Kernel module, only on RHEL 4 based OS that would have older versions, not on RHEL 5.4 or later)

Check whether each package is installed with rpm (e.g. rpm -q fuse) and verify that the package name and version are returned; if a package is not installed, rpm prints a message saying so. Installation can be done via the yum utility (e.g. yum install fuse fuse-libs) or with rpm commands directly. Using yum is preferable since it brings in any dependencies that fuse or fuse-libs require and automatically installs the correct version of the fuse kernel module. Alternatively (for those familiar with building kernels) FUSE can be downloaded from and built according to the instructions provided there; note that root privileges are required. It is essential that the FUSE version (> 2.7.3) and flavor match your kernel.
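The package check can be scripted; a minimal sketch (the yum command is only printed, not run):

```shell
# Report which FUSE packages are present; print the install hint for the rest.
for pkg in fuse fuse-libs; do
  if command -v rpm >/dev/null 2>&1 && rpm -q "$pkg" >/dev/null 2>&1; then
    echo "$pkg: installed"
  else
    echo "$pkg: not found -- install with: yum install $pkg"
  fi
done
```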

To allow non-root users, e.g. the Xrootd administrative account (xrdadmin) as below, to install and mount XrootdFS, you also need the following:

  • create the mount point: an empty directory owned by xrdadmin
  • create the file /etc/fuse.conf with the following line (add the line if the file exists):
  • add the user xrdadmin to the group fuse, e.g. by editing /etc/group
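The three steps above can be sketched as a root dry run that only prints the commands. The mount point /xrootd and the user_allow_other line are assumptions: user_allow_other is the stock FUSE option for non-root mounts, but check which line your FUSE version expects:

```shell
MOUNTPOINT=/xrootd   # assumed mount point; any empty directory owned by xrdadmin works
XRDUSER=xrdadmin

# Dry run: remove the leading 'echo' on each line to execute as root.
echo mkdir -p "$MOUNTPOINT"
echo chown "$XRDUSER:$XRDUSER" "$MOUNTPOINT"
# 'user_allow_other' is assumed to be the /etc/fuse.conf line referred to above.
echo "grep -q user_allow_other /etc/fuse.conf || echo user_allow_other >> /etc/fuse.conf"
echo usermod -a -G fuse "$XRDUSER"
```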

Install XrootdFS

Setup Pacman, make an installation directory and assign its path to $INSTALL_DIR. Install XrootdFS from the cache. The installation described here is done as the Xrootd administrative account (xrdadmin). Pacman will ask whether you want to trust the cache (answer yall).
export INSTALL_DIR=/path_to_xrootdfs_installation_directory
mkdir -p $INSTALL_DIR
pacman -get 

Configure XrootdFS with the following command where:

  • $VDT_LOCATION is the installation directory of the XrootdFS package (/opt/xrootdfs)
  • xrdadmin is the non-privileged user that runs the XrootdFS service (it must have a login shell)
  • FQDN of the redirector host, e.g.
  • /xrootd is the mount point for the file system
  • /storage/path is the storage_path directory on the redirector host, as discussed above
$VDT_LOCATION/vdt/setup/configure_xrootdfs \
 --user xrdadmin \
 --cache /xrootd \
 --xrdr-host  \
 --xrdr-storage-path /storage/path

Start/Stop Xrootd or XrootdFS

Go to the Xrootd installation directory $INSTALL_DIR on the redirector, data server or client nodes, as appropriate.

To start:

vdt-control --non-root --on

To stop:

vdt-control --non-root --off

Testing and Using the System

Simple copy tests

Log in on the redirector node and:
source $INSTALL_DIR/ 
echo "This is a test" >/tmp/test 
xrdcp /tmp/test xroot:// 
xrdcp xroot:// /tmp/test1 
diff /tmp/test1 /tmp/test 
Users can write in their own space. E.g. a user with Unix account name myuser (the account must exist, e.g. in /etc/passwd, on the redirector and on all the data servers) could save a file in Xrootd:
source $INSTALL_DIR/ 
echo "This is a user test" >/tmp/test 
xrdcp /tmp/test xroot:// 
Note that xrdcp creates missing directories.

Testing the Xrootd file system XrootdFS

With XrootdFS installed on an interactive node one has full POSIX access to the data in the Xrootd storage system. One can use all the normal Unix commands to list (ls), copy (cp), make directories (mkdir), delete files (rm), etc. Log into your interactive (client) node and try it out.
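For example, assuming the /xrootd mount point configured above (the subdirectory name myuser is hypothetical), a quick round trip looks like this; the guard keeps the script harmless on nodes where the mount is absent:

```shell
MNT=/xrootd   # XrootdFS mount point from the configuration above
if [ -d "$MNT" ]; then
  echo "quick XrootdFS test" >/tmp/fstest
  mkdir -p "$MNT/myuser"            # hypothetical user directory
  cp /tmp/fstest "$MNT/myuser/"     # write through the FUSE mount
  diff /tmp/fstest "$MNT/myuser/fstest" && echo "round trip OK"
  rm "$MNT/myuser/fstest"           # clean up
else
  echo "XrootdFS mount $MNT not present on this node"
fi
```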

If you have problems with XrootdFS first make sure that you can access the redirector from that node (e.g. do the tests above).

Tests using the libXrdPosixPreload library

On one of the machines with Xrootd installed (redirector or data server) you can use the Xrootd POSIX preload library to test creating directories, copying files, etc:

source $INSTALL_DIR/
export LD_PRELOAD=$VDT_LOCATION/xrootd/lib/ 
echo "This is a new test" >/tmp/test 
mkdir xroot://
cp /tmp/test xroot:// 
cp xroot:// /tmp/test1 
diff /tmp/test1 /tmp/test 
rm xroot:// 
rmdir xroot://

Advanced installation options and issues

Access Control Lists and security options

Xrootd implements Access Control Lists that define user permissions on Xrootd files. ACLs are defined in the file $VDT_LOCATION/xrootd/etc/auth_file. The default configuration allows:
  • full read/write access to the Xrootd administrator xrdadmin
  • full read/write access to user files in directories named by the Unix user account name
  • read access to all files for all users
For more authorization options see this section in the Scalla: Extended Features Supplement Authentication & Access Control Configuration Reference. E.g. to give a Unix group (our_group) read/write permissions to a directory called workdir, add the following line to $VDT_LOCATION/xrootd/etc/auth_file:
g our_group /storage/path/workdir a
This file must be synchronized across all servers in the Xrootd cluster. You must restart Xrootd (vdt-control --non-root --off, then vdt-control --non-root --on) to load the new configuration, again on the redirector and on each data server.
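The synchronization and restart can be scripted from the node where the ACL file was edited; a dry-run sketch with hypothetical data-server names (the loop only prints the commands):

```shell
# Hypothetical data servers -- substitute your own.
DATASERVERS="ds1.example.com ds2.example.com"
AUTH_FILE="$VDT_LOCATION/xrootd/etc/auth_file"

for node in $DATASERVERS; do
  # Dry run: remove 'echo' to copy the ACL file and bounce the service.
  echo scp -p "$AUTH_FILE" "xrdadmin@${node}:${AUTH_FILE}"
  echo ssh "xrdadmin@${node}" "'vdt-control --non-root --off; vdt-control --non-root --on'"
done
```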

In the configuration file ($VDT_LOCATION/xrootd/etc/xrootd.cfg) the options dealing with security (the block marked by ENABLE_SECURITY_BEGIN/END) are enabled by default only on the data servers, because those are the nodes safeguarding the files. You can nevertheless edit the file and:

  • add the security options also to the redirector. This is useful especially when there are no shared configuration files: it enforces the restrictions on all requests to the redirector, even if one or more data servers have no security enabled. On data servers with security enabled, the request is checked against the security constraints on the data server as well. A request has to be allowed by the redirector and by at least one of the data servers.
  • remove the security options from the data servers and leave them only on the redirector. This works on clusters where data servers are not directly accessible by xrootd clients. A client connecting directly to a data server would bypass the security restrictions of this setup.

Firewalls and Xrootd port usage (Cluster Administrator)

It is important to make sure firewalls (if any) are not blocking Xrootd communication ports. Xrootd uses the following ports (this description is consistent with the configuration above and the option --set-data-server-xrd-port 1093):
Host                Port Number  Protocol
Xrootd data server  1093         tcp
Xrootd redirector   1094         tcp
Xrootd redirector   1213         tcp

Firewall configuration for the Xrootd redirector

Edit the /etc/sysconfig/iptables file to add these lines ahead of the REJECT line (if your host has reject rules in its iptables configuration):

# begin xrootd-rdr # Xrootd connections  (from anywhere) 
-A RH-Firewall-1-INPUT -m state --state NEW,ESTABLISHED -p tcp -m tcp --dport 1094 -j ACCEPT 
-A RH-Firewall-1-INPUT -m state --state NEW,ESTABLISHED -p tcp -m tcp --dport 1213 -j ACCEPT 
# end xrootd-rdr #

Check the status of iptables:

/etc/init.d/iptables status
Restart iptables:
/etc/init.d/iptables restart
Check the status of the iptables to see the changes:
/etc/init.d/iptables status

Firewall configuration for an Xrootd data server

Check the firewall rules with /etc/init.d/iptables status. To change the configuration, edit /etc/sysconfig/iptables and add the following lines ahead of the REJECT line:

# begin xrootd-ds # Xrootd connections  (from anywhere) 
-A RH-Firewall-1-INPUT -m state --state NEW,ESTABLISHED -p tcp -m tcp --dport 1093 -j ACCEPT 
# end xrootd-ds #
Check the status of iptables, restart, and check again:
/etc/init.d/iptables status
/etc/init.d/iptables restart
/etc/init.d/iptables status

Creating a separate disk partition for the data cache (Cluster Administrator)

Instead of storing the Xrootd data on the system partition you can have it on a separate one. Here is a simple example (many approaches are taken in practice):
  • Choose your disk or RAID array
  • Create a partition of type Linux using fdisk (answer prompts as appropriate):
    /sbin/fdisk /dev/sda
  • Format the partition into an ext3 file system:
    mkfs -t ext3 /dev/sda1
  • Mount the disk in your root file system, e.g. by adding to /etc/fstab a line like:
    /dev/sda1               /storage/cache/pool1                ext3    defaults        1 2
    followed by the command mount -a.
  • Create the needed Xrootd directories as usual:
    mkdir -p /storage/path
    mkdir -p /storage/cache
    mkdir -p /storage/cache/pool1

Multiple partitions on a data server (Cluster Administrator)

You can have multiple partitions on the data server, and you can add new partitions without having to reconfigure Xrootd if the partitions and directories are named appropriately. If Xrootd was configured with the option --storage-cache /storage/cache (as above) any directory mounted or linked under /storage/cache will be used as pool.

E.g. if the original /etc/fstab contained:

/dev/sda1               /storage/cache/pool1          ext3    defaults        1 2
with directory (mount point) created with:
mkdir -p /storage/cache/pool1
then new partitions can be added using direct mounts, bind mounts or symbolic links. Edit as appropriate /etc/fstab:
/dev/sda1               /storage/cache/pool1          ext3    defaults        1 2
/dev/sda2               /storage/cache/pool2          ext3    defaults        1 2
/dev/sdb1               /bind_example_mount           ext3    defaults        1 2
/dev/sdb2               /link_example_mount           ext3    defaults        1 2
/bind_example_mount     /storage/cache/pool3          none    bind            1 2
and mount or link by:
mkdir -p /storage/cache/pool2
mkdir -p /storage/cache/pool3
ln -s /link_example_mount /storage/cache/pool4
mount -a
Restart Xrootd when new partitions are added.

Optimizing disk partitions for performance (Cluster Administrator)

To maximize the performance of your Xrootd installation, you should place the storage_cache and storage_path on separate disk partitions. The partition used for the storage cache should be optimized for the kinds of files anticipated by the application. In general, you can use the defaults for the mke2fs program since that will tend to optimize for large files.

The partition used for the storage_path should be optimized to hold a large number of inodes and files. Since the storage_path holds symlinks to the actual data files, the file system should have a large number of inodes: each symlink requires an inode of its own. In addition, the data block size of the file system should be made as small as possible, since a symlink typically consumes an entire block regardless of its actual size; a small block size therefore wastes less space. For an ext3 file system, passing -b 1024 to mke2fs sets the block size to 1024 bytes, the smallest allowed.

Sometimes mke2fs is invoked using the wrapper mkfs -t ext3 ... that passes the options along. Use man mke2fs to see all the options.
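Putting the two recommendations together, a storage_path partition might be formatted as sketched below. The device name /dev/sdb1 is hypothetical; -b sets the block size, -i the bytes-per-inode ratio (here the densest table mke2fs allows for this block size), and -j adds the ext3 journal. The command is only printed, not run:

```shell
DEVICE=/dev/sdb1   # hypothetical partition reserved for storage_path
# -b 1024: smallest allowed block size, so each symlink wastes little space
# -i 1024: one inode per 1024 bytes, maximizing the number of available inodes
CMD="mke2fs -j -b 1024 -i 1024 $DEVICE"
echo "$CMD"        # dry run: run the printed command as root to actually format
```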

Xrootd inherits the characteristics of the underlying file system. E.g. if you use an ext3 file system you are limited to 31998 subdirectories per directory, stemming from its limit of 32000 links per inode.

Increasing the maximum number of file descriptors (Cluster Administrator)

For bigger systems the default number of file descriptors (ulimit -n, 1024) is insufficient to assure access by many clients simultaneously. The limit should be increased to 65500 descriptors on the redirector and all data servers, for root and for the user under which Xrootd is running (xrdadmin). The procedure below is valid for RHEL 4/5 based Linux distributions such as Scientific Linux.

Configure the system to accept the desired value for the maximum number of open files. Check the value in /proc/sys/fs/file-max (cat /proc/sys/fs/file-max) to see if it is larger than the value needed. To increase the number of file descriptors to 65500 run:

echo 65500 > /proc/sys/fs/file-max
and, to make it persistent across reboots, edit /etc/sysctl.conf to include the line:
fs.file-max = 65500

Set the value for maximum number of open files (both hard and soft limit) in the file /etc/security/limits.conf. To do that add the following line below the commented line that should already be there:

# domain type item value
root - nofile 65500
xrdadmin - nofile 65500
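After editing the files and logging in again (limits.conf is read at login), the settings can be checked with a quick script:

```shell
# Show the system-wide and per-process file-descriptor limits.
echo "fs.file-max:          $(cat /proc/sys/fs/file-max 2>/dev/null || echo unknown)"
echo "nofile in this shell: $(ulimit -n)"
```

Run it as root and as xrdadmin; both should report an nofile limit of 65500.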

GridFTP: transfer files between Xrootd and the Grid/WAN

A GridFTP server allows efficient file transfers on the Grid or to remote hosts on the Wide Area Network (WAN).

GridFTPXrootd describes the installation of a GridFTP server optimized to be used in conjunction with Xrootd.

BeStMan Gateway: transfer files and manage the Xrootd storage

BeStMan Gateway, besides allowing efficient file transfers via GridFTP, also provides space management (e.g. space reservation and space tokens). It is a complete Storage Element implementing the SRM v2 protocol.

BestmanGatewayXrootd describes the installation of a BeStMan Gateway server optimized to be used in conjunction with Xrootd (it also includes a GridFTPXrootd server).


Troubleshooting

This section lists troubleshooting information and solutions to some possible errors.

Log and configuration file locations

If any of the tests described above have failed or you are just curious to see what’s going on, you can find log and configuration files in the following locations:
Host                  Configuration files                  Log files
Xrootd – redirector   $VDT_LOCATION/xrootd/etc/xrootd.cfg  $VDT_LOCATION/xrootd/var/logs/xrdlog, cmslog
Xrootd – data server  $VDT_LOCATION/xrootd/etc/xrootd.cfg  $VDT_LOCATION/xrootd/var/logs/xrdlog, cmslog
XrootdFS – client                                          $VDT_LOCATION/logs/vdt-control.log

Network problems

If copy commands return errors like:
(cp from xrootd) cp: cannot stat `xroot://': Operation canceled
(cp to xrootd) cp: cannot create regular file `xroot://': Unknown error 4294967295
Check the firewall configuration on the data server. Rules are mentioned above.

Free space less than configured minimum

The error message:
(cp to xrootd) cp: cannot create regular file `xroot://': Unknown error 4294967295
is a generic error message from Xrootd. One possible cause is that the free space on a data server is less than the minimum space configured in (11 GB by default). When this happens $VDT_LOCATION/xrootd/var/logs/cmslog on the data server will have a line:
100421 00:05:55 23723 Meter: Insufficient space;  6GB available < 11GB high watermark
To fix this add to $VDT_LOCATION/xrootd/etc/xrootd.cfg a line with a smaller minimum space:
linger 0 recalc 15 min 2% 3g 5% 5g

References and Related documents

Scalla Xrootd:

OSG related:



Topic revision: r49 - 15 Jun 2016 - 17:02:40 - ElizabethChism
