DX Storage Cluster File Server (CFS) is a native DX Storage application that presents a DX Storage
cluster as a standard Linux file system, enabling the use of DX Storage by software that expects a file
system. You can also layer a network file service over DX Storage CFS using Samba or NFS. CFS
stores file data directly in DX Storage and stores file metadata in DX Storage through the Cluster
Name Space (CNS), keeping the two separate.
To enhance performance, the CFS process uses the local file system as a spool/cache for files. User
files are written to this local cache before they are spooled to DX Storage, and subsequent reads are
served from the cache when the files are still locally available. The cache/spool size is managed by
evicting files on a least-recently-used basis when space needs to be reclaimed.
The Cluster Name Space (CNS) also uses DX Storage to store name space records for the file
system structure and metadata (owner, permissions, etc.), fulfilling the role typically performed by
a database but without the scalability and management concerns. CNS uses a RAM-based cache
to deliver metadata objects that are in high demand. Name space modifications and updates are
continuously flushed to DX Storage via a background process.
Recoverability for both CFS and CNS is enhanced by using the Linux machine's local hard disk or
shared storage for journaling of in-process events. In the event of a power failure or system crash,
the processes are restarted automatically, and any changes not yet recorded in the name space or
spooled to DX Storage are replayed from the journal to prevent data loss. A process monitor runs
at all times to restart a CFS or CNS process in the unlikely event of a process crash. All process
monitoring and recovery activities appear in the syslog for management purposes.
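Because these monitoring and recovery events are written to syslog, they can be reviewed with
standard tools. As a minimal sketch, assuming the default RHEL syslog destination of /var/log/
messages (the exact message text varies by release):
# grep -iE 'cfs|cns' /var/log/messages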
1.2. About this Document
1.2.1. Audience
This document is intended for people in the following roles:
1. Storage system administrators
2. Network administrators
3. Technical architects
Throughout this document, the storage system administrator and network administrator roles are
referred to collectively as the administrator. Administrators are normally responsible for allocating
storage, managing capacity, monitoring storage system health, replacing malfunctioning hardware,
and adding capacity when needed.
1.2.2. Scope
This document covers the steps needed to deploy and configure DX Storage CFS. The reader is
expected to have a background in networking, basic knowledge of Linux operating systems, and,
optionally, experience with NFS, CIFS, Samba, or other file server clients.
DX Storage CFS has been developed and tested with 64-bit Red Hat Enterprise Linux 6.0;
other RHEL versions and other Linux distributions are not currently supported. The installation
instructions that follow assume a pre-installed RHEL environment with either internet connectivity
or a locally configured RHEL yum repository for installing the required third-party packages.
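For servers without internet connectivity, a locally configured yum repository can supply the
required packages. The following is a minimal sketch of a repository definition; the /mnt/rhel6-repo
directory is an example only and must contain yum repository metadata (generated, for example,
with the createrepo tool):
# cat /etc/yum.repos.d/rhel-local.repo
[rhel-local]
name=RHEL 6 local packages
baseurl=file:///mnt/rhel6-repo
enabled=1
gpgcheck=0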
Warning
The recently added Red Hat transparent hugepages feature has been found to be incompatible
with CFS and other applications, often resulting in kernel hangs, particularly under high-stress
scenarios. As a result, it is highly recommended that transparent hugepages be disabled as follows
prior to installing and running CFS:
echo 'never' > /sys/kernel/mm/redhat_transparent_hugepage/enabled
If the feature is not disabled, both CFS and CNS will log a warning at startup as an
additional reminder. For the change to remain in effect after the next reboot, you must
add 'transparent_hugepage=never' to the existing kernel parameters in the /boot/grub/
menu.lst file prior to rebooting the CFS server.
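To confirm the current setting, read the same control file; the active value is displayed in
brackets and should read [never] once the feature is disabled:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled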
Note
For optimal performance, it is highly recommended that the 'deadline' I/O scheduler be
used on the CFS server. The following command demonstrates how to enable the deadline
scheduler on the 'sda' disk:
echo deadline | sudo tee /sys/block/sda/queue/scheduler
For the change to remain in effect after the next reboot, you must add 'elevator=deadline'
to the existing kernel parameters in the /boot/grub/menu.lst file prior to rebooting the CFS
server.
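You can confirm which scheduler is active for a given disk by reading the same sysfs file;
the scheduler currently in use is displayed in brackets:
# cat /sys/block/sda/queue/scheduler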
2.1.2. Recommended Network Infrastructure
A configured Network Time Protocol (NTP) server is strongly recommended to provide clock
synchronization services to DX Storage CFS, the name space server, and the DX Storage cluster. A
common time reference is especially important between CFS servers and the name space server if
the latter is configured remotely. To install a functional NTP configuration on RHEL, run the following
command as the root user:
# yum install ntp
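After installation, ntpd should be enabled at boot and started, and its synchronization status can
be checked with ntpq. A minimal sketch using the standard RHEL 6 service tools, assuming the
default server list in /etc/ntp.conf is acceptable for your site:
# chkconfig ntpd on
# service ntpd start
# ntpq -p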
Gigabit Ethernet or better is the recommended connection speed between DX Storage CFS and DX
Storage nodes.
2.1.3. Protecting Local Disk Resources
Red Hat 6 does not guarantee immediate write of data to disk with its default ext4 file system.
To ensure data security in the event of a system or power failure, it is therefore highly
recommended that administrators add the 'nodelalloc' mount option to several critical local file
systems, such as those holding the CFS cache/spool and journal directories.
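As a minimal sketch, a hypothetical /etc/fstab entry for a dedicated local partition (the device
name and mount point below are examples only) would include nodelalloc in the mount options:
/dev/sdb1   /var/spool/cfs   ext4   defaults,nodelalloc   1 2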
There are many valid ways to install each of these components and services; this is not a
comprehensive guide to NTP or other third-party package installation. For complete installation
instructions, please see the documentation included with the operating system, the software
project web sites, and other materials. The examples that follow highlight the steps for your
reference. Please make sure that you understand the details of configuring these packages and
customize these steps to fit your IT environment.
For information on configuring CFS for High Availability, please reference the Linux-HA project.
2.2.1. Installing the DX Storage CFS Server
The DX Storage CFS distribution is available as a Red Hat RPM package that is installed with a shell
script. The package and its dependencies must be installed as the 'root' user with the following
steps:
1. Install the caringo-cfs package by running the following:
$ sudo su
# ./installCFS.sh
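You can then confirm that the package was installed using the standard RPM query tool (the
reported version string will vary by release):
# rpm -q caringo-cfs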
2.2.2. Cluster Name Space (CNS)
The file system structure, metadata, and DX Storage UUIDs are stored in a journaling name space
that automatically persists file metadata into DX Storage.
After installing the DX Storage CFS server, you will need to configure a new name space. This is
only required once, at installation time; subsequent CFS share definitions will not require a new
name space. You can initiate the configuration process by running the cns-admin command as the
root user.
Note
Please make sure DX Storage is online and reachable prior to configuring the new name
space (see the connectivity check at the end of this section). cns-admin will prompt for the
following values:
1. Log facility. Enter the logging facility to use. The default value is syslog.
a. Log filename. If the file logging facility was selected, enter the filename and location of
the logfile. By default the installer will use /var/log/caringo/cns.log. An additional file,
/var/log/caringo/cnsaudit.log, will log the UUIDs of all successful DX Storage deletes for
audit purposes. With file logging, default log rotation for both files will be configured to keep
up to 8 log files, rotating weekly or at a maximum file size of 512 MB.
b. Syslog facility. If the syslog option was selected, enter the facility to log to. The default
value is local4. Configuration of a remote syslog host must be done in the syslogd
configuration file (see the example at the end of this section).
2. Log level. Enter the minimum level of log messages to record. The default value is info. Each
log level includes the following:
• info: errors, warnings, and informational messages such as system start and stop
• verbose: errors, warnings, and info messages plus high-level operational functions
• debug: errors, warnings, info, and verbose messages plus lower-level operational functions
• trace: all possible log messages
Note
Debug or trace logging is not recommended unless explicitly requested as part
of a support investigation, due to the amount of disk space consumed. If debug
logging is required for any length of time, administrators must monitor space
availability carefully and are strongly encouraged to allocate a dedicated partition
for logging to protect other system services.
3. CNS host: The location of the name space server for the name space you are configuring. The
default is localhost (127.0.0.1), which will configure a name space on the local server. Enter an
external IP address if using a remote server, separate from the CFS server, for the name space.
You may also enter a 0.0.0.0 address if the name space should be created locally but will need
to serve CFS shares on both the local server and remote servers.
4. DX Storage: use Zeroconf. This option allows you to specify whether or not Zeroconf should
be used to discover the list of nodes in the DX Storage cluster. The CNS server must be in the
same subnet as DX Storage in order to use Zeroconf. CFS 2.6 is compatible with DX Storage
versions 3.0.5 and later.
a. If 'No' is selected:
Cluster primary node address: Enter the IP address or hostname of a DX Storage node
in the target cluster. The target cluster must be the same as the one configured for all CFS
mounts.
Primary node SCSP port: Enter the port the DX Storage node uses for SCSP
communications. The default is '80'.
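As noted in the log facility prompt above, directing log messages to a remote syslog host is
configured in the syslog daemon rather than in cns-admin. A minimal sketch for the rsyslog
daemon shipped with RHEL 6, assuming the default local4 facility and a hypothetical log host at
192.168.1.10, is to add the following line to /etc/rsyslog.conf and restart the service:
local4.*   @192.168.1.10
# service rsyslog restart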
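To verify that DX Storage is online and reachable before running cns-admin, you can issue a
simple request to a known node on its SCSP port, since SCSP is carried over HTTP. The node
address below is an example only:
# curl -i http://192.168.1.50:80/
Any HTTP response from the node confirms basic reachability; the exact status returned will vary.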