Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212,
Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S.
Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP
shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft, Windows, Windows NT, and Windows XP are U.S. registered trademarks of Microsoft Corporation.
File System Extender Software installation guide for Linux
About this guide
This guide provides information about:
• Checking detailed software requirements
• Preparing your environment prior to installing the FSE software
• Installing, upgrading, and uninstalling the FSE software on the Linux platform
• Performing mandatory and optional post-installation steps
• Troubleshooting installation, upgrade, and uninstallation problems
Intended audience
This guide is intended for system administrators with knowledge of:
• Linux platform
Related documentation
The following documents provide related information:
• FSE release notes
• FSE installation guide for Windows
• FSE user guide
You can find these documents from the Manuals page of the HP Business Support Center web site:
http://www.hp.com/support/manuals
In the Storage section, click Archiving and active archiving and then select your product.
Document conventions and symbols
Table 1  Document conventions

Convention                        Element
Medium blue text: Table 1         Cross-reference links and e-mail addresses
Medium blue, underlined text      Web site addresses
(http://www.hp.com)
Bold text                         • Keys that are pressed
                                  • Text typed into a GUI element, such as a box
                                  • GUI elements that are clicked or selected, such as menu and
                                    list items, buttons, tabs, and check boxes
Italic text                       Text emphasis
Monospace text                    • File and directory names
                                  • System output
                                  • Code
                                  • Commands, their arguments, and argument values
Monospace, italic text            • Code variables
                                  • Command variables
Monospace, bold text              Emphasized monospace text
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
HP technical support
Telephone numbers for worldwide technical support are listed on the HP support web site:
http://www.hp.com/support/
Collect the following information before calling:
• Technical support registration number (if applicable)
• Product serial numbers
• Product model names and numbers
• Error messages
• Operating system type and revision level
• Detailed questions
For continuous quality improvement, calls may be recorded or monitored.
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business web site:
http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions,
firmware updates, and other product resources.
HP web sites
For additional information, see the following HP web sites:
• http://www.hp.com
• http://www.hp.com/go/storage
• http://www.hp.com/service_locator
• http://www.hp.com/support/manuals
• http://www.hp.com/support/downloads
Documentation feedback
HP welcomes your feedback.
To make comments and suggestions about product documentation, please send a message to
storagedocs.feedback@hp.com. All submissions become the property of HP.
1 Introduction and preparation basics
HP StorageWorks File System Extender (FSE) is a mass-storage-oriented software product based on
client-server technology. It provides large storage space by combining disk storage and a tape library with
high-capacity tape media and implementing Hierarchical Storage Management (HSM).
Refer to the FSE user guide for a detailed product description.
This Installation Guide tells you how to prepare the environment and install the FSE software. You then
need to configure FSE resources, such as disk media and tape libraries, HSM file systems, and partitions.
You also need to configure migration policies. These tasks are described in the FSE user guide.
This chapter includes the following topics:
• FSE implementation options, page 9
• Licensing, page 12
• Preparing file systems for FSE, page 12
• Reasons for organizing file systems, page 13
• Organizing the file system layout, page 13
• Estimating the size of file systems, page 14
FSE implementation options
FSE supports both Linux and Windows servers, and Linux and Windows clients. An FSE implementation
can be set up as a:
• Consolidated implementation, where FSE server and client are both installed on a single machine. See
page 9.
• Mixed implementation, a consolidated implementation with additional external FSE clients. See
page 11.
• Distributed implementation, an FSE server system and one or more separate FSE clients. See page 10.
Thus, an FSE implementation can be customized for either heterogeneous or homogeneous operating
system environments.
NOTE: Before installing FSE software, consider your current environment so that you can match the FSE
implementation to your needs.
For a description of the specific components shown in Figure 1 on page 10 and Figure 2 on page 11,
refer to the FSE user guide. Key to components:
fse-hsm    Hierarchical Storage Manager
fse-mif    Management Interface
fse-pm     Partition Manager
fse-rm     Resource Manager
Consolidated implementation
A consolidated implementation integrates FSE server and client functionality in a single machine. The
machine connects directly to the disk media and/or SCSI tape library with FSE drives and slots; it also
hosts an arbitrary number of HSM file systems used as storage space by FSE users. The machine runs all
the processes of a
working FSE environment. User data from local HSM file systems is recorded on disk media or tape media
in the attached tape library.
Figure 1 Consolidated FSE implementation
Distributed implementation
A distributed implementation consists of a central FSE server with disk media and/or SCSI tape library
attached and one or more external FSE clients that are connected to the server. External FSE clients can run
on different operating systems.
An FSE server is similar to the consolidated implementation with all major services running on it, but it does
not host any HSM file system; all HSM file systems in such an environment reside on FSE clients, the
machines that only run the essential processes for local HSM file system management and utilize major
services on the FSE server. User data from these remote file systems is transferred to the FSE server and
recorded on the corresponding disk media or tape media in the attached tape library.
Figure 2 Distributed FSE implementation
NOTE: Communication between the components of a distributed FSE implementation is based on the
CORBA technology (omniORB). A reliable bidirectional network connection from each of the external FSE
clients to the FSE server is an essential prerequisite for a reliable operation of a distributed FSE
implementation. Communication between FSE implementation components through a firewall is neither
supported nor tested.
A distributed implementation is sometimes called a distributed system with separate server and clients.
IMPORTANT: In a distributed FSE implementation, if the FSE processes on the FSE server are restarted,
you must restart the FSE processes on all external FSE clients to resume the normal FSE operation.
Mixed implementation
Mixed implementations consist of a consolidated FSE system with additional clients connected to it.
External FSE clients, which can run on different operating systems, are physically separated from the
integrated server and client.
External FSE clients connect to the consolidated FSE system through LAN and host additional HSM file
systems. They run only processes that provide functionality for managing these HSM file systems and
communicate with major services running on the consolidated system. User data from HSM file systems on
clients is transferred to the consolidated FSE system and recorded on disk media and/or tape media in the
attached SCSI tape library.
NOTE: See the note about omniORB for distributed implementations above.
IMPORTANT: In a mixed FSE implementation, if the FSE processes on the consolidated FSE system are
restarted, you must restart the FSE processes on all external FSE clients to resume the normal FSE operation.
Licensing
There are both per-machine and capacity-based licenses for HP File System Extender.
• For every machine that runs an FSE client, you need an FSE client license appropriate to the operating
system.
• For every machine that runs the FSE server, you need a base license appropriate to the operating
system. The base license includes a license to migrate 1 TB to secondary storage managed by the FSE
server.
• To migrate more than 1 TB to the secondary storage, you need additional FSE server capacity licenses.
These are available in 1 TB increments. The migrated capacity is the sum of all capacity migrated to
the secondary storage from all associated FSE clients, including all copies where two or more copies of
migrated files are configured, and all versions of modified migrated files.
• To upgrade an FSE client-managed file system to WORM, you need a capacity-based license. This is
available in 1 TB increments. The capacity for this WORM license is based on the physical capacity
occupied by the upgraded FSE file system on the production disk.
Preparing file systems for FSE
In order to optimize operation of the FSE implementation and increase its reliability, you should organize
file systems on the host that will become the FSE server, as well as on the FSE client. If you intend to use
disk media, you also need to prepare file systems to hold disk media files.
The following sections explain the importance of preparing file systems for FSE operation and provide
formulas to estimate the required space for FSE components. These explanations and formulas apply
generally when configuring an FSE implementation. The preparation is described in ”Preparing file
systems” on page 27.
The following table summarizes the main parameters to be considered when setting up the environment.
These parameters are discussed later in this chapter.
Table 2  Pre-installation size considerations

Parameter                    Description                                         Reference
HSM file system size         Determine the minimum HSM file system size          ”Formula for the expected HSM
                             using such data as the expected number of files     file system size” on page 15
                             and the average file size.
Fast Recovery                Determine the expected size of FRI.                 ”Formula for the expected size of
Information (FRI) size                                                           Fast Recovery Information” on
                                                                                 page 15
File System Catalog          The FSC contains the location history and           ”Formula for the expected File
(FSC) size                   metadata of files on HSM file systems.              System Catalog size” on page 15
                             Determine its expected size.
Temporary files in FSE       Total storage space on file systems or volumes      ”Space requirements of FSE disk
disk buffer                  that are assigned to the FSE disk buffer should     buffer” on page 17
                             be at least 10% of total storage space on all
                             HSM file systems in the FSE implementation.
Debug files                  Debug files are optional but may grow and fill      ”Storage space for FSE debug
                             up the file system. Dedicate a separate file        files” on page 18
                             system to debug files and use a symbolic link,
                             or mount another file system to the debug files
                             directory.
Reasons for organizing file systems
There are several reasons why you need to re-organize file systems on the machine that will host the FSE
software:
• Increase reliability of the core FSE databases.
FSE databases are vital FSE components and need to be secured to allow the FSE implementation to
become as stable as possible. Splitting the file system that contains the FSE databases into several file
systems provides increased security.
• Reserve sufficient disk space for FSE databases, FSE log files, and FSE debug files.
FSE databases, FSE log files, and FSE debug files can grow quite large over time. Gradually, some file
systems that hold these files can become full, which may lead to partial or complete data loss. For more
information on calculation of the required disk space, see ”Estimating the size of file systems” on
page 14.
On the consolidated FSE system or FSE server, HP recommends that you use Logical Volume Manager
(LVM) volumes for storage of file systems that are used for FSE disk media. This will increase flexibility and
robustness of the file systems that store FSE disk media.
Organizing the file system layout
During the FSE installation process, several new directories are created and FSE-related files are copied to
them. Some of the directories in the FSE directory layout are crucial for correct operation of an FSE
implementation. For improved robustness and safety, they must be placed on separate file systems. This
prevents problems in one directory from influencing data in the others. In the case of the FSE disk buffer,
this separation may also improve the overall performance of the FSE system.
The directories and the required characteristics of the mounted file systems are listed in Table 3 and
Table 4, according to their location in the FSE implementation.
Table 3  FSE server directory layout

Directory                     Contents                             File system type    LVM volume
/var/opt/fse/                 Configuration Database, Resource     Ext3                required for the FSE backup
                              Management Database, other
                              FSE system files
/var/opt/fse/part/            File System Catalogs                 Ext3                required for the FSE backup
/var/opt/fse/fri/             Fast Recovery Information (FRI)      Ext3                required for the FSE backup
/var/opt/fse/diskbuf/FS1      Temporary files of the FSE disk      Ext3                required for the FSE backup
/var/opt/fse/diskbuf/FS2      buffer
... (1)
/var/opt/fse/log/             FSE log files, FSE debug files       any                 not required
/var/opt/fse/dm/Barcode/      FSE disk media                       any                 recommended

1. Mount points of file systems assigned to the FSE disk buffer, where /var/opt/fse/diskbuf/ is the root
directory of the FSE disk buffer.
NOTE: You can assign additional file systems or volumes to the FSE disk buffer. These file systems must be
mounted on subdirectories one level below the root directory of the FSE disk buffer. For details, see the FSE
user guide, chapter ”Monitoring and maintaining FSE”, section ”Extending storage space of FSE disk
buffer”.
IMPORTANT: To achieve sufficient stability of the FSE disk media, a separate file system must be
dedicated to each disk medium, and it must be mounted to the corresponding subdirectory of the FSE disk
media directory. Thus, the FSE disk media directory itself does not need to be located on a separate file
system.
Table 4  FSE client directory layout

Directory             Contents                           File system type    LVM volume
/var/opt/fse/part/    Hierarchical Storage               Ext3                required
                      Management Databases
/var/opt/fse/log/     FSE log files, FSE debug files     any                 required, if the client is part of
                                                                             the consolidated FSE system
Estimating the size of file systems
Each of the previously mentioned file systems holds large databases and/or system files. Therefore, you
need to calculate the space requirement for all of them before they are created.
The sizes of the HSM file system, Fast Recovery Information (FRI) files, File System Catalog (FSC), and
Hierarchical Storage Manager Database (HSMDB) files are all related to several parameters. Among these
parameters are the number of files on an HSM file system and their average size.
By default, 32 MB of journal space is created on Ext3 file systems. This should be sufficient for the file
systems of the following directories:
• /var/opt/fse
• /var/opt/fse/part
• /var/opt/fse/diskbuf
IMPORTANT: FSE includes the HSM Health Monitor utility, which helps prevent the file systems for FSE
databases and system files, as well as HSM file systems, from running out of free space. For details, see the FSE
user guide, chapter ”Monitoring and maintaining FSE”, section ”Low storage space detection”.
Formula for the expected HSM file system size
Use this simplified formula to calculate the minimum HSM file system size:
minHSMFSsize ..... the minimum required HSM file-system size in bytes.
afs ..... the average file size in bytes.
nf ..... the expected number of files on an HSM file system.
pon ..... the percentage of online files (%).
bks ..... the file-system block size in bytes.
Formula for the expected size of Fast Recovery Information
Fast Recovery Information (FRI) consists of a set of files, each corresponding to a single open data volume
on a configured FSE medium. Each FRI file grows in size as the percentage of used space on its volume
increases, and reaches its maximum size when the corresponding data volume becomes full. The FRI files
are then copied to appropriate locations on the FSE medium and removed from disk.
Use this formula to calculate the expected maximum size of FRI files on disk:

maxFRIsize = (nv × sv × [(lf + 350) × nm / tbks]) / [sf × nm / tbks]

where the meaning of the parameters is:
maxFRIsize ..... the estimated maximum size of FRI files on disk in bytes.
nv ..... the total number of open FSE medium volumes (1).
sv ..... the size of an FSE medium volume on tape in bytes.
lf ..... the average file name length of files being migrated in bytes.
nm ..... the average number of files migrated together in the same migration job.
tbks ..... the block size on an FSE medium (2) in bytes.
sf ..... the average size of files being migrated in bytes.
[...] ..... square brackets indicate that the value inside is rounded up to an integer.

1. Normally, the number of configured FSE media pools containing media with migrated files.
2. Assuming all FSE media pools are configured with the same block size (block size is uniform on all FSE media).
Formula for the expected File System Catalog size
The File System Catalog (FSC) is a database that consists of the Data Location Catalog (DLC) and the
Name Space Catalog (NSC). The DLC records the full history of file locations on FSE media. The NSC
contains metadata of files on an HSM file system in the FSE implementation.
Factors used for FSC size estimation:
• Approximately 180 bytes per file are used for the FSC (DLC + NSC) for a typical file generation with
two copies and a file name size of 16 characters using standard file attributes (Linux). You need to add
the size of additional attributes on Windows: access control lists (ACLs), extended attributes (EAs), and
alternate data streams (ADSs).
• An additional 36 bytes per file copy are required for the media volume index when you run the FSC
consistency check. This space is used on the first run of the consistency check.
NOTE: HP recommends that you add another 50% as a reserve when calculating the maximum FSC size.
The following examples present space usage for typical configurations.
Example 1: three copies, one generation:
First generation takes (189 for FSC) + (36 x 3 for volume index) = 297 bytes
Each additional generation takes 47 + (36 x 3) = 155 bytes
Total size = ((297 + add. attr. size) x max. number of files) + (155 x number of add. generations)
Example 2: two copies, one generation:
First generation takes (180 for FSC) + (36 x 2 for volume index) = 252 bytes
Each additional generation takes 38 + (36 x 2) = 110 bytes
Total size = ((252 + add. attr. size) x max. number of files) + (110 x number of add. generations)
Example 3: one copy, one generation:
First generation takes (162 for FSC) + (36 for volume index) = 198 bytes
Each additional generation takes 20 + 36 = 56 bytes
Total size = ((198 + add. attr. size) x max. number of files) + (56 x number of add. generations)
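As an illustration (not part of the original guide), the following shell sketch applies Example 2 to a
hypothetical HSM file system with 10 million files, assuming no additional attribute size, no additional
generations, and the 50% reserve recommended above:
# awk 'BEGIN { files = 10000000        # hypothetical maximum number of files
               per_file = 252          # bytes per file: two copies, one generation (Example 2)
               reserve = 1.5           # 50% reserve recommended above
               printf "maximum FSC size = %.1f GB\n", files * per_file * reserve / 2^30 }'
maximum FSC size = 3.5 GB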
NOTE: A well defined FSE backup policy with regular backups prevents the excessive growth of the
transaction log files of the File System Catalogs (FSCs). The transaction log files are committed into the
main databases during the FSE backup process.
For details, see the FSE user guide, chapter ”Backup, restore, and recovery”, section ”Backup”.
Formula for the expected Hierarchical Storage Manager Database (HSMDB) size
This is the formula for calculating the maximum Hierarchical Storage Management Database (HSMDB)
size:
maxHSMDBsize ..... the maximum HSM database size in bytes.
nf ..... the expected number of files on an HSM file system.
pdi ..... the percentage of directories (%).
afnl ..... the average length of file names in bytes.
pon ..... the percentage of online files (%).
NOTE: A well defined FSE backup policy with regular backups prevents the excessive growth of the
transaction log files of the Hierarchical Storage Management Databases (HSMDBs). The transaction log
files are committed into the main databases during the FSE backup process.
For details, see the FSE user guide, chapter ”Backup, restore, and recovery”, section ”Backup”.
Sample calculation for the expected sizes of HSM file system, FSC and HSMDB
The following is an example of a calculation of space required on an HSM file system and on the file
systems holding File System Catalog (FSC), and HSM Database (HSMDB), and Fast Recovery Information
(FRI) files.
Sample input for calculations:
• HSM file system can store 10 million entities (files and directories)
• average size of files being migrated is 100 KB
• 20% of the files are online
(online means they occupy space on the local HSM file system)
• 20% of all entities on the HSM file system are directories
• the average file name length of files being migrated is 10 characters
• files have only one generation
• average number of copies per file generation amounts to two
Sample results:
• FSC space for the first file generation: (162 for FSC) + (36 for volume index) = 198 bytes per file
• minimum HSM file system size: 216-306 GB, depending on the block size (1 KB, 2 KB, 4 KB)
• maximum File System Catalog size: 453 MB
• maximum HSM Database size: 287 MB
The File System Catalog and the HSMDB together require 740 MB. That is approximately 0.3% of the
minimum required HSM file system size for this input.
Sample calculation for the expected total size of the FRI files
The following is an example of a calculation of space required on the file system holding the Fast Recovery
Information (FRI) files.
Sample input for calculation:
• total number of open FSE medium volumes in the FSE implementation is 8
• size of an FSE medium volume on tape is 5135 MB
• average size of files being migrated is 100 KB
• average file name length of files being migrated is 10 characters
• average number of files migrated together in the same migration job is 50
• block size on tape medium is 128 KB
Sample result:
• estimated maximum size of FRI files on disk: 1027 MB
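As an illustration (not part of the original guide), this shell sketch evaluates the FRI formula above with
the sample inputs; the ceil() function implements the round-up denoted by the square brackets:
# awk 'function ceil(x) { return (x == int(x)) ? x : int(x) + 1 }
       BEGIN { nv = 8                  # open FSE medium volumes
               sv = 5135 * 2^20        # volume size: 5135 MB in bytes
               lf = 10                 # average file name length in bytes
               nm = 50                 # files migrated together in one job
               tbks = 128 * 2^10       # tape block size: 128 KB in bytes
               sf = 100 * 2^10         # average file size: 100 KB in bytes
               fri = nv * sv * ceil((lf + 350) * nm / tbks) / ceil(sf * nm / tbks)
               printf "maxFRIsize = %.0f MB\n", fri / 2^20 }'
maxFRIsize = 1027 MB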
Space requirements of FSE disk buffer
To determine the approximate storage space required by the FSE disk buffer, you should consider the following
points:
• Total storage space on file systems or volumes that are assigned to FSE disk buffer should be at least
10% of the total storage space on all HSM file systems in the FSE implementation.
• Each file system or volume assigned to FSE disk buffer should be at least twice as large as the largest
file that will be put under FSE control.
Additionally, if you plan to perform a duplication of FSE media, the following prerequisite must be fulfilled:
• At least one of the file systems or volumes assigned to the FSE disk buffer should be at least twice as
large as the total storage space on the largest FSE medium.
For example, to enable duplication of a 100 GB medium, at least one file system or volume assigned to
the FSE disk buffer must have at least 200 GB of storage space.
Storage space for FSE debug files
The /var/opt/fse/log/debug directory holds optional FSE debug files. These files contain a large
amount of data and can grow very fast. To prevent them from filling up the /var/opt/fse file system,
you need to make the directory /var/opt/fse/log/debug a symbolic link to a directory outside the
file system for /var/opt/fse. For example, you can make the symbolic link point to one of the following
directories:
• /var/log/FSEDEBUG
• /tmp/FSEDEBUG
The directory /tmp can be used only if it provides large storage space and is not part of the root file
system, that is, if a separate file system is mounted on /tmp.
Creating symbolic links is done after you create and mount the required file systems.
NOTE: If there is enough disk space that is not yet partitioned, you can also make a new partition for the
debug files, create an Ext3 file system on it, and mount it on /var/opt/fse/log/debug.
You need to add a line to the /etc/fstab file, for example:
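A sketch of such an entry, assuming a hypothetical logical volume named fsedebug in the vg_fse volume
group (the mount options are illustrative):
/dev/vg_fse/fsedebug  /var/opt/fse/log/debug  ext3  defaults  0  2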
2 Installation overview
This chapter provides an installation overview, which summarizes the steps necessary to prepare the system
and to install the FSE software. Where appropriate, you are pointed to more detailed steps within this
document.
Action                                                      Comments & where to find details
1. Install the required operating system update.            ”Required operating system updates” on page 21.
2. Install all necessary third-party packages.              ”Required third-party packages for SUSE Linux
                                                            Enterprise Server 9 (SLES 9)” on page 21,
                                                            ”Required third-party packages for Red Hat
                                                            Enterprise Linux 4 (RHEL 4)” on page 22.
3. Prepare logical volumes:                                 ”Preparing Logical Volume Manager (LVM)
                                                            volumes” on page 27.
   a. Prepare partitions.                                   ”Step 2: Define and initialize LVM physical
                                                            volumes” on page 27.
   b. Create logical volume groups.                         ”Step 3: Create and initialize LVM logical
                                                            volume groups” on page 28.
   c. Create logical volumes.                               ”Step 4: Create and initialize LVM logical
                                                            volumes” on page 28, ”Step 5: Create LVM
                                                            logical volumes for HSM file systems” on page 29.
   d. Create a file system on each logical volume           ”Creating file systems on top of LVM logical
      (command: mkfs.ext3).                                 volumes” on page 30.
   e. Create the mount points.                              ”Mounting file systems for FSE databases and
                                                            system files” on page 31.
   f. Update the /etc/fstab file with the required          This is to mount the file systems (FSE databases and
      information.                                          system files) automatically at system startup time.
                                                            ”Mounting file systems for FSE databases and
                                                            system files” on page 31.
   g. Mount the file systems for FSE databases and          ”Mounting file systems for FSE databases and
      system files on previously created directories.       system files” on page 31.
4. Install the FSE software.                                ”Installing an FSE release” on page 35.
5. Start FSE.                                               ”Starting the FSE implementation” on page 43.
6. Check the status of Firebird SuperServer and the         ”Checking the status of a running FSE
   FSE processes.                                           implementation” on page 46.
7. Configure and start HSM Health Monitor.                  ”Configuring and starting HSM Health Monitor”
                                                            on page 48.
8. Optionally, configure and start Log Analyzer.            ”Configuring and starting Log Analyzer” on
                                                            page 48.
9. Optionally, install the FSE Management Console.          ”Installing the FSE Management Console” on
                                                            page 49.
10. Configure resources (libraries, drives, media pools,    FSE user guide, chapter ”Configuring FSE”.
    partitions, media).
11. Mount HSM file systems:                                 ”Automating the mounting of HSM file systems”
    a. Create directories.                                  on page 49.
    b. Update the /etc/fstab file with the required         To mount HSM file systems, add entries for these file
       information.                                         systems to the local file system table in the file
                                                            /etc/fstab.
                                                            NOTE: This step is similar to steps 3e and 3f, where
                                                            you mounted the FSE databases and system files. In
                                                            this step, you are mounting HSM file systems.
3 Preparing the operating system environment
This chapter describes the changes that must be made to the operating system environment
on the computer that will host a consolidated FSE system (integrated server and client) or part of a
distributed FSE implementation (separate server and separate client). It also lists the required third-party
packages that must be installed prior to installing the FSE software.
Preparing the operating system
NOTE: You must be logged on to the system as root in order to prepare the operating system environment.
Required operating system updates
SUSE Linux Enterprise Server
You need to upgrade all SUSE Linux Enterprise Server 9 systems that will become FSE system components
with SLES 9 Service Pack 3 (SLES 9, SP 3, kernel: 2.6.5-7.244-default, 2.6.5-7.244-smp or
2.6.5-7.244-bigsmp) in order to enable installation of the FSE software.
For more information on specifics related to the supported kernel variants, see the FSE release notes.
Red Hat Enterprise Linux
Red Hat Enterprise Linux AS/ES 4 Update 3 (RHEL AS/ES 4, Update 3, kernel: 2.6.9-34.EL,
2.6.9-34.ELhugemem or 2.6.9-34.ELsmp) must be installed on all FSE system components running
on RHEL AS/ES 4.
For more information on specifics related to the supported kernel variants, see the FSE release notes.
Required third-party packages for SUSE Linux Enterprise Server 9 (SLES 9)
Table 5 lists the required package versions for a SUSE Linux Enterprise Server 9 (SLES 9) operating system
and the components of FSE that require each package. Most packages are already included in the
operating system distribution or service pack. Unless stated otherwise, later versions are also acceptable.
Table 5  Packages and their relation to FSE components on SUSE Linux Enterprise Server

Package    Package name in the rpm -qa output (SLES)    Package file name (SLES)
Required third-party packages for Red Hat Enterprise Linux 4 (RHEL 4)
Table 6 lists the required package versions for a Red Hat Enterprise Linux 4 (RHEL 4) operating system and
the components of FSE that require each package. Most packages are already included in the operating
system distribution or service pack. Unless stated otherwise, later versions are also acceptable.
Table 6  Packages and their relation to FSE components on Red Hat Enterprise Linux

Package    Package name in the rpm -qa output (RHEL)    Package file name (RHEL)
To check whether the required package versions are installed, use rpm -q followed by the package name
without the version and suffix:
# rpm -q PackageName
If the package has been installed, the command responds with:
PackageName-PackageVersion
Otherwise, the response is:
package PackageName is not installed
Example
To check if libgcc-3.2.2-54.i586.rpm has been installed, enter rpm -q libgcc at the command
prompt. Note that later versions are also acceptable.
Installing Firebird SuperServer on an FSE server
Consolidated FSE systems and FSE servers require the third-party software Firebird SuperServer, used for
implementation of the Resource Management Database.
To install the RPM package (FirebirdSS-1.0.3.972-0.64IO.i386.rpm) to the appropriate
locations, use the RPM installation tool:
1. Install the FirebirdSS package using the command:
# rpm -ivh FirebirdSS-1.0.3.972-0.64IO.i386.rpm
2. In the /etc directory, create a plain text file gds_hosts.equiv containing the following two lines:
+
+localhost
3. If you are installing FirebirdSS to a SUSE Linux Enterprise Server system, once the Firebird SuperServer
is installed, open the file /etc/sysconfig/firebird with a text editor, search for the
START_FIREBIRD variable, and set its value to "yes". If the line does not exist, add it as follows:
# Start the Firebird RDBMS ?
#
START_FIREBIRD="yes"
If the file /etc/sysconfig/firebird does not exist, create it and add the above contents to it.
Disabling ACPI
Some kernels can have an incomplete implementation of support for the Advanced Configuration and Power
Interface (ACPI). Enabled kernel support for ACPI causes problems on symmetric multiprocessing (SMP)
machines (machines with multiple processors), and on machines with SCSI disk controllers. This means
that you need to disable kernel support for ACPI before booting the SMP variant of a Linux kernel on an
SMP machine, or an arbitrary kernel variant on a machine with a SCSI disk controller. ACPI has to be
disabled on all supported distributions. The following additional boot-loader parameter disables ACPI:
acpi=off
However, with some configurations, this single parameter does not give the desired effect. In such cases, a
different set of boot-loader parameters must be specified to disable ACPI. Instead of the acpi=off string,
you must provide the following options:
acpi=oldboot pci=noacpi apm=power-off
See http://portal.suse.com/sdb/en/2002/10/81_acpi.html for information on kernel parameters to
control the ACPI code.
Depending on the boot loader you are using on the system, you need to modify the appropriate
boot-loader configuration file to disable ACPI.
Disabling ACPI with GRUB boot loader
To disable ACPI, you need to edit the GRUB configuration file /boot/grub/menu.lst and add the
syntax acpi=off to it.
The following is an example of supplying the required boot parameter to a kernel image located at
/boot/bzImage on the system’s first hard drive:
title Linux
root (hd0,0)
kernel /boot/bzImage acpi=off
Disabling ACPI with LILO boot loader
To disable ACPI, you need to edit the LILO configuration file and add the syntax append = "acpi=off"
to it. The LILO configuration file is usually /etc/lilo.conf.
This is an example of supplying the required boot parameter to a kernel image located at
/boot/bzImage:
image = /boot/bzImage
label = Linux
read-only
append = "acpi=off"
After you add this option to the LILO configuration file, run lilo to ensure that at the next boot, ACPI will
be disabled.
4 Preparing file systems for FSE
In order to optimize the FSE implementation and increase its reliability, it may be necessary to re-organize
the file systems on the host that will be dedicated to the FSE server as well as on the FSE client. When using
disk media, you need to prepare file systems to hold disk media files.
Preparing file systems
The following sections describe the steps you need to perform on the operating systems to manually define
the necessary Logical Volume Manager (LVM) volumes, create file systems on top of them, and mount the
file systems created for FSE databases and system files.
Preparing Logical Volume Manager (LVM) volumes
Most of the file systems that are used by the FSE implementation, HSM file systems and file systems for FSE
databases and system files, should be located on Logical Volume Manager (LVM) volumes. This is required
by the FSE backup and restore functionality, in order to enable file system snapshot creation. For details on
which file systems must be located on the LVM volumes, see chapter ”Introduction and preparation basics”,
section ”Organizing the file system layout” on page 13.
For detailed instructions on LVM usage, see the LVM manuals, the LVM man pages, and the web site
http://tldp.org/HOWTO/LVM-HOWTO/.

CAUTION: Use the LVM command set with caution, as certain commands can destroy existing file
systems!
Preparation overview
You need to perform the following steps to prepare LVM volumes:
1. Get a list of disks and disk partitions that exist on the system.
2. Define and initialize LVM physical volumes. LVM commands for managing LVM physical volumes begin
with letters pv (physical volume) and are located in the directory /sbin.
3. Once physical volumes are configured, you have to create and initialize the LVM logical volume
groups; these use the space on physical volumes. Commands for managing LVM logical volume groups
begin with letters vg (volume group) and are located in the directory /sbin.
4. Create and initialize LVM logical volumes for the file systems you are going to use inside the FSE
implementation. Commands for managing LVM logical volumes begin with letters lv (logical volume)
and are located in the directory /sbin.
For detailed instructions on LVM use, see the LVM man pages and the web site
http://tldp.org/HOWTO/LVM-HOWTO/.
Step 1: Get a list of available disks and disk partitions
Before you define and initialize physical LVM volumes, you need to know the existing disk and disk
partition configuration.
Invoke the following command to get a list of disks and disk partitions that exist on the system:
# fdisk -l
Step 2: Define and initialize LVM physical volumes
LVM physical volumes can be either whole disks or disk partitions. HP recommends using disk partitions
rather than whole disks, because whole disks can be mistakenly considered free disks.
CAUTION: Any data on these disks or partitions will be lost as you initialize an LVM volume. Make sure
you specify the correct device or partition.
NOTE: Commands for managing LVM physical volumes begin with the letters pv (physical volume) and
are located in the /sbin directory.
In the example below, the first partition of the first SCSI disk and the first partition on the second SCSI disk
are initialized as LVM physical volumes and are dedicated to the LVM volumes. Use values according to
your actual disk configuration:
# pvcreate /dev/cciss/c0d_p1
# pvcreate /dev/cciss/c0d_p2
Step 3: Create and initialize LVM logical volume groups
LVM logical volume groups are a layer on top of the LVM physical volumes. One LVM logical volume group
can occupy one or more LVM physical volumes.
NOTE: Commands for managing LVM logical volume groups begin with the letters vg (volume group)
and are located in the /sbin directory.
CAUTION: HP recommends separating the FSE databases and system files from the user data on the
HSM file systems by putting them in two separate LVM volume groups, as shown in the following
examples. This helps increase data safety.
In the following example, the newly created LVM physical volume /dev/cciss/c0d_p1 is assigned to
the LVM volume group vg_fse, and the LVM physical volume /dev/cciss/c0d_p2 is assigned to the
LVM volume group vg_fsefs. The volume group vg_fse will store FSE databases and system files, and
the volume group vg_fsefs will store the HSM file systems with user files and directories.
When creating the LVM volume groups, use names and values according to your preferences and your
actual LVM physical volume configuration.
To create the volume groups using the default physical extent size, invoke the following commands:
# vgcreate vg_fse /dev/cciss/c0d_p1
# vgcreate vg_fsefs /dev/cciss/c0d_p2
NOTE: If you intend to create LVM logical volumes larger than 256 GB, you must use the option -s
(--physicalextentsize) with vgcreate, and specify a physical extent larger than 4 MB. An LVM
logical volume can address only a limited number of physical extents, so the maximum volume size grows
with the extent size. For example, a physical extent of 4 MB enables LVM to address up to 256 GB, and a
physical extent of 32 MB allows addressing 2 TB of disk space. Note that the recommended physical
extent size for the FSE file system and disk media (if under LVM) volume groups is 32 MB.
For details on using the vgcreate command, see the vgcreate man page (man vgcreate).
To create the volume groups using the physical extent size of 32 MB, invoke the following commands:
# vgcreate -s 32M vg_fse /dev/cciss/c0d_p1
# vgcreate -s 32M vg_fsefs /dev/cciss/c0d_p2
Step 4: Create and initialize LVM logical volumes
LVM logical volumes are virtual partitions and can be mounted like ordinary partitions once file systems
are created on them.
NOTE: Commands for managing LVM logical volumes begin with the letters lv (logical volume) and are
located in the /sbin directory.
In the following example, LVM logical volumes are created on the LVM volume group vg_fse for the
important directories that FSE uses, namely:
• /var/opt/fse/
• /var/opt/fse/part/
• /var/opt/fse/fri/
• /var/opt/fse/log/
• /var/opt/fse/diskbuf/FileSystemMountPoint
Optional additional file systems assigned to FSE disk buffer will use mount points conforming to the
following scheme, where NewFileSystemMountPoint is a unique subdirectory name:
• /var/opt/fse/diskbuf/NewFileSystemMountPoint
You should use logical volume names according to your preferences and sizes that correspond to your
actual LVM volume group configuration:
# lvcreate -L 6G -n fsevar vg_fse
# lvcreate -L 6G -n fsepart vg_fse
# lvcreate -L 6G -n fsefri vg_fse
# lvcreate -L 6G -n fselog vg_fse
# lvcreate -L 20G -n fsediskbuf vg_fse
In case of additional file systems assigned to the FSE disk buffer, you should use the following command for
creation of the LVM logical volume for each of them:
# lvcreate -L 20G -n fsediskbufNumber vg_fse
NOTE: You need to leave some free space for optional LVM snapshot volumes on the LVM volume group,
which are created during backup of the FSE implementation. The size of the reserved space should be
approximately 15-20% of the whole LVM volume group size, as recommended by the LVM developers.
The exact value depends on the actual load of the LVM volumes: it should be increased if frequent changes
to the HSM file systems are expected during the FSE backup or if the FSE backup process is expected to
last longer.
The /var/opt/fse/log/debug directory is not as critical as others, and can be placed on an ordinary
file system. For more details on configuration, see chapter ”Introduction and preparation basics”, section
”Estimating the size of file systems”, subsection ”Storage space for FSE debug files”.
Step 5: Create LVM logical volumes for HSM file systems
To create the LVM logical volume for a single HSM file system that will actually contain user files, use the
lvcreate command. Use a logical volume name according to your preferences, and a size that
corresponds to your actual LVM volume group configuration:
# lvcreate -L 400G -n fsefs_01 vg_fsefs
Repeat the procedure to create the LVM logical volumes for each additional HSM file system you are going
to use.
Creating file systems on top of LVM logical volumes
After the LVM logical volumes have been successfully initialized, you need to create file systems on top of
them using the command mkfs.ext3.
HP recommends that you use the mkfs.ext3 option -b 4096 for specifying a block size of 4096 bytes.
TIP: If you want to check the properties your file system will have without actually creating it, you can run
the mkfs.ext3 command with the -n switch.
An example output of checking the example_fs file system values is as follows:
NOTE: The number of inodes in the mkfs.ext3 output corresponds to the expected maximum number of
files on the file system. If this number is not satisfactory, you can explicitly tell mkfs.ext3 to reserve a
certain number of inodes with the -N option.
Note that the upper limit for the number of inodes is affected by two factors: the file system size and the file
system block size. On a file system with size Fs and block size Bs, the maximum number of inodes In is
determined by the equation In = Fs / Bs. If you specify a number bigger than In, the mkfs.ext3
command creates In inodes.
For example, to create one million inodes, you must specify the -N 1000000 option. With a block size of
4096 bytes, the file system size must be equal to or greater than 3.8 GB (4 096 000 000 bytes) for this
number of inodes to actually be created.
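For instance, a dry-run check of such a file system might look like this (a sketch; the logical volume name
follows the examples in this guide):
# mkfs.ext3 -b 4096 -N 1000000 -n /dev/vg_fsefs/fsefs_01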
Proceed as follows and first create the file systems for the FSE databases and system files, and after that
create HSM file systems.
Step 1: Creating file systems for FSE databases and system files
Use the following command sequence to create the file systems for the FSE databases and system files (for
our example):
# mkfs.ext3 -b 4096 /dev/vg_fse/fsevar
# mkfs.ext3 -b 4096 /dev/vg_fse/fsepart
# mkfs.ext3 -b 4096 /dev/vg_fse/fsefri
# mkfs.ext3 -b 4096 /dev/vg_fse/fselog
# mkfs.ext3 -b 4096 /dev/vg_fse/fsediskbuf
If you will assign additional file systems to the FSE disk buffer, run the following command for each of them:
# mkfs.ext3 -b 4096 /dev/vg_fse/fsediskbufNumber
Each command reports the properties of the newly created file system.
NOTE: To improve the performance of the FSE disk buffer, you can use Ext2 file systems for its storage
space. Ext2 file systems are non-journaled and therefore faster than Ext3. In this case, use the command
mkfs.ext2 instead of mkfs.ext3 in the above command sequence.
Step 2: Creating HSM file systems
To create HSM file systems, proceed as follows:
1. Use the following command to create an HSM file system on top of the LVM logical volume fsefs_01:
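A sketch of the command, assuming the block size recommended earlier and the inode count used in this
example:
# mkfs.ext3 -b 4096 -N 10000000 /dev/vg_fsefs/fsefs_01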
In this example, the newly created HSM file system can store a maximum of 10 000 000 files, as the
same number of inodes are reserved on it. Consider the limitation on the number of inodes that can be
created on a file system with specific total size and specific block size.
You should use the number of inodes according to your HSM file system requirements, and the LVM
logical volume name according to your actual LVM volume configuration.
2. Create HSM file systems on all other LVM logical volumes that will be used for FSE partitions. Use
values according to your requirements and the particular purpose of HSM file systems in the FSE
implementation.
NOTE: The number of inodes on the file system cannot be changed once the file system has been put into
use.
The next section provides instructions on how to mount file systems for FSE databases and system files.
Mounting file systems for FSE databases and system files
The last step of the preparation procedure is to mount the file systems for FSE databases and system files.
Note that the file systems for the FSE partitions, that is HSM file systems, can only be mounted after the FSE
daemons have been successfully started.
To mount the necessary file systems, do the following:
1. Create the first of the three important FSE directories (and its parent directories):
# mkdir -p /var/opt/fse
2. Invoke the following command to retrieve the name of the device file. Use the symbolic link from
step 1 of section ”Creating file systems on top of LVM logical volumes”:
# ls -la /dev/vg_fse/fsevar
The command generates an output similar to the following:
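The exact output varies; on an LVM2 system it may look similar to this sketch (the timestamp and
device-mapper target name are illustrative):
lrwxrwxrwx 1 root root 25 Jan 1 12:00 /dev/vg_fse/fsevar -> /dev/mapper/vg_fse-fsevar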
3. To ensure that the file system is automatically mounted at system startup, add the appropriate entry to
the file system table in the file /etc/fstab. When adding the entry, note that the device name in the
first column corresponds to the device file that you retrieved in the previous step:
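A sketch of such an entry, assuming the device file resolved in the previous step (the mount options are
illustrative):
/dev/vg_fse/fsevar  /var/opt/fse  ext3  defaults  0  2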
Creating a symbolic link for debug files directory
There are several possibilities where you may store the FSE debug files, as listed in ”Storage space for FSE
debug files” on page 18.
NOTE: When you decide on the placement of the FSE debug files, you need to make sure that the disk
partition holding the target directory for the debug files has enough free space for a potentially large
amount of debugging data.
HP recommends that you create and mount a large separate file system on the debug files directory, since
a full file system would stop the creation of debug files and might affect other processes that use the same
file system.
For example, to create a symbolic link to the directory /var/fse_log/DEBUG located on a separate file
system, use the following commands:
# rmdir /var/opt/fse/log/debug
# ln -s /var/fse_log/DEBUG /var/opt/fse/log/debug
The symbolic link creation for debug files completes the preparation phase. For instructions on installing
FSE software, see chapter ”Installing FSE software” on page 35.
5 Installing FSE software
This chapter describes steps you must follow to install FSE software. The installation procedure depends on
the type of the FSE implementation component you want to install: a consolidated FSE system, an FSE
server or external FSE client.
NOTE: With mixed or distributed FSE system implementations you need to install external FSE clients after
you have installed the consolidated FSE system or the FSE server, respectively. On each system that will be
included in the FSE implementation, the same FSE product version must be installed.
Installation of the FSE Management Console is optional, and you can still install it after the FSE
implementation is configured and put into use. For more information on the FSE Management Console, see
the FSE user guide, chapter ”Introducing HP StorageWorks File System Extender”, section ”FSE user
interfaces”.
This chapter includes the following topics:
• Prerequisites, page 35
• Installation overview, page 35
• Installing an FSE release, page 35
• Verifying and repairing the installed FSE software, page 38
• Preparing the environment for the first startup of the FSE implementation, page 39
• Starting the FSE implementation, page 43
• Checking the status of a running FSE implementation, page 46
• Configuring and starting HSM Health Monitor, page 48
• Configuring and starting Log Analyzer, page 48
• Installing the FSE Management Console, page 49
• Automating the mounting of HSM file systems, page 49
• Configuring the post-start and pre-stop helper scripts, page 50
Prerequisites
Before starting the installation, ensure that the following prerequisites are fulfilled:
• The required versions of the third-party software packages are installed on the system.
For information on how to install the third-party software packages, see the third-party software
documentation and the operating system documentation.
• You must be logged on to the system as root in order to execute shell commands.
Installation overview
To install FSE release software on a Linux system, perform the following:
1. Install the FSE packages located on the Linux part of the FSE release installation DVD-ROM.
2. Prepare the system for startup of the FSE processes.
3. Start the FSE processes.
4. Verify that the FSE processes are running.
Installing an FSE release
The FSE installation process is built on RPM packaging technology. FSE installation consists of several
installation packages that you need to install selectively. You install the packages according to the
component of the FSE implementation you are installing: a consolidated FSE system, an FSE server, or an
FSE client.
The following table lists the FSE installation packages, as well as the dependencies between the
packages. The dependencies define the correct order in which the packages must be installed.
Table 7  Installation packages

FSE installation package (1)                   Depends on
1. fse-common-Version.Distr.Plat.rpm           —
2. fse-server-Version.Distr.Plat.rpm           common
3. fse-agent-Version.Distr.Plat.rpm            common
4. fse-client-Version.Distr.Plat.rpm           common
5. fse-util-Version.Distr.Plat.rpm             common
6. fse-cli-admin-Version.Distr.Plat.rpm        common, server

1. In actual filenames of FSE packages, Version is replaced with actual version numbers in the form of
Major.Minor.SMRNumber.Build, Distr is replaced with sles9 (in the packages for SUSE Linux Enterprise Server 9) or
rhel4 (in the packages for Red Hat Enterprise Linux 4), and Plat is replaced with i586 (in the packages for SUSE Linux
Enterprise Server 9) or i386 (in the packages for Red Hat Enterprise Linux 4).
Installation procedure
You need to change the current directory to the one with installation packages, and install all packages
with a single rpm command invocation. The packages must be specified in the order defined in Table 7
on page 36:
# rpm -ivh FSEInstallationPackage...
For example, to install a consolidated FSE system, use the following command line:
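A sketch, assuming a consolidated FSE system requires all six packages from Table 7, installed in
dependency order (substitute the actual Version, Distr, and Plat values):
# rpm -ivh fse-common-Version.Distr.Plat.rpm fse-server-Version.Distr.Plat.rpm \
    fse-agent-Version.Distr.Plat.rpm fse-client-Version.Distr.Plat.rpm \
    fse-util-Version.Distr.Plat.rpm fse-cli-admin-Version.Distr.Plat.rpm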
NOTE: The rpm command checks for dependencies between the packages and may reorder them for the
installation process.
Monitoring the installation
While installing the FSE installation packages, the rpm command reports the installation progress and
generates an output similar to the following example.
NOTE: after successful installation configure HHM and LogAnalyzer,
set them active after next reboot using commands
chkconfig hhm on
chkconfig loganalyzer on
chkconfig loganalyzer_messages on
and start the services with commands
hhm start
loganalyzer start
loganalyzer_messages start
The initial content of the file /etc/opt/fse/services.cfg that you need to modify according to the
actual host name is the following:
server = fseserver.company.net
Verifying and repairing the installed FSE software
Once the FSE release software has been correctly installed, you can use the following procedure to identify
the build number. The procedure can also be used for detection of a corrupted FSE software installation.
Determining the build number
All files that are part of a particular FSE release software or a particular FSE system maintenance release
have the same build number. The build numbers of different system maintenance releases are different. The
build number of each FSE system maintenance release also differs from the initial FSE release build
number.
NOTE: Build numbers are the same across all supported platforms; therefore, the build numbers of the
Linux and Windows installation packages match.
NOTE: You can also use this procedure for identifying the installed FSE system maintenance release. For
details on FSE system maintenance releases, see appendix ”FSE system maintenance releases and hot
fixes” on page 73.
Determine the FSE build number using the following command:
# rpm -qa | grep fse
fse-server-3.4.0-Build.Distr
fse-cli-user-3.4.0-Build.Distr
fse-agent-3.4.0-Build.Distr
fse-client-3.4.0-Build.Distr
fse-common-3.4.0-Build.Distr
fse-cli-admin-3.4.0-Build.Distr
fse-util-3.4.0-Build.Distr
Version numbers of all installed FSE packages must be the same, including their Build and Distr
components.
Repairing the FSE software installation
If the FSE software installation becomes corrupt, you need to repair it by re-installing it.
NOTE: You can also use this procedure for repairing the installed FSE system maintenance release. For
details on FSE system maintenance releases, see appendix ”FSE system maintenance releases and hot
fixes” on page 73.
Use the rpm -V command to check whether the software installation is corrupt. Note that applied FSE hot
fixes change individual binaries and other files, making it difficult to determine whether or not the
installation is corrupt. Proceed as follows:
1. Change the current directory to one with the FSE installation packages and reinstall the packages:
# cd PathToFSEPackageDirectory
# rpm -F --force fse*.rpm
2. Verify the reinstalled packages using the following command:
# rpm -V `rpm -qa | grep fse`
If all FSE release or FSE system maintenance release files were correctly updated, the command output
is empty.
Preparing the environment for the first startup of the FSE implementation
Preparing the environment consists of modifying environment variables, updating the FSE backup
configuration files, and configuring the FSE interprocess communication.
Modifying the PATH environment variable
To be able to execute FSE commands, FSE tools, the omninames command, and start the FSE
Management Console client from any location, change the search path as follows:
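A sketch of such a line (the directories are an assumption based on the FSE installation layout under
/opt/fse; adjust them to your installation):
# export PATH=$PATH:/opt/fse/bin:/opt/fse/sbin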
You can permanently extend the executable search path by adding the above line to your .bashrc shell
startup file. The changes in .bashrc become effective after your next logon to the system as root.
Modifying the LD_LIBRARY_PATH environment variable
To enable Log Analyzer to run, set the variable for paths to shared libraries as follows:
# export LD_LIBRARY_PATH=/usr/local/lib
You can permanently extend the executable search path by adding the above line to your .bashrc shell
startup file. The changes in .bashrc become effective after your next logon to the system as root.
Modifying the MANPATH environment variable
To read the FSE man pages, extend the search path for man pages as follows:
# export MANPATH=$MANPATH:/opt/fse/man
You can permanently extend the man page search path by adding the above line to your .bashrc shell
startup file. The changes in .bashrc become effective after your next logon to the system as root.
Preparing the FSE backup configuration file
Configure the parameter SNAPSHOT_PCT in the backup configuration file
/etc/opt/fse/backup.cfg.
The line from backup.cfg with SNAPSHOT_PCT configured with its default value is the following:
SNAPSHOT_PCT=10
For details, see the FSE user guide, chapter ”Backup, restore, and recovery”, section ”Backup”, subsection
”How it works?”.
Configuring the FSE interprocess communication
FSE interprocess communication is configured according to the FSE implementation (consolidated,
distributed, or mixed FSE implementation) and the network type used for FSE interprocess communication
(ordinary LAN or private network). In this context, the term ordinary LAN means the common company LAN
to which systems are attached using their primary network adapters, and the term private network means a
dedicated network to which systems are attached using their secondary network adapters.
The initial configuration must be performed after the FSE software package installation, before the FSE
implementation is put into operation for the first time. Afterwards, if the implementation or the network type
is changed, the communication settings must be reconfigured.
CAUTION: An appropriate configuration of the FSE interprocess communication is of crucial importance
for normal FSE operation. Incorrectly configured interprocess communication may lead to a non-operating
FSE implementation.
FSE interprocess communication settings consist of two plain text files installed on each FSE host. The files
and their locations are the following:
Configuration file    Location
services.cfg          /etc/opt/fse
omniORB.cfg           /etc/opt/fse
NOTE: Before configuring the FSE interprocess communication, you may need to manually copy the file
omniORB.cfg from the directory /opt/fse/newconfig to the corresponding location.
You can also change the default path where the FSE software searches for the omniORB.cfg file using the
OMNIORB_CONFIG environment variable.
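The following commands are a minimal sketch of both adjustments, using the file locations listed above:
# cp /opt/fse/newconfig/omniORB.cfg /etc/opt/fse/omniORB.cfg
# export OMNIORB_CONFIG=/etc/opt/fse/omniORB.cfg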
CAUTION: In the following procedures for configuring a LAN connection, if you reconfigure a system with
several network adapters enabled, you need to configure the parameters in the section
--- Private network parameters --- in the omniORB.cfg file as described in the procedures
for private network communication configuration, instead of commenting them out.
In this case, the parameters you specify in omniORB.cfg must be verified against the actual LAN
configuration for that system.
Configuring communication on a consolidated FSE system or an FSE server
NOTE: You can use the fse_net command to test the connection between external FSE clients and the
FSE server. Details are given in the FSE user guide, chapter ”Troubleshooting”.
No external FSE clients or ordinary LAN connection
If your FSE set-up includes only a consolidated FSE system and you do not plan to connect external clients
to it, or if your external FSE clients are connected to the consolidated FSE system or the FSE server through
ordinary LAN, you do not need to modify the default configuration of the services.cfg file. Instead,
perform the following:
1. In the omniORB.cfg file on the consolidated FSE system or the FSE server, verify that all lines in the
section --- Private network parameters --- are inactive (commented out).
Private network connection
If your external FSE clients use a private network for communication with the consolidated FSE system or the
FSE server, you must make the following modifications to the consolidated FSE system or to the FSE server:
1. Add the hostname variable to services.cfg and provide as its value the fully-qualified domain
name (FQDN) that identifies the system inside the private network.
The following is an example of a correctly configured services.cfg file in an FSE implementation
using a private network. The server variable is redundant in such an FSE implementation:
hostname = fseserver.fsenet
server = fseserver.fsenet
2. In the omniORB.cfg file, configure the parameters in the section
--- Private network parameters --- with the following information:
• the FQDN that identifies the system inside the private network
• the IP address of the system
• the subnet mask
All these parameters must be verified against the actual private network configuration. Ensure that the
FQDN you specify in omniORB.cfg matches the FQDN specified for the hostname variable in the
services.cfg file.
The following example is an excerpt from a properly configured omniORB.cfg file:
# --- Private network parameters ---
#
# Which interface omniORB uses for IORs
endPoint = giop:tcp:fseserver.fsenet:
# The order of network interfaces to use for accepting connections:
# Only localhost and private network. Others are denied.
The following is an example excerpt from the local /etc/hosts file that matches the above
services.cfg and omniORB.cfg files:
123.45.67.89 fse-server1.company.com fse-server1
192.168.240.1 fseserver.fsenet fseserver
123.45.67.90 fse-client1.company.com fse-client1
192.168.240.2 fseclient.fsenet fseclient
NOTE: SUSE Linux Enterprise Server 9 (SLES 9) systems only: Do not run YaST2 after you have configured
this FSE host to use a private network for the FSE interprocess communication. Running YaST2 modifies
/etc/hosts in such a way that subsequent FSE system startups will fail.
Alternatively, you can modify /etc/sysconfig/suseconfig by changing the line
CHECK_ETC_HOSTS="yes" to CHECK_ETC_HOSTS="no". You can then run YaST2 without affecting
the FSE system operation, but you cannot modify host names with it.
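If you choose the alternative, the following command is a hedged sketch of the required change; verify the exact variable spelling in your /etc/sysconfig/suseconfig before running it:
# sed -i 's/^CHECK_ETC_HOSTS="yes"/CHECK_ETC_HOSTS="no"/' /etc/sysconfig/suseconfig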
Configuring communication on external FSE clients
NOTE: This configuration step is mandatory regardless of the platform of the consolidated FSE system or
FSE server.
Ordinary LAN connection
If the external FSE clients and the consolidated FSE system or the FSE server communicate through the
ordinary LAN, you only need to modify the services.cfg file on each external FSE client and comment
out some parameters in the omniORB.cfg file. Do the following:
1. Modify the value of the server variable in services.cfg to include the fully-qualified domain
name (FQDN) of the consolidated FSE system or the FSE server the client is connected to. For example:
server = fse-server1.company.com
2. In the omniORB.cfg file on the external FSE client, verify that all lines in the section
--- Private network parameters --- are inactive (commented out).
Private network connection
If the external FSE clients and the consolidated FSE system or the FSE server communicate through a private
network, you must modify both configuration files, services.cfg and omniORB.cfg, on each external
Linux client. The following procedure includes the necessary modification steps:
1. Modify the value of the server variable in services.cfg to contain the fully-qualified domain
name (FQDN) that identifies the consolidated FSE system or the FSE server inside the private network.
2. Add the hostname variable to services.cfg and provide as its value the FQDN that identifies the
Linux FSE client system inside the private network.
The following is an example of a properly configured services.cfg file in FSE setups using a private
network:
hostname = fseclient.fsenet
server = fseserver.fsenet
3. In the omniORB.cfg file, configure the parameters in the section
--- Private network parameters --- with the following information:
• the FQDN that identifies the system inside the private network
• the IP address of the system
• the subnet mask
All these parameters must be verified against the actual private network configuration. Ensure that the
FQDN you specify in omniORB.cfg matches the FQDN specified for the hostname variable in the
services.cfg file.
The following example is an excerpt from a properly configured omniORB.cfg file:
# --- Private network parameters ---
#
# Which network interface omniORB uses for IORs
endPoint = giop:tcp:fseclient.fsenet:
# The order of network interfaces to use for accepting connections:
# Only localhost and private network. Others are denied.
The following is an example excerpt from the local /etc/hosts file that matches the above
services.cfg and omniORB.cfg files:
123.45.67.89 fse-server1.company.com fse-server1
192.168.240.1 fseserver.fsenet fseserver
123.45.67.90 fse-client1.company.com fse-client1
192.168.240.2 fseclient.fsenet fseclient
NOTE: SUSE Linux Enterprise Server 9 (SLES 9) systems only: Do not run YaST2 after you have configured
this FSE host to use a private network for the FSE interprocess communication. Running YaST2 modifies
/etc/hosts in such a way that subsequent FSE system startups will fail.
Alternatively, you can modify /etc/sysconfig/suseconfig by changing the line
CHECK_ETC_HOSTS="yes" to CHECK_ETC_HOSTS="no". You can then run YaST2 without affecting
the FSE system operation, but you cannot modify host names with it.
Starting the FSE implementation
After installing the required FSE packages and performing the environment preparation tasks, you need to
start the FSE daemons manually for the first time. Note that the installation process modifies the startup
scripts so that the FSE processes are started automatically after each restart of the system.
The startup procedure depends on the particular FSE implementation configuration:
• Starting the FSE processes in a consolidated FSE implementation, page 43
• Starting FSE processes in a distributed or mixed FSE implementation, page 44
Starting the FSE processes in a consolidated FSE implementation
In a consolidated FSE implementation that consists only of an FSE server and FSE client on the same
system, proceed as follows:
1. Start the CORBA Naming Service daemon and the FSE processes by entering:
# fse --start
The bottom part of the output should match the following:
Starting omniORB Naming Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
Installing HSMFS Filter module: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
Starting FSE processes in a distributed or mixed FSE implementation
The general rule for starting the FSE system in a distributed or mixed implementation is to start the FSE
daemons (services) on the FSE server or the consolidated FSE system first, and start the FSE daemons
(services) on each external FSE client afterwards.
Step 1: Starting the FSE server
Proceed as follows:
1. To start the CORBA Naming Service daemon and the FSE daemons on the FSE server or the
consolidated FSE system, run the following command locally:
# fse --start
The bottom part of the output should be similar to the following:
FSE server
Starting omniORB Naming Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
Consolidated FSE system
Starting omniORB Naming Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
Installing HSMFS Filter module: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
Step 2: Starting FSE clients
Proceed as follows:
1. To start each of the external FSE clients connected to the already started FSE server or consolidated FSE
system, run the following command locally:
# fse --start
The bottom part of the output should be similar to the following:
Starting FSE Service: [ OK ]
Installing HSMFS Filter module: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
Restarting FSE processes
Restart sequence in a distributed or mixed FSE implementation
In a distributed or mixed FSE implementation, restart the FSE server or consolidated FSE system first. The
FSE daemons (services) on each connected FSE client need to be restarted as soon as the FSE daemons
(services) on the server or consolidated system are running again. This sequence is mandatory regardless
of the type of the operating system running on a particular FSE host.
Restarting local FSE processes
Enter the following command to restart the local FSE daemons. The examples below also show the expected
command output on different FSE hosts.
Consolidated FSE system
# fse --restart
Unmounting HSM File Systems: [ OK ]
Stopping HSM FS Event Manager: [ OK ]
Unloading HSMFS Filter module: [ OK ]
Stopping FSE Management Interface: [ OK ]
Stopping FSE Resource Manager: [ OK ]
Stopping FSE Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
FSE server
# fse --restart
Stopping FSE Management Interface: [ OK ]
Stopping FSE Resource Manager: [ OK ]
Stopping FSE Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
External FSE client
# fse --restart
Unmounting HSM File Systems: [ OK ]
Stopping HSM FS Event Manager: [ OK ]
Unloading HSMFS Filter module: [ OK ]
Stopping FSE Service: [ OK ]
Starting FSE Service: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
Checking the status of a running FSE implementation
After you have started the FSE daemons (services) on all machines that are part of the FSE implementation,
you can verify that the Firebird SuperServer and the omniNames daemon are running on the consolidated
system or the FSE server, and that the FSE daemons (services) are running on all FSE hosts.
FSE systems require the Firebird SuperServer (FirebirdSS) to be running on the consolidated FSE system
or the FSE server in order to manage the Resource Management Database (RMDB). Firebird SuperServer is
started automatically at the end of the Firebird RPM package installation.
omniNames is the CORBA Naming Service daemon that allows FSE daemons (services) to communicate
with each other. It must be running on the system that hosts the FSE server, that is, on the consolidated FSE
system or the FSE server system.
Checking Firebird SuperServer
Checking Firebird SuperServer on SUSE Linux Enterprise Server
You can check whether the FirebirdSS process is running with the command below. The example also shows the command output when FirebirdSS is running:
# /etc/init.d/firebird status
Checking for Firebird: running
If the reported line is:
Checking for Firebird: unused
you need to start Firebird manually using the following command. This command also displays its output
when FirebirdSS is successfully started:
# /etc/init.d/firebird start
Starting Firebird [ OK ]
If this does not resolve the problem, consult Firebird SuperServer documentation for alternative
troubleshooting steps.
Checking Firebird SuperServer on Red Hat Enterprise Linux
You can check whether the FirebirdSS process is running with the command below. The example also shows the command output when FirebirdSS is running:
# /etc/init.d/firebird status
ibserver (pid 2260) is running...
If the reported line is:
ibserver is stopped
you need to start Firebird manually using the following command. This command also displays its output
when FirebirdSS is successfully started:
# /etc/init.d/firebird start
Starting ibserver: [ OK ]
If this does not resolve the problem, consult Firebird SuperServer documentation for alternative
troubleshooting steps.
Checking the omniNames daemon
You can check the status of the omniNames daemon with the omninames --status command. If you
followed the steps in section ”Modifying the PATH environment variable” on page 39, the omninames
location should be in the command search path. Enter the following:
# omninames --status
The command should generate an output similar to the following example:
omniNames ( pid 842 ) is running...
If the reported line is:
omniNames is stopped
you need to start omniNames manually using the following command. This command also displays its
output when the omniNames daemon is successfully started:
# omninames --start
Starting omniORB Naming Service: [ OK ]
Checking FSE processes
The status of the FSE daemons can be monitored using the fse command. Apart from checking the status, this command is also used for starting and stopping the FSE daemons. Only an FSE administrator is allowed to start and stop the daemons, but all users can perform a status check.
You can check the status of locally running FSE daemons by running the fse command with the
--status option:
# fse --status
The output of this command depends on the type of FSE implementation.
The next section contains outputs of the fse --status command when it is run on a particular
component of the FSE implementation. You should check the status of the FSE daemons and verify that the
output you get corresponds to the appropriate example.
Checking FSE daemons is the last step of the basic installation process. However, it is strongly
recommended that you also perform the post-installation steps described in the next section.
For a description of configuration procedures, see the FSE user guide.
Example outputs of the fse --status command
If the software installation was successful, you should get the following (typical) outputs with the
fse --status command:
Consolidated FSE system - FSE daemons running
fse-svc ( pid 17399 ) is running...
fse-rm ( pid 17411 ) is running...
fse-mif ( pid 17427 ) is running...
fse-fsevtmgr ( pid 17707 ) is running...
FSE server - FSE daemons running
fse-svc ( pid 17399 ) is running...
fse-rm ( pid 17411 ) is running...
fse-mif ( pid 17427 ) is running...
FSE client - FSE daemons running
fse-svc ( pid 17399 ) is running...
fse-fsevtmgr ( pid 17707 ) is running...
Note that the actual process IDs (the numbers that follow the pid strings) will be different on your system.
Configuring and starting HSM Health Monitor
To use HSM Health Monitor, you need to configure it and start its daemon on the Linux systems that are
included in the FSE implementation.
In a mixed or distributed FSE implementation, you need to start the HSM Health Monitor daemon on the
consolidated FSE system or FSE server before starting the HSM Health Monitor daemon on the external FSE
clients.
Configuring HSM Health Monitor
For details on configuration of the HSM Health Monitor utility, see the FSE user guide, chapter ”Monitoring
and maintaining FSE”, section ”Low storage space detection”.
Starting the HSM Health Monitor daemon
To start the HSM Health Monitor daemon on the local system, invoke the following command:
# hhm start
Configuring and starting Log Analyzer
If you decide to use Log Analyzer for monitoring the log files, you need to configure it and start its
daemons on the Linux systems on which you want to monitor the log files.
Configuring Log Analyzer
For details on configuration of the Log Analyzer utility, see the FSE user guide, chapter ”Monitoring and maintaining FSE”.
Starting the Log Analyzer daemons
To start the Log Analyzer daemons on the local system, depending on the event log file that you want to
monitor, invoke the following commands:
# loganalyzer start
# loganalyzer_messages start
Installing the FSE Management Console
This section describes how to install the FSE Management Console server and client components.
To use the FSE graphical user interface, you need to install the Management Console server as well as the
Management Console client.
NOTE: You must install the FSE Management Console server on the system that hosts the FSE server.
You can install the FSE Management Console client on any system in the intranet, not necessarily on a
system that is part of the FSE implementation. This is possible as long as there is no firewall between the
FSE Management Console server and FSE Management Console client systems or the firewall allows
communication through the ports that are used by CORBA.
For details on how to configure and start the FSE Management Console components, see the FSE user guide, chapter ”Configuring, starting, and stopping the FSE Management Console”.
Installing the FSE Management Console server
To install the FSE Management Console server, follow the procedure:
1. On the FSE installation DVD-ROM, locate the appropriate installation package of the FSE Management
Console server.
SUSE Linux specific
The installation package for supported SUSE Linux distributions is located in the directory
/sles9/ia32/GUI.
Red Hat Linux specific
The installation package for supported Red Hat Linux distributions is located in the directory
/rhel4/ia32/GUI.
2. Using the rpm command, install the FSE Management Console server files:
# rpm -ivh fse-gui-server-3.4.0-Build.i386.rpm
Installing the FSE Management Console client
To install the FSE Management Console client, follow the procedure:
1. On the FSE installation DVD-ROM, locate the appropriate installation package of the FSE Management
Console client.
SUSE Linux specific
The installation package for supported SUSE Linux distributions is located in the directory
/sles9/ia32/GUI.
Red Hat Linux specific
The installation package for supported Red Hat Linux distributions is located in the directory
/rhel4/ia32/GUI.
2. Using the rpm command, install the FSE Management Console client files:
# rpm -ivh fse-gui-client-3.4.0-Build.i386.rpm
Automating the mounting of HSM file systems
To mount HSM file systems automatically, add entries for these file systems to the local file system table in
the file /etc/fstab. Use the mount points and device file paths from your actual file system configuration
on the local host:
1. Create a directory that will serve as a mount point for the HSM file system:
# mkdir /fse/fsefs_01
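2. Add a commented-out entry for the HSM file system to the file /etc/fstab. The line below is a sketch only, assuming the /dev/mapper/vg_fse01-hsmfs01 logical volume used in the examples elsewhere in this guide; substitute the device file path and mount point from your actual configuration:
#/dev/mapper/vg_fse01-hsmfs01 /fse/fsefs_01 hsmfs noauto 0 0
The keyword hsmfs in the third column refers to the type of the file system: this is an HSM file system. The keyword noauto refers to the file system mounting option: an HSM file system cannot be automatically mounted at system startup, when the local FSE processes are not running yet.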
After the corresponding FSE partitions are configured, you will need to remove the commenting characters
from the file /etc/fstab. This will enable automatic mounting of the local HSM file systems after
subsequent restarts of the locally running FSE processes.
The automation of mounting completes the software installation phase. To find out how to configure FSE
resources such as disk media, tape libraries, FSE partitions, and how to perform other configuration tasks,
see the FSE user guide, chapter ”Configuring FSE”.
The next section describes how to configure the post-start and pre-stop scripts.
Configuring the post-start and pre-stop helper scripts
You can set up two helper scripts to automatically perform various tasks at startup and shut-down of the
local FSE processes. These scripts are called post-start and pre-stop scripts, and are plain text files
containing lists of commands to be run sequentially.
Both scripts are executed by the fse command. If they are not present, their execution is simply skipped.
NOTE: The commands that you specify in the post-start and pre-stop scripts should not block the execution
of the fse command. Therefore, they must conform to the following rules:
• They must not require interactive input.
• They must finish in a reasonable time and return control to the script afterwards.
The post-start script
The post-start script is executed by the fse --start command after all local FSE processes have been
started and, if the local system hosts HSM file systems, all HSM file systems with an entry in the
/etc/fstab file have been mounted. The script therefore runs the specified commands directly after this
particular component of the FSE implementation is put into its fully operational state.
The post-start script must be named post_start.sh. It has to be located on the local system in the
/opt/fse/sbin directory. The script must have execute permissions.
The following are examples of the post-start script.
Example 1
/etc/init.d/nfsserver start
Example 2
rcsmb start
/usr/sbin/exportfs -a
The pre-stop script
The pre-stop script is executed by the fse --stop command before all locally mounted HSM file systems
are unmounted and before all FSE processes that are running locally are shut down. The script runs the
specified commands directly before this particular component of the FSE implementation is pulled out of its
fully operational state.
The pre-stop script must be named pre_stop.sh. It must be located on the local system in the
/opt/fse/sbin directory. The script must have execute permissions.
The following are examples of the pre-stop script.
Example 1
/etc/init.d/nfsserver stop
Example 2
rcsmb stop
/usr/sbin/exportfs -a -u
6 Upgrading from previous FSE releases
This chapter describes how to upgrade an FSE implementation that currently uses a previous FSE product version to release version 3.4. Upgrading adds the new functionality implemented in the new FSE release and installs the FSE utilities: HSM Health Monitor and Log Analyzer.
The upgrade procedure will preserve the current state of the FSE implementation, including its
configuration, all migrated data on FSE media, a complete state of all HSM file systems and FSE databases
and system files.
For a list of previous FSE products that can be upgraded to the new release, see the FSE release notes,
chapter ”Supported hardware and software”, section ”Supported upgrade paths”.
This chapter includes the following topics:
• Prerequisites, page 53
• Upgrade overview, page 53
• Shutting down the FSE implementation, page 54
• Upgrading the operating system on Linux hosts, page 56
• Upgrading the Linux FSE server, page 57
• Upgrading the Windows FSE server, page 59
• Upgrading Linux FSE clients, page 59
• Upgrading Windows FSE clients, page 60
• Configuring and starting HSM Health Monitor, page 60
• Configuring and starting Log Analyzer, page 61
• Upgrading the FSE Management Console, page 61
• Verifying availability of the configured FSE partitions, page 62
Prerequisites
Before starting with upgrade, ensure that the following prerequisites are fulfilled:
• A previous version of the FSE product is installed that can be upgraded with the current FSE version.
For information on which previous FSE products can be upgraded, see the FSE release notes, chapter
”Supported hardware and software”, section ”Supported upgrade paths”.
• The required versions of the third-party software packages are installed on the system.
For information on how to upgrade or install the third-party software packages, see the third-party
software documentation and the operating system documentation.
• Make sure that you also have the following available:
• FSE installation guide for Windows
This is required because, while upgrading, you may also need to follow instructions provided in the above manual, depending on the platforms of the other systems that are included in your FSE implementation.
• Additionally, make sure you have the following available, depending on the Linux distributions installed
on the Linux FSE hosts:
• SUSE Linux Enterprise Server 9 documentation
• Red Hat Enterprise Linux 4 documentation
This is required for correct operating system upgrade on these hosts.
• You must be logged on to the system as root in order to execute shell commands.
Upgrade overview
To upgrade an FSE system, perform the following procedure. Most of the steps from the procedure are
described in detail in the sections that follow:
1. If the FSE Management Console is installed in the FSE implementation, stop the FSE Management
Console server and the FSE Management Console client processes.
To stop the FSE Management Console client process, close its GUI window.
To stop the FSE Management Console server process, invoke the following command on the
consolidated FSE system or FSE server:
# /etc/init.d/guisrv stop
On a Windows system, stop the FSE Management Console server process by running the Services administrative tool: locate and right-click the entry File System Extender GUI Server, and click Stop.
For details, see the FSE user guide, chapter ”Configuring, starting, and stopping the FSE Management
Console”.
2. Shut down the FSE implementation.
Shutting down the FSE implementation means terminating FSE processes on all systems that are
included in the FSE implementation.
3. On the consolidated FSE system or FSE server, stop the omniNames daemon (service).
4. On all hosts that are part of the FSE implementation and are running on a Linux platform, upgrade the
operating system to the required version.
For more information on the supported operating systems, see the latest support matrices.
5. Upgrade the consolidated FSE system or the FSE server to the new FSE release and start the FSE
processes on it.
6. Upgrade external FSE clients and start the FSE processes on them.
NOTE: Each system that is part of the FSE implementation must be upgraded with the same FSE
product version.
7. If the FSE Management Console is installed in the FSE implementation, upgrade the FSE Management
Console components.
8. If the FSE Management Console is installed in the FSE implementation, start the FSE Management
Console server and the FSE Management Console client processes.
For details, see the FSE user guide, chapter ”Configuring, starting, and stopping the FSE Management
Console”.
9. Configure and start HSM Health Monitor.
For details on configuring the HSM Health Monitor utility, see the FSE user guide, chapter ”Monitoring
and maintaining FSE”, section ”Low storage space detection”.
10.Optionally, configure and start Log Analyzer.
For details on configuring the Log Analyzer utility, see the FSE user guide, chapter ”Monitoring and maintaining FSE”.
11.Create new backups of the upgraded FSE implementation.
For details, see the FSE user guide, chapter ”Backup, restore, and recovery”.
CAUTION: If you have had a backup policy defined for backing up your FSE implementation, all
backups of the FSE implementation that were created using the previous FSE version are useless. To
preserve the data safety level, you need to create new backups immediately after the upgrade is
complete.
Shutting down the FSE implementation
You need to terminate current FSE activity on the consolidated FSE system or the FSE server and all external
FSE clients to be able to perform the upgrade. Before shutting down the FSE processes, you are strongly
advised to check the File System Catalogs (FSCs) of all FSE partitions and eliminate any inconsistencies,
which could potentially escalate into more severe problems after the upgrade process.
Perform the following procedure:
1. On all systems that host either an internal (part of the consolidated FSE system) or an external FSE
client, close all applications that access files or directories on HSM file systems and make sure that all
non-FSE processes that hold locks on objects on these file systems are terminated.
You can use the fuser or lsof command to find out which files are being accessed on the HSM file
system. For details, refer to the fuser and lsof man pages.
2. Optionally, check the consistency of the File System Catalogs (FSCs) by comparing them with the
corresponding HSM file systems.
Proceed as follows:
a. Before starting the consistency check, make sure that all consistency checking processes that might already be running on FSE partitions have completed. Inspect the check_hsmfs_fsc_* log files in the directory /var/opt/fse/log.
You can proceed with the next steps only if the log files show no current consistency checking
activity.
b. On all systems that host FSE partitions, remove old consistency check log files.
Run the following commands:
# cd /var/opt/fse/log/
# rm -f check_hsmfs_fsc_*
c. Start the consistency check for all configured FSE partitions.
For each FSE partition, invoke the following command:
# fsecheck --fsc-hsmfs PartitionName
Search the resulting log files for inconsistency indicators. Before proceeding with shutting down the FSE
implementation, you are urged to eliminate all inconsistencies found. For details, see the FSE user guide, chapter ”Monitoring and maintaining FSE”, section ”Checking the consistency of the File
System Catalog”.
3. If you are upgrading your FSE product with an FSE system maintenance release or an FSE hot fix, on
each external FSE client on which FSE utility daemons are running, you need to stop the FSE utility
daemons:
a. To stop the HSM Health Monitor daemon on the local system, run the following command:
# hhm stop
b. To stop the Log Analyzer daemons on the local system, run the following commands:
# /etc/init.d/loganalyzer stop
# /etc/init.d/loganalyzer_messages stop
4. On each external FSE client, shut down the currently running FSE processes using the fse --stop
command. The shut-down progress is shown in the command output:
# fse --stop
Unmounting HSM File Systems: [ OK ]
Stopping HSM FS Event Manager: [ OK ]
Unloading HSMFS Filter module: [ OK ]
Stopping FSE Service: [ OK ]
5. Verify that all FSE processes have been terminated properly:
Run the following command. Its output should be empty:
# ps -ef | grep fse
6. If you are upgrading your FSE product with an FSE system maintenance release or an FSE hot fix, and the
FSE utility daemons are running on the consolidated FSE system or the FSE server, you need to stop
them:
a. To stop the HSM Health Monitor daemon on the local system, run the following command:
# hhm stop
b. To stop the Log Analyzer daemons on the local system, run the following commands:
# /etc/init.d/loganalyzer stop
# /etc/init.d/loganalyzer_messages stop
7. On the consolidated FSE system or FSE server, shut down the currently running FSE processes using the
fse --stop command. The shut-down progress is shown in the command output:
Consolidated FSE system
# fse --stop
Unmounting HSM File Systems: [ OK ]
Stopping HSM FS Event Manager: [ OK ]
Unloading HSMFS Filter module: [ OK ]
Stopping FSE Management Interface: [ OK ]
Stopping FSE Resource Manager: [ OK ]
Stopping FSE Service: [ OK ]
FSE server
# fse --stop
Stopping FSE Management Interface: [ OK ]
Stopping FSE Resource Manager: [ OK ]
Stopping FSE Service: [ OK ]
8. Stop the omniNames daemon.
Invoke the following command and verify that its actual output matches the output shown below:
# omninames --stop
Stopping omniORB Naming Service: [ OK ]
9. As with the external FSE clients (step 5), verify that all FSE processes have been terminated.
10.Prevent automatic start of omniNames, the FSE processes, and guisrv (the FSE Management Console
server process) by using the chkconfig command.
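The following commands are a sketch only; the fse and omninames service names are assumptions based on the process names used in this guide, while guisrv is the init script named above. Verify the actual service names with chkconfig --list before disabling them:
# chkconfig fse off
# chkconfig omninames off
# chkconfig guisrv off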
Upgrading the operating system on Linux hosts
On each Linux host that is part of the FSE implementation, upgrade the operating system to the required
version.
Depending on the type of a particular FSE host that you are upgrading, you may need to perform
additional steps before starting the operating system upgrade and after the operating system upgrade is
complete. These steps are required because of a changed format of the FileSystemID parameter in the
FSE partition configuration files.
Follow the procedure:
1. If you are upgrading a consolidated FSE system or an external FSE client, gather information about the
local HSM file systems. Invoke the following command and store its report:
# lvscan -b
The command displays a report similar to the following:
lvscan -- ACTIVE "/dev/vg_fse01/hsmfs01" [8.70 GB] 58:0
lvscan -- ACTIVE "/dev/vg_fse02/hsmfs02" [7.81 GB] 58:4
lvscan -- ACTIVE "/dev/vg_fse02/hsmfs03" [5.86 GB] 58:5
lvscan -- ACTIVE "/dev/vg_fse02/hsmfs04" [3.43 GB] 58:6
lvscan -- 4 logical volumes with 25.80 GB total in 2 volume groups
lvscan -- 4 active logical volumes
2. On the local system, upgrade the operating system.
For a list of supported operating system versions, see the latest support matrices.
CAUTION: Ensure that during or after the operating system upgrade, all LVM volumes on the local
system are converted to LVM2 volumes. Failing to convert the LVM volumes to LVM2 volumes may
result in loss of the data stored on these volumes.
For information on how to upgrade the operating system and the LVM volumes, see the operating
system documentation.
3. If you are upgrading a consolidated FSE system or an external FSE client, update the entry for each
HSM file system in the local /etc/fstab file as follows:
a. Invoke the following command, where DeviceFileSymlink is the value from the first column of
/etc/fstab for this HSM file system entry:
# ls -la DeviceFileSymlink
The command generates an output similar to the following:
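The output below is illustrative only; it assumes the vg_fse01/hsmfs01 volume names from the lvscan report in step 1, and the date and size fields are placeholders:
# ls -la /dev/vg_fse01/hsmfs01
lrwxrwxrwx 1 root root 28 Jan 01 12:00 /dev/vg_fse01/hsmfs01 -> /dev/mapper/vg_fse01-hsmfs01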
b. In the local /etc/fstab file, replace DeviceFileSymlink with DeviceFilePathname,
where DeviceFilePathname is the device file that the symbolic link points to.
In the above example, the value of DeviceFilePathname is
/dev/mapper/vg_fse01-hsmfs01.
c. In the local /etc/fstab file, comment out the line with the HSM file system entry.
Upgrading the Linux FSE server
To upgrade the consolidated FSE system or FSE server running on a Linux platform, you need to install FSE
release 3.4 installation packages over the installed previous FSE product.
Before starting the installation, if the consolidated FSE system or FSE server which you are upgrading has
been configured such that the FSE log files have been stored on a separate file system, you need to ensure
that this file system is located on an LVM volume.
The following procedure is required if the file system for the FSE log files is not located on an LVM volume:
1. Move the FSE log files to a temporary location on another file system.
2. Configure a new LVM volume for the FSE log files, create a file system on it, and mount the file system
to the appropriate directory. For detailed instructions, see the FSE installation guide for Linux, chapter
”Preparing file systems”.
3. Move the FSE log files from the temporary location to the newly created file system.
Installing FSE release 3.4 software on the Linux FSE server
If you are upgrading from a pre-3.4 version to the FSE release 3.4, perform the following procedure to
appropriately install the software on the consolidated FSE system or FSE server:
• At the command line, change the current directory to the one with the installation packages. Install all
required FSE release 3.4 RPM packages with two invocations of the rpm command, observing the
required installation order.
If you are upgrading a consolidated FSE system, invoke the following commands:
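The package file names below are a sketch only; they assume the 3.4.0-Build.Distr naming pattern shown by rpm -qa earlier in this guide, with Build and Distr standing for the actual build and distribution identifiers. Verify the names against your installation kit:
# rpm -U fse-common-3.4.0-Build.Distr.i386.rpm fse-server-3.4.0-Build.Distr.i386.rpm fse-agent-3.4.0-Build.Distr.i386.rpm fse-client-3.4.0-Build.Distr.i386.rpm fse-cli-admin-3.4.0-Build.Distr.i386.rpm fse-cli-user-3.4.0-Build.Distr.i386.rpm
# rpm -U fse-util-3.4.0-Build.Distr.i386.rpm
If you are upgrading an FSE server, proceed the same way, omitting the packages that are not present on the system (check with rpm -qa | grep fse).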
In both cases, the second rpm command installs the FSE utilities: HSM Health Monitor and Log
Analyzer.
If you are upgrading the FSE release 3.4 with an FSE system maintenance release or an FSE hot fix for this
release, perform the following procedure to appropriately install the software on the consolidated FSE
system or FSE server:
• At the command line, change the current directory to the one with the installation packages. Install all
required FSE system maintenance release or FSE hot fix RPM packages with one invocation of the rpm
command.
If you are upgrading a consolidated FSE system, invoke the following command:
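The following invocation is a sketch only; the actual package file names depend on the particular system maintenance release or hot fix:
# rpm -U fse-*.rpm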
Starting up the Linux FSE server
On the consolidated FSE system or the FSE server, you need to start the FSE processes. Depending on the type of the FSE host, you may need to perform additional steps after the startup. These steps are required because of a changed format of the FileSystemID parameter in the FSE partition configuration files.
Follow the procedure:
1. Run the fse --start command to start omniNames (CORBA Naming Service daemon) and the FSE
server processes:
# fse --start
Consolidated FSE system
Starting omniORB Naming Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
Installing HSMFS Filter module: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
FSE server
Starting omniORB Naming Service: [ OK ]
Starting FSE Service: [ OK ]
Starting FSE Resource Manager: [ OK ]
Starting FSE Management Interface: [ OK ]
2. Update the FileSystemID parameter in each FSE partition configuration file to match the changed format.
3. If you are performing this procedure on the consolidated FSE system, proceed as follows:
a. In the local file /etc/fstab, make all entries related to the local HSM file systems active again:
remove the commenting characters that you added in step 3 of the procedure in section
”Upgrading the operating system on Linux hosts” on page 56.
b. Run the fse --restart command to restart the FSE server processes:
# fse --restart
Upgrading the Windows FSE server
For information on how to upgrade the Windows FSE server, see the FSE installation guide for Windows,
chapter ”Upgrading from previous FSE releases”.
Upgrading Linux FSE clients
On each external FSE client running on a Linux platform, you have to upgrade the installed previous FSE
product to release 3.4 and manually start the FSE client processes afterwards.
Installing FSE release 3.4 software on a Linux FSE client
To install FSE release 3.4 software over the installed previous FSE product appropriately, change the
current directory to the one with the installation packages. Install all required FSE 3.4 RPM packages using
two invocations of the rpm command, observing the required installation order:
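The package file names below are a sketch only, assuming the same 3.4.0-Build.Distr naming pattern used elsewhere in this guide and a typical client package set; verify the names and the set of installed packages (rpm -qa | grep fse) against your installation kit:
# rpm -U fse-common-3.4.0-Build.Distr.i386.rpm fse-client-3.4.0-Build.Distr.i386.rpm fse-cli-user-3.4.0-Build.Distr.i386.rpm
# rpm -U fse-util-3.4.0-Build.Distr.i386.rpm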
The second rpm command installs the FSE utilities: HSM Health Monitor and Log Analyzer.
As the packages are installed, the internal RPM script enables the automatic startup of the FSE processes at the FSE client system's boot time.
Starting up a Linux FSE client
On the external FSE client, you need to start the FSE processes, but you also need to perform an additional
step before the startup. This step is required because of a changed format of the FileSystemID
parameter in the FSE partition configuration files.
Follow the procedure:
1. On the external FSE client, make all entries in the file /etc/fstab that are related to the local HSM
file systems active again: remove the commenting characters that you added in step 3 of the procedure
in section ”Upgrading the operating system on Linux hosts” on page 56.
2. Run the fse --start command to start the FSE client processes. The proper FSE client startup
progress is shown in the command output.
# fse --start
Starting FSE Service: [ OK ]
Installing HSMFS Filter module: [ OK ]
Loading HSMFS Filter module: [ OK ]
Starting HSM FS Event Manager: [ OK ]
Mounting HSM File Systems: [ OK ]
Upgrading Windows FSE clients
For information on how to upgrade the Windows FSE clients, see the FSE installation guide for Windows,
chapter ”Upgrading from previous FSE releases”.
Configuring and starting HSM Health Monitor
To use HSM Health Monitor, you need to configure it and start its daemon on the hosts that are included in
the FSE implementation.
In a mixed or distributed FSE implementation, you need to start the HSM Health Monitor daemon on the
consolidated FSE system or FSE server before starting the HSM Health Monitor daemon on the external FSE
clients.
Configuring HSM Health Monitor
For details on configuration of the HSM Health Monitor utility, see the FSE user guide, chapter ”Monitoring
and maintaining FSE”, section ”Low storage space detection”.
Starting the HSM Health Monitor daemon on Linux systems
To start the HSM Health Monitor daemon on the local system, invoke the following command:
# hhm start
Starting the HSM Health Monitor service on Windows systems
For information on how to start HSM Health Monitor, see the FSE installation guide for Windows, chapter
”Upgrading from previous FSE releases”.
Configuring and starting Log Analyzer
If you decide to use Log Analyzer for monitoring the log files, you need to configure it and start its
daemons on the hosts on which you want to monitor the log files.
Configuring Log Analyzer
For details on configuration of the Log Analyzer utility, see the FSE user guide, chapter ”Monitoring and maintaining FSE”.
Starting the Log Analyzer daemons on Linux systems
To start the Log Analyzer daemons on the local system, depending on the event log file that you want to
monitor, invoke the following commands:
# loganalyzer start
# loganalyzer_messages start
Starting the Log Analyzer service on Windows systems
For information on how to start Log Analyzer, see the FSE installation guide for Windows, chapter
”Upgrading from previous FSE releases”.
Upgrading the FSE Management Console
This section contains procedures for upgrading the FSE Management Console components during a
general FSE software upgrade. This assumes that all FSE Management Console processes in the FSE
implementation are stopped. The processes must be restarted at the end of the FSE upgrade procedure,
after the basic FSE software has already been upgraded.
Upgrading the FSE Management Console on Linux systems
The procedure for upgrading the FSE Management Console server is different from the procedure for
upgrading the FSE Management Console client.
Upgrading the FSE Management Console server
To upgrade the FSE Management Console server, proceed as follows:
1. Create a backup copy of the FSE Management Console server configuration file.
In the sketch below, BackupDirectoryPath is the full path to a directory outside the FSE Management Console installation tree:
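The commands below are a sketch only: ConfigurationFilePathname is a placeholder for the actual FSE Management Console server configuration file, and the package file name follows the naming used for the initial installation earlier in this guide:
# cp ConfigurationFilePathname BackupDirectoryPath
2. Change the current directory to the one with the new FSE Management Console server package and upgrade the package:
# rpm -U fse-gui-server-3.4.0-Build.i386.rpm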
At this point, you can modify the FSE Management Console server configuration, if needed. For details,
see ”Configuring the FSE Management Console” in the FSE user guide.
Upgrading the FSE Management Console client
To upgrade the FSE Management Console client, proceed as follows:
1. Change the current directory to the one with the new FSE Management Console client package and
run the command:
# rpm -U fse-gui-client-3.4.0-Build.i386.rpm
Upgrading the FSE Management Console on Windows systems
For information on how to upgrade the FSE Management Console on Windows systems, see the FSE installation guide for Windows, chapter ”Upgrading from previous FSE releases”.
Verifying availability of the configured FSE partitions
Before you start using the upgraded FSE implementation, you have to make sure that all configured FSE
partitions are available. To perform the verification, invoke the following command on the consolidated
FSE system or FSE server:
# fsepartition --list
The command lists the FSE partitions that are configured in the FSE implementation. The reported status of
all FSE partitions should be mounted. If this is not the case, you need to perform troubleshooting steps on
the problematic FSE partitions. For a description of the troubleshooting actions, see the FSE installation guide for
a particular platform, chapter ”Troubleshooting”, and the FSE user guide, chapter ”Troubleshooting”.
7 Uninstalling FSE software
If you want to remove the FSE software from a system that is part of your FSE implementation, you need to
uninstall all FSE components.
NOTE: In a distributed FSE system, the sequence of uninstalling the software from the different hosts that are part of the FSE
system matters. If external FSE clients are part of your FSE system, you need to uninstall the FSE software
from the external clients before uninstalling the consolidated FSE system or FSE server.
During FSE uninstallation, you can uninstall only the FSE Management Console components or you can
completely remove the FSE software from a particular system. In the latter case, if the FSE Management
Console components are installed on a system that also hosts basic FSE software, you must uninstall them
before uninstalling basic FSE software.
Uninstalling FSE software
NOTE: You must be logged on to the system as root in order to execute shell commands.
Uninstalling the FSE Management Console
To uninstall the FSE Management Console server or client, proceed as follows:
1. Stop the FSE Management Console server or client process. For details, see the FSE user guide, chapter
”Configuring, starting, and stopping the FSE Management Console”.
2. Remove the FSE Management Console server or client package.
• To remove the FSE Management Console server package, run the command:
# rpm -e fse-gui-server
• To remove the FSE Management Console client package, run the command:
# rpm -e fse-gui-client
Uninstalling basic FSE software
Prerequisite
If the FSE Management Console server or client is installed on the local system, you must uninstall it
as described in ”Uninstalling the FSE Management Console” on page 63.
To remove the FSE software, proceed as follows:
1. If the HSM Health Monitor daemon is running on the local system, invoke the following command to
stop it:
# hhm stop
2. If the Log Analyzer daemons are running, depending on the event log file that you are monitoring,
invoke the following commands to stop the Log Analyzer daemons:
# loganalyzer stop
# loganalyzer_messages stop
3. Stop the FSE processes using the fse --stop command. The command and its output are shown below:
# fse --stop
Unmounting HSM File Systems: [ OK ]
Stopping HSM FS Event Manager: [ OK ]
Unloading HSMFS Filter module: [ OK ]
Stopping FSE Management Interface: [ OK ]
Stopping FSE Resource Manager: [ OK ]
Stopping FSE Service: [ OK ]
The fse --stop command executes the pre-stop script (pre_stop.sh), if it exists. It also performs the following, depending on where it is run:
• When used on a consolidated FSE system or an external FSE client, the command unmounts all locally mounted HSM file systems.
• It terminates the respective consolidated FSE system, FSE server, or FSE client operation by shutting down all FSE processes running locally on that host.
4. Disable automatic mounting of local HSM file systems after subsequent restarts of the local FSE processes by commenting out the HSM file system entries in the file /etc/fstab.
For each such entry, add a hash character followed by a single space at the beginning of the line.
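5. On the consolidated FSE system or the FSE server, stop the omniNames daemon:
# omninames --stop
6. Remove the installed FSE software packages. The following invocation is a sketch only, assuming the complete FSE package set listed in chapter ”Installing FSE software”; remove only the packages that are actually installed on the host (check with rpm -qa | grep fse):
# rpm -e fse-util fse-cli-admin fse-cli-user fse-agent fse-client fse-server fse-common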
7. If FSE hot fixes have been installed, manually remove the backups of the original FSE release or FSE
system maintenance release files from their backup location, the backup directory itself, and the hot fix
ReadMe files from the /opt/fse/doc directory.
For details on FSE hot fixes, see appendix ”FSE system maintenance releases and hot fixes” on
page 73.
8. The uninstallation process leaves the FSE databases and system files intact, and you must remove them
manually.
CAUTION: Note that manual removal of the FSE databases and system files will permanently
delete information about the FSE configuration and resources, and metadata of the FSE user files.
The set of entities that require manual removal depends on which FSE host is being uninstalled. Proceed
as follows:
a. If there were separate file systems mounted for the FSE partition-specific files (to the directory
/var/opt/fse/part) and the FSE log and debug files (to the directory /var/opt/fse/log),
remove all directories and files from them, and then unmount them using the umount command.
b. Disable automatic mounting of these file systems by commenting out their file system entries in the file /etc/fstab, as you did in step 4 for the HSM file systems.
c. Remove the following entities by deleting the corresponding directories using the rm -r command:

Entities                                              Location           FSE host
Fast Recovery Information (FRI) files                 /var/opt/fse/fri   consolidated FSE system, FSE server
Log and debug files                                   /var/opt/fse/log   consolidated FSE system, FSE server, FSE client
Backup, trace, and CORBA/omniORB configuration files  /etc/opt/fse       consolidated FSE system, FSE server, FSE client
Other files                                           /opt/fse           consolidated FSE system, FSE server, FSE client
9. Optionally, you can also remove all the installed third-party software packages that were required by
FSE software, provided that you do not need them for other purposes.
8 Troubleshooting
You may encounter problems during the FSE installation, upgrade, and uninstallation process. Before you
contact HP technical support, you should verify that all the prerequisites are met and that you have
followed the corresponding procedures as described in this guide, including, for example, the operating
system preparation phase. See also the FSE release notes for additional problems related to installation,
upgrade, and uninstallation.
This chapter includes the following topics:
• General problems, page 67
• Installation problems, page 67
General problems
Description: Shell command returns “Permission denied” or a similar message.
Explanation: You have to be logged on to the system as root in order to execute the shell commands that you need to run while performing the installation, upgrade, or uninstallation steps.
Workaround: Make sure you are logged on as the root user. You can check this with the command below:
# whoami
root
If the command responds with a user name other than root, log out from the system and log in again with the root user name. Re-invoke the command that failed.
Installation problems
Description: The installation process of Red Hat Enterprise Linux 4 (RHEL 4) does not detect disks in the correct order, thus preventing the installation from succeeding.
(Red Hat Enterprise Linux system specific)
Explanation: On a system with SCSI host bus adapters attached, if kernel modules for these adapters are not loaded in time, the disk order as detected by the RHEL 4 installation differs from that detected by the BIOS. In this case, the actual startup disk may not be shown as being the first disk.
Workaround: Start the RHEL 4 installation in expert mode and ensure the modules for the adapters attached to the system are loaded in the appropriate sequence. For example, load the module for the adapter to which the boot disk is connected first, followed by modules for all other adapters.
To start the RHEL 4 installation in expert mode, boot the system from the RHEL 4 installation CD-ROM, wait for the boot prompt, and specify the following boot options:
expert noprobe
When you are asked to load the device drivers, load them in the appropriate order.
Description: After installation of SUSE Linux Enterprise Server 9 on a system with network adapters using both copper and fibre cabling and the same driver, the network is not accessible.
(SUSE Linux Enterprise Server system specific)
Explanation: Even though the configuration of the network adapters in YaST2 seemed to be successful, the network adapters are misconfigured. YaST2 apparently assigned a wrong ethX device to each adapter.
Workaround:
1. Use the command-line tool ethtool to determine the ethX device of a particular network adapter:
# ethtool eth0
# ethtool eth1
The following is an example of the ethtool output. Note in particular the Port and Link detected lines; the last line helps you determine whether the adapter is connected to the LAN.
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: g
Link detected: yes
2. Run YaST2 and reconfigure the adapters using the information you acquired from ethtool.
Description: During FSE client installation, the following error message is displayed, and the installation fails:
Setting HSM File System Filter module:
ERROR: FS Filter (hsmfs.ko) not available for kernel 2.4.21-295
...
Loading HSM FS Filter module: [FAILED]
Explanation: This message is displayed because the HSM file system filter module is only available for Linux kernel revision 2.6; FSE will not operate with older kernels.
Workaround: Before installing the FSE product, upgrade the Linux host to a supported operating system version that already includes the appropriate kernel revision. For a list of supported operating systems, see the latest support matrices.
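To check the kernel revision of the host before installing, you can use the standard uname command; the output shown here is only an example:
# uname -r
2.6.5-7.97-default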
A Integrating existing file systems in the FSE implementation
When FSE is installed on a system where user files are located on already existing file systems, you may want to bring these files under FSE control. This makes FSE aware of the already-existing files and includes them in standard operations such as migration and recall.
The following procedure is for guidance only. Some of the steps are documented in the FSE user guide,
chapter ”Configuring FSE”.
Integrating existing file systems
The following procedure summarizes the steps to convert an Ext2 file system (ext2 fs) to an HSM file system that stores user data in the FSE implementation. An HSM file system is based on the Ext3 file system (ext3 fs).
If necessary, first convert the Ext2 file system to an Ext3 file system:
1. Unmount the existing file system.
2. Convert the Ext2 file system to Ext3 using the command:
# tune2fs -j LogicalVolumeDevice
Example:
# tune2fs -j /dev/fse_sda/fs1
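To verify that the conversion succeeded, you can list the file system features and check for has_journal. This uses the same example device as above; the exact feature list varies from system to system:
# tune2fs -l /dev/fse_sda/fs1 | grep features
Filesystem features: has_journal filetype sparse_super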
Configure a new FSE partition:
1. Prepare a new FSE partition configuration on the FSE server side.
2. Add this FSE partition to the FSE configuration.
Next, convert the Ext3 file system to an HSM file system:
3. If the Ext3 file system is still mounted, unmount it.
4. If needed, create a new mount point (new directory) for the HSM file system.
5. Mount the logical volume LogicalVolumeDevice that stores the Ext3 file system to the mount point
as HSM file system.
6. If you created a new mount point in step 4, enable automatic mounting of the new HSM file system by
adding its entry with appropriate options to the /etc/fstab file.
If you reused an existing mount point for mounting the HSM file system, modify its entry in the
/etc/fstab file accordingly.
For details, see section ”Automating the mounting of HSM file systems” on page 49. A brief example is also shown after this procedure.
7. The final step is to make the FSE implementation aware of the existing directories and files. This needs to be done to introduce old directories and files to the FSE implementation and to add their file attributes to the appropriate Hierarchical Storage Management Database (HSMDB).
Directories and files are introduced to FSE by running a treewalk through all directories and opening each file on the HSM file system. To perform this process, invoke the following command from the HSM file system root directory:
HSMFileSystemRoot # find * -type f | xargs -n1 head -n0
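The following sketches illustrate steps 6 and 7. The /etc/fstab entry assumes that the HSM file system type is named hsmfs (inferred from the name of the HSM file system filter module) and uses a hypothetical mount point; see ”Automating the mounting of HSM file systems” for the authoritative mount options:
/dev/fse_sda/fs1  /fse/fs1  hsmfs  defaults  0 0
Note also that the find invocation in step 7 does not descend into hidden entries in the file system root and can fail on file names that contain spaces. A more robust variant of the same treewalk, using only standard find and xargs options, is:
HSMFileSystemRoot # find . -type f -print0 | xargs -0 -n1 head -n0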
B FSE system maintenance releases and hot fixes
This appendix contains the following topics:
• FSE software: releases, hot fixes, and system maintenance releases, page 73
• FSE system maintenance releases and FSE hot fixes, page 74
IMPORTANT: In this appendix, the presented Major, Minor, SMRNumber, and Build numbers that
form version strings may be fictitious and may not necessarily represent actual software versions.
For details on the new software versioning model, see the FSE release notes, chapter ”New features”.
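For example, taking the (possibly fictitious) version string 3.3.1.142 used in this appendix, the components of the form Major.Minor.SMRNumber.Build decompose as: major version 3, minor version 3, system maintenance release 1, and build 142.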
FSE software: releases, hot fixes, and system maintenance releases
FSE software is distributed in the form of FSE releases, FSE hot fixes, and FSE system maintenance releases. FSE hot fixes and FSE system maintenance releases are meant for fixing problems that emerge after a new FSE version is released. The next sections describe the differences between them and provide recommendations for their usage.
FSE releases
Every new FSE release introduces new functionality. An FSE release with a new minor version number
introduces functional improvements and enhancements, while an FSE release with a new major version
number introduces major functionality changes and/or architectural redesigns. The FSE release installation
process uses the native operating system installation method, that is, RPM packages.
FSE hot fixes
An FSE hot fix is an update to the FSE software that solves one or more problems that were reported by FSE
customers or FSE support engineers. It is based either on an FSE release or an FSE system maintenance
release. Each new FSE hot fix solves one or more newly reported problems.
An FSE hot fix includes all packages and files that are included in the FSE release software. Therefore, a hot fix replaces the entire FSE software fileset. It can either be used for a first-time FSE installation, or an existing FSE installation can be upgraded with it.
FSE hot fixes are cumulative. This means that an FSE hot fix with a particular consecutive number contains
all functional changes of the hot fixes that are labelled with smaller consecutive numbers, and therefore
supersedes all of them.
NOTE: FSE hot fixes are designed to solve the problems that emerge between release dates of FSE
releases and/or FSE system maintenance releases. If an FSE hot fix and a particular SMR containing this
hot fix are both available, it is recommended that you use the SMR rather than the hot fix.
FSE system maintenance releases
An FSE system maintenance release (SMR) includes functional changes and improvements of several FSE
hot fixes. It can be treated as an FSE release improved with all the FSE hot fixes that were included in it. Usually, it includes all the hot fixes that had been released and confirmed by the time the system maintenance release was prepared, complemented with additional, SMR-only enhancements.
An FSE system maintenance release includes all packages and files that are included in the FSE release software. Therefore, an SMR replaces the entire FSE software fileset. It can either be used for a first-time FSE installation, or an existing FSE installation can be upgraded with it.
FSE system maintenance releases are cumulative. This means that an SMR with a particular consecutive
number contains all functional changes of the SMRs that are labelled with smaller consecutive numbers,
and therefore supersedes all of them.
FSE system maintenance releases and FSE hot fixes
This section describes how to install and uninstall FSE system maintenance releases and FSE hot fixes, and how to determine which ones are currently installed.
Installing a system maintenance release
FSE system maintenance releases are installed in the same way as the FSE release software. You should always install the latest available system maintenance release. Consider the following before installing a particular system maintenance release:
• An FSE system maintenance release should not be installed if another system maintenance release labelled with a higher consecutive number has already been installed, except to fall back to the previous version if the new system maintenance release introduces problems.
For details on installing FSE release software, see chapter ”Installing FSE software”.
For details on upgrading FSE release software, see chapter ”Upgrading from previous FSE releases”.
Determining the installed system maintenance release
To determine which FSE system maintenance release is installed on the system, proceed as follows:
1. Invoke the following command:
# rpm -qa | grep fse
2. Inspect the output of the rpm command:
fse-server-3.3.1-142.sles9
fse-cli-user-3.3.1-142.sles9
fse-agent-3.3.1-142.sles9
fse-client-3.3.1-142.sles9
fse-common-3.3.1-142.sles9
fse-cli-admin-3.3.1-142.sles9
This example shows that system maintenance release 3.3.1 (build 142) is installed on the host that is part of the FSE implementation, and that the platform is SUSE Linux Enterprise Server.
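To check a single FSE package instead of listing the whole fileset, you can query it directly with standard rpm usage; the package name and output here follow the example above:
# rpm -q fse-common
fse-common-3.3.1-142.sles9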
Uninstalling a system maintenance release
FSE system maintenance releases are uninstalled in the same way as FSE release software.
For details on uninstalling FSE release software, see chapter ”Uninstalling FSE software”.
Installing a hot fix
FSE hot fixes are installed in the same way as the FSE release software. Consider the following before installing a particular hot fix:
• An FSE hot fix should not be installed if another hot fix labelled with a higher consecutive number has already been installed, except to fall back to the previous hot fix if the new hot fix introduces problems.
• An FSE hot fix should not be installed if a system maintenance release that includes this hot fix has already been installed.
For details on installing FSE release software, see chapter ”Installing FSE software”.
For details on upgrading FSE release software, see chapter ”Upgrading from previous FSE releases”.
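As a generic illustration only — the chapters referenced above describe the authoritative procedure — an RPM-based upgrade of an already installed fileset typically uses the rpm upgrade mode, with hypothetical package file names:
# rpm -Uvh fse-*.rpm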
Determining the installed hot fix
To determine which FSE hot fix is installed on the system, proceed as follows:
1. At the command prompt, invoke the following command:
# fsesystem --version
The command displays output similar to the following:
File System Extender
Copyright (C) 2002-2006 GRAU Data Storage
Copyright (C) 2004-2006 Hewlett-Packard
fsesystem 3.3.0.142 "Release Hotfix_02"
libhsmui 3.3.0.142 "Release Hotfix_02"
libhsmparser 3.3.0.142 "Release Hotfix_02"
libhsmipc 3.3.0.142 "Release Hotfix_02"
libhsmcmn 3.3.0.142 "Release Hotfix_02"
From the above example, the following information can be identified:
• On the local system, the FSE hot fix 02 is installed.
• The installed hot fix is based on the FSE release version.
• The build number of the installed hot fix is 142.
Uninstalling a hot fix
FSE hot fixes are uninstalled in the same way as FSE release software.
For details on uninstalling FSE release software, see chapter ”Uninstalling FSE software”.
Glossary
This glossary defines terms used in this guide or related to this product and is not a comprehensive
glossary of computer terms.
administrative job (admin job)
A process, executed by the Management Interface, which performs formatting and initialization of the FSE media.

administrator (FSE administrator)
A system administrator who installs, configures, monitors, and maintains an FSE implementation.

agent (FSE agent)
An FSE process that executes tasks on request by FSE jobs and some FSE commands. The FSE agents are the Back End Agent and the Library Agent.

Back End Agent (BEA)
An FSE agent which executes migration, recall, maintenance and recovery jobs; it also formats and initializes FSE media. All read and write operations on the FSE media are handled by the Back End Agent. The Back End Agent process name is fse-bea.

backup (FSE backup)
A process that creates a backup image with the complete and consistent state of the FSE server at a particular point in time. The backup image can be used to recover the server in case of disaster. The backup image file can be located on a local or remote disk, or stored on tape media of the backup FSE media pool.

backup job
A specific FSE job type that executes the backup process. Backup jobs can be started by the FSE administrator with the fsebackup command. See also ”backup (FSE backup)”.

backup log (FSE backup log)
A file that stores information about activity of the FSE backup process. The FSE backup log is named backup.log and located in the directory /var/opt/fse/log (on Linux platform) or in the directory %InstallPath%\var\log (on Windows platform).

backup media pool (backup FSE media pool)
A special type of FSE media pool, used exclusively for backup and restore of the FSE implementation. In contrast to regular FSE media pools, the media cartridges in the backup FSE media pool do not contribute to the secondary storage space available to FSE users. See also ”media pool (FSE media pool)”.
capacity disk
A disk with large storage space which stores file systems either dedicated to the FSE disk media
or assigned to the FSE disk buffer. If used for the FSE disk media, it can be either locally
mounted on the consolidated FSE system or FSE server, or used on a remote system which is
accessible via CIFS or NFS protocol. If used for the FSE disk buffer, it can only be locally
mounted on the consolidated FSE system or FSE server. Note that configurations with several
FSE disk buffer file systems residing on the same capacity disk may reduce the performance of
the FSE disk buffer.
check log (FSE check log)
A file that records all the inconsistencies found by the maintenance job, which performs a File System Catalog consistency check for a specified FSE partition. FSE check logs are named check_hsmfs_fsc_PartitionName (in case of the FSC vs. HSM file system consistency check) or check_media_fsc_PartitionName (in case of the FSC vs. FSE media consistency check). They are located in the directory /var/opt/fse/log (on Linux platform) or in the directory %InstallPath%\var\log (on Windows platform).

CIFS
The file sharing protocol used on Windows platform. This is one of the protocols used for accessing files on an HSM file system from a remote machine, particularly a Windows client. An SMB server (Samba) must be running on the machine hosting the HSM file system, in order to allow file transfers. See also ”Samba”.
client (FSE client)
A set of FSE daemons (services) responsible for managing the HSM file systems on the FSE partitions. These daemons (services) are: File System Event Manager, Hierarchical Storage Manager, and Service, with additional FSE Windows Service on Windows systems. See also ”external client (FSE external client)”.

command (FSE command)
An FSE command-line interface command that communicates with FSE system components through the Management Interface. FSE commands are used to configure, monitor, and maintain the FSE implementation.
A set of FSE commands that are available to FSE users.
Configuration Database
A directory in the FSE directory layout which contains the currently used and old revisions of the FSE configuration files for configured FSE resources.

configuration file (FSE configuration file)
A plain text file used to configure or modify the configuration of an FSE resource. FSE configuration files are stored in the Configuration Database. The Configuration Database is located in the directory /var/opt/fse/cfg (on Linux platform) or in the directory %InstallPath%\var\cfg (on Windows platform). See also ”configuration file template (FSE configuration file template)”.

configuration file template (FSE configuration file template)
A pre-designed document for an FSE configuration file, included in the FSE package. It contains default configuration values for configuring an FSE resource. These values should be modified before the FSE administrator can apply the template. FSE configuration file templates are located in the directory /opt/fse/newconfig (on Linux platform) or in the directory %InstallPath%\newconfig (on Windows platform). See also ”configuration file (FSE configuration file)”.

configuration revision
A revision of an FSE configuration file. Each configured FSE resource has its own set of configuration revisions stored in the Configuration Database. This set represents the configuration history of that resource.
consistency check of FSC
A process executed for a particular FSE partition that checks the consistency of the
corresponding File System Catalog (FSC). It compares the contents of the FSC either with the
corresponding HSM file system or with Fast Recovery Information (FRI) for that FSE partition.
Any inconsistencies found are reported to the shell output and to the FSE check log. The FSC
consistency check is performed by an appropriate type of maintenance job.
CORBA
CORBA is the acronym for Common Object Request Broker Architecture. This is an architecture and infrastructure that computer applications use to work together over networks.
critical watermark
A user-defined value for the critical level of disk space usage on an HSM file system. When this
value is reached, a forced release process is started for all files on the Hierarchical Storage
Manager’s release candidate list.
daemon (service) (FSE daemon (service))
An FSE process that runs on the FSE implementation to provide basic functionality. FSE daemons (services) are: File System Event Manager, Hierarchical Storage Manager, Partition Manager, Management Interface, Resource Manager and Service.

Data Location Catalog (DLC)
A part of a File System Catalog. The Data Location Catalog contains information about locations of the migrated file data on the FSE media. It maintains a full history of file locations. See also ”File System Catalog (FSC)”.
database
(FSE database)
A file or set of related files with important information on the current status of the FSE
implementation, characteristics of the FSE implementation, or configuration of the FSE
implementation. FSE databases are the Resource Management Database, File System Catalogs,
Hierarchical Storage Management Databases, and Configuration Database.
debug file
(FSE debug file)
A log file of a single FSE process, which records the execution trace of operations inside the
process. FSE debug files are created only if tracing in FSE is enabled (typically on request of
technical support personnel of FSE). FSE debug files are located in the directory
/var/opt/fse/log/debug (on Linux platform) or in the directory
%InstallPath%\var\log\debug (on Windows platform).
deletion policy
A set of rules that define expiration periods for groups of directories on an HSM file system.
Files which are older than the expiration period defined by the corresponding deletion policy
can be automatically deleted using the fsefile command. Configuration parameters for the
deletion policy are specified in the FSE partition configuration file. Each FSE partition has its
own deletion policy.
dirty file
A file on an HSM file system which has been recently created and not yet migrated, or a file
that has changed since the last migration or recall.
disabled drive
An FSE drive which is temporarily put out of operation and cannot be used by migration, recall, administrative, maintenance, and recovery jobs. This is done by changing its status in the Resource Management Database to disabled. The status is set by an FSE administrator with the fsedrive command. A typical situation that calls for disabling a drive is when it needs cleaning or servicing.
disk buffer
(FSE disk buffer)
One or more file systems, located on the consolidated FSE system or the FSE server, which are
configured to store temporary data used by some of the FSE processes. The FSE disk buffer is
managed by the Resource Manager. Separate file systems (on Linux platform) or volumes (disk
partitions, on Windows platform) are usually allocated for FSE disk buffer.
disk media pool
(FSE disk media pool)
An FSE disk media pool is a special type of FSE media pool where the migrated data is stored
on disks rather than on tape media. FSE disk media therefore emulate FSE tape media, with the
advantage of being much faster for migrations and recalls. Using and managing FSE disk
media pools is similar to using and managing FSE tape media pools. See also ”media pool
(FSE media pool)”.
disk medium
(FSE disk medium)
A disk medium is one file system mounted to a subdirectory of the directory
/var/opt/fse/dm (on Linux platform) or the directory %InstallPath%\var\dm (on
Windows platform). Disk media emulate tape media and are used for regular archiving of the data on HSM file systems. The advantage of using disk media is a shorter recall time for offline files. See also ”medium (FSE medium)”.
drive (FSE drive)
A tape drive inside the FSE library, configured to be used by the FSE implementation.
duplication
See ”media duplication”.
enabled drive
An FSE drive which is put back in operation by changing its status in the Resource
Management Database to online. This status is set manually using the fsedrive command.
An enabled drive is fully available for FSE system operation.
erroneous drive
An FSE drive which is temporarily put out of operation and cannot be used by migration, recall,
administrative, maintenance and recovery jobs. Its status is automatically set in the Resource
Management Database to error as soon as the problems in the drive are detected by the
Back End Agent. Usually, such problems occur when the drive needs servicing.
error log
(FSE error log)
A file that records error messages of the running FSE processes. It stores information on all
major errors that occur during FSE system operation. These errors often require an intervention
of the FSE administrator. Major errors are, for example, failed initialization of an FSE medium,
erroneous FSE drive operation, migration job failure, and so on. The FSE error log also provides
usage statistics that can be used for fine tuning of the FSE configuration. The FSE error log,
named error.log, is located in the directory /var/opt/fse/log (on Linux platform) or in
the directory %InstallPath%\var\log (on Windows platform).
event log
(FSE event log)
A file that records relevant information on events happening in the FSE processes during the
operation of the FSE implementation. FSE event log, named fse.log, is located in the
directory /var/opt/fse/log (on Linux platform) or in the directory
%InstallPath%\var\log (on Windows platform).
explicit release
Unconditional release, started for a file or set of files on the release candidate list, specified by
an FSE user. It is triggered with the fsefile --release command and occurs regardless of
the parameters in the release policy.
external client
(FSE external client)
An external FSE client can be hosted on any of the supported platforms. External FSE clients are
connected to the FSE server through a LAN and host HSM file systems. The client runs only the
processes that provide functionality for managing these HSM file systems and communication
to the major services running on the FSE server. The daemons (services) running on the client
are: File System Event Manager, Hierarchical Storage Manager, and Service, with additional
FSE Windows Service on Windows systems. See also ”client (FSE client)”.
Fast Recovery
Information (FRI)
File System Catalog-related data, collected during migrations and used, when necessary, for recovery of
the FSE implementation. Medium volumes that are not yet full have Fast Recovery Information
stored on the disk in the directory /var/opt/fse/fri (on Linux platform) or in the directory
%InstallPath%\var\fri (on Windows platform). Once medium data volumes are full
(filled up with migrated files), Fast Recovery Information is written to the part of the medium
volume that was reserved in advance for this kind of data. Typically, a medium is partitioned to
have a system volume, which stores redundant copies of Fast Recovery Information from all data
volumes on the medium.
file generation
A version in the history of a particular file on an HSM file system. Older (non-latest) file
generations are only stored offline, as they have already been migrated to FSE media, whereas
latest file generations can be present either online or offline, depending on the time of their last
access and the corresponding FSE partition policies.
file split
(FSE file split)
In some situations, the amount of available space on the currently open FSE medium volume is
smaller than the amount of data in the currently migrated file. To use the remaining space on
the open volume, FSE splits the migrated file into two or more parts, and stores each part on a
separate volume. Such migrated file parts are called FSE file splits.
File System Catalog
(FSC)
A database, which consists of the Data Location Catalog (DLC) and Name Space Catalog
(NSC). The Data Location Catalog (DLC) contains information about location of files on the FSE
media (full history). The Name Space Catalog (NSC) contains metadata of files on an HSM file
system (the last generation metadata). Each FSE partition has its own File System Catalog. See also ”Data Location Catalog (DLC)” and ”Name Space Catalog (NSC)”.
File System Catalog
journal
A file which contains transaction information for the File System Catalog. It is designed to
increase the catalog robustness. FSC journals contain information about already applied and
pending catalog transactions. File System Catalog journals are located in the directory
/var/opt/fse/part/PartitionName/fsc/journal (on Linux platform) or in the
directory %InstallPath%\var\part\PartitionName\fsc\journal (on Windows
platform).
File System Catalog recovery
See ”recovery (FSE recovery)”.

File System Catalog recovery job
See ”recovery (FSE recovery)”, ”recovery job”.
file system event
A creation or modification of a file on an HSM file system, detected and handled by the FSE
software. The following operations are identified as relevant file-system events for FSE: create,
rename, move, delete, read of contents, change of contents (write, truncate) and change of file
attributes (ownership, permissions, and so on).
File System Event
Manager
A daemon (service) on the FSE client which receives notification about mount events for the
HSM file system and triggers the startup of the appropriate Hierarchical Storage Manager. The
File System Event Manager process name is fse-fsevtmgr.
forced release
Release of all files on an HSM file system which are on the release candidate list, regardless of their retention time and file size. Forced release is triggered when the critical watermark of usage is reached on the HSM file system. The process stops when the high watermark on the HSM file system is reached. See also ”release”.
formatting
A process that partitions a medium into medium volumes, according to the parameters
specified in the FSE media pool configuration. During formatting all data on the medium is
erased. If an FSE medium contains valid FSE user data and you want to format it, you must use
forced formatting.
Full Access Mode
(FAM)
One of the two HSM file system filter operational modes. When HSM file system filter is
operating in Full Access Mode for a particular HSM file system, you are allowed to make
changes to the data located on the HSM file system, and use all the HSM file system
functionality that the FSE system provides. For example, you can create, rename, move or delete
directories and files, and change their data or metadata. In this mode, all data movement
processes (migration, release, recall, deletion) can be executed – by being either triggered
explicitly or started according to the configured FSE partition policies. See also ”Limited Access
Mode (LAM)”.
good medium
A usable FSE medium in its fully operational state. In this state, the medium may have any of the
following statuses: uninitialized, free, empty, in use, full. This state lasts as long as
the medium is in a good condition, that is, no write and read errors are detected while writing
data to and reading data from the medium.
Hierarchical Storage Management Database (HSMDB)
A database with migration candidate and release candidate lists for files on the HSM file system. Each FSE partition has its own Hierarchical Storage Management Database.
Hierarchical Storage
Management
Database journal
A file which contains transaction information for Hierarchical Storage Management Databases.
It is designed to increase the database robustness. HSMDB journals contain information about
already applied and pending database transactions. Hierarchical Storage Management
Database journals are located in the directory
/var/opt/fse/part/PartitionName/hsm/journal (on Linux platform) or in the
directory %InstallPath%\var\part\PartitionName\hsm\journal (on Windows
platform).
Hierarchical Storage
Manager (HSM)
A part of the FSE client that handles file system events relevant for the FSE implementation and
enforces migration, release, and recall policies. The Hierarchical Storage Manager process
name is fse-hsm.
Hierarchical Storage
Manager list (HSM
list)
A list of files, maintained by the Hierarchical Storage Manager, which are candidates for
migration or release. HSM lists are: a dirty file list (the list of changed files), a migration
candidate list, and a release candidate list.
high watermark
A user-defined value for the high level of disk space usage on an HSM file system. When this
value is reached, a regular release process is started for all files that are on the Hierarchical
Storage Manager's release candidate list and that have passed their migration or recall
retention period, and are larger than the defined MinFileSize.
host (FSE host)
A computer system that hosts either a consolidated FSE system, an FSE server, or an FSE client.
hot fix
(FSE hot fix)
A package with updated FSE binaries, FSE configuration files, or other FSE system files that are
applied to the FSE implementation at a specific site, in order to solve one or more problems
reported by an FSE customer.
HSM file system
(HSM FS)
A file system, controlled by the FSE software. It is used to store file metadata (name, attributes,
permissions) and online files of an FSE partition. HSM file systems represent the primary
storage space of an FSE implementation. On the Linux platform, HSM file systems are based on
the Linux native Ext3 file system and FSE specific attributes are stored as Ext3 extended
attributes. On the Windows platform, HSM file systems are based on the Windows native NTFS
file system.
HSM file system filter
The function of this module is to intercept relevant file system events for the FSE implementation and to report them to the Hierarchical Storage Manager. On Linux platform, the filter is a kernel module providing the HSM file system that is built on top of the Linux native Ext3 file system. On Windows platform, the filter is a kernel driver that intercepts all accesses to the Windows native NTFS file system.
HSM Health Monitor
(HHM)
A utility for monitoring storage capacity on file systems with FSE databases and system files, on
HSM file systems, and on the secondary storage space assigned to HSM file systems. The utility
can be configured to trigger actions if storage capacity rises above or drops below
configurable thresholds.
implementation
(FSE implementation)
HP StorageWorks File System Extender. Hardware with installed FSE software. FSE hardware
includes a tape library with drives, slots, media, and a changer, which is connected to the FSE
server host machine via a SCSI interface. See also ”software (FSE software)”.
initialization
A process that labels the medium volume header on each volume found on a medium and
adds the corresponding FSE medium volume entry to the Resource Management Database. If
an FSE medium or FSE medium volume contains valid FSE user data and you want to initialize
it, you must use forced initialization.
job
(FSE job)
A process, executed by Partition Manager or Management Interface, which executes migration,
recall, reorganization, administrative, backup, recovery, and maintenance tasks. Migration and
recall jobs are triggered automatically by their respective policy, whereas reorganization,
administrative, backup, recovery, and maintenance jobs are triggered with FSE commands.
library (FSE library)
A SCSI tape library attached to and configured for use in the FSE implementation.
Library Agent (LA)
An FSE agent that handles requests for moving the FSE media between tape drives and library
slots and triggers media barcode rescans. The Library Agent process name is fse-la-s.
Limited Access Mode
(LAM)
One of the two HSM file system filter operational modes. When the HSM file system filter is
operating in Limited Access Mode for a particular HSM file system, you are only allowed to
browse the directory tree on the HSM file system or read the online data from it. You cannot
create, rename, move or delete directories and files, nor change their data or metadata.
Offline data is not recalled from FSE media; instead, the application that is accessing the file
receives an error. Limited Access Mode does not need FSE processes to be running on the FSE
client. However, if the processes are running, three data movement processes (migration,
release, deletion) can be executed – by being either triggered explicitly or started according to
the configured FSE partition policies. See also ”Full Access Mode (FAM)”.
Log Analyzer (LA)
A utility for monitoring FSE event log files, which can be configured to periodically analyze
these files for specific patterns and send notification e-mails or SNMP traps.
low watermark
A user-defined value for a level of disk usage on an HSM file system. When this value is
reached, a regular release process is stopped.
LTO Ultrium
A Linear Tape-Open (LTO) tape format and tape drive product family from IBM and HP. FSE
supports LTO Ultrium 1, 2, 3, and LTO Ultrium 3 WORM series of tape media, and LTO Ultrium
1, 2, 3, and LTO Ultrium 3 WORM series of tape drives. See also ”medium (FSE medium)”.
maintenance job
One of the FSE job types that performs maintenance tasks in FSE, namely, the File System
Catalog (FSC) consistency check and recreation of redundant migrated data copies.
MAM
See ”Medium Auxiliary Memory (MAM)”.
Management
Console (MC)
An add-on to the basic FSE product that provides graphical user interface for most of the
configuration, operation, monitoring, and management tasks in the FSE implementation.
Management Console consists of two separate modules: Management Console server and
Management Console client.
Management
Interface (MIF)
An FSE daemon (service), responsible for accepting, handling, and executing requests for user
actions, issued through the FSE command-line interface. It also starts and manages execution of
the administrative jobs. The Management Interface process name is fse-mif.
media duplication
A process that creates an exact copy of an FSE tape medium. In case of emergency, a medium
duplicate can directly replace the original.
media pool
(FSE media pool)
A common storage for the migrated HSM file system data. It groups FSE media of the same
type into a logical unit, which is tracked as a group. FSE media inside the same FSE media
pool share such characteristics as number of medium volumes (tape partitions), system volume
feature, capacity of medium volumes, and so on. Media pool types are: regular, WORM, and
backup. Each FSE partition needs at least one regular FSE media pool assigned to it. The same
media pool cannot be assigned to multiple partitions.
medium (FSE medium)
A high-capacity magnetic tape or a disk medium used by the FSE implementation. A tape medium is divided into tape partitions and a disk medium into file system subdirectories; in both cases, these store the FSE medium volumes. FSE media represent the secondary storage space in an FSE implementation. See also ”disk medium (FSE disk medium)”.
Medium Auxiliary
Memory (MAM)
A memory chip integrated into magnetic tapes of the LTO Ultrium family, designed to improve
tape performance. FSE uses Medium Auxiliary Memory in LTO Ultrium and LTO Ultrium WORM
medium to store information about the medium, its volumes, current position of the tape, and so
on, thus reducing the access time when searching for requested data on the medium.
medium volume
(FSE medium volume)
A physical partition on an FSE medium, with its own volume header. There are two types of
medium volumes: data volume and system volume. Data volumes store migrated file data, while
system volumes store redundant copies of Fast Recovery Information from all data volumes on
the medium. LTO media can have a single data volume and no system volume.
migration candidate
An online, dirty file that is scheduled for migration.
migration candidate
list
A list of files that were changed and have passed a certain period of time defined by the
migration policy, which are waiting for migration. The list is maintained by the Hierarchical
Storage Manager.
migration policy
A set of rules by which the migration jobs for files on an HSM file system are initiated.
Configuration parameters for the respective policy are specified in the FSE partition
configuration file. Each FSE partition has its own migration policy.
migration retention time
A period of time during which a migrated file is kept online before it is released. See also ”retention time”.

migration, migration job
A process, executed by the Partition Manager, that copies the file contents from the HSM file system to the FSE media. When the process is completed, the file entries of the migrated files are moved from the migration candidate list to the release candidate list.
Name Space Catalog
(NSC)
A part of the File System Catalog. NSC includes metadata of files on an HSM file system, such
as directory structure, standard attributes, ownership (on Linux platform), and ACL (Access
Control List) and Alternate Data Streams (on Windows platform). This metadata enables
recovery of an HSM file system. See also ”File System Catalog (FSC)”.
Network File System
(NFS)
The file-sharing protocol used in UNIX networks. This is one of the protocols that can be used
for accessing files on an HSM file system from a remote machine, particularly a Linux client.
offline file
A file whose contents were migrated to one or more FSE media (according to the number of
copies defined), and then removed from the HSM file system. The header and extended
attributes of an offline file are left intact and remain on the HSM file system. An offline file
becomes online at an FSE user's request, when its contents are recalled from the FSE media.
offline medium
An FSE medium which is temporarily put out of operation and physically removed from the FSE
library.
online file
A file on an HSM file system that is ready to be used by an application. A file is online
immediately after it is stored on the HSM file system, regardless of whether it was newly created
or recalled from FSE media. An online file becomes offline when it is released.
online medium
An FSE medium that is physically present in the FSE library and which does not have its status
set to unusable.
ordinary LAN
A network, usually a company LAN, to which systems are attached using their primary network
adapters.
ordinary LAN connection
A connection between parts of the FSE implementation which uses ordinary LAN for communication purposes.

package (FSE package)
A package with FSE software that will be installed to an FSE implementation.
partition
(FSE partition)
A configurable entity used by the FSE implementation to split the storage and configure it
according to user requirements. Each FSE partition is related to a single HSM file system and
one or more FSE media pools. It has its own policies, File System Catalog, and Hierarchical
Storage Management Database.
Partition Manager
(PM)
An FSE daemon (service) that executes and controls the migration, release, and recall processes
for a single FSE partition, according to the policies defined in the FSE partition configuration
file. The Partition Manager process name is fse-pm.
performance disk
A disk with a high throughput which stores HSM file systems. Depending on the location of a
particular HSM file system, it needs to be locally mounted either on the consolidated FSE system
or on an external FSE client.
primary storage space
A logical volume (disk partition) used to store the HSM file system.
private network
A dedicated network to which systems are attached using their secondary network adapters.
private network
connection
A connection between parts of the FSE implementation which uses private network for
communication purposes.
process (FSE process)
A process running on an FSE implementation that performs FSE-specific tasks.
recall policy
A rule by which recall jobs for files on an HSM file system are conducted. Since recall is started
either implicitly or explicitly on an FSE user's request, the only configurable parameter specifies
the maximum time the user should wait for the file to become online. This parameter is defined
in the FSE partition configuration file. Each FSE partition has its own recall policy.
recall retention time
A period of time during which a recalled file is kept online before it is released again. See also
”retention time”.
recall, recall job
A process, executed by the Partition Manager, that copies the file contents from the FSE media
to the HSM file system. Once the process is completed, the offline file becomes online again. Its
file entry is then added to the release candidate list.
recovery
(FSE recovery)
A process, executed by the Partition Manager, that recovers the HSM file system or the File
System Catalog (FSC) without requiring FSE backup copies. The HSM file system is recreated
from the Name Space Catalog (NSC), while the FSC is recreated either from Fast Recovery
Information (FRI) or from migrated metadata. FSE recovery can be used as a fallback if restore
of FSE is not possible due to missing FSE backup data. A recovery job is triggered explicitly on
an FSE user's request with the fserecover command.
recovery job
See ”recovery (FSE recovery)”.
recycled medium
volume
An FSE medium volume from which all selected data was copied to another medium volume
during media reorganization. The data on recycled medium volumes can no longer be
retrieved, but the storage space on such volumes can be reused for new migrated data.
redundant data copy
recreation
A process in FSE that recreates a copy of a file set that has become unreadable, most probably
because the FSE medium to which it was migrated is damaged. The process recreates the
damaged data copy using copies of the same data that were migrated to other FSE media
pools. Therefore, the main prerequisite to execute this process is a multiple media pool
configuration of the corresponding FSE partition. A redundant data copy recreation process is
triggered manually by the FSE administrator.
regular release
Release of files on an HSM file system that are on the release candidate list and meet the
predefined criteria, triggered when the high watermark of usage is reached on the HSM file
system. The process is stopped when the low watermark on the HSM file system is reached. See also ”release”.
release
A process that usually follows a migration or recall, executed by the Hierarchical Storage
Manager. This process removes the file data from the HSM file system, leaving only the file
header. Release is triggered by the Partition Manager according to the defined watermarks.
After release, files are offline and their file entries are removed from the release candidate list.
release candidate
An online file that is scheduled for release and that has either been migrated or was brought
online by recall.
release candidate list
A list of online files that have already been migrated or that have recently been recalled and
are waiting to be released. The list is maintained by the Hierarchical Storage Manager.
release policy
A set of rules by which the release process for files on a release candidate list is initiated.
Configuration parameters for the release policy are specified in the FSE partition configuration
file. Each FSE partition has its own release policy.
resource
(FSE resource)
Either an FSE partition, FSE library, FSE drive, FSE media pool, or FSE medium including
medium volumes. Each FSE resource is configured using an FSE configuration file and has an
entry in the Resource Management Database.
Resource
Management
Database (RMDB)
A database with records of all configured FSE resources. It contains all relevant resource
information, such as configuration parameters, resource characteristics, current status, usage
statistics, and relations to other resources. The Resource Management Database is located in
the directory /var/opt/fse/rmdb (on Linux platform) or in the directory
%InstallPath%\var\rmdb (on Windows platform).
Resource Manager
(RM)
An FSE daemon (service) responsible for managing data in the Resource Management
Database, allocating and releasing FSE resources and providing resource information to other
FSE daemons (services) and processes. The Resource Manager process name is fse-rm.
restore
(FSE restore)
A process that recreates complete and consistent state of the FSE server at a particular point in
time, using data from the backup image. Restore is required after disaster causes data on the
FSE server to be damaged or lost.
restore log
(FSE restore log)
A file that stores information about activity of the FSE restore process. FSE restore log is named
restore.log and located in the directory /var/opt/fse/log (on Linux platform) or in the
directory %InstallPath%\var\log (on Windows platform).
retention time
A defined period of time for files on the release candidate list during which a file is kept online (after it was migrated or recalled) before it is released. The retention time is reset when any of the following changes: file contents, standard attributes, time stamps, permissions, or ownership.
Samba
Software that allows a Linux server to act as a file server to Windows clients. A Samba server
must be running on the machine that hosts the HSM file system to allow Samba clients to access
the HSM file system. See also ”CIFS”.
secondary storage
space
A space on FSE media controlled by the FSE implementation. Secondary storage space
transparently extends primary storage space, which is the HSM file system. It has to be
configured, formatted, and initialized before it can be used.
server
(FSE server)
A set of FSE daemons (services) that are responsible for managing the configured FSE
resources, accepting and executing user commands, executing all types of FSE jobs, and
monitoring and operating the FSE implementation. These daemons (services) are: Resource
Manager, Management Interface, Partition Manager, Library Agent, and Back End Agent.
Service
An FSE daemon (service) which launches other FSE daemons (services) and agents. The Service
process name is fse-svc.
shared library
(FSE shared library)
A proprietary dynamically loaded library, included in the FSE package, that is used by the FSE
processes.
slack space
(FSE slack space)
Space on FSE media that is occupied by the migrated file generations that were treated as
obsolete by the FSE reorganizational scan job. The FSE medium volumes with a relatively high
slack space percentage should eventually be reorganized and reused.
slot
(FSE slot)
A physical place in the FSE library that holds a single FSE tape medium when it is not loaded
in an FSE drive. FSE slots have their entries in the Resource Management Database.
SMR
See ”system maintenance release (FSE system maintenance release)”.
software
(FSE software)
The programs and data files that are included in the FSE package. Once installed, they actively
control the FSE implementation and provide functionality to its users. See also ”implementation
(FSE implementation)”.
system file
(FSE system file)
A temporary file, created by the FSE software, which contains information about the current
status of the FSE implementation. FSE system files are Fast Recovery Information, File System
Catalog transaction logs, Hierarchical Storage Manager lists, and files stored in the FSE disk
buffer.
system maintenance release (FSE system maintenance release)
A complete set of rebuilt FSE installation packages that include updates from several FSE hot fixes and other improvements, which can be installed over initial FSE release software. A system maintenance release can be used to update any FSE implementation with an appropriate FSE release version installed.

tape media pool (FSE tape media pool)
See ”media pool (FSE media pool)”.
tool
(FSE tool)
A command that communicates directly with FSE daemons (services) and agents, bypassing the Management Interface. It is designed for low-level modifications and more extensive monitoring
and troubleshooting tasks. FSE tools are only intended for use by experienced FSE users
(typically on request of technical support personnel of FSE). Note that incorrect use of FSE tools
can cause data corruption. FSE tools are located in the directory /opt/fse/sbin/tools (on
Linux platform) or in the directory %InstallPath%\bin (on Windows platform).
Ultrium
See ”LTO Ultrium”.
unreliable medium
An FSE medium to which further writing is disabled, but from which FSE is still able to read
data. As soon as the first write error is detected while the Back End Agent is writing data to the
medium, the medium is automatically marked as unreliable in the Resource Management
Database. Such errors usually occur because of an excessive use of the medium.
unusable medium
An FSE medium which has been disabled for both writing to and reading from. As soon as the first read error is detected while the Back End Agent is reading data from the medium, the medium is automatically marked as unusable in the Resource Management Database. Such errors usually occur because of excessive use of the medium.
user (FSE user)
A computer system user who uses one or more HSM file systems for managing data.
utility
(FSE utility)
An add-on FSE component that provides additional functionality and complements basic FSE
components. In the current FSE release, two FSE utilities are part of the FSE software: HSM Health
Monitor and Log Analyzer. See also ”HSM Health Monitor (HHM)”, ”Log Analyzer (LA)”.
verified drive
An FSE drive that is put back in operation after its problems have been resolved. This is done
by changing its status in the Resource Management Database to online. The online status
is set by enabling the drive with the fsedrive command.
WORM file system
A file system on which a file can be written only once, but can be read many times. A WORM
(Write Once Read Many) file system prevents both files and their metadata from being
modified, regardless of their access permissions. Consequently, it is not possible to remove files
from directories or rename them; it is only possible to add new files (as on a regular file
system). Files and directory operations are restricted to read-only after a period of time. A
WORM partition must be related to WORM media pools containing WORM media only.
WORM medium
On a WORM (Write Once Read Many) medium, data can be recorded only once. Once
recorded, the WORM medium cannot be overwritten, reformatted, or reinitialized, but new
data can be appended to the existing contents. A WORM medium can only be initialized in a
WORM media pool.
Index

A
audience 7
available disk partitions 27

B
benefits
  organizing file systems 13
build number
  installation 38

C
capacity-based licensing 12
checking
  Firebird SuperServer status
  FSE implementation status
  FSE processes
  omniNames daemon status
configuration files
  omniORB.cfg
  services.cfg
configuring
  post-start scripts
  pre-stop scripts
configuring communication
  consolidated FSE system
  external FSE clients
  FSE server
configuring FSE interprocess communication
consolidated FSE implementation
  starting FSE processes
consolidated FSE system
  configuring communication
consolidated implementation 9
conventions
  document 7
converting to HSM file systems 71
creating
  LVM logical volume groups 28
  LVM logical volumes 28, 29

D
debug files
  pre-installation considerations 13
determining
  FSE hot fixes 75
  FSE system maintenance releases 74
directory layout 18
disabling ACPI
  with GRUB boot loader 24
  with LILO boot loader 24
disk partitions
  available on the system 27
distributed FSE implementation
  starting FSE processes
distributed implementation
distributed system with separate server and external