Copyright (C) 1992-2002 International Business Machines Corporation, The Regents of the University of
California, Sandia Corporation, and Lockheed Martin Energy Research Corporation.
All rights reserved.
Portions of this work were produced by the University of California, Lawrence Livermore National Laboratory (LLNL) under Contract No. W-7405-ENG-48 with the U.S. Department of Energy (DOE), by the University of California, Lawrence Berkeley National Laboratory (LBNL) under Contract No.
DE-AC03-76SF00098 with DOE, by the University of California, Los Alamos National Laboratory (LANL)
under Contract No. W-7405-ENG-36 with DOE, by Sandia Corporation, Sandia National Laboratories
(SNL) under Contract No. DE-AC04-94AL85000 with DOE, and by Lockheed Martin Energy Research Corporation, Oak Ridge National Laboratory (ORNL) under Contract No. DE-AC05-96OR22464 with DOE. The
U.S. Government has certain reserved rights under its prime contracts with the Laboratories.
DISCLAIMER
Portions of this software were sponsored by an agency of the United States Government. Neither the United States, DOE, The Regents of the University of California, Sandia Corporation, Lockheed Martin Energy Research Corporation, nor any of their employees, makes any warranty, express or implied, or assumes any
liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.
Printed in the United States of America.
HPSS Release 4.5
September 2002 (Revision 2)
High Performance Storage System is a registered trademark of International Business Machines Corporation.
The High Performance Storage System (HPSS) is software that provides hierarchical storage
management and services for very large storage environments. HPSS may be of interest in
situations having present and future scalability requirements that are very demanding in terms of
total storage capacity, file sizes, data rates, number of objects stored, and numbers of users. HPSS
is part of an open, distributed environment based on OSF Distributed Computing Environment
(DCE) products that form the infrastructure of HPSS. HPSS is the result of a collaborative effort by
leading US Government supercomputer laboratories and industry to address very real, very urgent
high-end storage requirements. HPSS is offered commercially by IBM Worldwide Government
Industry, Houston, Texas.
HPSS provides scalable parallel storage systems for highly parallel computers as well as traditional
supercomputers and workstation clusters. Concentrating on meeting the high end of storage
system and data management requirements, HPSS is scalable and designed to store up to petabytes
(10^15 bytes) of data and to use network-connected storage devices to transfer data at rates up to
multiple gigabytes (10^9 bytes) per second.
HPSS provides a large degree of control for the customer site to manage their hierarchical storage
system. Using configuration information defined by the site, HPSS organizes storage devices into
multiple storage hierarchies. Based on policy information defined by the site and actual usage
information, data are then moved to the appropriate storage hierarchy and to appropriate levels in
the storage hierarchy.
1.2 HPSS Capabilities
A central technical goal of HPSS is to move large files between storage devices and parallel or
clustered computers at speeds many times faster than today’s commercial storage system software
products, and to do this in a way that is more reliable and manageable than is possible with current
systems. In order to accomplish this goal, HPSS is designed and implemented based on the
concepts described in the following subsections.
1.2.1 Network-centered Architecture
The focus of HPSS is the network, not a single server processor as in conventional storage systems.
HPSS provides servers and movers that can be distributed across a high performance network to
provide scalability and parallelism. The basis for this architecture is the IEEE Mass Storage System
Reference Model, Version 5.
1.2.2 High Data Transfer Rate
HPSS achieves high data transfer rates by eliminating overhead normally associated with data
transfer operations. In general, HPSS servers establish transfer sessions but are not involved in
actual transfer of data.
1.2.3 Parallel Operation Built In
The HPSS Application Program Interface (API) supports parallel or sequential access to storage
devices by clients executing parallel or sequential applications. HPSS also provides a Parallel File
Transfer Protocol. HPSS can even manage data transfers in which the number of data
sources differs from the number of destinations. Parallel data transfer is vital in situations that demand fast
access to very large files.
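As a rough illustration of transfers where the source and sink counts differ, the following sketch (not HPSS code; the function name, fixed block size, and buffer layout are assumptions for the example) shows the offset arithmetic needed to copy a round-robin-striped byte stream from one stripe width to another. In HPSS, Movers perform such copies in parallel; here a single loop stands in for them.

    /* Illustrative only: re-stripe a logical byte stream of 'len' bytes that is
     * striped in BLOCK-sized units across 'nsrc' source buffers into the same
     * striping across 'ndst' destination buffers. */
    #include <stddef.h>
    #include <string.h>

    #define BLOCK 4096                  /* assumed stripe block size */

    static void restripe(char *const *src, size_t nsrc,
                         char *const *dst, size_t ndst, size_t len)
    {
        for (size_t off = 0; off < len; off += BLOCK) {
            size_t blk = off / BLOCK;                     /* logical block number */
            size_t n = (len - off < BLOCK) ? len - off : BLOCK;
            const char *s = src[blk % nsrc] + (blk / nsrc) * BLOCK;
            char *d       = dst[blk % ndst] + (blk / ndst) * BLOCK;
            memcpy(d, s, n);            /* in HPSS, Movers do these copies in parallel */
        }
    }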
1.2.4 A Design Based on Standard Components
HPSS runs on UNIX with no kernel modifications and is written in ANSI C and Java. It uses the
OSF Distributed Computing Environment (DCE) and Encina from Transarc Corporation as the
basis for its portable, distributed, transaction-based architecture. These components are offered on
many vendors’ platforms. Source code is available to vendors and users for porting HPSS to new
platforms. HPSS Movers and the Client API have been ported to non-DCE platforms. HPSS has
been implemented on the IBM AIX and Sun Solaris platforms. In addition, selected components
have been ported to other vendor platforms. The non-DCE Client API and Mover have been ported
to SGI IRIX, while the Non-DCE Client API has also been ported to Linux. Parallel FTP client
software has been ported to a number of vendor platforms and is also supported on Linux. Refer
to Section 1.4: HPSS Hardware Platforms on page 37 and Section 2.3: Prerequisite Software Considerations on page 46 for additional information.
1.2.5 Data Integrity Through Transaction Management
Transactional metadata management and Kerberos security enable a reliable design that protects
user data both from unauthorized use and from corruption or loss. A transaction is an atomic
grouping of metadata management functions: either all of them take place, or none of them do.
Journaling makes it possible to back out any partially completed transaction if a failure
occurs. Transaction technology is common in relational data management systems but not in
storage systems. HPSS implements transactions through Transarc’s Encina product. Transaction
management is the key to maintaining reliability and security while scaling upward into a large
distributed storage environment.
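As a rough sketch of this all-or-nothing discipline (the tx_* helpers below are hypothetical placeholders, not the Encina interface), two metadata updates either both take effect or are both backed out:

    #include <stdio.h>

    typedef struct { int id; } tx_t;

    /* Hypothetical stand-ins for transactional metadata operations. */
    static tx_t tx_begin(void)                 { tx_t t = { 1 }; return t; }
    static int  update_name_entry(tx_t *t)     { (void)t; return 0; }   /* 0 = success */
    static int  update_bitfile_record(tx_t *t) { (void)t; return 0; }
    static void tx_commit(tx_t *t)             { printf("tx %d committed\n", t->id); }
    static void tx_abort(tx_t *t)              { printf("tx %d backed out from the journal\n", t->id); }

    int main(void)
    {
        tx_t t = tx_begin();
        /* Both updates take place together, or neither does. */
        if (update_name_entry(&t) != 0 || update_bitfile_record(&t) != 0)
            tx_abort(&t);              /* journaling allows partial work to be undone */
        else
            tx_commit(&t);
        return 0;
    }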
1.2.6 Multiple Hierarchies and Classes of Service
Most other storage management systems support simple storage hierarchies consisting of one kind
of disk and one kind of tape. HPSS provides multiple hierarchies, which are particularly useful
when inserting new storage technologies over time. As new disks, tapes, or optical media are
added, new classes of service can be set up. HPSS files reside in a particular class of service which
users select based on parameters such as file size and performance. A class of service is
implemented by a storage hierarchy which in turn consists of multiple storage classes, as shown in
Figure 1-2. Storage classes are used to logically group storage media to provide storage for HPSS
files. A hierarchy may be as simple as a single tape, or it may consist of two or more levels of disk,
disk array, and local tape. The user can even set up classes of service so that data from an older type
of tape is subsequently migrated to a new type of tape. Such a procedure allows migration to new
media over time without having to copy all the old media at once.
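A minimal sketch of choosing a class of service by expected file size follows; the COS ids, names, and size thresholds are invented for this example and are not HPSS defaults.

    #include <stddef.h>
    #include <stdio.h>

    struct cos { int id; const char *name; unsigned long long max_bytes; };

    /* Invented site configuration: three classes of service keyed by file size. */
    static const struct cos cos_table[] = {
        { 1, "small  (disk + one tape copy)",   32ULL << 20 },   /* up to 32 MB */
        { 2, "medium (disk + two tape copies)",  4ULL << 30 },   /* up to 4 GB  */
        { 3, "large  (tape only)",              ~0ULL        },  /* anything    */
    };

    static const struct cos *select_cos(unsigned long long expected_size)
    {
        for (size_t i = 0; i < sizeof cos_table / sizeof cos_table[0]; i++)
            if (expected_size <= cos_table[i].max_bytes)
                return &cos_table[i];
        return NULL;
    }

    int main(void)
    {
        printf("%s\n", select_cos(100ULL << 20)->name);   /* 100 MB file -> "medium" */
        return 0;
    }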
1.2.7 Storage Subsystems
To increase the scalability of HPSS in handling concurrent requests, the concept of Storage
Subsystem has been introduced. Each Storage Subsystem contains a single Name Server and Bitfile
Server. If migration and purge are needed for the storage subsystem, then the Storage Subsystem
will also contain a Migration / Purge Server. A Storage Subsystem must also contain a single Tape
Storage Server and/or a single Disk Storage Server. All other servers exist outside of Storage
Subsystems. Data stored within HPSS is assigned to different Storage Subsystems based on
pathname resolution. A pathname consisting of / resolves to the root Name Server. The root Name
Server is the Name Server specified in the Global Configuration file. However, if the pathname
contains junction components, it may resolve to a Name Server in a different Storage Subsystem.
For example, the pathname /JunctionToSubsys2 could lead to the root fileset managed by the
Name Server in Storage Subsystem 2. Sites which do not wish to partition their HPSS through the
use of Storage Subsystems will effectively be running an HPSS with a single Storage Subsystem.
Note that sites are not required to use multiple Storage Subsystems.
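The following sketch (with an invented junction table; it is not the Name Server implementation) illustrates how the leading component of a pathname can select the Storage Subsystem whose Name Server handles a request. It assumes the root Name Server belongs to subsystem 1.

    #include <stdio.h>
    #include <string.h>

    struct junction { const char *component; int subsystem_id; };

    /* Invented junction table; the example junction matches the text above. */
    static const struct junction junctions[] = {
        { "JunctionToSubsys2", 2 },
    };

    /* Return the id of the subsystem whose Name Server resolves 'path'. */
    static int resolve_subsystem(const char *path)
    {
        if (strcmp(path, "/") == 0)
            return 1;                                  /* root Name Server */
        for (size_t i = 0; i < sizeof junctions / sizeof junctions[0]; i++) {
            size_t n = strlen(junctions[i].component);
            if (strncmp(path + 1, junctions[i].component, n) == 0 &&
                (path[1 + n] == '/' || path[1 + n] == '\0'))
                return junctions[i].subsystem_id;      /* junction crosses subsystems */
        }
        return 1;                                      /* no junction: stay at the root */
    }

    int main(void)
    {
        printf("%d\n", resolve_subsystem("/JunctionToSubsys2"));   /* prints 2 */
        printf("%d\n", resolve_subsystem("/"));                    /* prints 1 */
        return 0;
    }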
1.2.8 Federated Name Space
Federated Name Space supports data access between multiple, separate HPSS systems. With this
capability, a user may access files in all or portions of a separate HPSS system using any of the
configured HPSS interfaces. To create a Federated Name Space, junctions are created to point to
filesets in a different HPSS system. For security purposes, access to foreign filesets is not supported
for NFS, or for end-users of FTP and the Non-DCE Gateway when only the local password file is
used for authentication.
1.3 HPSS Components
The components of HPSS include files, filesets, junctions, virtual volumes, physical volumes,
storage segments, metadata, servers, infrastructure, user interfaces, a management interface, and
policies. Storage and file metadata are represented by data structures that describe the attributes
and characteristics of storage system components such as files, filesets, junctions, storage segments,
and volumes. Servers are the processes that control the logic of the system and control movement
of the data. The HPSS infrastructure provides the services that are used by all the servers for
standard operations such as sending messages and providing reliable transaction management.
User interfaces provide several different views of HPSS to applications with different needs. The
management interface provides a way to administer and control the storage system and implement
site policy.
These HPSS components are discussed below in Sections 1.3.1 through 1.3.7.
1.3.1 HPSS Files, Filesets, Volumes, Storage Segments and Related Metadata
The components used to define the structure of the HPSS name space are filesets and junctions. The
components containing user data include bitfiles, physical and virtual volumes, and storage
segments. Components containing metadata describing the attributes and characteristics of files,
volumes, and storage segments, include storage maps, classes of service, hierarchies, and storage
classes.
•Files (Bitfiles). Files in HPSS, called bitfiles in deference to IEEE Reference Model
terminology, are logical strings of bytes, even though a particular bitfile may have a
structure imposed by its owner. This unstructured view decouples HPSS from any
particular file management system that host clients of HPSS might have. HPSS bitfile size
is limited to 2 to the power of 64 minus 1 (2^64 - 1) bytes.
Each bitfile is identified by a machine-generated name called a bitfile ID. It may also have
a human readable name. It is the job of the HPSS Name Server (discussed in Section 1.3.2)
to map a human readable name to a bitfile's bitfile ID. By separating human readable
names from the bitfiles and their associated bitfile IDs, HPSS allows sites to use different
Name Servers to organize their storage. There is, however, a standard Name Server
included with HPSS.
•Filesets. A fileset is a logical collection of files that can be managed as a single
administrative unit, or more simply, a disjoint directory tree. A fileset has two identifiers:
a human readable name, and a 64-bit integer. Both identifiers are unique to a given DCE
cell.
•Junctions. A junction is a Name Server object that is used to point to a fileset. This fileset
may belong to the same Name Server or to a different Name Server. The ability to point
junctions allows HPSS users to traverse to different Storage Subsystems and to traverse to
different HPSS systems via the Federated Name Space. Junctions are components of
pathnames and are the mechanism which implements this traversal.
•File Families. HPSS files can be grouped into families. All files in a given family are
recorded on a set of tapes assigned to the family. Only files from the given family are
recorded on these tapes. HPSS supports grouping files on tape volumes only. Families can
only be specified by associating the family with a fileset. All files created in the fileset
belong to the family. When one of these files is migrated from disk to tape, it is recorded on
a tape with other files in the same family. If no tape virtual volume is associated with the
family, a blank tape is reassigned from the default family. The family affiliation is preserved
when tapes are repacked.
•Physical Volumes. A physical volume is a unit of storage media on which HPSS stores
data. The media can be removable (e.g., cartridge tape, optical disk) or non-removable
(magnetic disk). Physical volumes may also be composite media, such as RAID disks, but
must be represented by the host OS as a single device.
Physical volumes are not visible to the end user. The end user simply stores bitfiles into a
logically unlimited storage space. HPSS, however, must implement this storage on a
variety of types and quantities of physical volumes.
For a list of the tape physical volume types supported by HPSS, see Table 2-4: Suggested Block Sizes for Tape on page 99.
•Virtual Volumes. A virtual volume is used by the Storage Server to provide a logical
abstraction or mapping of physical volumes. A virtual volume may include one or more
physical volumes. Striping of storage media is accomplished by the Storage Servers by
collecting more than one physical volume into a single virtual volume. A virtual volume is
primarily used inside of HPSS, and thus hidden from the user, but its existence benefits the user
by making the user’s data independent of device characteristics. Virtual volumes are
organized as strings of bytes up to 2^64 - 1 bytes in length that can be addressed by an offset
into the virtual volume.
•Storage Segments. A storage segment is an abstract storage object which is mapped onto
a virtual volume. Each storage segment is associated with a storage class (defined below)
and has a certain measure of location transparency. The Bitfile Server (discussed in Section
1.3.2) uses both disk and tape storage segments as its primary method of obtaining and
accessing HPSS storage resources. Mappings of storage segments onto virtual volumes are
maintained by the HPSS Storage Servers (Section 1.3.2).
•Storage Maps. A storage map is a data structure used by Storage Servers to manage the
allocation of storage space on virtual volumes.
•Storage Classes. A storage class defines a set of characteristics and usage parameters to
be associated with a particular grouping of HPSS virtual volumes. Each virtual volume and
its associated physical volumes belong to a single storage class in HPSS. Storage classes in
turn are grouped to form storage hierarchies (see below). An HPSS storage class is used to
logically group storage media to provide storage for HPSS files with a specific intended
usage and similar size and usage characteristics.
•Storage Hierarchies. An HPSS storage hierarchy defines the storage classes on which
files in that hierarchy are to be stored. A hierarchy consists of multiple levels of storage,
with each level representing a different storage class. Files are moved up and down the
hierarchy via migrate and stage operations based on usage patterns, storage availability,
and site policies. For example, a storage hierarchy might consist of a fast disk, followed by
a fast data transfer and medium storage capacity robot tape system, which in turn is
followed by a large data storage capacity but relatively slow data transfer tape robot
system. Files are placed on a particular level in the hierarchy depending upon the
migration levels that are associated with each level in the hierarchy. Multiple copies are
controlled by this mechanism. Also data can be placed at higher levels in the hierarchy by
staging operations. The staging and migrating of data is shown in Figure 1-1.
•Class of Service (COS). Each bitfile has an attribute called Class Of Service. The COS
defines a set of parameters associated with operational and performance characteristics of
a bitfile. The COS results in the bitfile being stored in a storage hierarchy suitable for its
anticipated and actual size and usage characteristics. Figure 1-2 shows the relationship
between COS, storage hierarchies, and storage classes.
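The declarations below sketch the relationships just described and pictured in Figure 1-2: each bitfile carries a class of service, a class of service names a storage hierarchy, a hierarchy is an ordered list of storage classes, and each virtual volume (a stripe of physical volumes) belongs to exactly one storage class. Field names and limits are assumptions for the example, not HPSS metadata definitions.

    #include <stdint.h>

    #define MAX_LEVELS 5                 /* assumed maximum hierarchy depth */
    #define MAX_STRIPE 16                /* assumed maximum stripe width    */

    struct physical_volume { char label[16]; };             /* one tape or disk volume    */

    struct virtual_volume {                                  /* striping of physical media */
        int stripe_width;
        struct physical_volume *pv[MAX_STRIPE];
        int storage_class_id;                                /* belongs to exactly one SC  */
    };

    struct storage_class { int id; uint64_t media_block_size; };

    struct storage_hierarchy {                               /* ordered storage levels     */
        int levels;
        int storage_class_id[MAX_LEVELS];                    /* level 0 is the top         */
    };

    struct class_of_service { int id; int hierarchy_id; };   /* COS selects a hierarchy    */

    struct bitfile { uint64_t bitfile_id[2]; int cos_id; };  /* every bitfile has a COS    */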
1.3.2 HPSS Servers
HPSS servers include the Name Server, Bitfile Server, Migration/Purge Server, Storage Server,
Gatekeeper Server, Location Server, DMAP Gateway, Physical Volume Library, Physical Volume
Repository, Mover, Storage System Manager, and Non-DCE Client Gateway. Figure 1-3 provides a
simplified view of the HPSS system. Each major server component is shown, along with the basic
control communications paths (thin arrowed lines). The thick line reveals actual data movement.
Infrastructure items (those components that “glue together” the distributed servers) are shown at
the top of the cube in grayscale. These infrastructure items are discussed in Section 1.3.4. HPSS user
interfaces (the clients listed in the figure) are discussed in Section 1.3.5.
•Name Server (NS). The NS translates a human-oriented name to an HPSS object identifier.
Objects managed by the NS are files, filesets, directories, symbolic links, junctions and hard
links. The NS provides access verification to objects and mechanisms for manipulating
access to these objects. The NS provides a Portable Operating System Interface (POSIX)
view of the name space. This name space is a hierarchical structure consisting of
directories, files, and links. Filesets allow collections of NS objects to be managed as a single
administrative unit. Junctions are used to link filesets into the HPSS name space.
•Bitfile Server (BFS). The BFS provides the abstraction of logical bitfiles to its clients. A
bitfile is identified by a BFS-generated name called a bitfile ID. Clients may reference
portions of a bitfile by specifying the bitfile ID and a starting address and length. Reads and
writes to a bitfile are random access, and the BFS supports the notion of holes (areas of a bitfile
where no data has been written). The BFS supports parallel reading and writing of data to
bitfiles. The BFS communicates with the storage segment layer interface of the Storage
Server (see below) to support the mapping of logical portions of bitfiles onto physical
storage devices. The BFS supports the migration, purging, and staging of data in a storage
hierarchy.
•Migration/Purge Server (MPS). The MPS allows the local site to implement its storage
management policies by managing the placement of data on HPSS storage media using
site-defined migration and purge policies. By making appropriate calls to the Bitfile and
Storage Servers, MPS copies data to lower levels in the hierarchy (migration), removes data
from the current level once copies have been made (purge), or moves data between volumes
at the same level (lateral move). Based on the hierarchy configuration, MPS can be directed
to create duplicate copies of data when it is being migrated from disk or tape. This is done
by copying the data to one or more lower levels in the storage hierarchy.
There are three types of migration: disk migration, tape file migration, and tape volume
migration. Disk purge should always be run along with disk migration. The designation disk
or tape refers to the type of storage class that migration is running against. See Section 2.6.5:
Migration/Purge Server on page 65 for a more complete discussion of the different types of
migration.
Disk Migration/Purge:
The purpose of disk migration is to make one or more copies of disk files to lower levels in
the hierarchy. The number of copies depends on the configuration of the hierarchy. For
disk, migration and purge are separate operations. Any disk storage class which is
configured for migration should be configured for purge as well. Once a file has been
migrated (copied) downwards in the hierarchy, it becomes eligible for purge, which
subsequently removes the file from the current level and allows the disk space to be reused.
Tape File Migration:
The purpose of tape file migration is to make a single, additional copy of files in a tape
storage class to a lower level in the hierarchy. It is also possible to move files downwards
instead of copying them. In this case there is no duplicate copy maintained.
Note that there is no separate purge component to tape file migration; tape volumes will
require manual repack and reclaim operations to be performed by the administrator.
Tape Volume Migration:
The purpose of tape volume migration is to free tape volumes for reuse. Tape volumes are
selected based on being in the EOM map state and containing the most unused space
(caused by users overwriting or deleting files). The remaining segments on these volumes
are either migrated downwards to the next level in the hierarchy, or are moved laterally to
another tape volume at the same level. This results in empty tape volumes which may then
be reclaimed. Note that there is no purge component to tape volume migration. All of these
operations use move rather than copy semantics.
MPS runs migration on each storage class periodically using the time interval specified in
the migration policy for that class. See Section 1.3.7: HPSS Policy Modules on page 35 for
details on migration and purge policies. In addition, migration runs can be started
automatically when the warning or critical space thresholds for the storage class are
exceeded. Purge runs are started automatically on each storage class when the free space
in that class falls below the percentage specified in the purge policy.
•Storage Server (SS). The Storage Servers provide a hierarchy of storage objects: storage
segments, virtual volumes, and physical volumes. The Storage Servers translate storage
segment references into virtual volume references and then into physical volume
references, handle the mapping of physical resources into striped virtual volumes to allow
parallel I/O to that set of resources, and schedule the mounting and dismounting of
removable media through the Physical Volume Library (see below). A sketch showing how
these servers cooperate on a file read appears after this list of servers.
•Gatekeeper Server (GK). The Gatekeeper Server provides two main services:
A. It provides sites with the ability to schedule the use of HPSS resources using the Gatekeeping Service.
B. It provides sites with the ability to validate user accounts using the Account Validation
Service.
Both of these services allow sites to implement their own policy.
The default Gatekeeping Service policy is to not do any gatekeeping. Sites may choose to
implement site policy for monitoring authorized callers, creates, opens, and stages. The
BFS will call the appropriate GK API depending on the requests that the site-implemented
policy is monitoring.
The Account Validation Service performs authorizations of user storage charges. A site
may perform no authorization, default authorization, or site-customized authorization
depending on how the Accounting Policy is set up and whether or not a site has written
site-specific account validation code. Clients call this service when creating files, changing
file ownership, or changing accounting information. If Account Validation is enabled, the
Account Validation Service determines if the user is allowed to use a specific account or
gives the user an account to use, if needed. The Name Server and Bitfile Server also call this
service to perform an authorization check just before account-sensitive operations take
place.
•Location Server (LS). The Location Server acts as an information clearinghouse for its clients
through the HPSS Client API to enable them to locate servers and gather information from
both local and remote HPSS systems. Its primary function is to allow a client to determine
a server's location (its CDS pathname) by knowing other information about the server, such
as its object UUID, its server type, or its subsystem id. This allows a client to contact the
appropriate server. Usually this is for the Name Server, the Bitfile Server, or the Gatekeeper.
•DMAP Gateway (DMG). The DMAP Gateway acts as a conduit and translator between
DFS and HPSS servers. It translates calls between DFS and HPSS, migrates data from DFS
into HPSS, and validates data in DFS and HPSS. In addition, it maintains records of all DFS
and HPSS filesets and their statistics.
•Physical Volume Library (PVL). The PVL manages all HPSS physical volumes. It is in
charge of mounting and dismounting sets of physical volumes, allocating drive and
cartridge resources to satisfy mount and dismount requests, providing a mapping of
physical volume to cartridge and of cartridge to Physical Volume Repository (PVR), and
issuing commands to PVRs to perform physical mount and dismount actions. A
requirement of the PVL is the support for atomic mounts of sets of cartridges for parallel
access to data. Atomic mounts are implemented by the PVL, which waits until all necessary
cartridge resources for a request are available before issuing mount commands to the
PVRs.
•Physical Volume Repository (PVR). The PVR manages all HPSS cartridges. Clients (e.g.,
the PVL) can ask the PVR to mount and dismount cartridges. Clients can also query the
status and characteristics of cartridges. Every cartridge in HPSS must be managed by
exactly one PVR. Multiple PVRs are supported within an HPSS system. Each PVR is
typically configured to manage the cartridges for one robot utilized by HPSS.
For information on the types of tape libraries supported by HPSS PVRs, see Section 2.4.2:
Tape Robots on page 54.
An Operator PVR is provided for cartridges not under control of a robotic library. These
cartridges are mounted on a set of drives by operators.
•Mover (MVR). The purpose of the Mover is to transfer data from a source device to a sink
device. A device can be a standard I/O device with geometry (e.g., tape, disk) or a device
without geometry (e.g., network, memory). The MVR’s client (typically the SS) describes
the data to be moved and where the data is to be sent. It is the MVR’s responsibility to
actually transfer the data, retrying failed requests and attempting to optimize transfers.
The MVR supports transfers for disk devices, tape devices and a mover protocol that can
be used as a lightweight coordination and flow control mechanism for large transfers.
•Storage System Management (SSM). SSM roles cover a wide range, including aspects of
configuration, initialization, and termination tasks. The SSM monitors and controls the
resources (e.g., servers) of the HPSS storage system in ways that conform to management
policies of a given customer site. Monitoring capabilities include the ability to query the
values of important management attributes of storage system resources and the ability to
receive notifications of alarms and other significant system events. Controlling capabilities
include the ability to start up and shut down servers and the ability to set the values of
management attributes of storage system resources and storage system policy parameters.
Additionally, SSM can request that specific operations be performed on resources within
the storage system, such as adding and deleting logical or physical resources. Operations
performed by SSM are usually accomplished through standard HPSS Application Program
Interfaces (APIs).
SSM has three components: (1) the System Manager, which communicates with all other
HPSS components requiring monitoring or control, (2) the Data Server, which provides the
bridge between the System Manager and the Graphical User Interface (GUI), and (3) the
GUI itself, which includes the Sammi Runtime Environment and the set of SSM windows.
•Non-DCE Client Gateway (NDCG). NDCG provides an interface into HPSS for client
programs running on systems lacking access to DCE or Encina services. By linking the
Non-DCE Client API library instead of the usual Client API library, all API calls are routed
through the NDCG. The API calls are then executed by the NDCG, and the results are
returned to the client application. Note that the NDCG itself must still run on a system with
DCE and Encina, while it is the client application using the Non-DCE Client API that does
not suffer this restriction.
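The sketch below (referenced from the Storage Server description above) ties these servers together on a simple read. Every function here is a hypothetical placeholder rather than the HPSS Client API; the point is the order of the control calls and the fact that the bytes themselves move through a Mover, not through the Name Server or Bitfile Server.

    #include <stdio.h>
    #include <stdint.h>

    typedef uint64_t bitfile_id_t;
    typedef int      segment_t;

    /* Hypothetical placeholders for the control-path calls. */
    static bitfile_id_t ns_lookup(const char *path)             { (void)path; return 42; }            /* NS: path -> bitfile ID           */
    static segment_t    bfs_map(bitfile_id_t id, uint64_t off)  { (void)id; (void)off; return 7; }    /* BFS: bitfile offset -> segment   */
    static int          ss_locate(segment_t seg)                { (void)seg; return 0; }              /* SS: segment -> device, PVL mount */
    static long         mvr_transfer(int dev, void *buf, long n){ (void)dev; (void)buf; return n; }   /* MVR: moves the bytes             */

    int main(void)
    {
        char buf[4096];
        bitfile_id_t id  = ns_lookup("/projects/data/run01");    /* control path (path is illustrative) */
        segment_t    seg = bfs_map(id, 0);                       /* control path */
        int          dev = ss_locate(seg);                       /* control path */
        long n = mvr_transfer(dev, buf, sizeof buf);             /* data path    */
        printf("read %ld bytes\n", n);
        return 0;
    }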
1.3.3 HPSS Storage Subsystems
Storage subsystems have been introduced starting with the 4.2 release of HPSS. The goal of this
design is to increase the scalability of HPSS by allowing multiple name and bitfile servers to be used
within a single HPSS system. Every HPSS system must now be partitioned into one or more storage
subsystems. Each storage subsystem contains a single name server and bitfile server. If migration
and purge are needed, then the storage subsystem should contain a single, optional migration/purge
server. A storage subsystem must also contain one or more storage servers, but only one disk
storage server and one tape storage server are allowed per subsystem. Name, bitfile, migration/
purge, and storage servers must now exist within a storage subsystem. Each storage subsystem
may contain zero or one gatekeepers to perform site-specific user-level scheduling of HPSS storage
requests or account validation. Multiple storage subsystems may share a gatekeeper. All other
servers continue to exist outside of storage subsystems. Sites which do not need multiple name and
bitfile servers are served by running an HPSS with a single storage subsystem.
Storage subsystems are assigned integer ids starting with one. Zero is not a valid storage subsystem
id, as servers which are independent of storage subsystems are assigned to storage subsystem zero.
Storage subsystem ids must be unique. They do not need to be sequential and need not start with
one, but they do so by default unless the administrator specifies otherwise. Each storage subsystem
has a user-configurable name as well as a unique id. The name and id may be modified by the
administrator at the time the subsystem is configured but may not be changed afterward. In most
cases, the storage subsystem is referred to by its name, but in at least one case (suffixes on metadata
file names) the storage subsystem is identified by its id. Storage subsystem names must be unique.
There are two types of configuration metadata used to support storage subsystems: a single global
configuration record, and one storage subsystem configuration record per storage subsystem. The
global configuration record contains a collection of those configuration metadata fields which are
used by multiple servers and that are commonly modified. The storage subsystem records contain
configuration metadata which is commonly used within a storage subsystem.
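As an illustration only (the fields are invented and do not reflect the actual record layouts), the two kinds of configuration metadata could be pictured as follows:

    /* One global configuration record per HPSS system. */
    struct global_config {
        char root_name_server[64];     /* names the root Name Server            */
        char default_cos[64];          /* example of a commonly shared setting  */
    };

    /* One configuration record per storage subsystem. */
    struct subsystem_config {
        int  subsystem_id;             /* unique integer id, >= 1               */
        char subsystem_name[64];       /* unique, user-configurable name        */
        char sfs_path[256];            /* where this subsystem's metadata lives */
    };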
It is possible to use multiple SFS servers within a single HPSS system. Multiple storage subsystems
are able to run from a single SFS server or using one SFS server per storage subsystem. In practice,
different metadata files may be located on different SFS servers on a per-file basis, depending on the
SFS path given for each file. For configuration and recovery purposes, however, it is desirable for
all of the metadata files for a single subsystem to reside on a single SFS server. This single SFS server
may either be a single server which supports the entire HPSS system, or it may support one or more
subsystems. Those metadata files which belong to the servers which reside within storage
subsystems are considered to belong to the storage subsystem as well. In an HPSS system with
multiple storage subsystems, there are multiple copies of these files, and the name of each copy is
suffixed with the integer id of the subsystem so that it may be uniquely identified (for example
bfmigrrec.1, bfmigrrec.2, etc.).
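A small sketch of that suffixing convention, using an illustrative helper rather than an HPSS routine:

    #include <stddef.h>
    #include <stdio.h>

    /* Append the storage subsystem id to a metadata file's base name. */
    static void subsystem_file_name(char *out, size_t outlen,
                                    const char *base, int subsystem_id)
    {
        snprintf(out, outlen, "%s.%d", base, subsystem_id);
    }

    int main(void)
    {
        char name[64];
        subsystem_file_name(name, sizeof name, "bfmigrrec", 1);
        printf("%s\n", name);          /* prints: bfmigrrec.1 */
        return 0;
    }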
Metadata files that belong to a subsystem (i.e. files with numeric suffix) should never be shared
between servers. For example, the Bitfile Server in Subsystem #1 has a metadata file called
bfmigrrec.1. This file should only be used by the BFS in Subsystem #1, never by any other server.
The definitions of classes of service, hierarchies, and storage classes apply to the entire HPSS
system and are independent of storage subsystems. All classes of service, hierarchies, and storage
classes are known to all storage subsystems within HPSS. The level of resources dedicated to these
entities by each storage subsystem may differ. It is possible to disable selected classes of service
within given storage subsystems. Although the class of service definitions are global, if a class of
service is disabled within a storage subsystem then the bitfile server in that storage subsystem
never selects that class of service. If a class of service is enabled for a storage subsystem, then there
must be a non-zero level of storage resources supporting that class of service assigned to the storage
servers in that subsystem.
Data stored within HPSS is assigned to different Storage Subsystems based on pathname
resolution. A pathname consisting of “/” resolves to the root Name Server. The root Name Server is
the Name Server specified in the Global Configuration file. However, if the pathname contains
junction components, it may resolve to a Name Server in a different Storage Subsystem. For
example, the pathname “/JunctionToSubsys2” could lead to the root fileset managed by the Name
Server in Storage Subsystem 2. Sites which do not wish to partition their HPSS through the use of
Storage Subsystems will effectively be running an HPSS with a single Storage Subsystem.