copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software,
Computer Software Documentation, and Technical Data for Commercial Items are
licensed to the U.S. Government under vendor’s standard commercial license.
The information contained herein is subject to change without notice. The only
warranties for HP products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be
construed as constituting an additional warranty. HP shall not be liable for technical or
editorial errors or omissions contained herein.
Symantec, the Symantec Logo, Veritas, and Veritas Storage Foundation are trademarks or registered trademarks
of Symantec Corporation or its affiliates in the U.S. and other countries. Other names
may be trademarks of their respective owners.
The Veritas Storage Foundation™ 5.0 Cluster File System Administration Guide Extracts
for the HP Serviceguard Storage Management Suite contains information extracted from
the Veritas Storage Foundation™ Cluster File System Administration Guide - 5.0 - HP-UX,
which has been modified to support the HP Serviceguard Storage Management Suite
bundles that include the Veritas Storage Foundation™ Cluster File System by Symantec
and the Veritas Storage Foundation™ Cluster Volume Manager by Symantec.
Printing History
The last printing date and part number indicate the current edition.
Table 1: Printing History

  June 2007 - Part Number T2271-90034 - First Edition
    Original release to support the HP Serviceguard Storage Management Suite
    A.02.00 release on HP-UX 11i v2.

  January 2008 - Part Number T2271-90034 - First Edition (Reprint)
    CFS nested mounts are not supported with HP Serviceguard.

  April 2008 - Part Number T2271-90045 - Second Edition
    Second edition to support the HP Serviceguard Storage Management Suite
    version A.02.00 release on HP-UX 11i v3.
1  Technical Overview

This chapter includes the following topics:

• “Overview of Cluster File System Architecture”
• “VxFS Functionality on Cluster File Systems”
• “Benefits and Applications”
HP Serviceguard Storage Management Suite (SG SMS) bundles provide several options
for clustering and storage. The information in this document applies to the SG SMS
bundles that include the Veritas Storage Foundation™ 5.0 Cluster File System and
Cluster Volume Manager by Symantec:
• SG SMS version A.02.00 bundles T2775CA, T2776CA, and T2777CA for HP-UX 11i v2
• SG SMS version A.02.00 Mission Critical Operating Environment (MCOE) bundles
  T2795CA, T2796CA, and T2797CA for HP-UX 11i v2
• SG SMS version A.02.00 bundles T2775CB, T2776CB, and T2777CB for HP-UX 11i v3
• SG SMS version A.02.00 High Availability Operating Environment (HAOE) bundles
  T8685CB, T8686CB, and T8687CB for HP-UX 11i v3
• SG SMS version A.02.00 Data Center Operating Environment (DCOE) bundles
  T8695CB, T8696CB, and T8697CB for HP-UX 11i v3
SG SMS bundles that include the Veritas Storage Foundation Cluster File System (CFS)
allow clustered servers running HP-UX 11i to mount and use the same file system
simultaneously, as if all applications using the file system are running on the same
server. SG SMS bundles that include CFS also include the Veritas Storage Foundation
Cluster Volume Manager (CVM). CVM makes logical volumes and raw device
applications accessible throughout a cluster.
As SG SMS components, CFS and CVM are integrated with HP Serviceguard to
form a highly available clustered computing environment. SG SMS bundles
that include CFS and CVM do not include the Veritas™ Cluster Server by
Symantec (VCS). VCS functions that are required in an SG SMS environment
are performed by Serviceguard. This document focuses on CFS and CVM
administration in an SG SMS environment.
For more information on bundle features, options, and applications, see the Application
Use Cases for the HP Serviceguard Storage Management Suite White Paper and the HP
Serviceguard Storage Management Suite Release Notes, available at http://www.docs.hp.com.
Overview of Cluster File System Architecture

CFS allows clustered servers to mount and use the same file system simultaneously, as if
all applications using the file system are running on the same server. CVM makes logical
volumes and raw device applications accessible throughout a cluster.
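As a minimal illustration (the device and mount point names are examples only, and in an
SG SMS environment cluster mounts are normally managed through the Serviceguard CFS
packages rather than by hand), a VxFS file system on a shared volume is mounted in
cluster mode with the -o cluster mount option:

  # mount -F vxfs -o cluster /dev/vx/dsk/cfsdg/vol1 /mnt/cfs1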
Cluster File System Design
Beginning with version 5.0, CFS uses a symmetric architecture in which all nodes in the
cluster can simultaneously function as metadata servers. CFS 5.0 retains some elements of
the master/slave node concept from version 4.1, but both the behavior and the naming
convention have changed in version 5.0. The first server to mount each
cluster file system becomes the primary CFS node; all other nodes in the cluster are
considered secondary CFS nodes. Applications access user data directly from the node
they are running on. Each CFS node has its own intent log. File system operations, such
as allocating or deleting files, can originate from any node in the cluster.
NOTE: The master/slave node naming convention continues to be used when referring to
Veritas Cluster Volume Manager (CVM) nodes.
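For example, you can determine which node is currently the CFS primary for a given
cluster-mounted file system with the fsclustadm command; the mount point shown is
illustrative:

  # fsclustadm -v showprimary /mnt/cfs1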
Cluster File System Failover
If the server designated as the CFS primary node fails, the remaining nodes in the
cluster elect a new primary node. The new primary node reads the intent log of the old
primary node and completes any metadata updates that were in process at the time of
the failure.
Failure of a secondary node does not require metadata repair, because nodes using a
cluster file system in secondary mode do not update file system metadata directly. The
Multiple Transaction Server distributes file locking ownership and metadata updates
across all nodes in the cluster, enhancing scalability without requiring unnecessary
metadata communication throughout the cluster. CFS recovery from secondary node
failure is therefore faster than from primary node failure.
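If you want a particular node to hold the primary role (for example, before taking the
current primary down for planned maintenance), fsclustadm can reassign it. Run the
command on the node that should become primary; the mount point shown is illustrative:

  # fsclustadm -v setprimary /mnt/cfs1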
Group Lock Manager
CFS uses the Veritas Group Lock Manager (GLM) to reproduce UNIX single-host file
system semantics in clusters. This is most important in write behavior. UNIX file
systems make writes appear to be atomic. This means that when an application writes a
stream of data to a file, any subsequent application that reads from the same area of the
file retrieves the new data, even if it has been cached by the file system and not yet
written to disk. Applications can never retrieve stale data, or partial results from a
previous write.
To reproduce single-host write semantics, system caches must be kept coherent and each
must instantly reflect any updates to cached data, regardless of the cluster node from
which they originate. GLM locks a file so that no other node in the cluster can
simultaneously update it, or read it before the update is complete.
VxFS Functionality on Cluster File Systems
The HP Serviceguard Storage Management Suite uses the Veritas File System (VxFS).
Most of the major features of VxFS local file systems are available on cluster file
systems, including:
• Extent-based space management that maps files up to 1 terabyte in size
• Fast recovery from system crashes using the intent log to track recent file system
  metadata updates
• Online administration that allows file systems to be extended and defragmented while
  they are in use (see the example below)
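As a brief sketch of online administration, a cluster-mounted VxFS file system can be
grown and defragmented with fsadm while it remains in use. The mount point and new size
are examples only; the units for the new size depend on your configuration, so see the
fsadm_vxfs(1M) manual page before resizing:

  # fsadm -F vxfs -b <newsize> /mnt/cfs1     (grow the file system online)
  # fsadm -F vxfs -e /mnt/cfs1               (reorganize extents to reduce fragmentation)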
Supported Features
The following table lists the features and commands that are available and supported
with CFS. Every VxFS online manual page has a Cluster File System Issues section that
informs you if the command functions on cluster-mounted file systems, and indicates any
difference in behavior from how the command functions on local mounted file systems.
Table 1-1: CFS Supported Features

Features and Commands Supported on CFS

  Quick I/O
    The clusterized Oracle Disk Manager (ODM) is supported with CFS using the
    Quick I/O for Databases feature in the following HP Serviceguard Storage
    Management Suite CFS bundles for Oracle:
    For HP-UX 11i v2 - T2776CA, T2777CA, T2796CA, and T2797CA
    For HP-UX 11i v3 - T2776CB, T2777CB, T8686CB, T8687CB, T8696CB, and
    T8697CB

  Storage Checkpoints
    Storage Checkpoints are supported with CFS.

  Freeze and Thaw
    Synchronizing operations, which require freezing and thawing file systems,
    are done on a cluster-wide basis.

  Snapshots
    Snapshots are supported with CFS.

  Quotas
    Quotas are supported with CFS.

  NFS Mounts
    You can mount cluster file systems to NFS.

  Memory Mapping
    Shared memory mapping established by the mmap() function is supported on
    CFS. See the mmap(2) manual page.

  Concurrent I/O
    This feature extends current support for concurrent I/O to cluster file
    systems. Semantics for concurrent read/write access on a file in a
    cluster file system match those for a local mount.

  Delaylog
    The -o delaylog mount option is supported with cluster mounts. This is
    the default state for CFS.

  Disk Layout Versions
    CFS supports only disk layout Version 6 and Version 7. Cluster-mounted
    file systems can be upgraded. A local mounted file system can be
    upgraded, unmounted, and mounted again as part of a cluster. Use the
    fstyp -v special_device command to ascertain the disk layout version of
    a VxFS file system. Use the vxupgrade command to update the disk layout
    version.

  Locking
    Advisory file and record locking are supported on CFS. For the F_GETLK
    command, if there is a process holding a conflicting lock, the l_pid
    field returns the process ID of the process holding the conflicting
    lock. The node ID-to-node name translation can be done by examining the
    /etc/llthosts file or with the fsclustadm command. Mandatory locking and
    deadlock detection supported by traditional fcntl locks are not
    supported on CFS. See the fcntl(2) manual page for more information.

  Multiple Transaction Servers
    With this feature, CFS moves from a primary/secondary architecture,
    where only one node in the cluster processes metadata operations (file
    creation, deletion, growth, etc.) to a symmetric architecture, where all
    nodes in the cluster can simultaneously process metadata operations.
    This allows CFS to handle significantly higher metadata loads.

See the HP Serviceguard Storage Management Suite Release Notes in the High
Availability section at http://www.docs.hp.com for more information on bundle
features and options.
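For example, to report the current disk layout version of a VxFS file system and then
upgrade it to Version 7, you might run the following; the volume and mount point names
are illustrative:

  # fstyp -v /dev/vx/dsk/cfsdg/vol1
  # vxupgrade -n 7 /mnt/cfs1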
Unsupported Features
Functionality that is documented as unsupported may not be expressly prevented from
operating on CFS, but the actual behavior is indeterminate. HP does not advise using
unsupported functionality on CFS, or alternately mounting file systems with unsupported
features as local and cluster mounts.
Table 1-2: CFS Unsupported Features

Features and Commands Not Supported on CFS

  qlog
    Quick log is not supported on CFS.

  Swap Files
    Swap files are not supported on CFS.

  The mknod command
    You cannot use the mknod command to create devices on CFS.

  Cache Advisories
    Cache advisories are set with the mount command on individual file
    systems, but are not propagated to other nodes of a cluster.

  Cached Quick I/O
    This Quick I/O for Databases feature that caches data in the file system
    cache is not supported on CFS.

  Commands that Depend on File Access Times
    File access times may appear different across nodes because the atime
    file attribute is not closely synchronized in a cluster file system.
    Utilities that depend on checking access times may not function reliably.

  Nested Mounts
    HP Serviceguard does not support CFS nested mounts.
Benefits and Applications
The following sections describe CFS benefits and some applications.
Advantages To Using CFS
CFS simplifies or eliminates system administration tasks resulting from hardware
limitations:
• The CFS single file system image administrative model simplifies administration by
  allowing all file system management operations, resizing, and reorganization
  (defragmentation) to be performed from any node.
• You can create and manage terabyte-sized volumes, so partitioning file systems to fit
  within disk limitations is usually not necessary - only extremely large data farms
  must be partitioned to accommodate file system addressing limitations. For maximum
  supported file system sizes, see Supported File and File System Sizes for HFS and JFS
  available at: http://docs.hp.com/en/oshpux11iv3.html#VxFS
• Keeping data consistent across multiple servers is automatic, because all servers in
  a CFS cluster have access to cluster-shareable file systems. All cluster nodes have
  access to the same data, and all data is accessible by all servers using single server
  file system semantics.
• Applications can be allocated to different servers to balance the load or to meet
  other operational requirements, because all files can be accessed by all servers.
  Similarly, failover becomes more flexible, because it is not constrained by data
  accessibility.
• The file system recovery portion of failover time in an n-node cluster can be reduced
  by a factor of n, by distributing the file systems uniformly across cluster nodes,
  because each CFS file system can be on any node in the cluster.
• Enterprise storage arrays are more effective, because all of the storage capacity can
  be accessed by all nodes in the cluster, but it can be managed from one source.
• Larger volumes with wider striping improve application I/O load balancing. Not only
  is the I/O load of each server spread across storage resources, but with CFS shared
  file systems, the loads of all servers are balanced against each other.
• Extending clusters by adding servers is easier because each new server’s storage
  configuration does not need to be set up - new servers simply adopt the cluster-wide
  volume and file system configuration.
• For the following HP Serviceguard Storage Management Suite CFS for Oracle bundles,
  the clusterized Oracle Disk Manager (ODM) feature is available to applications
  running in a cluster, enabling file-based database performance to approach the
  performance of raw partition-based databases:
  — T2776CA, T2777CA, T2796CA, and T2797CA
  — T2776CB, T2777CB, T8686CB, T8687CB, T8696CB, and T8697CB
When To Use CFS
You should use CFS for any application that requires file sharing, such as home
directories, web pages, and cluster-ready applications. CFS can also be used when
you want highly available standby data in predominantly read-only environments, or
when you do not want to rely on NFS for file sharing.
Almost all applications can benefit from CFS. Applications that are not “cluster-aware”
can operate and access data from anywhere in a cluster. If multiple cluster applications
running on different servers are accessing data in a cluster file system, overall system
I/O performance improves due to the load balancing effect of having one cluster file
system on a separate underlying volume. This is automatic; no tuning or other
administrative action is required.
Many applications consist of multiple concurrent threads of execution that could run on
different servers if they had a way to coordinate their data accesses. CFS provides this
coordination. These applications can be made cluster-aware allowing their instances to
co-operate to balance the client and data access load, and thereby scale beyond the
capacity of any single server. In these applications, CFS provides shared data access,
enabling application-level load balancing across cluster nodes.
• For single-host applications that must be continuously available, CFS can reduce
  application failover time, because it provides an already-running file system
  environment in which an application can restart after a server failure.
• For parallel applications, such as distributed database management systems and web
  servers, CFS provides shared data to all application instances concurrently. CFS also
  allows these applications to grow by the addition of servers, and improves their
  availability by enabling them to redistribute load in the event of server failure
  simply by reassigning network addresses.
• For workflow applications, such as video production, in which very large files are
  passed from station to station, CFS eliminates time consuming and error prone data
  copying by making files available at all stations.
• For backup, CFS can reduce the impact on operations by running on a separate server,
  while accessing data in cluster-shareable file systems.
Some common applications for CFS are:
• Using CFS on file servers
  Two or more servers connected in a cluster configuration (that is, connected to the
  same clients and the same storage) serve separate file systems. If one of the servers
  fails, the other recognizes the failure, recovers, assumes the role of primary node,
  and begins responding to clients using the failed server’s IP addresses.
• Using CFS on web servers
  Web servers are particularly suitable to shared clustering, because their application
  is typically read-only. Moreover, with a client load balancing front end, a Web server
  cluster’s capacity can be expanded by adding a server and another copy of the site. A
  CFS-based cluster greatly simplifies scaling and administration for this type of
  application.
2  Cluster File System Architecture

This chapter includes the following topics:

• “Role of Component Products”
• “About CFS”
• “About Veritas Cluster Volume Manager Functionality”
Role of Component Products
The HP Serviceguard Storage Management Suite bundles that include CFS also include
the Veritas™ Volume Manager by Symantec (VxVM) and its cluster component, the
Veritas Storage Foundation™ Cluster Volume Manager by Symantec (CVM). The
following sections introduce cluster communication, membership ports, and CVM
functionality.
Cluster Communication
Group Membership Atomic Broadcast (GAB) and Low Latency Transport (LLT) are
protocols implemented directly on an Ethernet data link. They run on redundant data
links that connect the nodes in a cluster. Serviceguard and CFS are, in most respects,
two separate clusters. GAB provides membership and messaging for the clusters and
their applications. GAB membership also provides orderly startup and shutdown of
clusters. LLT is the cluster communication transport. The /etc/gabtab file is used to
configure GAB and the /etc/llttab file is used to configure LLT. Serviceguard
cmapplyconf creates these configuration files each time the CFS package is started and
modifies them whenever you apply changes to the Serviceguard cluster configuration -
this keeps the Serviceguard cluster synchronized with the CFS cluster.

Any direct modifications to /etc/gabtab and /etc/llttab will be overwritten by
cmapplyconf (or cmdeleteconf).
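Because these files are generated for you, you normally only need to inspect them. As an
illustration, a generated /etc/gabtab for a two-node cluster typically contains a single
gabconfig invocation similar to the following; the node count shown is an example:

  # cat /etc/gabtab
  /sbin/gabconfig -c -n2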
Membership Ports
Each component in a CFS cluster registers with a membership port. The port membership
identifies nodes that have formed a cluster for the individual components. Examples of
port memberships include:

  port a   heartbeat membership
  port f   Cluster File System membership
  port u   temporarily used by CVM
  port v   Cluster Volume Manager membership
  port w   Cluster Volume Manager daemons on different nodes communicate with one
           another using this port
Port memberships are configured automatically and cannot be changed. To display port
memberships, enter the gabconfig -a command.
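For example, on a healthy two-node cluster the port memberships look similar to the
following; the generation numbers and node IDs shown are illustrative:

  # gabconfig -a
  GAB Port Memberships
  ===============================================================
  Port a gen   a36e0003 membership 01
  Port f gen   a36e0006 membership 01
  Port v gen   a36e0005 membership 01
  Port w gen   a36e0004 membership 01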
Veritas™ Cluster Volume Manager Functionality
A VxVM cluster consists of nodes that share a set of devices. The nodes are connected
across a network. CVM (the VxVM cluster component) presents a consistent logical view
of device configurations (including changes) on all nodes. CVM functionality makes
logical volumes and raw device applications accessible throughout a cluster. CVM
enables multiple hosts to concurrently access the logical volumes under its control. If
one node fails, the other nodes can still access the devices. You configure CVM shared
storage after the HP Serviceguard high availability (HA) cluster is configured and
running.
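For example, you can confirm that CVM clustering is active and see whether the local node
is the CVM master or a slave with the vxdctl command; the output shown is illustrative:

  # vxdctl -c mode
  mode: enabled: cluster active - MASTER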