The information in this document is subject to change without notice.
Hewlett-Packard makes no warranty of any kind with regard to this
manual, including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose. Hewlett-Packard
shall not be held liable for errors contained herein or for direct, indirect,
special, incidental, or consequential damages in connection with the
furnishing, performance, or use of this material.
Warranty
A copy of the specific warranty terms applicable to your Hewlett-Packard
product and replacement parts can be obtained from your local Sales and
Service Office.
U.S. Government License
Proprietary computer software. Valid license from HP required for
possession, use or copying. Consistent with FAR 12.211 and 12.212,
Commercial Computer Software, Computer Software Documentation,
and Technical Data for Commercial Items are licensed to the U.S.
Government under vendor's standard commercial license.
Copyright Notice
Copyright 2004 Hewlett-Packard Development Company L.P. All
rights reserved. Reproduction, adaptation, or translation of this
document without prior written permission is prohibited, except as
allowed under the copyright laws.
Copyright 1986-1996 Sun Microsystems, Inc.
Trademark Notices
MC/ServiceGuard® is a registered trademark of Hewlett-Packard
Company.
NFS® is a registered trademark of Sun Microsystems, Inc.
NIS™ is a trademark of Sun Microsystems, Inc.
UNIX® is a registered trademark of The Open Group.
1  Overview of MC/ServiceGuard NFS
MC/ServiceGuard NFS is a tool kit that enables you to use
MC/ServiceGuard to set up highly available NFS servers.
You must set up an MC/ServiceGuard cluster before you can set up
Highly Available NFS. For instructions on setting up an
MC/ServiceGuard cluster, see the Managing MC/ServiceGuard manual.
MC/ServiceGuard NFS is a separately purchased set of configuration
files and control scripts, which you customize for your specific needs.
These files, once installed, are located in /opt/cmcluster/nfs.
MC/ServiceGuard allows you to create high availability clusters of HP
9000 Series 800 computers. A high availability computer system allows
applications to continue in spite of a hardware or software failure.
MC/ServiceGuard systems protect users from software failures as well as
from failure of a system processing unit (SPU) or local area network
(LAN) component. In the event that one component fails, the redundant
component takes over, and MC/ServiceGuard coordinates the transfer
between components.
An NFS server is a host that “exports” its local directories (makes them
available for client hosts to mount using NFS). On the NFS client, these
mounted directories look to users like part of the client’s local file system.
With MC/ServiceGuard NFS, the NFS server package containing the
exported file systems can move to a different node in the cluster in the
event of failure. After MC/ServiceGuard starts the NFS package on the
adoptive node, the NFS file systems are re-exported from the adoptive
node with minimum disruption of service to users. The client side hangs
until the NFS server package comes up on the adoptive node. When the
service returns, the user can continue access to the file. You do not need
to restart the client.
Limitations of MC/ServiceGuard NFS
The following limitations apply to MC/ServiceGuard NFS:
•Applications lose their file locks when an NFS server package moves
to a new node. Therefore, any application that uses file locking must
reclaim its locks after an NFS server package fails over.
An application that loses its file lock due to an NFS package failover
does not receive any notification. If the server is also an NFS client,
it loses the NFS file locks obtained by client-side processes.
NOTE: With MC/ServiceGuard NFS A.11.11.03 and A.11.23.02, you can
address this limitation by enabling the File Lock Migration feature
(see “Overview of the NFS File Lock Migration Feature” on page 10).
To ensure that the File Lock Migration feature functions properly,
install HP-UX 11i v1 NFS General Release and Performance Patch,
PHNE_26388 (or a superseding patch). For HP-UX 11i v2, the
feature functions properly without a patch.
•If a server is configured to use NFS over TCP and the client is the
same machine as the server (a loopback NFS mount), the client may
hang for about 5 minutes when the package is moved to another node.
The solution is to use NFS over UDP for cross mounts between
HA/NFS servers.
NOTE: You cannot use MC/ServiceGuard NFS for an NFS diskless cluster
server.
Overview of the NFS File Lock Migration Feature
MC/ServiceGuard NFS includes a File Lock Migration feature, which works
as follows:
•Each HA/NFS package designates a unique holding directory located
in one of the filesystems associated with the package. In other words,
an empty directory is created in one of the filesystems that moves
between servers as part of the package. This holding directory is a
configurable parameter and must be dedicated to hold the Status
Monitor (SM) entries only.
•A new script, nfs.flm, periodically copies SM entries from the
/var/statmon/sm directory into the package holding directory. The
default interval is five seconds; you can change it by modifying the
PROPAGATE_INTERVAL parameter in the nfs.flm script (see the sketch at
the end of this section). To edit the nfs.flm script, see “Editing the
File Lock Migration Script (nfs.flm)” on page 43.
•Upon package failover, the holding directory transitions from the
primary node to the adoptive node, because it resides in one of the
filesystems configured as part of the HA/NFS package.
Once the holding directory is on the adoptive node, the SM entries
residing in the holding directory are copied to the /var/statmon/sm
directory on the adoptive node. This populates the new server’s SM
directory with the entries from the primary server.
•After failover, the HA/NFS package IP address is configured on the
adoptive node, and rpc.statd and rpc.lockd are killed and
restarted. This killing and restarting of the daemons triggers a crash
recovery notification event, whereby rpc.statd sends crash
notification messages to all the clients listed in the
/var/statmon/sm directory.
These crash recovery notification messages contain the relocatable
hostname of the HA/NFS package that was previously running on
the primary node and is currently running on the adoptive node.
•Any client that holds NFS file locks against files residing in the
HA/NFS package (transitioned between servers) sends reclaim
requests to the adoptive node (where the exported filesystems
currently reside) and reclaims its locks.
•After rpc.statd sends the crash recovery notification messages, the
SM entries in the package holding directory are removed, and the
nfs.flm script is started on the adoptive node. The script once again
copies each /var/statmon/sm file on the HA/NFS server into the
holding directory, every five seconds. Each file residing in the
/var/statmon/sm directory on the adoptive node following the
package migration represents a client that either reclaimed its locks
after failover or has established new locks after failover.
NOTE: To enable the File Lock Migration feature, you need MC/ServiceGuard
version A.11.15.
To ensure that the File Lock Migration feature functions properly, install
HP-UX 11i v1 NFS General Release and Performance Patch,
PHNE_26388 (or a superseding patch). For HP-UX 11i v2, the feature
functions properly without a patch.
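The propagation step can be pictured as a simple copy loop. The following
is a minimal sketch only, not the shipped nfs.flm script; the
holding-directory path is a hypothetical example:

    PROPAGATE_INTERVAL=5                              # seconds between copies
    SM_DIR=/var/statmon/sm                            # Status Monitor entries
    HOLDING_DIR=/Pkg_1/local/mountpoint/sm_holding    # hypothetical holding directory

    while true
    do
        # Copy the current SM entries into the holding directory so that
        # they travel with the package file system on failover.
        cp -p ${SM_DIR}/* ${HOLDING_DIR} 2>/dev/null
        sleep ${PROPAGATE_INTERVAL}
    done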
Supported Configurations
MC/ServiceGuard NFS supports the following configurations:
•Simple failover from an active NFS server node to an idle NFS server
node.
•Failover from one active NFS server node to another active NFS
server node, where the adoptive node supports more than one NFS
package after the failover.
•A host configured as an adoptive node for more than one NFS
package. The host may also be prevented from adopting more than
one failed package at a time.
•Cascading failover, where a package may have up to three adoptive
nodes.
•Server-to-server cross mounting, where one server may mount
another server’s file systems, and the mounts are not interrupted
when one server fails.
These configurations are illustrated in the following sections.
Simple Failover to an Idle NFS Server
Figure 1-1 shows a simple failover from an active NFS server node to an
idle NFS server node.
Figure 1-1  Simple Failover to an Idle NFS Server
(Diagram: before failover, Pkg_1 and its disks are served by Node_A while
Node_B is idle; after failover, Pkg_1 runs on Node_B.)
Node_A is the primary node for NFS server package Pkg_1. When Node_A
fails, Node_B adopts Pkg_1. This means that Node_B locally mounts the
file systems associated with Pkg_1 and exports them. Both Node_A and
Node_B must have access to the disks that hold the file systems for
Pkg_1.
Failover from One Active NFS Server to Another
Figure 1-2 shows a failover from one active NFS server node to another
active NFS server node.
Figure 1-2  Failover from One Active NFS Server to Another
(Diagram: before failover, Node_A serves Pkg_1 and Node_B serves Pkg_2,
each with its own disks; after failover, Node_B serves both Pkg_1 and
Pkg_2.)
In Figure 1-2, Node_A is the primary node for Pkg_1, and Node_B is the
primary node for Pkg_2. When Node_A fails, Node_B adopts Pkg_1 and
becomes the server for both Pkg_1 and Pkg_2.
A Host Configured as Adoptive Node for Multiple Packages
Figure 1-3 shows a three-node configuration where one node is the
adoptive node for packages on both of the other nodes. If either Node_A or
Node_C fails, Node_B adopts the NFS server package from that node.
Figure 1-3  A Host Configured as Adoptive Node for Multiple Packages
(Diagram: before failover, Node_A serves Pkg_1 and Node_C serves Pkg_2,
with Node_B configured as the adoptive node for both; after failover of
Node_A, Node_B serves Pkg_1.)
When Node_A fails, Node_B becomes the server for Pkg_1. If Node_C fails,
Node_B will become the server for Pkg_2. Alternatively, you can set the
package control option in the control script, nfs.cntl, to prevent Node_B
from adopting more than one package at a time. With the package
control option, Node_B may adopt the package of the first node that fails,
but if the second node fails, Node_B will not adopt its package. The
package control option prevents a node from becoming overloaded by
adopting too many packages. If an adoptive node becomes overloaded, it
can fail.
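Conceptually, the package control option behaves like the following guard
at the start of package bring-up. This is an illustrative sketch only,
with a hypothetical marker file; it is not the mechanism used by the
shipped nfs.cntl script:

    LOCKFILE=/var/run/nfs_pkg_adopted    # hypothetical per-node marker file

    if [ -f ${LOCKFILE} ]
    then
        # Another NFS package is already running on this node. Exiting
        # non-zero makes the package start fail here, so ServiceGuard
        # tries the next adoptive node instead.
        exit 1
    fi
    touch ${LOCKFILE}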
Cascading Failover with Three Adoptive Nodes
A package may be configured with up to three adoptive nodes. Figure 1-4
shows this configuration. If Node_A fails, Pkg_1 is adopted by Node_B.
However, if Node_B is down, Pkg_1 is adopted by Node_C, and if Node_C is
down, Pkg_1 is adopted by Node_D. The adoptive nodes are listed in the
package configuration file, /etc/cmcluster/nfs/nfs.conf, in the order
in which they will be tried. Note that all four nodes must have access to
the disks for the Pkg_1 file systems.
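The failover order can be seen in the package configuration file. The
following excerpt shows only the node list; the node names match Figure
1-4, and the other parameters of a real nfs.conf are omitted:

    PACKAGE_NAME    Pkg_1
    NODE_NAME       Node_A      # primary node
    NODE_NAME       Node_B      # first adoptive node
    NODE_NAME       Node_C      # second adoptive node
    NODE_NAME       Node_D      # third adoptive node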
Figure 1-4  Cascading Failover with Three Adoptive Nodes
(Diagram: Node_A, Node_B, Node_C, and Node_D all have access to the Pkg_1
disks; Pkg_1 moves to Node_B after Node_A fails, and to Node_C after
Node_B also fails.)
Server-to-Server Cross Mounting
Two NFS server nodes may NFS-mount each other’s file systems and
still act as adoptive nodes for each other’s NFS server packages.
Figure 1-5 illustrates this configuration.
Figure 1-5  Server-to-Server Cross Mounting
(Diagram: before failover, Node_A locally mounts the Pkg_1 disks at
/Pkg_1/local/mountpoint and Node_B locally mounts the Pkg_2 disks at
/Pkg_2/local/mountpoint; each node also NFS-mounts both
/Pkg_1/NFS/mountpoint and /Pkg_2/NFS/mountpoint. After failover of
Node_A, Node_B holds the local mounts for both packages and the NFS
mounts continue.)
The advantage of server-to-server cross-mounting is that every server
has an identical view of the file systems. The disadvantage is that, on the
node where a file system is locally mounted, the file system is accessed
through an NFS mount, which has poorer performance than a local
mount.
Each node NFS-mounts the file systems for both packages. If Node_A
fails, Node_B mounts the filesystem for Pkg_1, and the NFS mounts are
not interrupted.
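An NFS cross-mount of a package file system might look like the following
command, run on Node_B. The relocatable hostname and mount points are
hypothetical; UDP is used to avoid the loopback-TCP hang described under
“Limitations of MC/ServiceGuard NFS”:

    # NFS-mount the Pkg_1 file system using the package's relocatable hostname
    mount -o proto=udp,hard,nointr pkg1_relocatable:/Pkg_1/local/mountpoint /Pkg_1/NFS/mountpoint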
How the Control and Monitor Scripts Work
As with all ServiceGuard packages, the control script starts and stops
the NFS package and determines how the package operates when it is
available on a particular node. On 11i v1 and 11i v2, the control script
(hanfs.sh) contains three sets of code that run depending on the
parameter with which you call the script: start, stop, or
file_lock_migration. On 11.0, there are two sets of code, called with the
start or stop parameter.
Starting the NFS Services
When called with the start parameter, the control script does the
following:
•Activates the volume group or volume groups associated with the
package.
•Mounts each file system associated with the package.
•Initiates the NFS monitor script to check periodically on the health
of NFS services, if you have configured your NFS package to use the
monitor script.
•Exports each file system associated with the package so that it can
later be NFS-mounted by clients.
•Assigns a package IP address to the LAN card on the current node.
After this sequence, the NFS server is active, and clients can NFS-mount
the exported file systems associated with the package.
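In outline, the start sequence corresponds to commands like the following.
This is a minimal sketch only; the device, mount point, IP address, and
subnet values are illustrative, and the shipped nfs.cntl and hanfs.sh
scripts drive these steps from configuration variables:

    vgchange -a e /dev/vg_pkg1                         # activate the volume group (exclusive)
    mount /dev/vg_pkg1/lvol1 /Pkg_1/local/mountpoint   # mount a package file system
    /etc/cmcluster/nfs/nfs.mon &                       # start the monitor script, if configured
    exportfs -i /Pkg_1/local/mountpoint                # export the file system to NFS clients
    cmmodnet -a -i 192.10.25.12 192.10.25.0            # assign the package IP address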
Starting File Lock Migration
If you call the control script with the file_lock_migration parameter
after enabling the File Lock Migration feature, the control script does the
following:
•Populates the /var/statmon/sm directory with the Status Monitor
entries from the configured holding directory of the package, and
subsequently removes the entries from the holding directory.
•Kills any running copy of the NFS File Lock Recovery
synchronization script.
•Halts the rpc.statd and rpc.lockd daemons to release file locks so
that file systems can be unmounted. If the server is also an NFS
client, it loses the NFS file locks obtained by client-side processes
when these daemons are killed.
•Restarts the rpc.statd and rpc.lockd daemons so that these
daemons can manage file locks for other NFS packages running on
the server. Restarting these daemons also triggers a crash recovery
notification event, whereby rpc.statd sends crash notification
messages to all clients listed in the /var/statmon/sm directory.
•Starts the File Lock Migration synchronization script, which
periodically copies the /var/statmon/sm directory entries to the
holding directory.
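The file_lock_migration sequence corresponds roughly to the following
commands. This is a simplified sketch; the holding-directory path is
hypothetical, and the daemon handling in the shipped hanfs.sh is more
careful than shown here:

    HOLDING_DIR=/Pkg_1/local/mountpoint/sm_holding     # hypothetical holding directory

    cp -p ${HOLDING_DIR}/* /var/statmon/sm             # repopulate the SM directory
    rm -f ${HOLDING_DIR}/*                             # then clear the holding directory

    # Kill and restart rpc.statd and rpc.lockd; restarting rpc.statd sends
    # crash-recovery notifications to every client listed in /var/statmon/sm.
    kill $(ps -e | awk '$NF ~ /rpc.statd|rpc.lockd/ {print $1}')
    /usr/sbin/rpc.statd
    /usr/sbin/rpc.lockd

    /etc/cmcluster/nfs/nfs.flm &                       # restart the synchronization loop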
Halting the NFS Services
When called with the stop parameter, the control script does the
following:
•Removes the package IP address from the LAN card on the current
node.
•Un-exports all file systems associated with the package so that they
can no longer be NFS-mounted by clients.
•Halts the monitor process.
•Halts the File Lock Migration synchronization script if you enable
the File Lock Migration feature (available on 11i v1 and 11i v2).
•Halts the rpc.statd and rpc.lockd daemons to release file locks so
that file systems can be unmounted. If the server is also an NFS
client, it loses the NFS file locks obtained by client-side processes
when these daemons are killed.
•Restarts the rpc.statd and rpc.lockd daemons so that these
daemons can manage file locks for other NFS packages running on
the server.
•Unmounts each file system associated with the package.
•Deactivates each volume group associated with the package.
After this sequence, the NFS package is inactive on the current node and
may start up on an alternate node or be restarted later on the same node.
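In outline, the stop sequence reverses the start sequence, using commands
like the following illustrative sketch (same hypothetical values as in the
start sketch above; process lookups are simplified):

    cmmodnet -r -i 192.10.25.12 192.10.25.0            # remove the package IP address
    exportfs -u /Pkg_1/local/mountpoint                # un-export the file system
    kill $(ps -e | awk '$NF == "nfs.mon" {print $1}')  # halt the monitor process
    kill $(ps -e | awk '$NF ~ /rpc.statd|rpc.lockd/ {print $1}')
    /usr/sbin/rpc.statd                                # restart the daemons for other packages
    /usr/sbin/rpc.lockd
    umount /Pkg_1/local/mountpoint                     # unmount the package file system
    vgchange -a n /dev/vg_pkg1                         # deactivate the volume group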
Monitoring the NFS Services
The monitor script /etc/cmcluster/nfs/nfs.mon works by periodically
checking the status of NFS services using the rpcinfo command. If any
service fails to respond, the script exits, causing a switch to an adoptive
node. The monitor script provides the ability to monitor the rpc.statd,
rpc.lockd, nfsd, rpc.mountd, rpc.pcnfsd, and nfs.flm processes. You
can monitor any or all of these processes as follows:
•To monitor the rpc.statd, rpc.lockd, and nfsd processes, you must
set the NFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf
file. If one nfsd process dies or is killed, the package fails over, even
if other nfsd processes are running.
•To monitor the rpc.mountd process, you must set the START_MOUNTD
variable to 1 in the /etc/rc.config.d/nfsconf file, so that
rpc.mountd is started at system boot rather than by inetd.
•To monitor the rpc.pcnfsd process, you must set the PCNFS_SERVER
variable to 1 in the /etc/rc.config.d/nfsconf file.
•To monitor the nfs.flm process, you must enable the File Lock
Migration feature. Monitor this process with the ps command, not
with the rpcinfo command. If you enable the File Lock Migration
feature, ensure that the monitor script name is unique for each
package (for example, nfs1.mon).
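The checks the monitor loop performs are of the following kind. This is a
minimal sketch; the real nfs.mon script adds retry handling and builds its
service list from the settings described above:

    # Ping each monitored RPC service; any dead service forces a failover.
    for SVC in status nlockmgr nfs mountd
    do
        rpcinfo -u localhost ${SVC} > /dev/null 2>&1 || exit 1
    done

    # The File Lock Migration script is checked with ps, not rpcinfo.
    ps -e | grep -q nfs.flm || exit 1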
The default NFS control script, hanfs.sh, does not invoke the monitor
script. You do not have to run the NFS monitor script to use
MC/ServiceGuard NFS. If the NFS package configuration file specifies
AUTO_RUN YES and LOCAL_LAN_FAILOVER YES (the defaults), the
package switches to the next adoptive node or to a standby network
interface in the event of a node or network failure. However, if one of the
NFS services goes down while the node and network remain up, you need
the NFS monitor script to detect the problem and to switch the package
to an adoptive node.
Whenever the monitor script detects an event, it logs the event. Each
NFS package has its own log file. This log file is named according to the
NFS control script, nfs.cntl, by adding a .log extension. For example,
if your control script is called /etc/cmcluster/nfs/nfs1.cntl, the log
file is called /etc/cmcluster/nfs/nfs1.cntl.log.
TIP: You can specify the number of retry attempts for all these processes in
the nfs.mon file.
On the Client Side
The client should NFS-mount a file system using the package name in
the mount command. The package name is associated with the package’s
relocatable IP address. On client systems, be sure to use a hard mount
and set the proper retry values for the mount. Alternatively, set the
proper timeout for automounter. The timeout should be greater than the
total end-to-end recovery time for the MC/ServiceGuard NFS
package—that is, running fsck, mounting file systems, and exporting
file systems on the new node. (With journalled file systems, this time
should be between one and two minutes.) Setting the timeout to a value
greater than the recovery time allows clients to reconnect to the file
system after it returns to the cluster on the new node.
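A client-side hard mount of a highly available file system might look like
the following; the relocatable hostname, export path, and mount point are
hypothetical:

    # Hard mount with interrupts disabled so applications wait through a failover
    mount -o hard,nointr nfs_pkg1:/Pkg_1/local/mountpoint /nfs/pkg1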
2  Installing and Configuring MC/ServiceGuard NFS
This chapter explains how to configure MC/ServiceGuard NFS. You must
set up your MC/ServiceGuard cluster before you can configure
MC/ServiceGuard NFS. For instructions on setting up an
MC/ServiceGuard cluster, see the Managing MC/ServiceGuard manual.
This chapter contains the following sections:
•“Installing MC/ServiceGuard NFS”
•“Monitoring NFS/TCP Services with MC/ServiceGuard NFS Toolkit”
Installing MC/ServiceGuard NFS
To enable the File Lock Migration feature (available with 11i v1 and
11i v2), you need MC/ServiceGuard A.11.15.
To ensure that the File Lock Migration feature functions properly, install
HP-UX 11i v1 NFS General Release and Performance Patch,
PHNE_26388 (or a superseding patch). For HP-UX 11i v2, the feature
functions properly without a patch.
Use the HP-UX Software Distributor (SD) to install the MC/SG NFS file
set. The following command starts the SD swinstall utility:
/usr/sbin/swinstall
The Software Distributor is documented in Managing HP-UX Software
with SD-UX.
The files are installed in the /opt/cmcluster/nfs directory.
The following files are part of the toolkit:
README      Description of the toolkit contents
hanfs.sh    The NFS-specific control script
nfs.mon     The monitor script
nfs_xmnt    A script for handling cross-mounted NFS server packages
nfs.flm     The file lock migration script (available with 11i v1 and 11i v2)
NOTE: If the MC/ServiceGuard NFS package has previously been installed, the
files are in /opt/cmcluster/nfs. Use swremove to remove these files
before installing the latest version of MC/ServiceGuard NFS.
To run the toolkit, you need the following files, which are part of
MC/ServiceGuard:
nfs.cntl    The control script that runs and halts the package
nfs.conf    The package configuration file
You can create these two files by running the cmmakepkg command.
Perform the following steps to set up the directory for configuring
MC/ServiceGuard NFS:
NOTE: You may want to save any existing MC/ServiceGuard NFS configuration
file before executing these steps.
1. Run the following command to create the package configuration
template file:
cmmakepkg -p /opt/cmcluster/nfs/nfs.conf
2. Run the following command to create the package control template
file:
cmmakepkg -s /opt/cmcluster/nfs/nfs.cntl
3. Create a directory, /etc/cmcluster/nfs.
4. Run the following command to copy the MC/ServiceGuard NFS
template files to the newly created /etc/cmcluster/nfs directory:
cp /opt/cmcluster/nfs/* /etc/cmcluster/nfs
Monitoring NFS/TCP Services with MC/ServiceGuard NFS Toolkit
In addition to monitoring NFS/UDP services, you can monitor NFS/TCP
services with MC/ServiceGuard NFS Toolkit on HP-UX 11.x. For HP-UX
11.0, you need at least MC/ServiceGuard NFS Toolkit A.11.00.03 to
monitor NFS/TCP services. All versions of MC/ServiceGuard NFS
Toolkit for HP-UX 11i v1 and v2 can monitor NFS/TCP services.
IMPORTANT: You must enable NFS/TCP on HP-UX 11.0 for both client and server.
TCP is the default transport mode on HP-UX 11i v1 and 11i v2 and thus
does not need to be enabled on those systems.
Use the following steps to enable NFS/TCP on HP-UX 11.0:
Step 1. Run the configuration command /usr/sbin/setoncenv NFS_TCP 1
Step 2. Stop the NFS client with /sbin/init.d/nfs.client stop
Step 3. Stop the NFS server with /sbin/init.d/nfs.server stop
Step 4. Start the NFS server with /sbin/init.d/nfs.server start
Step 5. Start the NFS client with /sbin/init.d/nfs.client start
From the NFS client, use the mount -o proto=tcp command to
establish a TCP only connection. The mount fails if TCP is not available
on the NFS server.
From the NFS client, use the mount -o proto=udp command to
establish a UDP only connection. The mount fails if UDP is not available
on the NFS server.
To verify you are monitoring NFS/TCP services, run nfsstat -m. A
return of proto=tcp means you are monitoring NFS/TCP services. A
return of proto=udp means you are monitoring NFS/UDP services.
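For example, from a client (the relocatable hostname and mount point are
hypothetical):

    mount -o proto=tcp,hard nfs_pkg1:/Pkg_1/local/mountpoint /nfs/pkg1
    nfsstat -m      # the Flags line for /nfs/pkg1 should show proto=tcp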
Use the following steps to disable NFS/TCP functionality on HP-UX 11.0:
Step 1. Enter /usr/sbin/setoncenv NFS_TCP 0 at the command line to set
the NFS_TCP variable in the /etc/rc.config.d/nfsconf file to 0.
Step 2. Stop the NFS client with /sbin/init.d/nfs.client stop
Step 3. Stop the NFS server with /sbin/init.d/nfs.server stop
Step 4. Start the NFS server with /sbin/init.d/nfs.server start
Step 5. Start the NFS client with /sbin/init.d/nfs.client start
After completing the preceding procedure, NFS will establish only UDP
connections on HP-UX 11.0.