trademarks of Compaq Information Technologies Group, L.P.
Microsoft and Windows are trademarks of Microsoft Corporation. UNIX and The Open Group are
trademarks of The Open Group. All other product names mentioned herein may be trademarks or
registered trademarks of their respective companies.
Confidential computer software. Valid license from Compaq required for possession, use, or copying.
Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor’s standard commercial license.
Compaq shall not be liable for technical or editorial errors or omissions contained herein. The information
in this publication is subject to change without notice and is provided "as is" without warranty of any
kind. The entire risk arising out of the use of this information remains with recipient. In no event shall
Compaq be liable for any direct, consequential, incidental, special, punitive, or other damages whatsoever
(including without limitation, damages for loss of business profits, business interruption or loss of business
information), even if Compaq has been advised of the possibility of such damages. The foregoing shall
apply regardless of the negligence or other fault of either party and regardless of whether such liability
sounds in contract, negligence, tort, or any other theory of legal liability, and notwithstanding any failure of
essential purpose of any limited remedy.
The limited warranties for Compaq products are exclusively set forth in the documentation accompanying
such products. Nothing herein should be construed as constituting a further or additional warranty.
About This Manual
This manual describes how to set up and maintain the hardware
configuration for a TruCluster Server cluster.
Audience
This manual is for system administrators who will set up and configure the
hardware before installing the TruCluster Server software. The manual
assumes that you are familiar with the tools and methods needed to
maintain your hardware, operating system, and network.
Organization
This manual contains ten chapters and an index. It has been reorganized to
provide a more streamlined presentation. The chapters covering SCSI bus
requirements and configuration, and hardware setup, have been split into two
sets of two chapters each. One set covers UltraSCSI hardware and is geared
toward radial configurations. The other set covers configurations that use
either external termination or radial connection to non-UltraSCSI devices.
A brief description of the contents follows:
Chapter 1    Introduces the TruCluster Server product and provides an overview
             of setting up TruCluster Server hardware.
Chapter 2    Describes hardware requirements and restrictions.
Chapter 3    Contains information about setting up a shared SCSI bus, SCSI
             bus requirements, and how to connect storage to a shared SCSI
             bus using the latest UltraSCSI products (DS-DWZZH UltraSCSI
             hubs, HSZ70 and HSZ80 RAID array controllers).
Chapter 4    Describes how to prepare systems for a TruCluster Server
             configuration, and how to connect host bus adapters to shared
             storage using the DS-DWZZH UltraSCSI hubs and the newest
             RAID array controllers (HSZ70 and HSZ80).
Chapter 5    Describes how to set up the Memory Channel cluster interconnect.
Chapter 6    Provides an overview of Fibre Channel and describes how
             to set up Fibre Channel hardware.
Chapter 7    Provides information on the use of, and installation of,
             Asynchronous Transfer Mode (ATM) hardware.
Chapter 8    Describes how to configure a shared SCSI bus for tape drive,
             tape loader, or tape library usage.
Chapter 9    Contains information about setting up a shared SCSI bus, SCSI bus
             requirements, and how to connect storage to a shared SCSI bus using
             external termination or radial connections to non-UltraSCSI devices.
Chapter 10   Describes how to prepare systems for a TruCluster Server
             configuration, and how to connect host bus adapters to shared
             storage using external termination or radial connection to
             non-UltraSCSI devices.
Related Documents
Users of the TruCluster Server product can consult the following manuals for
assistance in cluster installation, administration, and programming tasks:
• TruCluster Server Software Product Description (SPD) — The
comprehensive description of the TruCluster Server Version 5.0A
product. You can find the latest version of the SPD and other TruCluster
Server documentation at the following URL:
The Golden Eggs Visual Configuration Guide provides configuration
diagrams of workstations, servers, storage components, and clustered
systems. It is available on line in PostScript and Portable Document Format
(PDF) formats at:
http://www.compaq.com/info/golden-eggs
At this URL you will find links to individual system, storage, or cluster
configurations. You can order the document through the Compaq Literature
Order System (LOS) as order number EC-R026B-36.
In addition, you should have available the following manuals from the Tru64
UNIX documentation set:
• Installation Guide
• Release Notes
• System Administration
• Network Administration
You should also have the hardware documentation for the systems, SCSI
controllers, disk storage shelves or RAID controllers, and any other
hardware you plan to install.
Documentation for the following optional software products will be useful if
you intend to use these products with TruCluster Server:
• Compaq Analyze (DS20 and ES40)
• DECevent™ (AlphaServers other than the DS20 and ES40)
• Logical Storage Manager (LSM)
• NetWorker
• Advanced File System (AdvFS) Utilities
• Performance Manager
Reader’s Comments
Compaq welcomes any comments and suggestions you have on this and
other Tru64 UNIX manuals.
A Reader’s Comment form is located in the back of each printed manual.
The form is postage paid if you mail it in the United States.
Please include the following information along with your comments:
• The full title of the book and the order number. (The order number is
printed on the title page of this book and on its back cover.)
• The section numbers and page numbers of the information on which
you are commenting.
• The version of Tru64 UNIX that you are using.
• If known, the type of processor that is running the Tru64 UNIX software.
The Tru64 UNIX Publications group cannot respond to system problems
or technical support inquiries. Please address technical questions to your
local system vendor or to the appropriate Compaq technical support office.
Information provided with the software media explains how to send problem
reports to Compaq.
Conventions
The following typographical conventions are used in this manual:
#            A number sign represents the superuser prompt.

% cat        Boldface type in interactive examples indicates
             typed user input.

file         Italic (slanted) type indicates variable values,
             placeholders, and function argument names.

.
.
.            A vertical ellipsis indicates that a portion of an
             example that would normally be present is not
             shown.

cat(1)       A cross-reference to a reference page includes
             the appropriate section number in parentheses.
             For example, cat(1) indicates that you can find
             information on the cat command in Section 1 of
             the reference pages.

cluster      Bold text indicates a term that is defined in the
             glossary.
1 Introduction
This chapter introduces the TruCluster Server product and some basic
cluster hardware configuration concepts.
Subsequent chapters describe how to set up and maintain TruCluster Server
hardware configurations. See the TruCluster Server Software Installation
manual for information about software installation; see the TruCluster
Server Cluster Administration manual for detailed information about setting
up member systems and highly available applications.
1.1 The TruCluster Server Product
TruCluster Server, the newest addition to the Compaq Tru64 UNIX
TruCluster Software products family, extends single-system management
capabilities to clusters. It provides a clusterwide namespace for files and
directories, including a single root file system that all cluster members
share. It also offers a cluster alias for the Internet protocol suite (TCP/IP) so
that a cluster appears as a single system to its network clients.
TruCluster Server preserves the availability and performance features found
in the earlier TruCluster products:
• Like the TruCluster Available Server Software and TruCluster
Production Server products, TruCluster Server lets you deploy highly
available applications that have no embedded knowledge that they are
executing in a cluster. They can access their disk data from any member
in the cluster.
• Like the TruCluster Production Server Software product, TruCluster
Server lets you run components of distributed applications in parallel,
providing high availability while taking advantage of cluster-specific
synchronization mechanisms and performance optimizations.
TruCluster Server augments the feature set of its predecessors by allowing
all cluster members access to all file systems and all storage in the cluster,
regardless of where they reside. From the viewpoint of clients, a TruCluster
Server cluster appears to be a single system; from the viewpoint of a system
administrator, a TruCluster Server cluster is managed as if it were a single
system. Because TruCluster Server has no built-in dependencies on the
architectures or protocols of its private cluster interconnect or shared storage
interconnect, you can more easily alter or expand your cluster’s hardware
configuration as newer and faster technologies become available.
1.2 Overview of the TruCluster Server Hardware
Configuration
A TruCluster Server hardware configuration consists of a number of highly
specific hardware components:
• TruCluster Server currently supports from one to eight member systems.
• There must be enough internal and external SCSI controllers, Fibre
Channel host bus adapters, and disks to provide sufficient storage for
the applications.
• The clusterwide root (/), /usr, and /var file systems should be on
a shared SCSI bus. We recommend placing all member system boot
disks on a shared SCSI bus. If you have a quorum disk, it must be on
a shared SCSI bus.
_____________________Note_____________________
The clusterwide root (/), /usr, and /var file systems, the
member system boot disks, and the quorum disk may be
located behind a RAID array controller, including the HSG80
controller (Fibre Channel).
• You need to allocate a number of Internet Protocol (IP) addresses from
one IP subnet to allow client access to the cluster. The IP subnet must
be visible to the clients, either directly or through routers. The minimum
number of allocated addresses is equal to the number of cluster member
systems plus one (for the cluster alias), depending on the type of cluster
alias configuration.
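For example, under that minimum, a two-member cluster with a single
cluster alias needs at least three addresses from the subnet (one per
member plus one for the alias), and an eight-member cluster with a single
alias needs at least nine.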
For client access, TruCluster Server allows you to configure any number
of monitored network adapters, using the redundant array of independent
network adapters (NetRAIN) and Network Interface Failure Finder (NIFF)
facilities of the Tru64 UNIX operating system.
• TruCluster Server requires at least one peripheral component
interconnect (PCI) Memory Channel adapter on each system. The
Memory Channel adapters comprise the cluster interconnect for
TruCluster Server, providing host-to-host communications. For a cluster
with two systems, a Memory Channel hub is optional; the Memory
Channel adapters can be connected with a cable.
If there are more than two systems in the cluster, a Memory Channel
hub is required. The Memory Channel hub is a PC-class enclosure that
contains up to eight linecards. The Memory Channel adapter in each
system in the cluster is connected to the Memory Channel hub.
One or two Memory Channel adapters can be used with TruCluster
Server. When dual Memory Channel adapters are installed and the
adapter being used for cluster communication fails, communication fails
over to the other Memory Channel adapter.
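Chapter 5 describes Memory Channel installation and diagnostics in
detail. As a rough sketch of how the interconnect is verified, the mc_cable
console diagnostic is run on each member system once the cables (or hub
linecards) are connected; treat the exact invocation and its output as
defined by that chapter rather than by this sketch:

   >>> mc_cable
   (run from the SRM console on each member system)

Each system reports when it detects the other end of the Memory Channel
connection.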
1.3 Memory Requirements
Cluster members require a minimum of 128 MB of memory.
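You can confirm the installed memory before installing the software. A
minimal check, assuming an AlphaServer SRM console and, for a running
system, Tru64 UNIX Version 5 (output formats vary):

   >>> show memory
   (from the SRM console)

   # vmstat -P
   (from a running Tru64 UNIX system)

Both commands report total physical memory, which must be at least
128 MB on each member.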
1.4 Minimum Disk Requirements
This section provides an overview of the minimum file system or disk
requirements for a two-node cluster. For more information on the amount
of space required for each required cluster file system, see the TruCluster
Server Software Installation manual.
1.4.1 Disks Needed for Installation
You need to allocate disks for the following uses:
• One or more disks to hold the Tru64 UNIX operating system. The disk(s)
are either private disk(s) on the system that will become the first cluster
member, or disk(s) on a shared bus that the system can access.
• One or more disks on a shared SCSI bus to hold the clusterwide root (/),
/usr, and /var AdvFS file systems.
• One disk per member, normally on a shared SCSI bus, to hold member
boot partitions.
• Optionally, one disk on a shared SCSI bus to act as the quorum disk.
See Section 1.4.1.4; for a more detailed discussion of the quorum disk,
see the TruCluster Server Cluster Administration manual.
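Before allocating these disks, it helps to list the disks that each system
can see. A minimal check on a Tru64 UNIX Version 5 system (output
details vary with your hardware):

   # hwmgr -view devices

The listing shows each dsk device name with its bus, target, and LUN,
which you can use to confirm that the clusterwide, member boot, and
quorum disks are on the intended shared SCSI bus.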
The following sections provide more information about these disks.
Figure 1–1 shows a generic two-member cluster with the required file
systems.
1.4.1.1 Tru64 UNIX Operating System Disk
The Tru64 UNIX operating system is installed using AdvFS file systems on
one or more disks on the system that will become the first cluster member.
For example:
The operating system disk (Tru64 UNIX disk) cannot be used as a
clusterwide disk, a member boot disk, or as the quorum disk.
Because the Tru64 UNIX operating system will be available on the first
cluster member, in an emergency, after shutting down the cluster, you have
the option of booting the Tru64 UNIX operating system and attempting to
fix the problem. See the TruCluster Server Cluster Administration manual
for more information.
1.4.1.2 Clusterwide Disk(s)
When you create a cluster, the installation scripts copy the Tru64 UNIX
root (/), /usr, and /var file systems from the Tru64 UNIX disk to the disk
or disks you specify.
We recommend that the disk or disks used for the clusterwide file systems
be placed on a shared SCSI bus so that all cluster members have access to
these disks.
During the installation, you supply the disk device names and partitions
that will contain the clusterwide root (/), /usr, and /var file systems. For
example, dsk3b, dsk4c, and dsk3g:
The /var fileset cannot share the cluster_usr domain, but must be a
separate domain, cluster_var. Each AdvFS file system must be a separate
partition; the partitions do not have to be on the same disk.
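The installation scripts create these domains and filesets for you; the
following sketch only illustrates the resulting domain and fileset layout,
using the example partitions above. Do not run these commands by hand
during installation, and treat the cluster_root domain name as an
assumption based on the cluster_usr and cluster_var names used in this
section:

   # mkfdmn /dev/disk/dsk3b cluster_root   # root (/) domain on its own partition
   # mkfset cluster_root root
   # mkfdmn /dev/disk/dsk4c cluster_usr    # /usr domain
   # mkfset cluster_usr usr
   # mkfdmn /dev/disk/dsk3g cluster_var    # /var domain; cannot share cluster_usr
   # mkfset cluster_var var

Each file system gets its own AdvFS domain on its own partition, which is
the constraint described above.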
If any partition on a disk is used by a clusterwide file system, only
clusterwide file systems can be on that disk. A disk containing a clusterwide
file system cannot also be used as the member boot disk or as the quorum
disk.
1.4.1.3 Member Boot Disk
Each member has a boot disk. A boot disk contains that member’s boot,
swap, and cluster-status partitions. For example, dsk1 is the boot disk for
the first member and dsk2 is the boot disk for the second member:
The installation scripts reformat each member’s boot disk to contain three
partitions: an a partition for that member’s root (/) file system, a b partition
for swap, and an h partition for cluster status information. (There are no
/usr or /var file systems on a member’s boot disk.)
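To verify a member boot disk's layout after installation, you can display
its disk label. A minimal check (dsk1 is the example first-member boot
disk from above; the exact label output varies):

   # disklabel -r dsk1

The label should show the three partitions just described: a (member
root), b (swap), and h (cluster status).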
A member boot disk cannot contain one of the clusterwide root (/), /usr,
and /var file systems. Also, a member boot disk cannot be used as the
quorum disk. A member disk can contain more than the three required
partitions. You can move the swap partition off the member boot disk. See
the TruCluster Server Cluster Administration manual for more information.
1.4.1.4 Quorum Disk
The quorum disk allows greater availability for clusters consisting of two
members. Its h partition contains cluster status and quorum information.
See the TruCluster Server Cluster Administration manual for a discussion of
how and when to use a quorum disk.
The following restrictions apply to the use of a quorum disk:
• A cluster can have only one quorum disk.
• The quorum disk should be on a shared bus to which all cluster members
are directly connected. If it is not, members that do not have a direct
connection to the quorum disk may lose quorum before members that
do have a direct connection to it.
• The quorum disk must not contain any data. The clu_quorum command
will overwrite existing data when initializing the quorum disk. The
integrity of data (or file system metadata) placed on the quorum disk
from a running cluster is not guaranteed across member failures.
This means that the member boot disks and the disk holding the
clusterwide root (/) cannot be used as quorum disks.
• The quorum disk can be small. The cluster subsystems use only 1 MB
of the disk.
• A quorum disk can have either 1 vote or no votes. In general, a quorum
disk should always be assigned a vote. You might assign an existing
quorum disk no votes in certain testing or transitory configurations,
such as a one-member cluster (in which a voting quorum disk introduces
a second point of failure).
• You cannot use the Logical Storage Manager (LSM) on the quorum disk.
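For reference, the clu_quorum command mentioned above is also the tool
that adds a quorum disk to a running cluster. A hedged sketch (the device
name dsk5 and the single vote are illustrative assumptions; check the
clu_quorum(8) reference page for the exact syntax before using it):

   # clu_quorum -d add dsk5 1   # dsk5 and the one vote are example values

Remember that initializing the quorum disk overwrites any data on it.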
1.5 Generic Two-Node Cluster
This section describes a generic two-node cluster with the minimum disk
layout of four disks. Note that additional disks may be needed for highly
available applications. In this section and the following sections, the type
of PCI SCSI bus adapter is not significant. Also, although they are important
considerations, SCSI bus cabling (including Y cables or trilink connectors),
termination, and the use of UltraSCSI hubs are not considered at this time.
Figure 1–1 shows a generic two-node cluster with the minimum number
of disks:
• Tru64 UNIX disk
• Clusterwide root (/), /usr, and /var
• Member 1 boot disk
• Member 2 boot disk
A minimum configuration cluster may have reduced availability due to the
lack of a quorum disk. As shown, with only two member systems, both
systems must be operational to achieve quorum and form a cluster. If only
one system is operational, it will loop, waiting for the second system to boot
before a cluster can be formed. If one system crashes, you lose the cluster.
Figure 1–1: Two-Node Cluster with Minimum Disk Configuration and No
Quorum Disk

[Figure ZK-1587U-AI: Member System 1 and Member System 2, each with a
PCI SCSI adapter, are connected by a Memory Channel interconnect and a
network. A shared SCSI bus connects both members to the cluster file
system disks (root (/), /usr, /var) and to each member's boot disk (root (/)
and swap); the Tru64 UNIX disk is also shown.]
Figure 1–2 shows the same generic two-node cluster as shown in Figure 1–1,
but with the addition of a quorum disk. By adding a quorum disk, a cluster
may be formed if both systems are operational, or if either of the systems
and the quorum disk is operational. This cluster has a higher availability
than the cluster shown in Figure 1–1. See the TruCluster Server Cluster
Administration manual for a discussion of how and when to use a quorum
disk.
Figure 1–2: Generic Two-Node Cluster with Minimum Disk Configuration
and Quorum Disk

[Figure ZK-1588U-AI: the same two-member configuration as Figure 1–1,
with a quorum disk added to the shared SCSI bus.]
1.6 Growing a Cluster from Minimum Storage to a NSPOF
Cluster
The following sections take a progression of clusters from a cluster with
minimum storage to a no-single-point-of-failure (NSPOF) cluster, that is,
a cluster in which one hardware failure will not interrupt the cluster
operation:
• A cluster with minimum storage for highly available applications
(Section 1.6.1).
• A cluster with more storage, but the single SCSI bus is a single point
of failure (Section 1.6.2).
• Adding a second SCSI bus allows the use of LSM to mirror the /usr and
/var file systems and data disks. However, because LSM cannot mirror the
root (/), member system boot, swap, or quorum disks, full redundancy
is not achieved (Section 1.6.3).
• Using a RAID array controller in transparent failover mode allows the
use of hardware RAID to mirror the disks. However, without a second
SCSI bus, second Memory Channel, and redundant networks, this
configuration is still not a NSPOF cluster (Section 1.6.4).
• By using an HSZ70, HSZ80, or HSG80 with multiple-bus failover enabled,
you can use two shared SCSI buses to access the storage. Hardware
RAID is used to mirror the root (/), /usr, and /var file systems, and
the member system boot disks, data disks, and quorum disk (if used).
A second Memory Channel, redundant networks, and redundant power
must also be installed to achieve a NSPOF cluster (Section 1.6.5).
1.6.1 Two-Node Clusters Using an UltraSCSI BA356 Storage Shelf
and Minimum Disk Configurations
This section takes the generic illustrations of our cluster example one step
further by depicting the required storage in storage shelves. The storage
shelves could be BA350, BA356 (non-UltraSCSI), or UltraSCSI BA356s.
The BA350 is the oldest model and can respond only to SCSI IDs 0-6. The
non-Ultra BA356 can respond to SCSI IDs 0-6 or 8-14 (see Section 3.2). The
UltraSCSI BA356 also responds to SCSI IDs 0-6 or 8-14, but it can operate
at UltraSCSI speeds (see Section 3.2).
Figure 1–3 shows a TruCluster Server configuration using an UltraSCSI
BA356 storage unit. The DS-BA35X-DA personality module used in the
UltraSCSI BA356 storage unit is a differential-to-single-ended signal
converter, and therefore accepts differential inputs.
______________________Note_______________________
The figures in this section are generic drawings and do not show
shared SCSI bus termination, cable names, and so forth.