
Parallel Database Cluster Model PDC/O5000 for Oracle8i and Windows 2000
Administrator Guide
Second Edition (June 2001) Part Number 225081-002 Compaq Computer Corporation

Notice

© 2001 Compaq Computer Corporation
Compaq, the Compaq logo, Compaq Insight Manager, SmartStart, ROMPaq, ProLiant, and StorageWorks Registered in U.S. Patent and Trademark Office. ActiveAnswers is a trademark of Compaq Information Technologies Group, L.P. in the United States and other countries.
Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States and other countries.
All other product names mentioned herein may be trademarks of their respective companies.
Compaq shall not be liable for technical or editorial errors or omissions contained herein. The information in this document is provided “as is” without warranty of any kind and is subject to change without notice. The warranties for Compaq products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as constituting an additional warranty.
Parallel Database Cluster Model PDC/O5000 for Oracle8i and Windows 2000 Second Edition (June 2001) Part Number 225081-002

Contents

About This Guide
Purpose .................................................................................................................... xiii
Audience.................................................................................................................. xiii
Scope ........................................................................................................................xiv
Referenced Manuals ..................................................................................................xv
Supplemental Documents .........................................................................................xvi
Text Conventions......................................................................................................xvi
Symbols in Text.......................................................................................................xvii
Symbols on Equipment............................................................................................xvii
Rack Stability ........................................................................................................ xviii
Getting Help .......................................................................................................... xviii
Compaq Technical Support ...............................................................................xix
Compaq Website................................................................................................xix
Compaq Authorized Reseller..............................................................................xx
Chapter 1
Clustering Overview
Clusters Defined ...................................................................................................... 1-2
Availability .............................................................................................................. 1-4
Scalability ................................................................................................................ 1-4
Compaq Parallel Database Cluster Overview.......................................................... 1-5
Chapter 2
Cluster Architecture
Compaq ProLiant Servers........................................................................................ 2-2
High-Availability Features of ProLiant Servers ............................................... 2-3
Shared Storage Components.................................................................................... 2-3
MA8000/EMA12000 Storage Subsystem ........................................................ 2-4
HSG80 Array Controller .................................................................................. 2-4
Fibre Channel SAN Switch .............................................................................. 2-5
Storage Hub ...................................................................................................... 2-6
KGPSA-BC and KGPSA-CB Host Adapter..................................................... 2-6
Gigabit Interface Converter-Shortwave ............................................................ 2-7
Fibre Channel Cables........................................................................................ 2-7
Configuring and Cabling the MA8000/EMA12000 Storage Subsystem
Components ............................................................................................................. 2-7
Configuring LUNS for Storagesets................................................................... 2-7
SCSI Cabling Examples.................................................................................... 2-8
UltraSCSI Cables.............................................................................................. 2-9
Using I/O Modules in the Controller Enclosure ...............................................2-9
Connecting EMUs Between MA8000/EMA12000 Storage Subsystems........ 2-12
I/O Path Configurations for Redundant Fibre Channel Fabrics ............................. 2-13
Overview of Fibre Channel Fabric SAN Topology ........................................ 2-13
Redundant Fibre Channel Fabrics................................................................... 2-13
Multiple Redundant Fibre Channel Fabrics.................................................... 2-15
Maximum Distances Between Nodes and Shared Storage Subsystem
Components .................................................................................................... 2-16
I/O Data Paths in a Redundant Fibre Channel Fabric ..................................... 2-17
I/O Path Definitions for Redundant Fibre Channel Fabrics............................ 2-20
I/O Path Configuration Examples for Redundant Fibre Channel Fabrics....... 2-21
Summary of I/O Path Failure and Failover Scenarios for Redundant Fibre
Channel Fabrics ..............................................................................................2-25
I/O Path Configurations for Redundant Fibre Channel Arbitrated Loops.............. 2-26
Overview of FC-AL SAN Topology............................................................... 2-26
Redundant Fibre Channel Arbitrated Loops ................................................... 2-26
Multiple Redundant Fibre Channel Arbitrated Loops..................................... 2-28
Maximum Distances Between Nodes and Shared Storage Subsystem
Components .................................................................................................... 2-30
I/O Data Paths in a Redundant FC-AL ...........................................................2-32
I/O Path Definitions for Redundant FC-ALs .................................................. 2-34
I/O Path Configuration Examples for Redundant FC-ALs............................. 2-35
Summary of I/O Path Failure and Failover Scenarios for Redundant
FC-ALs ........................................................................................................... 2-38
Cluster Interconnect Requirements........................................................................ 2-39
Ethernet Cluster Interconnect..........................................................................2-39
Local Area Network........................................................................................ 2-42
Chapter 3
Cluster Software Components
Overview of the Cluster Software............................................................................ 3-1
Microsoft Windows 2000 Advanced Server ............................................................ 3-2
Compaq Software..................................................................................................... 3-2
Compaq SmartStart and Support Software ....................................................... 3-2
Compaq System Configuration Utility ............................................................. 3-3
Compaq Insight Manager.................................................................................. 3-3
Compaq Insight Manager XE ........................................................................... 3-3
Compaq Options ROMPaq............................................................................... 3-4
Compaq StorageWorks Command Console ..................................................... 3-4
Compaq StorageWorks Secure Path for Windows 2000 .................................. 3-4
Compaq Operating System Dependent Modules.............................................. 3-5
Oracle Software ....................................................................................................... 3-5
Oracle8i Server Enterprise Edition................................................................... 3-6
Oracle8i Server................................................................................................. 3-6
Oracle8i Parallel Server Option........................................................................ 3-6
Oracle8i Enterprise Manager............................................................................ 3-7
Oracle8i Certification ....................................................................................... 3-7
Application Failover and Reconnection Software ................................................... 3-7
Chapter 4
Cluster Planning
Site Planning............................................................................................................ 4-2
Capacity Planning for Cluster Hardware ................................................................. 4-2
Compaq ProLiant Servers................................................................................. 4-2
Planning Shared Storage Components for Redundant Fibre Channel
Fabrics .............................................................................................................. 4-3
Planning Shared Storage Components for Redundant Fibre Channel
Arbitrated Loops............................................................................................... 4-4
Planning Cluster Interconnect and Client LAN Components........................... 4-6
Planning Cluster Configurations for Redundant Fibre Channel Fabrics.................. 4-7
Planning Dual Redundancy Configurations ..................................................... 4-7
Planning Quad Redundancy Configurations..................................................... 4-9
Planning Cluster Configurations for Redundant Fibre Channel Arbitrated
Loops ..................................................................................................................... 4-11
Planning Dual Redundancy Configurations ................................................... 4-11
Planning Quad Redundancy Configurations................................................... 4-13
RAID Planning for the MA8000/EMA12000 Storage Subsystem ........................ 4-15
Supported RAID Levels ................................................................................. 4-16
Raw Data Storage and Database Size............................................................. 4-17
Selecting the Appropriate RAID Level .......................................................... 4-18
Planning the Grouping of Physical Disk Storage Space ........................................ 4-19
Disk Drive Planning .............................................................................................. 4-20
Nonshared Disk Drives................................................................................... 4-20
Shared Disk Drives......................................................................................... 4-20
Network Planning .................................................................................................. 4-21
Windows 2000 Advanced Server Host Files for an Ethernet Cluster
Interconnect .................................................................................................... 4-21
Client LAN ..................................................................................................... 4-22
Chapter 5
Installation and Configuration
Reference Materials for Installation......................................................................... 5-1
Installation Overview............................................................................................... 5-2
Installing the Hardware............................................................................................ 5-3
Setting Up the Nodes ........................................................................................ 5-3
Installing the KGPSA-BC and KGPSA-CB Host Adapters.............................. 5-4
Installing GBIC-SW Modules for the Host Adapters ....................................... 5-4
Cabling the Host Adapters to Fibre Channel SAN Switches or Storage
Hubs.................................................................................................................. 5-5
Installing the Cluster Interconnect Adapters..................................................... 5-6
Installing the Client LAN Adapters .................................................................. 5-6
Installing GBIC-SW Modules for the Array Controllers .................................. 5-6
Installing Hardware Into an MA8000/EMA12000 Storage Subsystem.......... 5-10
Cabling the Controller Enclosure to Disk Enclosures..................................... 5-11
Cabling EMUs to Each Other ......................................................................... 5-13
Cabling Array Controllers to Fibre Channel SAN Switches........................... 5-14
Cabling Array Controllers to Storage Hubs.................................................... 5-15
Installing Operating System Software.................................................................... 5-16
Guidelines for Clusters ...................................................................................5-17
Automated Installation Using SmartStart ....................................................... 5-17
Setting up and Configuring an MA8000/EMA12000 Storage Subsystem............. 5-21
Designating a Server as a Maintenance Terminal........................................... 5-21
Powering On the MA8000/EMA12000 Storage Subsystem........................... 5-21
Installing the StorageWorks Command Console (SWCC) Client................... 5-22
Configuring a Storage Subsystem for Secure Path Operation ........................ 5-22
Verifying Array Controller Properties ............................................................ 5-27
Configuring a Storageset................................................................................. 5-29
Installing Secure Path Software for Windows 2000 .............................................. 5-32
Overview of Secure Path Software Installation .............................................. 5-32
Description of the Secure Path Software ........................................................ 5-33
Installing the Host Adapter Drivers ................................................................ 5-33
Installing the Fibre Channel Software Setup (FCSS) Utility .......................... 5-34
Installing the Secure Path Drivers, Secure Path Agent, and Secure Path
Manager .......................................................................................................... 5-35
Specifying the Preferred_Path for Storage Units ............................................ 5-36
Powering Up All Other Fibre Channel SAN Switches or Storage Hubs ........ 5-38
Creating Partitions ..........................................................................................5-38
Installing Compaq OSDs ....................................................................................... 5-39
Verifying Cluster Communications ................................................................ 5-40
Mounting Remote Drives and Verifying Administrator Privileges ................ 5-41
Installing the Ethernet OSDs .......................................................................... 5-42
Installing Oracle Software ..................................................................................... 5-52
Configuring Oracle8i Software .............................................................................. 5-53
Installing Object Link Manager ............................................................................. 5-53
Additional Notes on Configuring Oracle Software......................................... 5-54
Verifying the Hardware and Software Installation ................................................ 5-55
Cluster Communications ................................................................................ 5-55
Access to Shared Storage from All Nodes...................................................... 5-55
OSDs .............................................................................................................. 5-55
Other Verification Tasks ................................................................................ 5-56
Power Distribution and Power Sequencing Guidelines ......................................... 5-56
Overview ........................................................................................................ 5-56
Server Power Distribution .............................................................................. 5-57
Storage Subsystem Power Distribution .......................................................... 5-57
Power Sequencing .......................................................................................... 5-58
Chapter 6
Cluster Management
Cluster Management Concepts ................................................................................ 6-2
Powering Off a Node Without Interrupting Cluster Services........................... 6-2
Managing a Cluster in a Degraded Condition................................................... 6-2
Managing Network Clients Connected to a Cluster ......................................... 6-3
Cluster Events................................................................................................... 6-3
Management Applications ....................................................................................... 6-4
Monitoring Server and Network Hardware ...................................................... 6-4
Monitoring Storage Subsystem Hardware........................................................ 6-5
Managing Shared Storage................................................................................. 6-5
Monitoring the Database .................................................................................. 6-7
Remotely Managing a Cluster .......................................................................... 6-7
Software Maintenance ............................................................................................. 6-8
Deinstalling the OSDs ...................................................................................... 6-8
Upgrading Oracle8i Server ............................................................................. 6-11
Upgrading the OSDs....................................................................................... 6-11
Deinstalling a Partial OSD Installation........................................................... 6-13
Managing Changes to Shared Storage Components.............................................. 6-14
Adding New Storagesets to Increase Shared Storage Capacity...................... 6-14
Replacing a Failed Drive in a Storage Subsystem.......................................... 6-15
Replacing a Host Adapter............................................................................... 6-16
Adding a Shared Storage Subsystem.............................................................. 6-19
Replacing a Cluster Node ...................................................................................... 6-19
Removing the Node........................................................................................ 6-20
Adding the Replacement Node....................................................................... 6-20
Adding a Cluster Node .......................................................................................... 6-24
Preparing the New Node................................................................................. 6-25
Preparing the Existing Cluster Nodes............................................................. 6-27
Installing the Cluster Software ....................................................................... 6-27
Monitoring Cluster Performance ........................................................................... 6-29
Tools Overview .............................................................................................. 6-29
Using Secure Path Manager............................................................................ 6-30
Uninstalling Secure Path ................................................................................ 6-33
Chapter 7
Troubleshooting
Basic Troubleshooting Tips ..................................................................................... 7-2
Power ................................................................................................................ 7-2
Physical Connections........................................................................................ 7-2
Access to Cluster Components .........................................................................7-3
Software Revisions ........................................................................................... 7-3
Firmware Revisions .......................................................................................... 7-4
Troubleshooting Oracle8i and OSD Installation Problems and Error Messages ..... 7-5
Potential Difficulties Installing the OSDs with the Oracle Universal
Installer ............................................................................................................. 7-5
Unable to Start OracleCMService..................................................................... 7-6
Unable to Start OracleNMService .................................................................... 7-6
Unable to Start the Database............................................................................. 7-7
Initialization of the Dynamic Link Library NM.DLL Failed............................ 7-7
Troubleshooting Node-to-Node Connectivity Problems.......................................... 7-7
Nodes Are Unable to Communicate with Each Other ...................................... 7-7
Unable to Ping the Cluster Interconnect or the Client LAN ............................. 7-8
Troubleshooting Client-to-Cluster Connectivity Problems...................................... 7-9
A Network Client Cannot Communicate With the Cluster............................... 7-9
Troubleshooting Shared Storage Subsystem Problems.......................................... 7-10
Verifying Host Adapter Device Driver Installation........................................ 7-10
Verifying KGPSA-BC Device Driver Initialization ....................................... 7-10
Verifying Connectivity to a Redundant Fibre Channel Fabric........................ 7-12
Verifying Connectivity to a Redundant Fibre Channel Arbitrated Loop........ 7-13
A Cluster Node Cannot Connect to the Shared Drives ................................... 7-15
Disk Management Shows Storagesets With the Same Label (Dual Image).... 7-15
Device or Devices Were Not Found by KGPSA-BC Device Driver.............. 7-15
Devices on One I/O Connection Path Cannot Be Seen by the Cluster
Nodes .............................................................................................................. 7-16
Troubleshooting Secure Path ................................................................................. 7-18
Secure Path Guidelines for Windows 2000 Advanced Server........................ 7-18
Secure Path Manager Shows Reversed Locations for Top and Bottom
Array Controllers ............................................................................................ 7-20
Secure Path Manager Cannot Start With Hosts That Use Hyphenated Host
Names ............................................................................................................. 7-20
Secure Path Manager Is Delayed In Reporting Path Failure Information....... 7-20
The Addition of New LUNs Causes an Error................................................. 7-21
A Configuration of More Than 64 LUNs Prevents the Secure Path Agent
From Starting .................................................................................................. 7-21
Appendix A
Diagnosing and Resolving Shared Disk Problems
Introduction............................................................................................................. A-1
Run Object Link Manager on All Nodes ................................................................ A-3
Restart All Affected Nodes in the Cluster ...............................................................A-4
Rerun and Validate Object Link Manager On All Affected Nodes .........................A-4
Run and Validate Secure Path Manager On All Nodes ...........................................A-5
Run Disk Management On All Nodes .....................................................................A-5
Run and Validate the StorageWorks Command Console From All Storage
Subsystems ..............................................................................................................A-6
Perform Cluster Software and Firmware Checks ....................................................A-6
Perform Cluster Hardware Checks ..........................................................................A-7
Contact Your Compaq Support Representative.......................................................A-8
Glossary
Index
List of Figures
Figure 1-1. Example of a two-node PDC/O5000 cluster ........................................ 1-3
Figure 2-1. SCSI bus numbers for I/O modules in the controller enclosure ......... 2-10
Figure 2-2. UltraSCSI cabling between a controller enclosure and three
disk enclosures ................................................................................................. 2-11
Figure 2-3. UltraSCSI cabling between a controller enclosure and six disk
enclosures......................................................................................................... 2-12
Figure 2-4. Two-node PDC/O5000 with a four-fabric redundant Fibre
Channel Fabric ................................................................................................. 2-14
Figure 2-5. Two-node PDC/O5000 with two redundant Fibre Channel
Fabrics.............................................................................................................. 2-16
Figure 2-6. Maximum distances between PDC/O5000 cluster nodes and
shared storage subsystem components in a redundant Fibre Channel
Fabric................................................................................................................ 2-17
Figure 2-7. Host adapter-to-Fibre Channel SAN Switch I/O data paths ............... 2-18
Figure 2-8. Fibre Channel SAN Switch-to-array controller I/O data paths........... 2-19
Figure 2-9. Dual redundancy configuration for a redundant Fibre Channel
Fabric................................................................................................................ 2-22
Figure 2-10. Quad redundancy configuration for a redundant Fibre Channel
Fabric................................................................................................................ 2-24
Figure 2-11. Two-node PDC/O5000 with a four-loop redundant Fibre
Channel Arbitrated Loop.................................................................................. 2-27
Figure 2-12. Two-node PDC/O5000 with two redundant Fibre Channel
Arbitrated Loops .............................................................................................. 2-29
Figure 2-13. Maximum distances between PDC/O5000 cluster nodes and
shared storage subsystem components in a redundant FC-AL......................... 2-31
Figure 2-14. Host adapter-to-Storage Hub I/O data paths..................................... 2-32
Figure 2-15. Storage Hub-to-array controller I/O data paths ................................ 2-33
Figure 2-16. Dual redundancy configuration for a redundant FC-AL .................. 2-36
Figure 2-17. Quad redundancy configuration for a redundant FC-AL.................. 2-37
Figure 2-18. Redundant Ethernet cluster interconnect for a two-node
PDC/O5000 cluster .......................................................................................... 2-41
Figure 4-1. Dual redundancy configuration for a redundant Fibre Channel
Fabric.................................................................................................................. 4-8
Figure 4-2. Quad redundancy configuration for a redundant Fibre Channel
Fabric................................................................................................................ 4-10
Figure 4-3. Dual redundancy configuration for a redundant FC-AL .................... 4-12
Figure 4-4. Quad redundancy configuration for a redundant FC-AL.................... 4-14
Figure 4-5 MA8000/EMA12000 Storage Subsystem disk grouping for a
PDC/O5000 cluster .......................................................................................... 4-19
Figure 5-1. Connecting host adapters to Fibre Channel SAN Switches or
Storage Hubs ...................................................................................................... 5-5
Figure 5-2. Redundant Ethernet cluster interconnect for a two-node
PDC/O5000 cluster ............................................................................................ 5-8
Figure 5-3. SCSI bus numbers for I/O modules in the controller enclosure ......... 5-11
Figure 5-4. UltraSCSI cabling between a controller enclosure and three
disk enclosures ................................................................................. 5-12
Figure 5-5. UltraSCSI cabling between a controller enclosure and six disk
enclosures......................................................................................................... 5-13
Figure 5-6. Dual redundancy configuration for a redundant Fibre Channel
Fabric ............................................................................................................... 5-15
Figure 5-7. Dual redundancy configuration for a redundant FC-AL .................... 5-16
Figure 5-8. Server power distribution in a three-node cluster............................... 5-57
Figure A-1. Tasks for diagnosing and resolving shared storage problems .............A-2
List of Tables
Table 2-1 High-Availability Components of ProLiant Servers ............................... 2-3
Table 2-2 SCSI bus address ID assignments for the MA8000/EMA12000
Storage Subsystem ............................................................................................. 2-9
Table 2-3 I/O Path Failure and Failover Scenarios for Redundant Fibre
Channel Fabrics ............................................................................................... 2-25
Table 2-4 I/O Path Failure and Failover Scenarios for Redundant FC-ALs.......... 2-38
Table 5-1 Controller Properties ............................................................................. 5-28

About This Guide

Purpose

This administrator guide provides information about the planning, installation, configuration, implementation, management, and troubleshooting of the Compaq Parallel Database Cluster Model PDC/O5000 using Oracle8i software running on the Microsoft Windows 2000 Advanced Server operating system.

Audience
The expected audience of this guide consists primarily of MIS professionals whose jobs include designing, installing, configuring, and maintaining Compaq Parallel Database Clusters.
The audience of this guide must have a working knowledge of Microsoft Windows 2000 Advanced Server and Oracle databases or have the assistance of a database administrator.
This guide contains information for network administrators, database administrators, installation technicians, systems integrators, and other technical personnel in the enterprise environment for the purpose of cluster planning, installation, implementation, and maintenance.
IMPORTANT: This guide contains installation, configuration, and maintenance information that can be valuable for a variety of users. If you are installing the PDC/O5000 but will not be administering the cluster on a daily basis, please make this guide available to the person or persons who will be responsible for the clustered servers after you have completed the installation.

Scope

This guide offers significant background information about clusters as well as basic concepts associated with designing clusters. It also contains detailed product descriptions and installation steps.
This administrator guide is designed to assist you in the following objectives:
Understanding basic concepts of clustering technology
Recognizing and using the high-availability features of the PDC/O5000
Planning and designing a PDC/O5000 cluster configuration to meet your
business needs
Installing and configuring PDC/O5000 hardware and software
Managing the PDC/O5000
Troubleshooting the PDC/O5000
The following summarizes the contents of this guide:
Chapter 1, “Clustering Overview,” provides an introduction to
clustering technology features and benefits.
Chapter 2, “Cluster Architecture,” describes the hardware components
of the PDC/O5000 and provides detailed I/O path configuration information.
Chapter 3, “Cluster Software Components,” describes software
components used with the PDC/O5000.
Chapter 4, “Cluster Planning,” outlines an approach to planning and
designing cluster configurations that meet your business needs.
Chapter 5, “Installation and Configuration,” outlines the steps you will
take to install and configure the PDC/O5000 hardware and software.
Chapter 6, “Cluster Management,” includes techniques for managing
and maintaining the PDC/O5000.
Chapter 7, “Troubleshooting,” contains troubleshooting information for
the PDC/O5000.
Appendix A, “Diagnosing and Resolving Shared Disk Problems,”
describes procedures to diagnose and resolve shared disk problems.
Glossary contains definitions of terms used in this guide.
Some clustering topics are mentioned, but not detailed, in this guide. For example, this guide does not describe how to install and configure Oracle8i on a cluster. For information about these topics, see the referenced and supplemental documents listed in subsequent sections.

Referenced Manuals

For additional information, refer to documentation related to the specific hardware and software components of the Compaq Parallel Database Cluster. These related manuals include, but are not limited to:
Documentation related to the ProLiant servers you are clustering (for
example, guides, posters, and performance and tuning guides)
Compaq StorageWorks documentation provided with the
MA8000/EMA12000 Storage Subsystem, HSG80 Array Controller, Fibre Channel SAN Switches, Storage Hubs, and the KGPSA-BC or KGPSA-CB Host Adapter
Microsoft Windows 2000 Advanced Server documentation
Microsoft Windows 2000 Advanced Server Administrator’s Guide
Oracle8i documentation, including:
Oracle8i Parallel Server Setup and Configuration Guide
Oracle8i Parallel Server Concepts
Oracle8i Parallel Server Administration, Deployment, and Performance
Oracle Enterprise Manager Administrator’s Guide
Oracle Enterprise Manager Configuration Guide
Oracle Enterprise Manager Concepts Guide

Supplemental Documents

The following technical documents contain important supplemental information for the Compaq Parallel Database Cluster Model PDC/O5000:
Compaq Parallel Database Cluster Model PDC/O5000 Certification
Matrix for Windows 2000, at
www.compaq.com/solutions/enterprise/ha-pdc.html
Configuring Compaq RAID Technology for Database Servers, at
www.compaq.com/highavailability
Various technical white papers on Oracle and cluster sizing, which are
available from the Compaq ActiveAnswers website, at
www.compaq.com/activeanswers

Text Conventions

This document uses the following conventions to distinguish elements of text:
Keys: Keys appear in boldface. A plus sign (+) between two keys indicates that they should be pressed simultaneously.
USER INPUT: User input appears in a different typeface and in uppercase.
FILENAMES: File names appear in uppercase italics.
Menu Options, Command Names, Dialog Box Names: These elements appear in initial capital letters, and may be bolded for emphasis.
COMMANDS, DIRECTORY NAMES, and DRIVE NAMES: These elements appear in uppercase.
Type: When you are instructed to type information, type the information without pressing the Enter key.
Enter: When you are instructed to enter information, type the information and then press the Enter key.

Symbols in Text

These symbols may be found in the text of this guide. They have the following meanings:
WARNING: Text set off in this manner indicates that failure to follow directions in the warning could result in bodily harm or loss of life.
CAUTION: Text set off in this manner indicates that failure to follow directions could result in damage to equipment or loss of information.
IMPORTANT: Text set off in this manner presents clarifying information or specific instructions.
NOTE: Text set off in this manner presents commentary, sidelights, or interesting points of information.

Symbols on Equipment

These icons may be located on equipment in areas where hazardous conditions may exist.
Any surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. WARNING: To reduce the risk of injury from electrical shock hazards, do not open this enclosure.
Any RJ-45 receptacle marked with these symbols indicates a Network Interface Connection. WARNING: To reduce the risk of electrical shock, fire, or damage to the equipment, do not plug telephone or telecommunications connectors into this receptacle.
Any surface or area of the equipment marked with these symbols indicates the presence of a hot surface or hot component. If this surface is contacted, the potential for injury exists. WARNING: To reduce the risk of injury from a hot component, allow the surface to cool before touching.
Power Supplies or Systems marked with these symbols indicate the equipment is supplied by multiple sources of power.
WARNING: To reduce the risk of injury from electrical shock, remove all power cords to completely disconnect power from the system.

Rack Stability

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
The leveling jacks are extended to the floor.
The full weight of the rack rests on the leveling jacks.
The stabilizing feet are attached to the rack in single rack installations.
The racks are coupled together in multiple rack installations.
Only one component is extended at a time. A rack may become unstable if more than one component is extended for any reason.

Getting Help

If you have a problem and have exhausted the information in this guide, you can get further information and other help in the following locations.

Compaq Technical Support

You are entitled to free hardware technical telephone support for your product for as long as you own the product. A technical support specialist will help you diagnose the problem or guide you to the next step in the warranty process.
In North America, call the Compaq Technical Phone Support Center at 1-800-OK-COMPAQ. This service is available 24 hours a day, 7 days a week. For continuous quality improvement, calls may be recorded or monitored.
Outside North America, call the nearest Compaq Technical Support Phone Center. Telephone numbers for worldwide Technical Support Centers are listed on the Compaq website. Access the Compaq website by logging on to the Internet at
www.compaq.com
Be sure to have the following information available before you call Compaq:
Technical support registration number (if applicable)
Product serial number(s)
Product model name(s) and numbers(s)
Applicable error messages
Add-on boards or hardware
Third-party hardware or software
Operating system type and revision level
Detailed, specific questions

Compaq Website

The Compaq website has information on this product as well as the latest drivers and Flash ROM images. You can access the Compaq website by logging on to the Internet at
www.compaq.com

Compaq Authorized Reseller

For the name of your nearest Compaq Authorized Reseller:
In the United States, call 1-800-345-1518.
In Canada, call 1-800-263-5868.
Elsewhere, see the Compaq website for locations and telephone
numbers.
Chapter 1
Clustering Overview
For many years, companies have depended on clustered computer systems to fulfill two key requirements: to ensure users can access and process information that is critical to the ongoing operation of their business, and to increase the performance and throughput of their computer systems at minimal cost. These requirements are known as availability and scalability, respectively.
Historically, these requirements have been fulfilled with clustered systems built on proprietary technology. Over the years, open systems have progressively and aggressively moved proprietary technologies into industry-standard products. Clustering is no exception. Its primary features, availability and scalability, have been moving into client/server products for the last few years.
The absorption of clustering technologies into open systems products is creating less expensive, non-proprietary solutions that deliver levels of function commonly found in traditional clusters. While some uses of the proprietary solutions will always exist, such as those controlling stock exchange trading floors and aerospace mission controls, many critical applications can reach the desired levels of availability and scalability with non-proprietary client/server-based clustering.
These clustering solutions use industry-standard hardware and software, thereby providing key clustering features at a lower price than proprietary clustering systems. Before examining the features and benefits of the Compaq Parallel Database Cluster Model PDC/O5000 (referred to here as the PDC/O5000), it is helpful to understand the concepts and terminology of clustered systems.

Clusters Defined

A cluster is an integration of software and hardware products that enables a set of loosely coupled servers and shared storage subsystem components to present a single system image to clients and to operate as a single system. As a cluster, the group of servers and shared storage subsystem components offers a level of availability and scalability far exceeding that obtained if each cluster node operated as a stand-alone server.
The PDC/O5000 uses the Oracle8i Parallel Server software, which is a parallel database that can distribute its workload among the cluster nodes. Refer to Chapter 3, “Cluster Software Components” to determine the specific releases your cluster kit supports.
Figure 1-1 shows an example of a PDC/O5000 that includes two nodes (Compaq ProLiant™ servers), two sets of storage subsystems, two Compaq StorageWorks Fibre Channel SAN Switches or Compaq StorageWorks Fibre Channel Storage Hubs, a redundant cluster interconnect, and a client local area network (LAN).
Figure 1-1. Example of a two-node PDC/O5000 cluster
The PDC/O5000 can support redundant Fibre Channel Fabric Storage Area Network (SAN) and redundant Fibre Channel Arbitrated Loop (FC-AL) SAN topologies. In the example shown in Figure 1-1, the clustered nodes are connected to the database on the shared storage subsystems through a redundant Fibre Channel Fabric or redundant FC-AL. Clients access the database through the client LAN, and the cluster nodes communicate across an Ethernet cluster interconnect.

Availability

When computer systems experience outages, the amount of time the system is unavailable is referred to as downtime. Downtime has several primary causes: hardware faults, software faults, planned service, operator error, and environmental factors. Minimizing downtime is a primary goal of a cluster.
Simply defined, availability is the measure of how well a computer system can continuously deliver services to clients.
Availability is a system-wide endeavor. The hardware, operating system, and applications must be designed for availability. Clustering requires stability in these components, then couples them in such a way that failure of one item does not render the system unusable. By using redundant components and mechanisms that detect and recover from faults, clusters can greatly increase the availability of applications critical to business operations.
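NOTE: As a point of reference (this is a general industry formula, not a measurement specific to the PDC/O5000), steady-state availability is often quantified as Availability = MTBF / (MTBF + MTTR), where MTBF is the mean time between failures and MTTR is the mean time to repair. For example, a system that fails on average once every 1,000 hours and takes one hour to repair is available 1,000/1,001 of the time, or approximately 99.9 percent.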

Scalability

Simply defined, scalability is a computer system characteristic that enables improved performance or throughput when supplementary hardware resources are added. Scalable systems allow increased throughput by adding components to an existing system without the expense of adding an entire new system.
In a stand-alone server configuration, scalable systems allow increased throughput by adding processors or more memory. In a cluster configuration, this result is usually obtained by adding cluster nodes.
Not only must the hardware benefit from additional components, but also software must be constructed in such a way as to take advantage of the additional processing power. Oracle8i Parallel Server distributes the workload among the cluster nodes. As more nodes are added to the cluster, cluster-aware applications can use the parallel features of Oracle8i Parallel Server to distribute workload among more servers, thereby obtaining greater throughput.
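NOTE: As an idealized illustration (not a performance claim from this guide), cluster throughput can be modeled as Throughput(N) = E(N) x N x Throughput(1), where N is the number of cluster nodes and E(N), a value between 0 and 1, is the scaling efficiency achieved by the hardware, the cluster interconnect, and the application. Perfectly linear scaling corresponds to E(N) = 1; in practice, E(N) declines as coordination overhead between nodes grows.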
Compaq Parallel Database Cluster Overview
As traditional clustering technology has moved into the open systems of client/server computing, Compaq has provided innovative, customer-focused solutions. The PDC/O5000 moves client/server computing one step closer to the capabilities found in expensive, proprietary cluster solutions, at a fraction of the cost.
The PDC/O5000 combines the popular Microsoft Windows 2000 Advanced Server operating system and the industry-leading Oracle8i Parallel Server with award-winning Compaq ProLiant servers and shared storage subsystems.
Chapter 2
Cluster Architecture
The Compaq Parallel Database Cluster Model PDC/O5000 (referred to here as the PDC/O5000) is an integration of a number of different hardware and software products. This chapter discusses how these products play a role in bringing a complete clustering solution to your computing environment.
The hardware products include:
Compaq ProLiant servers
Shared storage components
Compaq StorageWorks Modular Array 8000 Fibre Channel Storage Subsystems or the Compaq StorageWorks Enterprise Modular Array 12000 Fibre Channel Storage Subsystems (MA8000/EMA12000 Storage Subsystems)
Compaq StorageWorks HSG80 Array Controllers (HSG80 Array Controllers)
Compaq StorageWorks Fibre Channel SAN Switches (Fibre Channel SAN Switches) for redundant Fibre Channel Fabrics
Compaq StorageWorks Storage Hubs (Storage Hubs) for redundant Fibre Channel Arbitrated Loops
KGPSA-BC PCI-to-Optical Fibre Channel Host Adapters (KGPSA-BC Host Adapters) or KGPSA-CB PCI-to-Optical Fibre Channel Host Adapters (KGPSA-CB Host Adapters)
Gigabit Interface Converter-Shortwave (GBIC-SW) modules
Fibre Channel cables
Cluster interconnect components
Ethernet NIC adapters
Ethernet cables
Ethernet switches/hubs
The software products include:
Microsoft Windows 2000 Advanced Server with Service Pack 1 or later
Compaq drivers and utilities
Oracle8i Enterprise Edition with the Oracle8i Parallel Server Option
For a description of the software products used with the PDC/O5000, refer to Chapter 3, “Cluster Software Components.”

Compaq ProLiant Servers

A primary component of any cluster is the server. Each Compaq Parallel Database Cluster Model PDC/O5000 consists of cluster nodes in which each node is a Compaq ProLiant server.
With some exceptions, all nodes in a PDC/O5000 must be identical in model. In addition, all components common to all nodes in a cluster, such as memory, number of CPUs, and the interconnect adapters, should be identical and identically configured.
NOTE: Certain restrictions apply to the server models and server configurations that are supported by the Compaq Parallel Database Cluster. For a current list of PDC-certified servers and details on supported configurations, refer to the Compaq Parallel Database Cluster Model PDC/O5000 Certification Matrix for Windows 2000. This document is available on the Compaq website at
www.compaq.com/solutions/enterprise/ha-pdc.html
High-Availability Features of ProLiant Servers
In addition to the increased application and data availability enabled by clustering, ProLiant servers include many reliability features that provide a solid foundation for effective clustered server solutions. The PDC/O5000 is based on ProLiant servers, most of which offer excellent reliability through redundant power supplies, redundant cooling fans, and Error Checking and Correcting (ECC) memory. The high-availability features of ProLiant servers are a critical foundation of Compaq clustering products. Table 2-1 lists the high-availability features found in many ProLiant servers.
Table 2-1
High-Availability Components of ProLiant Servers
Hot-pluggable hard drives
Redundant power supplies
Digital Linear Tape (DLT) Array (optional)
ECC-protected processor-memory bus
Uninterruptible power supplies (optional)
Redundant processor power modules
ECC memory
PCI Hot Plug slots (in some servers)
Offline backup processor
Redundant cooling fans

Shared Storage Components

The PDC/O5000 is based on a cluster architecture known as shared storage clustering, in which clustered nodes share access to a common set of shared disk drives. In this discussion, the shared storage includes these components:
MA8000/EMA12000 Storage Subsystem
HSG80 Array Controllers
Fibre Channel SAN Switches for redundant Fibre Channel Fabrics
Storage Hubs for redundant Fibre Channel Arbitrated Loops
KGPSA-BC or KGPSA-CB Host Adapters
Gigabit Interface Converter-Shortwave (GBIC-SW) modules
Fibre Channel cables

MA8000/EMA12000 Storage Subsystem

The MA8000/EMA12000 Storage Subsystem is the shared storage solution for the PDC/O5000. Each storage subsystem consists of one controller enclosure and up to six disk enclosures.
For detailed information about storage subsystem components, refer to the Compaq StorageWorks documentation provided with the MA8000/EMA12000 Storage Subsystem.
Controller Enclosure Components
The controller enclosure for the MA8000/EMA12000 Storage Subsystem houses the two HSG80 Array Controllers, one cache module for each controller, an environmental monitoring unit (EMU), one or two power supplies, and three dual-speed fans. In addition, the controller enclosure houses the six I/O modules that connect the enclosure's six SCSI buses to up to six disk enclosures.
Disk Enclosure Components
Each disk enclosure houses up to 12 or 14 hard disk drives, depending on the number of SCSI buses connected to the enclosure's I/O module. A single-bus or dual-bus I/O module in each disk enclosure is connected by UltraSCSI cable to one single-bus I/O module in the controller enclosure. Disk enclosures using a single-bus I/O module can contain up to 12 disk drives. Disk enclosures using both connectors on a dual-bus I/O module can contain up to 14 disk drives (7 per SCSI bus). When you have more than three disk enclosures in your subsystem, they must all use single-bus I/O modules, for a maximum of 72 disk drives in each MA8000/EMA12000 Storage Subsystem.
Each disk enclosure also contains redundant power supplies, an EMU, and two variable-speed blowers.
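NOTE: These limits translate directly into drive counts: a disk enclosure using both connectors on a dual-bus I/O module holds 2 x 7 = 14 drives, and a storage subsystem fully populated with six single-bus disk enclosures holds 6 x 12 = 72 drives, the stated maximum for one MA8000/EMA12000 Storage Subsystem.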

HSG80 Array Controller

Two dual-port HSG80 Array Controllers are installed in the controller enclosure of each MA8000/EMA12000 Storage Subsystem.
From the perspective of the cluster nodes, each HSG80 Array Controller port is simply another device connected to one of the cluster's I/O connection paths between the host adapters and the MA8000/EMA12000 Storage Subsystems. Consequently, each node sends its I/O requests to the array controllers just as it would to any SCSI device. An array controller port receives the I/O requests from a host adapter in a cluster node and directs them to the shared storage disks to which it has been configured.
Because the array controller processes the I/O requests, the cluster nodes are not burdened with the I/O processing tasks associated with reading and writing data to multiple shared storage devices.
Each HSG80 Array Controller port combines all of the logical disk drives that have been configured to it into a single, high-performance storage unit called a storageset. RAID technology ensures that every unpartitioned storageset, whether it uses 12 disks or 14 disks, looks like a single storage unit to the cluster nodes.
Both ports on each of the two HSG80 Array Controllers in the controller enclosure are simultaneously active, and access to a specific logical unit number (LUN) is distributed among and shared by these ports. This provides redundant access to the same LUNs if one port or array controller fails. If an HSG80 Array Controller in an MA8000/EMA12000 Storage Subsystem fails, Secure Path failover software detects the failure and automatically transfers all I/O activity to the defined backup path.
To further ensure redundancy, connect the two ports on each HSG80 Array Controller by Fibre Channel cables to different Fibre Channel SAN Switches or Storage Hubs.
For further information, refer to the Compaq StorageWorks documentation provided with the array controllers.

Fibre Channel SAN Switch

IMPORTANT: For detailed information about cascading two Fibre Channel SAN Switches,
refer to the latest Compaq StorageWorks documentation. This guide does not document cascaded configurations for the Fibre Channel SAN Switch.
Fibre Channel SAN Switches are installed between cluster nodes and shared storage subsystems in clusters to create redundant Fibre Channel Fabrics.
An 8-port Fibre Channel SAN Switch and 16-port Fibre Channel SAN Switch are supported. From two to four Fibre Channel SAN Switches can be used in each redundant Fibre Channel Fabric.
Fibre Channel SAN Switches provide full 100 MBps bandwidth on every port. Adding new devices to Fibre Channel SAN Switch ports increases the aggregate bandwidth.
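NOTE: As illustrative arithmetic based on the per-port figure above (not a measured throughput), a fully populated 8-port Fibre Channel SAN Switch offers an aggregate bandwidth of 8 x 100 MBps = 800 MBps, and a fully populated 16-port switch offers 16 x 100 MBps = 1,600 MBps.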
For further information, refer to these manuals provided with each Fibre Channel SAN Switch:
Compaq StorageWorks Fibre Channel SAN Switch 8 Installation and Hardware Guide
Compaq StorageWorks Fibre Channel SAN Switch 16 Installation and Hardware Guide
Compaq StorageWorks Fibre Channel SAN Switch Management Guide

Storage Hub

Storage Hubs are installed between cluster nodes and shared storage subsystems in clusters to create redundant Fibre Channel Arbitrated Loops.
Storage Hubs connect the host adapters in cluster nodes with the HSG80 Array Controllers in MA8000/EMA12000 Storage Subsystems. From two to four Storage Hubs are used in each redundant FC-AL of a PDC/O5000. Using two or more Storage Hubs provides fault tolerance and supports the redundant architecture implemented by the PDC/O5000.
You can use either the Storage Hub 7 (with 7 ports) or the Storage Hub 12 (with 12 ports). Using the Storage Hub 7 may limit the size of the PDC/O5000. For more detailed information, refer to Chapter 4, "Cluster Planning," in this guide.
Refer to the Compaq StorageWorks Fibre Channel Storage Hub 7 Installation Guide and the Compaq StorageWorks Fibre Channel Storage Hub 12 Installation Guide for further information about the Storage Hubs.
KGPSA-BC and KGPSA-CB Host Adapter
Each redundant Fibre Channel Fabric or redundant FC-AL contains a dedicated set of KGPSA-BC or KGPSA-CB Host Adapters in every cluster node. Each host adapter in a node should be connected to a different Fibre Channel SAN Switch or Storage Hub.
If the cluster contains multiple redundant Fibre Channel Fabrics or multiple redundant FC-ALs, then host adapters cannot be shared between them. Each redundant Fibre Channel Fabric or redundant FC-AL must have its own set of host adapters installed in each cluster node.
Compaq StorageWorks Secure Path failover software is installed on every cluster node to ensure the proper operation of components along each I/O path. For information about installing Secure Path, see "Installing Secure Path for Windows 2000" in Chapter 5, "Installation and Configuration."
Gigabit Interface Converter-Shortwave
A Gigabit Interface Converter-Shortwave (GBIC-SW) module is installed at both ends of a Fibre Channel cable. A GBIC-SW module must be installed in each host adapter, active Fibre Channel SAN Switch or Storage Hub port, and array controller.
GBIC-SW modules provide 100-MBps performance. Fibre Channel cables connected to these modules can be up to 500 meters in length.

Fibre Channel Cables

Shortwave (multi-mode) fibre optic cables are used to connect the server nodes, Fibre Channel SAN Switches or Storage Hubs, and array controllers in a PDC/O5000.

Configuring and Cabling the MA8000/EMA12000 Storage Subsystem Components

Configuring LUNs for Storagesets

If you install the optional Compaq StorageWorks Large LUN utility on your PDC/O5000 cluster, you can distribute a maximum of 256 LUNs among the four array controller ports in each storage subsystem.
If you do not install the Large LUN utility, you can distribute a maximum of 16 LUNs among the four array controller ports in each storage subsystem. This 16-LUN restriction is imposed by the Compaq Secure Path software and the MA8000/EMA12000 Storage Subsystems.
IMPORTANT: When offsets are used, always assign LUNs 0-99 to port 1 on each HSG80 Array Controller and assign LUNs 100-199 to port 2.
If you do not install the optional Large LUN utility in your cluster, there are two accepted methods for distributing LUNs for storagesets:
You can distribute eight LUNs between the same-numbered ports on both array controllers so that a total of 16 LUNs are assigned across all four controller ports in each storage subsystem. For example, assign LUNs 0 through 3 to array controller A port 1, LUNs 4 through 7 to array controller B port 1, LUNs 100 through 103 to array controller A port 2, and LUNs 104 through 107 to array controller B port 2. If array controller A fails in multibus failover mode, Secure Path automatically gives controller B control over I/O transfers for LUNs 0-3 on port 1 and LUNs 100-103 on port 2.
You can distribute eight LUNs across all four array controller ports. For example, assign LUNs 0 and 1 to array controller A port 1, LUNs 2 and 3 to array controller A port 2, LUNs 4 and 5 to array controller B port 1, and LUNs 6 and 7 to array controller B port 2. In this configuration, a single controller port can access all eight LUNs in the event of a path failure on the other three ports.
If you do install the optional Large LUN utility in your cluster, then you must follow these guidelines (a worked sketch follows this list):
Do not assign unit offsets to any connections.
Use unit identifiers D0 through D63 for port 1 connections and D100 through D163 for port 2 connections.
Distribute the LUNs evenly across all four array controller ports.
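To make these distribution rules concrete, the following Python sketch (illustrative only; it is not a Compaq utility, and the names are hypothetical) builds one acceptable LUN assignment plan for a subsystem without the Large LUN utility. It spreads the units evenly across all four array controller ports, numbers port 1 units from D0 and port 2 units from D100, and enforces the 16-LUN limit. With the Large LUN utility installed, the limits and unit ranges in the guidelines above apply instead.

    def plan_luns(storageset_count):
        """Illustrative LUN distribution for one storage subsystem
        configured without the Large LUN utility (16-LUN limit)."""
        if storageset_count > 16:
            raise ValueError("at most 16 LUNs without the Large LUN utility")
        ports = ["controller A port 1", "controller B port 1",
                 "controller A port 2", "controller B port 2"]
        next_unit = {"port 1": 0, "port 2": 100}   # D0-D99 on port 1, D100-D199 on port 2
        plan = []
        for i in range(storageset_count):
            port = ports[i % 4]                    # spread evenly across all four ports
            base = "port 1" if port.endswith("port 1") else "port 2"
            plan.append(("D%d" % next_unit[base], port))
            next_unit[base] += 1
        return plan

    # Example: eight storagesets, two units on each of the four controller ports.
    for unit, port in plan_luns(8):
        print(unit, "->", port)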

SCSI Cabling Examples

SCSI Bus Addressing
The two array controllers in each MA8000/EMA12000 Storage Subsystem controller enclosure receive I/O data from the cluster nodes, process it, and distribute this data over single-ended UltraSCSI buses to as many as six disk enclosures. Each bus has 16 possible SCSI bus IDs (0-15).
The following devices use a SCSI bus address ID and are classified as "SCSI bus nodes":
Array controllers (A and B)
EMUs in the controller enclosure
Physical disk drives in the disk enclosures
Every node on a SCSI bus must have a unique SCSI bus ID.
Table 2-2 shows the SCSI bus address ID assignments for the MA8000/EMA12000 Storage Subsystem.
Table 2-2  SCSI bus address ID assignments for the MA8000/EMA12000 Storage Subsystem

SCSI Bus ID    SCSI Bus Node
0 - 5          Physical drives in disk enclosure
6              Array controller B
7              Array controller A
8, 9           EMUs in the controller enclosure
10 - 15        Physical drives in disk enclosure
Each disk enclosure has two internal SCSI buses, with each bus controlling half of the total available disk drive slots in the enclosure. The single-bus I/O module places all physical disk drives in the enclosure on a single bus of 12 devices. The dual-bus I/O module maintains two internal buses with seven physical disk drive slots on each bus.
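The fixed ID assignments in Table 2-2 amount to a simple lookup. The following Python sketch is illustrative only; it returns the device type that owns a given SCSI bus ID and rejects IDs outside the 0-15 range.

    def scsi_node_type(bus_id):
        """Return the SCSI bus node type for a bus ID, per Table 2-2."""
        if 0 <= bus_id <= 5 or 10 <= bus_id <= 15:
            return "physical drive in a disk enclosure"
        if bus_id == 6:
            return "array controller B"
        if bus_id == 7:
            return "array controller A"
        if bus_id in (8, 9):
            return "EMU in the controller enclosure"
        raise ValueError("SCSI bus IDs range from 0 to 15")

    # Every node on a bus must have a unique ID; 6 and 7 are reserved for the
    # array controllers, and 8 and 9 for the EMUs.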
For more detailed information about SCSI bus addressing for the MA8000/EMA12000 Storage Subsystem, refer to the Compaq documentation provided with the storage subsystem.

UltraSCSI Cables

Compaq recommends using the shortest cable length possible to connect disk enclosures to the controller enclosure. The maximum supported cable length is 2 meters (6.6 ft). Refer to the documentation provided with your storage subsystem for further information.

Using I/O Modules in the Controller Enclosure

As Figure 2-1 shows, both array controllers (A and B) connect to six I/O modules in the controller enclosure. UltraSCSI cables are installed between these I/O modules in the controller enclosure and one I/O module in each disk enclosure.
Figure 2-1. SCSI bus numbers for I/O modules in the controller enclosure
Using Dual-Bus I/O Modules in the Disk Enclosures
Figure 2-2 shows UltraSCSI cables installed between a controller enclosure and three disk enclosures. When three or fewer disk enclosures are present in a storage subsystem, use one dual-bus I/O module in each disk enclosure to support up to 14 disk drives in each disk enclosure. Using single-bus I/O modules limits you to 12 disk drives per disk enclosure.
Figure 2-2. UltraSCSI cabling between a controller enclosure and three disk enclosures
Using Single-Bus I/O Modules in the Disk Enclosures
Figure 2-3 shows UltraSCSI cables installed between a controller enclosure and six disk enclosures. When four or more disk enclosures are present in a storage subsystem, use one single-bus I/O module in each disk enclosure.
Figure 2-3. UltraSCSI cabling between a controller enclosure and six disk enclosures

Connecting EMUs Between MA8000/EMA12000 Storage Subsystems

For information about connecting the EMUs between enclosures in an MA8000/EMA12000 Storage Subsystem and between different subsystems, refer to the Compaq documentation provided with the storage subsystem.

I/O Path Configurations for Redundant Fibre Channel Fabrics

Overview of Fibre Channel Fabric SAN Topology

Fibre Channel standards define a multi-layered architecture for moving data across the storage area network (SAN). This layered architecture can be implemented using the Fibre Channel Fabric or the Fibre Channel Arbitrated Loop (FC-AL) topology. The PDC/O5000 supports both topologies.
A redundant Fibre Channel Fabric is two to four Fibre Channel SAN Switches installed between the host adapters in a PDC/O5000's cluster nodes and the array controllers in the shared storage subsystems. Fibre Channel SAN Switches provide full 100-MBps bandwidth per switch port. Whereas the introduction of new devices to FC-AL Storage Hubs further divides their shared bandwidth, adding new devices to Fibre Channel SAN Switches increases the aggregate bandwidth.

Redundant Fibre Channel Fabrics

A redundant Fibre Channel Fabric consists of the PDC/O5000 cluster hardware used to connect host adapters to a particular set of shared storage devices using Fibre Channel SAN Switches. Each redundant Fibre Channel Fabric consists of the following hardware:
From two to four host adapters in each node
From two to four Fibre Channel SAN Switches
MA8000/EMA12000 Storage Subsystems, each containing two dual-port HSG80 Array Controllers
Fibre Channel cables used to connect the host adapters to the Fibre Channel SAN Switches and the Fibre Channel SAN Switches to the array controllers
GBIC-SW modules installed in host adapters, Fibre Channel SAN Switches, and array controllers
IMPORTANT: For detailed information about cascading two Fibre Channel SAN Switches, refer to the latest Compaq StorageWorks documentation. This guide does not document cascaded configurations for the Fibre Channel SAN Switch.
A redundant Fibre Channel Fabric consists of from two to four individual fabrics. The number of fabrics present is determined by the number of host adapters installed in the redundant Fibre Channel Fabric: two host adapters create two fabrics and four host adapters create four fabrics.
Figure 2-4 shows a two-node PDC/O5000 with a redundant Fibre Channel Fabric that contains four fabrics, one for each host adapter.
Figure 2-4. Two-node PDC/O5000 with a four-fabric redundant Fibre Channel Fabric
Used in conjunction with the I/O path failover capabilities of Compaq Secure Path software, this redundant Fibre Channel Fabric configuration gives cluster resources increased availability and fault tolerance.

Multiple Redundant Fibre Channel Fabrics

The PDC/O5000 supports the use of multiple redundant Fibre Channel Fabrics within the same cluster. You would install additional redundant Fibre Channel Fabrics in a PDC/O5000 to:
Increase the amount of shared storage space available to the cluster's nodes. Each redundant Fibre Channel Fabric can connect to a limited number of MA8000/EMA12000 Storage Subsystems. This limit is imposed by the number of ports available on the Fibre Channel SAN Switches used. The storage subsystems are available only to the host adapters connected to that redundant Fibre Channel Fabric.
Increase the cluster's I/O performance.
Adding a second redundant Fibre Channel Fabric to the cluster involves duplicating the hardware components used in the first redundant Fibre Channel Fabric.
The maximum number of redundant Fibre Channel Fabrics you can install in a PDC/O5000 is restricted by the number of host adapters your Compaq servers support. Refer to the Compaq server documentation for this information.
Figure 2-5 shows a two-node PDC/O5000 with two redundant Fibre Channel Fabrics. In this example, each redundant Fibre Channel Fabric has its own pair of host adapters in each node, a pair of Fibre Channel SAN Switches, and two MA8000/EMA12000 Storage Subsystems. In Figure 2-5, the hardware components that constitute the second redundant Fibre Channel Fabric are shaded.
Figure 2-5. Two-node PDC/O5000 with two redundant Fibre Channel Fabrics

Maximum Distances Between Nodes and Shared Storage Subsystem Components

By using standard short-wave Fibre Channel cables with Gigabit Interface Converter-Shortwave (GBIC-SW) modules, the following maximum distances apply:
Each MA8000/EMA12000 Storage Subsystem's controller enclosure can be placed up to 500 meters from the Fibre Channel SAN Switches.
Each Fibre Channel SAN Switch can be placed up to 500 meters from the host adapters in the cluster nodes.
Figure 2-6 illustrates these maximum cable distances for a redundant Fibre Channel Fabric.
Figure 2-6. Maximum distances between PDC/O5000 cluster nodes and shared storage subsystem components in a redundant Fibre Channel Fabric

I/O Data Paths in a Redundant Fibre Channel Fabric

Within a redundant Fibre Channel Fabric, an I/O path connection exists between each host adapter and all four array controller ports in each storage subsystem.
Host Adapter-to-Fibre Channel SAN Switch I/O Data Paths
Figure 2-7 highlights the I/O data paths that run between the host adapters and the Fibre Channel SAN Switches. Each host adapter has its own I/O data path.
Figure 2-7. Host adapter-to-Fibre Channel SAN Switch I/O data paths
Secure Path monitors the status of the components along each active I/O path. If Secure Path detects the failure of a host adapter, Fibre Channel cable, or Fibre Channel SAN Switch along an active path, it automatically transfers all I/O activity on that path to the defined backup path.
Fibre Channel SAN Switch-to-Array Controller I/O Data Paths
Figure 2-8 highlights the I/O data paths that run between the Fibre Channel SAN Switches and the two dual-port array controllers in each MA8000/EMA12000 Storage Subsystem. There is one I/O connection path to and from each array controller port.
Figure 2-8. Fibre Channel SAN Switch-to-array controller I/O data paths
If any component along an active path fails, Secure Path detects the failure and automatically transfers all I/O activity to the components on the defined backup path.

I/O Path Definitions for Redundant Fibre Channel Fabrics

In a redundant Fibre Channel Fabric, devices are accessed by the operating system using conventional SCSI addressing terminology. An I/O path consists of the complete physical interconnections from a given host (node) to a specific RAID storageset in a shared storage subsystem. Each path is identified by a unique four-part value that contains the port number (host adapter), the bus number, and the unit name's target ID and LUN. The unit name's target ID and LUN values use the format "Dxxyy," where xx is the target number (0-15) and yy is the LUN (0-7). A target number of 0 is always dropped from the unit number designation (for example, the unit number D0 is understood to be LUN 0 on target 0). A host (node) uses the unit name to specify the source or target for every I/O request it sends to an array controller. The unit name can identify a single physical disk drive unit or a storageset that contains several disk drives.
The port number (HBA) is assigned by the Windows operating system. Except for the LUN, the rest of the SCSI address is created within the host adapter miniport driver and is determined by the actual connections between Fibre Channel SAN Switch ports and array controller ports. Controller ports connected to lower-numbered Fibre Channel SAN Switch ports are assigned lower SCSI bus and target ID values than those connected to higher-numbered switch ports. The LUN number is derived from the unit number that has been assigned to the storageset using the StorageWorks Command Console (SWCC) or Command Line Interface (CLI) commands.
Each storageset created in an MA8000/EMA12000 Storage Subsystem must have a unique unit name. Since these storage subsystems present an identical address space from both of their array controllers, the only piece of address information that will be different across the I/O paths from a given node to a specific storageset is the port (HBA) number.
While cluster nodes use the unit number (Dxxyy) to identify and access a storageset or a single disk drive unit in the shared storage subsystems, the array controllers use a Port-Target-LUN (PTL) address to identify and access these resources in their storage subsystem. For the MA8000/EMA12000 Storage Subsystem, the PTL address contains the following information:
The SCSI port number (1-6) identifies the disk enclosure in which the target physical disk drive is located.
The target ID number (0-5 and 8-15) of the device identifies the physical disk drive.
The LUN of the device (for disk devices, the LUN is always 0).

I/O Path Configuration Examples for Redundant Fibre Channel Fabrics

Every I/O path connection between host adapters and array controller ports is active at the same time. An active port does not become inactive unless it fails, whereupon all I/O activity on the failed path is automatically switched over to the pre-defined backup I/O path.
Dual Redundancy Configuration Example
In a redundant Fibre Channel Fabric, a dual redundancy configuration is the minimum allowable configuration that provides redundancy along the I/O paths. This configuration provides two of each component along the I/O paths. These include at least two nodes, two host adapters in each node, and two Fibre Channel SAN Switches.
Figure 2-9 shows the correct method for cabling the I/O path components in a dual redundancy configuration. In this example, both Fibre Channel SAN Switches connect to port 1 on one array controller and to port 2 on the other array controller in each MA8000/EMA12000 Storage Subsystem. If one Fibre Channel SAN Switch fails, the other switch still has access to port 1 on one array controller and port 2 on the other array controller in each storage subsystem. This ensures that the host adapters can still access the full range of LUNs.
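The cabling rule in this example can be written out as a checklist. The following Python sketch is illustrative only (it is not a configuration tool); it lists the switch-to-controller cables for a dual redundancy configuration in which each switch reaches port 1 on one controller and port 2 on the other in every subsystem. The particular pairing of controllers A and B shown here is one acceptable arrangement.

    def dual_redundancy_cabling(subsystems=2):
        """List switch-to-controller cables for a dual redundancy fabric."""
        cables = []
        for s in range(1, subsystems + 1):
            cables.append(("SAN Switch #1", "Subsystem #%d controller A, port 1" % s))
            cables.append(("SAN Switch #1", "Subsystem #%d controller B, port 2" % s))
            cables.append(("SAN Switch #2", "Subsystem #%d controller A, port 2" % s))
            cables.append(("SAN Switch #2", "Subsystem #%d controller B, port 1" % s))
        return cables

    # Example: with two storage subsystems, each switch carries four controller cables.
    for switch, target in dual_redundancy_cabling():
        print(switch, "->", target)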
Figure 2-9. Dual redundancy configuration for a redundant Fibre Channel Fabric
Quad Redundancy Configuration Example
In a redundant Fibre Channel Fabric, a quad redundancy configuration provides four host adapters in each of at least four nodes and four Fibre Channel SAN Switches.
Figure 2-10 shows an acceptable method for cabling the I/O path components in a quad redundancy configuration. In this example, the following connections are made:
Each of four host adapters in each node connects to the same-numbered Fibre Channel SAN Switch (host adapter #1 to Fibre Channel SAN Switch #1, host adapter #2 to Fibre Channel SAN Switch #2, host adapter #3 to Fibre Channel SAN Switch #3, host adapter #4 to Fibre Channel SAN Switch #4).
The two odd-numbered Fibre Channel SAN Switches connect to port 1 of the array controllers. Fibre Channel SAN Switch #1 connects to array controller A, port 1 in each storage subsystem. Fibre Channel SAN Switch #3 connects to array controller B, port 1 in each storage subsystem.
The two even-numbered Fibre Channel SAN Switches connect to port 2 of the array controllers. Fibre Channel SAN Switch #2 connects to array controller A, port 2 in each storage subsystem. Fibre Channel SAN Switch #4 connects to array controller B, port 2 in each storage subsystem.
Figure 2-10. Quad redundancy configuration for a redundant Fibre Channel Fabric

Summary of I/O Path Failure and Failover Scenarios for Redundant Fibre Channel Fabrics

Table 2-3 identifies possible I/O path failure events for both dual redundancy and quad redundancy configurations in a redundant Fibre Channel Fabric and the failover response implemented by Secure Path for each failure.
Table 2-3  I/O Path Failure and Failover Scenarios for Redundant Fibre Channel Fabrics

Description of Failure: A single port on an array controller fails.
Failover Response: The I/O path to the defined backup array controller port takes over all I/O activity.

Description of Failure: One array controller in a storage subsystem fails.
Failover Response: The I/O paths to both ports on the other array controller take over all I/O activity.

Description of Failure: The Fibre Channel cable connection between an array controller and its Fibre Channel SAN Switch is broken.
Failover Response: The I/O path to the defined backup array controller port takes over all I/O activity.

Description of Failure: A Fibre Channel SAN Switch fails.
Failover Response: The I/O paths for another fabric take over all I/O activity.

Description of Failure: The Fibre Channel cable connection between a host adapter and a Fibre Channel SAN Switch is broken.
Failover Response: The I/O path from the host adapter in the node that is connected to the defined backup array controller port takes over all I/O activity.

Description of Failure: A host adapter fails.
Failover Response: The I/O path from the host adapter in the same node that is connected to the defined backup array controller port takes over all I/O activity.

I/O Path Configurations for Redundant Fibre Channel Arbitrated Loops

Overview of FC-AL SAN Topology
Fibre Channel standards define a multi-layered architecture for moving data across the storage area network (SAN). This layered architecture can be implemented using the Fibre Channel Fabric or the Fibre Channel Arbitrated Loop (FC-AL) topology. The PDC/O5000 supports both topologies.
Storage Hubs are used to create redundant FC-ALs with a total 100-MBps bandwidth divided among all Storage Hub ports. The functional bandwidth available to any one device on a Storage Hub port is determined by the total population on the segment and the level of activity of devices on other ports. The more devices in use, the less bandwidth is available to each port.
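The difference between the shared loop bandwidth and the switched fabric bandwidth can be shown with rough arithmetic. The following Python sketch is illustrative only and assumes, for simplicity, that the loop bandwidth is divided evenly among the active devices; actual throughput depends on the level of activity of the other ports, as described above.

    LOOP_BANDWIDTH_MBPS = 100   # total bandwidth shared by all devices on one FC-AL

    def approx_per_device_bandwidth(active_devices):
        """Rough upper bound on per-device bandwidth for one FC-AL segment."""
        return LOOP_BANDWIDTH_MBPS / max(active_devices, 1)

    # Four busy devices on one Storage Hub share the loop, roughly 25 MBps each,
    # whereas every Fibre Channel SAN Switch port provides the full 100 MBps.
    print(approx_per_device_bandwidth(4))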

Redundant Fibre Channel Arbitrated Loops

A redundant FC-AL consists of the PDC/O5000 cluster hardware used to connect host adapters to a particular set of shared storage devices using Storage Hubs. Each redundant FC-AL consists of the following hardware:
From two to four host adapters in each node
From two to four Storage Hubs
MA8000/EMA12000 Storage Subsystems, each containing two dual-port HSG80 Array Controllers
Fibre Channel cables used to connect the host adapters to the Storage Hubs and the Storage Hubs to the array controllers
GBIC-SW modules installed in host adapters, Storage Hubs, and array controllers
A redundant FC-AL consists of from two to four individual loops. The number of loops present is determined by the number of host adapters installed in the redundant FC-AL: two host adapters create two loops and four host adapters create four loops.
Figure 2-11 shows a two-node PDC/O5000 with a redundant FC-AL that contains four loops, one for each host adapter.
Figure 2-11. Two-node PDC/O5000 with a four-loop redundant Fibre Channel Arbitrated Loop
Used in conjunction with the I/O path failover capabilities of the Secure Path software, this redundant FC-AL configuration gives cluster resources increased availability and fault tolerance.

Multiple Redundant Fibre Channel Arbitrated Loops

The PDC/O5000 supports the use of multiple redundant FC-ALs within the same cluster. You would install additional redundant FC-ALs in a PDC/O5000 to:
Increase the amount of shared storage space available to the cluster's nodes. Each redundant FC-AL can connect to a limited number of MA8000/EMA12000 Storage Subsystems. This limit is imposed by the number of ports available on the Storage Hubs used. The storage subsystems are available only to the host adapters connected to that redundant FC-AL.
Increase the cluster's I/O performance.
Adding a second redundant FC-AL to the cluster involves duplicating the hardware components used in the first redundant FC-AL.
The maximum number of redundant FC-ALs you can install in a PDC/O5000 is restricted by the number of host adapters your Compaq servers support. Refer to the Compaq server documentation for this information.
Figure 2-12 shows a two-node PDC/O5000 that contains two redundant FC-ALs. In this example, each redundant FC-AL has its own pair of host adapters in each node, a pair of Storage Hubs, and two MA8000/EMA12000 Storage Subsystems. In Figure 2-12, the hardware components that constitute the second redundant FC-AL are shaded.
Figure 2-12. Two-node PDC/O5000 with two redundant Fibre Channel Arbitrated Loops

Maximum Distances Between Nodes and Shared Storage Subsystem Components

By using standard short-wave Fibre Channel cables with Gigabit Interface Converter-Shortwave (GBIC-SW) modules, the following maximum distances apply:
Each MA8000/EMA12000 Storage Subsystem's controller enclosure can be placed up to 500 meters from the Storage Hubs.
Each Storage Hub can be placed up to 500 meters from the host adapters in the cluster nodes.
Figure 2-13 illustrates these maximum cable distances for a redundant FC-AL.
Figure 2-13. Maximum distances between PDC/O5000 cluster nodes and shared storage subsystem components in a redundant FC-AL
I/O Data Paths in a Redundant FC-AL
Within a redundant FC-AL, an I/O path connection exists between each host adapter and all four array controller ports in each storage subsystem.
Host Adapter-to-Storage Hub I/O Data Paths
Figure 2-14 highlights the I/O data paths that run between the host adapters and the Storage Hubs. Each host adapter has its own I/O data path.
Figure 2-14. Host adapter-to-Storage Hub I/O data paths
Secure Path monitors the status of the components along each active path. If Secure Path detects the failure of a host adapter, Fibre Channel cable, or Storage Hub along an active path, it automatically transfers all I/O activity on that path to the defined backup path.
Storage Hub-to-Array Controller I/O Data Paths
Figure 2-15 highlights the I/O data paths that run between the Storage Hubs and the two dual-port HSG80 Array Controllers in each MA8000/EMA12000 Storage Subsystem. There is one I/O connection path to and from each array controller port.
Figure 2-15. Storage Hub-to-array controller I/O data paths
If any component along an active path fails, Secure Path detects the failure and automatically transfers all I/O activity to the defined backup path.
I/O Path Definitions for Redundant FC-ALs
In a redundant FC-AL, devices are accessed by the operating system using conventional SCSI addressing terminology. An I/O path consists of the complete physical interconnections from a given host (node) to a specific RAID storageset in a shared storage subsystem. Each path is identified by a unique four-part value that contains the port number (host adapter), the bus number, and the unit name's target ID and LUN. The unit name's target ID and LUN values use the format "Dxxyy," where xx is the target number (0-15) and yy is the LUN (0-7). A target number of 0 is always dropped from the unit number designation (for example, the unit number D0 is understood to be LUN 0 on target 0). A host (node) uses the unit name to specify the source or target for every I/O request it sends to an array controller. The unit name can identify a single physical disk drive unit or a storageset that contains several disk drives.
The port number (HBA) is assigned by the Windows 2000 Advanced Server operating system. Except for the LUN, the rest of the SCSI address is created within the host adapter miniport driver and is derived from the Arbitrated Loop Physical Address (ALPA) assigned to each of the four array controller ports in a shared storage subsystem. The miniport driver uses a fixed mapping scheme to translate ALPA assignments to SCSI bus and target ID values. The LUN number is derived from the unit number that has been assigned to the storageset using the StorageWorks Command Console (SWCC) or Command Line Interface (CLI) commands. Each node must also have a unique ALPA.
Each storageset created in an MA8000/EMA12000 Storage Subsystem must have a unique unit name. Since these storage subsystems present an identical address space from both of their array controllers, the only piece of address information that will be different across the I/O paths from a given node to a specific storageset is the port (HBA) number.
While cluster nodes use the unit number (Dxxyy) to identify and access a storageset or a single disk drive unit in the shared storage subsystems, the array controllers use a Port-Target-LUN (PTL) address to identify and access these resources in their storage subsystem. For the MA8000/EMA12000 Storage Subsystem, the PTL address contains the following information:
The SCSI port number (1-6) identifies the disk enclosure in which the target physical disk drive is located.
The target ID number (0-5 and 8-15) of the device identifies the physical disk drive.
The LUN of the device (for disk devices, the LUN is always 0).
I/O Path Configuration Examples for Redundant FC-ALs
Every I/O path connection between host adapters and array controller ports is active at the same time. An active port does not become inactive unless it fails, whereupon all I/O activity on the failed path is automatically switched over to the pre-defined backup I/O path.
Dual Redundancy Configuration Example
In a redundant FC-AL, a dual redundancy configuration is the minimum allowable configuration that provides redundancy along the I/O paths. This configuration provides two of each component along the I/O paths. These include at least two nodes, two host adapters in each node, and two Storage Hubs.
Figure 2-16 shows the correct method for cabling the I/O path components in a dual redundancy configuration. In this example, both Storage Hubs connect to port 1 on one array controller and to port 2 on the other array controller in each MA8000/EMA12000 Storage Subsystem. If one Storage Hub fails, the other Storage Hub still has access to port 1 on one array controller and port 2 on the other array controller in each storage subsystem. This ensures that the host adapters can still access the full range of LUNs.
Figure 2-16. Dual redundancy configuration for a redundant FC-AL
Quad Redundancy Configuration Example
In a redundant FC-AL, a quad redundancy configuration provides four host adapters in each of at least four nodes and four Storage Hubs.
Figure 2-17 shows an acceptable method for cabling the I/O path components in a quad redundancy configuration. In this example, the following connections are made:
Each of four host adapters in each node connects to the same-numbered Storage Hub (host adapter #1 to Storage Hub #1, host adapter #2 to Storage Hub #2, host adapter #3 to Storage Hub #3, host adapter #4 to Storage Hub #4).
The two odd-numbered Storage Hubs connect to port 1 of the array controllers. Storage Hub #1 connects to array controller A, port 1 in each storage subsystem. Storage Hub #3 connects to array controller B, port 1 in each storage subsystem.
The two even-numbered Storage Hubs connect to port 2 of the array controllers. Storage Hub #2 connects to array controller A, port 2 in each storage subsystem. Storage Hub #4 connects to array controller B, port 2 in each storage subsystem.
Figure 2-17. Quad redundancy configuration for a redundant FC-AL
Summary of I/O Path Failure and Failover Scenarios for Redundant FC-ALs
Table 2-4 identifies possible I/O path failure events for both dual redundancy and quad redundancy configurations in a redundant FC-AL and the failover response implemented by Secure Path for each failure.
Table 2-4  I/O Path Failure and Failover Scenarios for Redundant FC-ALs

Description of Failure: A single port on an array controller fails.
Failover Response: The I/O path to the defined backup array controller port takes over all I/O activity.

Description of Failure: One array controller in a storage subsystem fails.
Failover Response: The I/O paths to both ports on the other array controller take over all I/O activity.

Description of Failure: The Fibre Channel cable connection between an array controller and its Storage Hub is broken.
Failover Response: The I/O path to the defined backup array controller port takes over all I/O activity.

Description of Failure: A Storage Hub fails.
Failover Response: The I/O paths for another loop take over all I/O activity.

Description of Failure: The Fibre Channel cable connection between a host adapter and a Storage Hub is broken.
Failover Response: The I/O path from the host adapter in the node that is connected to the defined backup array controller port takes over all I/O activity.

Description of Failure: A host adapter fails.
Failover Response: The I/O path from the host adapter in the same node that is connected to the defined backup array controller port takes over all I/O activity.

Cluster Interconnect Requirements

The cluster interconnect is the data path over which all of the nodes in a PDC/O5000 cluster communicate. The nodes use the cluster interconnect data path to:
Communicate individual resource and overall cluster status.
Send and receive heartbeat signals.
Coordinate database locks through the Oracle Integrated Distributed Lock Manager.
NOTE: Several terms for the cluster interconnect are used throughout the industry, including private LAN, private interconnect, system area network (SAN), and private network. Throughout this guide, the term cluster interconnect is used.
Compaq Parallel Database Clusters like the PDC/O5000 that use Oracle8i Parallel Server support an Ethernet cluster interconnect.
In a PDC/O5000, the Ethernet cluster interconnect must be redundant. A redundant cluster interconnect uses redundant hardware to provide fault tolerance along the entire cluster interconnect path.

Ethernet Cluster Interconnect

IMPORTANT: The cluster management software for the Ethernet cluster interconnect
requires the use of TCP/IP. When configuring the Ethernet cluster interconnect, be sure to enable TCP/IP.
NOTE: Refer to the technical white paper Supported Ethernet Interconnects for Compaq Parallel Database Clusters Using Oracle Parallel Server (ECG062/0299) for detailed
information about configuring redundant Ethernet cluster interconnects. This document is available at
www.compaq.com/support/techpubs/whitepapers
Ethernet Cluster Interconnect Components
The following components are used in a redundant Ethernet cluster interconnect:
Two Ethernet adapters in each cluster node
Ethernet cables and switches or hubs
For two-node PDC/O5000 clusters, you can use two 100-Mbit/second Ethernet switches or hubs with cables to connect the servers.
For PDC/O5000 clusters with three or more nodes, you use two 100-Mbit/second Ethernet switches connected by Ethernet cables to a separate Ethernet adapter in each server.
NOTE: In a redundant Ethernet cluster configuration, one Ethernet crossover cable must be installed between the two Ethernet switches or hubs that are dedicated to the cluster interconnect.
Ethernet Cluster Interconnect Adapters
To implement the Ethernet cluster interconnect, each cluster node must be equipped with Ethernet adapters capable of 100-Mbit/second transfer rates. Some adapters may be capable of operating at both 10-Mbit/second and 100-Mbit/second; however, Ethernet adapters used for the cluster interconnect must run at 100-Mbit/second.
The Ethernet adapters must have passed Windows 2000 Advanced Server HCT certification.
For more information about installing an Ethernet cluster interconnect, see Chapter 5, "Installation and Configuration."
Ethernet Switch
IMPORTANT: The Ethernet switch or switches used with the Ethernet cluster
interconnect must be dedicated to the cluster interconnect. They cannot be connected to the client network (LAN) or to servers that are not part of the PDC/O5000 cluster.
When an Ethernet cluster interconnect is used in a cluster with three or more nodes, a 100-Mbit/second Ethernet switch is required for the cluster interconnect path. The 100-Mbit/second Ethernet switch handles the higher network loads essential to the uninterrupted operation of the cluster. An Ethernet hub cannot be used.
Ethernet Cluster Interconnect Diagrams
Figure 2-18 shows the redundant Ethernet cluster interconnect components used in a two-node PDC/O5000 cluster.
Figure 2-18. Redundant Ethernet cluster interconnect for a two-node PDC/O5000 cluster
These components include two dual-port Ethernet adapters in each cluster node. The top port on each adapter connects by Ethernet cable to one of two Ethernet switches or hubs provided for the cluster interconnect. The bottom port on each adapter connects by Ethernet cable to the client LAN for the cluster. A crossover cable is installed between the two Ethernet switches or hubs used in the Ethernet cluster interconnect.

Local Area Network

NOTE: For the PDC/O5000, the client LAN and the cluster interconnect must be treated as
separate networks. Do not use either network to handle the other network’s traffic.
Every client/server application requires a local area network, or LAN, over which client machines and servers communicate. In the case of a cluster, the hardware components of the client LAN are no different than in a standalone server configuration.
The software components used by network clients should have the ability to detect node failures and automatically reconnect the client to another cluster node. For example, Net8, Oracle Call Interface (OCI), and transaction processing monitors can be used to address this issue.
NOTE: For complete information on how to ensure client auto-reconnect in an Oracle8i Parallel Server environment, contact your Oracle representative.
Chapter 3
Cluster Software Components

Overview of the Cluster Software

The Compaq Parallel Database Cluster Model PDC/O5000 (referred to here as the PDC/O5000) combines software from several leading computer vendors. The integration of these components creates a stable cluster management environment in which the Oracle database can operate.
For the PDC/O5000, the cluster management software is a combination of Compaq operating system dependent modules (OSDs) and this Oracle software:
Oracle8i Enterprise Edition with the Oracle8i Parallel Server Option
NOTE: For information about currently-supported software revisions for the PDC/O5000, refer to the Compaq Parallel Database Cluster Model PDC/O5000 Certification Matrix for Windows 2000 at
www.compaq.com/solutions/enterprise/ha-pdc.html

Microsoft Windows 2000 Advanced Server

This version of the PDC/O5000 supports Microsoft Windows 2000 Advanced Server with Service Pack 1 or later.
NOTE: The PDC/O5000 does not work in conjunction with Microsoft Cluster Server. Do not install Microsoft Cluster Server on any of the cluster nodes.

Compaq Software

Compaq offers an extensive set of features and optional tools to support effective configuration and management of the PDC/O5000:
Compaq SmartStart™ and Support Software
Compaq System Configuration Utility
Compaq Insight Manager™
Compaq Insight Manager XE
Compaq Options ROMPaq™
Compaq StorageWorks Command Console
Compaq StorageWorks Secure Path for Windows 2000
Compaq operating system dependent modules (OSDs)

Compaq SmartStart and Support Software

SmartStart, which is located on the SmartStart and Support Software CD, is the best way to configure Windows 2000 Advanced Server on a PDC/O5000 cluster. SmartStart uses an automated step-by-step process to configure the operating system and load the system software.
The Compaq SmartStart and Support Software CD also contains device drivers and utilities that enable you to take advantage of specific capabilities offered on Compaq products. These drivers are provided for use with Compaq hardware only.
The PDC/O5000 requires version 4.9 of the SmartStart and Support Software CD. For information about SmartStart, refer to the Compaq Server Setup and Management pack.

Compaq System Configuration Utility

The SmartStart and Support Software CD also contains the Compaq System Configuration Utility. This utility is the primary means to configure hardware devices within your servers, such as I/O addresses, boot order of disk controllers, and so on.
For information about the System Configuration Utility, see the Compaq Server Setup and Management pack.

Compaq Insight Manager

Compaq Insight Manager, loaded from the Compaq Management CD, is a software utility used to collect information about the servers in the cluster. Compaq Insight Manager performs these functions:
Monitors server fault conditions and status
Forwards server alert fault conditions
Remotely controls servers
The Integrated Management Log is used to collect and feed data to Compaq Insight Manager. This log is used with the Compaq Integrated Management Display (IMD), the optional Remote Insight controller, and SmartStart.
In Compaq servers, each hardware subsystem, such as non-shared disk storage, system memory, and system processor, has a robust set of management capabilities. Compaq Full Spectrum Fault Management notifies the end user of impending fault conditions.
For information about Compaq Insight Manager, refer to the documentation you received with your Compaq ProLiant server.

Compaq Insight Manager XE

Compaq Insight Manager XE is a Web-based management system. It can be used in conjunction with Compaq Insight Manager agents as well as its own Web-enabled agents. This browser-based utility provides increased flexibility and efficiency for the administrator.
Compaq Insight Manager XE is an optional CD available upon request from the Compaq System Management website at
www.compaq.com/sysmanage

Compaq Options ROMPaq

The Compaq Options ROMPaq diskettes allow a user to upgrade the ROM Firmware images for Compaq System product options, such as array controllers, disk drives, and tape drives used for non-shared storage.

Compaq StorageWorks Command Console

The Compaq StorageWorks Command Console (SWCC) is a graphical user interface (GUI) used to create, configure, and manage MA8000/EMA12000 Storage Subsystem resources in a PDC/O5000 cluster, including RAID implementation, disk mirroring, and disk partitioning.
The SWCC provides a user-friendly method of executing the Command Line Interface (CLI) commands for the HSG80 Array Controllers. While the CLI window provides very detailed control over the storage subsystem, the SWCC replicates most of its functions in graphic form.
For details about using SWCC, refer to the documentation provided with the HSG80 Array Controller.

Compaq StorageWorks Secure Path for Windows 2000

Compaq StorageWorks Secure Path for Windows 2000 (Secure Path) must be installed on each server (node) in the PDC/O5000. Secure Path monitors all I/O paths to the storagesets in the shared storage subsystems. If any component along an active I/O path fails, Secure Path detects this failure and automatically reroutes I/O activity through the defined backup path. Failovers are transparent and non-disruptive to applications.
Secure Path Manager is the client application used to manage multiple path StorageWorks RAID array configurations. Secure Path Manager displays a graphical presentation of the current multiple I/O path environment and indicates the location and state of all configured storage units on each path. To facilitate load balancing, Secure Path Manager provides the capability to move storagesets from their configured paths to other paths during normal cluster operation. Secure Path Manager can be run locally at the cluster nodes or remotely at a management station.
You can also install the optional Compaq StorageWorks Large LUN utility with the Secure Path software. This utility allows cluster nodes to access up to 64 LUNs per target (array controller port).

Compaq Operating System Dependent Modules

Compaq supplies low-level services, called operating system dependent modules (OSDs), which are required by Oracle8i Parallel Server. The OSD layer monitors critical clustering hardware components, constantly relaying cluster state information to Oracle8i Parallel Server. Oracle8i Parallel Server monitors this information and takes pertinent action as needed.
For example, the OSD layer monitors the cluster interconnect of each node and determines whether a node has stopped responding to the cluster heartbeat. If the node still does not respond, the OSD layer declares it unavailable, evicts it from the cluster, and informs Oracle8i Parallel Server. Oracle8i Parallel Server recovers the part of the database affected by that node and reconfigures the cluster with the remaining nodes.
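The following Python sketch is a conceptual illustration of that monitor-and-evict sequence; it is not the Compaq OSD code, and the timeout value and function names are hypothetical.

    import time

    HEARTBEAT_TIMEOUT_SECONDS = 30   # hypothetical value, not the actual OSD setting

    def check_node(node, last_heartbeat, cluster):
        """Illustrate the eviction flow: declare a silent node unavailable,
        evict it from the membership, and hand off to Oracle8i Parallel Server."""
        if time.time() - last_heartbeat <= HEARTBEAT_TIMEOUT_SECONDS:
            return "node responding"
        cluster.remove(node)                      # evict the unresponsive node
        notify_parallel_server(node, cluster)     # hypothetical hand-off for recovery
        return "node evicted"

    def notify_parallel_server(node, cluster):
        # Oracle8i Parallel Server would recover the failed node's portion of
        # the database and reconfigure with the remaining members.
        print("recover instance on %s; surviving members: %s" % (node, sorted(cluster)))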
OSDs for Oracle8i Parallel Server
For a detailed description of how the OSD layer interacts with Oracle8i Parallel Server, refer to the Oracle8i Parallel Server Setup and Configuration Guide provided with the Oracle8i software.
The OSD software is found on the Compaq Parallel Database Cluster Clustering Software for Oracle8i on Microsoft Windows 2000 CD. This CD is included in the cluster kit for the PDC/O5000.

Oracle Software

The PDC/O5000 supports Oracle8i software. If you are using a release other than Oracle8i Release 8.1.7, confirm that the release has been certified for the PDC/O5000 on the Compaq website at
www.compaq.com/solutions/enterprise/ha-pdc.html

Oracle8i Server Enterprise Edition

The Oracle8i Server Enterprise Edition provides the following:
Oracle8i Server
Oracle8i Parallel Server Option
Oracle8i Enterprise Manager

Oracle8i Server

Oracle8i Server is the database application software and must be installed on each node in the PDC/O5000.
Refer to the documentation provided with the Oracle8i Server software for detailed information.

Oracle8i Parallel Server Option

Oracle8i Parallel Server Option is the key component in the Oracle8i clustering architecture. Oracle8i Parallel Server allows the database server to divide its workload among the physical cluster nodes. This is accomplished by running a distinct instance of Oracle8i Server on each node in the PDC/O5000.
Oracle8i Parallel Server manages the interaction between these instances. Through its Integrated Distributed Lock Manager, Oracle8i Parallel Server manages the ownership of database records that are requested by multiple instances.
At a lower level, Oracle8i Parallel Server monitors cluster membership. It interacts with the OSDs, exchanging information about the state of each cluster node.
For additional information, refer to:
Oracle8i Parallel Server Setup and Configuration Guide
Oracle8i Parallel Server Concepts
Oracle8i Parallel Server Administration, Deployment, and Performance

Oracle8i Enterprise Manager

Oracle8i Enterprise Manager is responsible for monitoring the state of both the database entities and the cluster members. It primarily manages the software components of the cluster. Hardware components are managed with Compaq Insight Manager.
Do not install Oracle8i Enterprise Manager on any of the PDC/O5000 nodes. To conserve cluster resources (memory, processes, etc.), it must be installed on a separate server that is running Oracle8i and has network access to the cluster nodes. Before installing Oracle8i Enterprise Manager, read its documentation to ensure it is installed and configured correctly for an Oracle8i Parallel Server environment.

Oracle8i Certification

To ensure that Oracle8i Parallel Server is used in a compatible hardware environment, Oracle has established a certification process, which is a series of test suites designed to stress an Oracle8i Parallel Server implementation and verify stability and full functionality.
All hardware providers who choose to deliver platforms for use with Oracle8i Parallel Server must demonstrate the successful completion of the Oracle8i Parallel Server for Windows 2000 Certification. Neither Oracle nor Compaq will support any implementation of Oracle8i Parallel Server that does not strictly conform to the configurations certified with this process. For a complete list of certified Compaq servers, see the Compaq Parallel Database Cluster Model PDC/O5000 Certification Matrix for Windows 2000 at
www.compaq.com/solutions/enterprise/ha-pdc.html

Application Failover and Reconnection Software

When a network client computer operates in a clustered environment, it must be more resilient than when operating with a stand-alone server. Because a client can access the database through any of the cluster nodes, the failure of the connection to a node does not have to prevent the client from reattaching to the cluster and continuing its work.
Oracle clustering software provides the capability to allow the automatic reconnection of a client and application failover in the event of a node failure.
To implement this application and connection failover, a software interface between the Oracle software and the client must be written.
Such a software interface would be responsible for detecting when the client’s cluster node is no longer available and then connecting the client to one of the remaining, operational cluster nodes.
NOTE: For complete information on how to ensure client auto-reconnect in an Oracle Parallel Server environment, contact your Oracle representative.
Chapter 4
Cluster Planning
Before connecting any cables or powering on any hardware on your Compaq Parallel Database Cluster Model PDC/O5000 (referred to here as the PDC/O5000), it is important that you understand how all the various cluster components fit together to meet your operational requirements. The major topics discussed in this chapter are:
Site planning
Capacity planning for cluster hardware
Planning cluster configurations for redundant Fibre Channel Fabrics
Planning cluster configurations for redundant Fibre Channel Arbitrated Loops
RAID planning
Planning the grouping of physical disk storage space
Disk drive planning
Network planning

Site Planning

You must carefully select and prepare the site to ensure a smooth installation and a safe and efficient work environment. To select and prepare a location for your cluster, consider the following:
The path from the receiving dock to the installation area
Availability of appropriate equipment and qualified personnel
Space for unpacking, installing, and servicing the computer equipment
Sufficient floor strength for the computer equipment
Cabling requirements, including the placement of network and Fibre Channel cables within one room (under the subfloor, on the floor, or overhead) and possibly between rooms
Client LAN resource planning, including the number of hubs or switches and cables to connect to the cluster nodes
Environmental conditions, including temperature, humidity, and air quality
Power, including voltage, current, grounding, noise, outlet type, and equipment proximity
IMPORTANT: Carefully review the power requirements for your cluster components to identify special electrical supply needs in advance.

Capacity Planning for Cluster Hardware

Compaq ProLiant Servers

The number of servers you install in a PDC/O5000 should take into account the levels of availability and scalability your site requires. Start by planning your cluster so that the failure of a single node will not adversely impact cluster operations. For example, when running a two-node cluster, the failure of one node leaves the one remaining node to service all clients. This could result in an unacceptable level of performance.
Within each server, the appropriate number and speed of the CPUs and memory size are all determined by several factors. These include the types of database applications being used and the number of clients connecting to the servers.
NOTE: Certain restrictions apply to the server models and server configurations that are supported by the Compaq Parallel Database Cluster. For a current list of PDC/O5000-certified servers and details on supported configurations, refer to the Compaq Parallel Database Cluster Model PDC/O5000 Certification Matrix for Windows 2000. This document is available on the Compaq website at
www.compaq.com/solutions/enterprise/ha-pdc.html

Planning Shared Storage Components for Redundant Fibre Channel Fabrics

Several key components make up the shared storage subsystem for the PDC/O5000. Each redundant Fibre Channel Fabric in a PDC/O5000 uses the following hardware components:
Two or more KGPSA-BC PCI-to-Optical Fibre Channel Host Adapters (KGPSA-BC Host Adapters) or KGPSA-CB PCI-to-Optical Fibre Channel Host Adapters (KGPSA-CB Host Adapters) in each node.
Two or more Compaq StorageWorks Fibre Channel SAN Switches (Fibre Channel SAN Switches), each of which connects all of the cluster nodes to both array controllers in each storage subsystem.
Compaq StorageWorks Modular Array 8000 Fibre Channel Storage Subsystems or the Compaq StorageWorks Enterprise Modular Array 12000 Fibre Channel Storage Subsystems (MA8000/EMA12000 Storage Subsystems). Each storage subsystem holds up to 72 disk drives.
Two dual-port Compaq StorageWorks HSG80 Array Controllers (HSG80 Array Controllers) installed in the controller enclosure of each MA8000/EMA12000 Storage Subsystem.
NOTE: For more information about redundant Fibre Channel Fabrics in a PDC/O5000 cluster, see Chapter 2, “Cluster Architecture.”
The Fibre Channel SAN Switch is available in two models:
Fibre Channel SAN Switch 8 (8 ports)
Fibre Channel SAN Switch 16 (16 ports)
To determine which Fibre Channel SAN Switch model is appropriate for a redundant Fibre Channel Fabric in your cluster, you need to know the following:
Whether you are implementing dual redundancy or quad redundancy in the redundant Fibre Channel Fabric. Dual redundancy requires two host adapters per node and two Fibre Channel SAN Switches. Quad redundancy requires four host adapters per node and four Fibre Channel SAN Switches.
Total number of host adapters in all nodes
Total number of array controller ports in all of the storage subsystems in the redundant Fibre Channel Fabric
You will need enough switch ports to accommodate the connections from the host adapters to the Fibre Channel SAN Switches and from the Fibre Channel SAN Switches to the array controllers.
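As a rough sizing aid, the port arithmetic can be written out as a short calculation. The following Python sketch assumes a dual redundancy Fibre Channel Fabric; the node, adapter, and subsystem counts are hypothetical examples and should be replaced with your own cluster plan.

# Rough switch-port sizing for one redundant Fibre Channel Fabric (dual redundancy)
nodes = 4                             # hypothetical number of cluster nodes
adapters_per_node = 2                 # dual redundancy: two host adapters per node
switches = 2                          # dual redundancy: two Fibre Channel SAN Switches
storage_subsystems = 2                # hypothetical MA8000/EMA12000 Storage Subsystems
controller_ports_per_subsystem = 4    # two dual-port HSG80 Array Controllers per subsystem

# Host adapter connections are split evenly across the two switches.
adapter_ports_per_switch = (nodes * adapters_per_node) // switches

# Each switch connects to half of the controller ports in every subsystem
# (port 1 on one controller and port 2 on the other).
controller_ports_per_switch = storage_subsystems * (controller_ports_per_subsystem // switches)

ports_needed_per_switch = adapter_ports_per_switch + controller_ports_per_switch
print(ports_needed_per_switch)        # 4 + 4 = 8, which would fully populate a Fibre Channel SAN Switch 8

In this hypothetical plan, an 8-port switch would have no free ports for growth, so a Fibre Channel SAN Switch 16 might be the more comfortable choice. The same arithmetic applies to Storage Hub sizing for a redundant FC-AL.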
The number of storage subsystems and shared disk drives used in a redundant Fibre Channel Fabric PDC/O5000 depends on the amount of shared storage space required by the database, the hardware RAID levels used on the shared storage disks, and the number and storage capacity of disk drives installed in the enclosures. Refer to “Raw Data Storage and Database Size” in this chapter for more details.
NOTE: For improved I/O performance and cluster integrity, as you increase the number of nodes in a PDC/O5000, you should also increase the aggregate bandwidth of the shared storage subsystem by adding more or higher-capacity disk drives.

Planning Shared Storage Components for Redundant Fibre Channel Arbitrated Loops

Several key components make up the shared storage subsystem for the PDC/O5000. Each redundant Fibre Channel Arbitrated Loop (FC-AL) in a PDC/O5000 uses the following hardware components:
Two or more host adapters in each node.
Two or more Compaq StorageWorks Fibre Channel Storage Hubs (Storage Hubs), each of which connects all of the cluster nodes to both array controllers in each storage subsystem.
MA8000/EMA12000 Storage Subsystems, each of which holds up to 72 disk drives.
Two dual-port HSG80 Array Controllers installed in each storage subsystem’s controller enclosure.
NOTE: For more information about redundant Fibre Channel Arbitrated Loops (FC-ALs) in a PDC/O5000, see Chapter 2, Cluster Architecture.
The Storage Hubs are available in two models:
Storage Hub 7 (7 ports)
Storage Hub 12 (12 ports)
To determine which Storage Hub model is appropriate for a redundant FC-AL in your cluster, you need to know the following:
Whether you are implementing dual redundancy or quad redundancy in the redundant FC-AL. Dual redundancy requires two host adapters per node and two Storage Hubs. Quad redundancy requires four host adapters per node and four Storage Hubs.
Total number of host adapters in all nodes
Total number of array controller ports in all of the storage subsystems in the redundant FC-AL
You will need enough Storage Hub ports to accommodate the connections from the host adapters to the Storage Hubs and from the Storage Hubs to the array controllers.
The number of storage subsystems and shared disk drives used in a PDC/O5000 redundant FC-AL depends on the amount of shared storage space required by the database, the hardware RAID levels used on the shared storage disks, and the number and storage capacity of disk drives installed in the enclosures. Refer to “Raw Data Storage and Database Size” in this chapter for more details.
NOTE: For improved I/O performance and cluster integrity, as you increase the number of nodes in a PDC/O5000, you should also increase the aggregate bandwidth of the shared storage subsystem by adding more or higher-capacity disk drives.

Planning Cluster Interconnect and Client LAN Components

PDC/O5000 clusters running Oracle8i Parallel Server use a redundant Ethernet cluster interconnect. A redundant cluster interconnect is required because it provides fault tolerance along the cluster interconnect paths.
Planning an Ethernet Cluster Interconnect
NOTE: Refer to the technical white paper Supported Ethernet Interconnects for Compaq Parallel Database Clusters Using Oracle Parallel Server for detailed information about configuring redundant Ethernet cluster interconnects. This document is available at
www.compaq.com/support/techpubs/whitepapers
Before installing the Ethernet cluster interconnect in a PDC/O5000 cluster, review these planning considerations:
Whether to use two Ethernet switches or two Ethernet hubs for the cluster interconnect. If your cluster will contain or grow to three or more nodes, you must use two Ethernet switches.
Whether to use two dual-port Ethernet adapters in each node that will connect to both the cluster interconnect and the client LAN, or to use separate single-port adapters for the Ethernet cluster interconnect and the client LAN.
Planning the Client LAN
Every client/server application requires a local area network (LAN) over which client machines and servers communicate. In the case of a cluster, the hardware components of the client LAN are no different than in a stand-alone server configuration.
In keeping with the redundant architecture of the cluster interconnect, you may choose to install a redundant client LAN, with redundant Ethernet adapters and redundant Ethernet switches or hubs.

Planning Cluster Configurations for Redundant Fibre Channel Fabrics

A redundant Fibre Channel Fabric consists of two or more Fibre Channel SAN Switches installed between the host adapters in the cluster nodes and the array controllers in the shared storage subsystems. Each redundant Fibre Channel Fabric in a PDC/O5000 provides I/O path connections between each cluster node and the two ports on both array controllers in each MA8000/EMA12000 Storage Subsystem. Thus, each node can issue I/O requests to every storageset in the storage subsystems contained in that redundant Fibre Channel Fabric.
A redundant Fibre Channel Fabric increases a PDC/O5000 cluster’s availability by providing redundant components along the I/O paths between the cluster nodes and the storage subsystems. If a particular component fails, data communication can continue over an I/O path containing functional components.
To achieve redundancy in Fibre Channel Fabrics, you can implement either a dual redundancy configuration or a quad redundancy configuration.

Planning Dual Redundancy Configurations

A dual redundancy configuration is the minimum allowable configuration that provides redundancy along the I/O paths between the cluster nodes and the storage subsystems. This configuration provides two of each component along the I/O paths. These components include:
At least two cluster nodes
Two host adapters in each node
Two Fibre Channel SAN Switches
Two dual-port array controllers in each storage subsystem
Fibre Channel cables and GBIC-SW modules for the connections between the host adapters and the Fibre Channel SAN Switches, and between the Fibre Channel SAN Switches and the array controllers
If a redundant component on one of the I/O paths fails, the cluster nodes can continue to access shared storage through the path containing the surviving component.
Figure 4-1 shows the correct method for cabling the I/O path components for dual redundancy in a redundant Fibre Channel Fabric. In Figure 4-1, both Fibre Channel SAN Switches connect to port 1 on one array controller and to port 2 on the other array controller in each storage subsystem. If one Fibre Channel SAN Switch fails, the other switch still has access to port 1 on one array controller and port 2 on the other array controller in each storage subsystem. This configuration ensures that the host adapters can still access the full range of LUNs.
Figure 4-1. Dual redundancy configuration for a redundant Fibre Channel Fabric
See Chapter 2, Cluster Architecture, for more details on how to configure Fibre Channel Fabrics for dual redundancy.

Planning Quad Redundancy Configurations

For the highest possible level of I/O path redundancy, you can configure your cluster for quad redundancy. This configuration provides four host adapters in each of at least four nodes and four Fibre Channel SAN Switches for each redundant Fibre Channel Fabric.
The recommended quad redundancy configuration includes these I/O path components:
At least four cluster nodes
Four host adapters in each node
Four Fibre Channel SAN Switches in each redundant Fibre Channel Fabric
Two dual-port array controllers in each storage subsystem
Fibre Channel cables and GBIC-SW modules for the connections between the nodes and the Fibre Channel SAN Switches, and between the Fibre Channel SAN Switches and the storage subsystems
In a quad redundancy configuration, both ports on both array controllers in each storage subsystem access the same storagesets. Thus, up to three of the four paths between the controllers and the shared data can fail and the cluster nodes can still access the shared data. In addition, quad redundancy provides continued access to shared storage in the event of multiple component failures in the I/O paths between the nodes and the array controllers. For example, up to three host adapters in a node can fail, or up to three Fibre Channel SAN Switches can fail, and the nodes can still access shared storage over a path containing functioning components.
Figure 4-2 shows an example of a quad redundancy configuration in a redundant Fibre Channel Fabric. In this example, the following connections are made:
Each of four host adapters in each node connects to the same-numbered Fibre Channel SAN Switch (host adapter 1 to Fibre Channel SAN Switch #1, host adapter 2 to Fibre Channel SAN Switch #2, host adapter 3 to Fibre Channel SAN Switch #3, host adapter 4 to Fibre Channel SAN Switch #4).
The two odd-numbered Fibre Channel SAN Switches connect to port 1 of the array controllers. Fibre Channel SAN Switch #1 connects to array controller A, port 1 in each storage subsystem. Fibre Channel SAN Switch #3 connects to array controller B, port 1 in each storage subsystem.
The two even-numbered Fibre Channel SAN Switches connect to port 2 of the array controllers. Fibre Channel SAN Switch #2 connects to array controller A, port 2 in each storage subsystem. Fibre Channel SAN Switch #4 connects to array controller B, port 2 in each storage subsystem.
Figure 4-2. Quad redundancy configuration for a redundant Fibre Channel Fabric
A quad redundancy configuration provides greater component availability than a dual redundancy configuration, but requires a greater investment in hardware.
See Chapter 2, Cluster Architecture, for more details on implementing a quad redundancy configuration in redundant Fibre Channel Fabrics.

Planning Cluster Configurations for Redundant Fibre Channel Arbitrated Loops

A redundant Fibre Channel Arbitrated Loop (FC-AL) consists of two or more Storage Hubs installed between the host adapters in the cluster nodes and the array controllers in the shared storage subsystems. Each redundant FC-AL in a PDC/O5000 provides I/O path connections between each cluster node and the two ports on both array controllers in each MA8000/EMA12000 Storage Subsystem. Thus, each node can issue I/O requests to every storageset in the storage subsystems contained in that redundant FC-AL.
A redundant FC-AL increases a PDC/O5000 cluster’s availability by providing redundant components along the I/O paths between the cluster nodes and the storage subsystems. If a particular component fails, data communication can continue over an I/O path containing functional components.
To achieve redundancy in Fibre Channel Arbitrated Loops, you can implement either a dual redundancy configuration or a quad redundancy configuration.

Planning Dual Redundancy Configurations

A dual redundancy configuration is the minimum allowable configuration that provides redundancy along the I/O paths between the cluster nodes and the storage subsystems. This configuration provides two of each component along the I/O paths. These components include:
At least two cluster nodes
Two host adapters in each node
Two Storage Hubs
Two dual-port array controllers in each storage subsystem
Cables and GBIC-SW modules for the connections between the host adapters and the Storage Hubs, and between the Storage Hubs and the array controllers
If a redundant component along one of the I/O paths fails, the cluster nodes continue to access shared storage through the path containing the surviving component.
Figure 4-3 shows the correct method for cabling the I/O path components for dual redundancy in a redundant FC-AL. In Figure 4-3, both Storage Hubs connect to port 1 on one array controller and to port 2 on the other array controller in each storage subsystem. If one Storage Hub fails, the other Storage Hub still has access to port 1 on one array controller and port 2 on the other array controller in each storage subsystem. This configuration ensures that the host adapters can still access the full range of LUNs.
Figure 4-3. Dual redundancy configuration for a redundant FC-AL
See Chapter 2, Cluster Architecture, for more details on how to configure redundant FC-ALs for dual redundancy.

Planning Quad Redundancy Configurations

For the highest possible level of I/O path redundancy, you can configure your cluster for quad redundancy. This configuration provides four host adapters in each of at least four nodes and four Storage Hubs for each redundant FC-AL.
The recommended quad redundancy configuration includes these I/O path components:
At least four cluster nodes
Four host adapters in each node
Four Storage Hubs in each redundant FC-AL
Two dual-port array controllers in each storage subsystem
Fibre Channel cables and GBIC-SW modules for the connections between the nodes and the Storage Hubs, and between the Storage Hubs and the storage subsystems
In a quad redundancy configuration, both ports on both array controllers in each storage subsystem access the same storagesets. Thus, up to three of the four paths between the controllers and the shared data can fail and the cluster nodes can still access the shared data. In addition, quad redundancy provides continued access to shared storage in the event of multiple component failures in the I/O paths between the nodes and the array controllers. For example, up to three host adapters in a node can fail, or up to three Storage Hubs can fail, and the nodes can still access shared storage over an I/O path containing functional components.
Figure 4-4 shows an example of a quad redundancy configuration in a redundant FC-AL. In this example, the following connections are made:
Each of four host adapters in each node connects to the same-numbered Storage Hub (host adapter 1 to Storage Hub #1, host adapter 2 to Storage Hub #2, host adapter 3 to Storage Hub #3, host adapter 4 to Storage Hub #4).
The two odd-numbered Storage Hubs connect to port 1 of the array controllers. Storage Hub #1 connects to array controller A, port 1 in each storage subsystem. Storage Hub #3 connects to array controller B, port 1 in each storage subsystem.
The two even-numbered Storage Hubs connect to port 2 of the array controllers. Storage Hub #2 connects to array controller A, port 2 in each storage subsystem. Storage Hub #4 connects to array controller B, port 2 in each storage subsystem.
Figure 4-4. Quad redundancy configuration for a redundant FC-AL
Quad redundancy provides an even higher level of availability than dual redundancy. However, it requires a greater investment in hardware.
See Chapter 2, Cluster Architecture, for more details on how to configure redundant FC-ALs for quad redundancy.

RAID Planning for the MA8000/EMA12000 Storage Subsystem

Storage subsystem performance is one of the most important aspects of tuning database cluster servers for optimal performance. Efforts to plan, configure, and tune a PDC/O5000 cluster should focus on getting the most out of each disk drive and having an appropriate number of shared storage drives in the cluster. When properly configured, the I/O subsystem should not be the limiting factor in overall cluster performance.
RAID technology provides cluster servers with more consistent performance, higher levels of fault tolerance, and easier fault recovery than non-RAID systems. RAID uses redundant information stored on different disks to ensure that the cluster can survive the loss of any disk in the array without affecting the availability of data to users.
RAID also uses the technique of striping, which involves partitioning each drive’s storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.
In this cluster, each node is connected to shared storage disk drive arrays that are housed in MA8000/EMA12000 Storage Subsystem disk enclosures. When planning the amount of shared storage for your cluster, you must consider the following:
The number of shared storage subsystems (MA8000/EMA12000 Storage Subsystems) you intend to install in the PDC/O5000 cluster. You install dedicated shared storage subsystems in each redundant FC-AL or redundant Fibre Channel Fabric in your cluster. You cannot share storage subsystems between redundant FC-ALs or between redundant Fibre Channel Fabrics. The greater the number of redundant FC-ALs or redundant Fibre Channel Fabrics present, the more storage subsystems you can install in the cluster.
The number of redundant FC-ALs or redundant Fibre Channel Fabrics allowed in a cluster, in turn, depends upon the maximum number of host adapters that can be installed in the ProLiant server model you will be using. Refer to the server documentation for this information.
The appropriate number of shared storage subsystems in a cluster is determined by the performance requirements of your cluster.
Refer to “Planning Shared Storage Components for Redundant Fibre Channel Fabrics” and “Planning Shared Storage Components for Redundant Fibre Channel Arbitrated Loops” in this chapter for more information.
The PDC/O5000 implements RAID at the hardware level, which is faster than software RAID. When you implement RAID on shared storage arrays, you use the hardware RAID to perform such functions as making copies of the data or calculating checksums. To implement the RAID sets, use the Compaq StorageWorks Command Console (SWCC) or the Command Line Interface (CLI) window for the HSG80 Array Controllers.
NOTE: Do not use the software RAID offered by the operating system to configure your shared storage disks.

Supported RAID Levels

RAID provides several fault-tolerant options to protect your cluster’s shared data. However, each RAID level offers a different mix of performance, reliability, and cost.
The MA8000/EMA12000 Storage Subsystem and HSG80 Array Controllers used with the PDC/O5000 support these RAID levels:
RAID 1
RAID 0+1
RAID 3/5
RAID 5
For RAID level definitions and information about configuring hardware RAID, refer to the following:
Refer to the information about RAID configuration contained in the StorageWorks documentation provided with the HSG80 Array Controller.
Refer to the Compaq white paper Configuring Compaq RAID Technology for Database Servers, #ECG 011/0598, available at the Compaq website at
www.compaq.com
Refer to the various white papers on Oracle8i, which are available at the Compaq ActiveAnswers™ website at
www.compaq.com/activeanswers/

Raw Data Storage and Database Size

Raw data storage is the amount of storage available before any RAID levels have been configured. It is called raw storage because no RAID overhead has yet been deducted from it. Because RAID volumes require some overhead, the maximum size of a database stored in a RAID system will always be less than the amount of raw data storage available, except for RAID 0, where no storage overhead is required.
To calculate the amount of raw data storage in a cluster, determine the total amount of shared storage space that will be available to the cluster. To do this, you need to know the following:
The number of storage subsystems in the cluster
The number and storage capacities of the physical disk drives installed in each storage subsystem
Add together the planned storage capacity contained in all storage subsystems to calculate the total amount of raw data storage in the PDC/O5000. The maximum amount of raw data storage in an MA8000/EMA12000 Storage Subsystem depends on the type of disk drives you install in the subsystem. For example, an MA8000/EMA12000 Storage Subsystem with six disk enclosures and twelve 9.1-GB Ultra2 disk drives per enclosure provides a maximum storage capacity of 0.65 TB. Using 18.2-GB Ultra3 disk drives provides a maximum storage capacity of 1.3 TB.
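The raw storage arithmetic in the example above can be checked with a short calculation. The Python sketch below simply multiplies enclosures by drives by drive capacity; the counts mirror the example in the text.

# Raw data storage for one fully populated MA8000/EMA12000 Storage Subsystem
enclosures = 6               # disk enclosures in the storage subsystem
drives_per_enclosure = 12    # disk drives per enclosure
drive_capacity_gb = 9.1      # 9.1-GB Ultra2 drives; use 18.2 for Ultra3 drives

raw_storage_gb = enclosures * drives_per_enclosure * drive_capacity_gb
print(raw_storage_gb)        # 655.2 GB, roughly the 0.65 TB cited above (1310.4 GB, or 1.3 TB, with 18.2-GB drives)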
The amount of shared disk space required for a given database size is affected by the RAID levels you select and the overhead required for indexes, I/O buffers, and logs. Consult your Oracle representative for further details.

Selecting the Appropriate RAID Level

Many factors will affect the RAID levels you select for your cluster database. These include the specific availability, performance, reliability, and recovery capabilities required from the database. Each cluster must be evaluated individually by qualified personnel.
The following general guidelines apply to RAID selection for a PDC/O5000 cluster using Oracle8i Parallel Server:
Oracle recommends that some form of disk fault tolerance be implemented in the cluster.
To ease the difficulty of managing dynamic space allocation in an OPS raw volume environment, Oracle recommends creating spare raw volumes that can be used to dynamically extend tablespaces when the existing datafiles approach capacity. The total size of these spare raw volumes should represent from 10 to 30 percent of the total database size. To allow for effective load balancing, the spares should be spread across a number of disks and controllers. The database administrator should decide, on a case-by-case basis, which spare volume to use, based on which volume would have the least impact on scalability (for both speedup and scaleup). A simple sizing sketch follows this list.
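The following Python fragment is a minimal illustration of the 10 to 30 percent guideline; the database size is a hypothetical example.

# Hypothetical sizing of spare raw volumes using the 10 to 30 percent guideline above
database_size_gb = 500                 # example total database size; substitute your own
low_gb = 0.10 * database_size_gb       # 50 GB of spare raw volumes at the low end
high_gb = 0.30 * database_size_gb      # 150 GB of spare raw volumes at the high end
print(f"Plan roughly {low_gb:.0f} to {high_gb:.0f} GB of spare raw volumes, spread across disks and controllers")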

Planning the Grouping of Physical Disk Storage Space

Figure 4-5 shows an example of how the physical storage space for one 12-drive array in an MA8000/EMA12000 Storage Subsystem disk enclosure could be grouped for an Oracle8i Parallel Server database.
The figure illustrates the grouping steps for a 12-drive shared disk storage array: create a RAIDset with the CLI or SWCC, create a RAID logical drive with the CLI or SWCC, create one extended partition with Disk Management, and then create logical partitions with Disk Management.
Figure 4-5. MA8000/EMA12000 Storage Subsystem disk grouping for a PDC/O5000 cluster
Each MA8000/EMA12000 Storage Subsystem contains up to six disk enclosures. You are advised to configure all the drives in each enclosure as a single, separate logical drive. As Figure 4-5 shows, this requires that you create a RAIDset, create one logical drive, create one extended partition, and then create logical partitions. For detailed information about these procedures, refer to “Configuring a Storageset” and “Creating Partitions” in Chapter 5, “Installation and Configuration.”
You use either the CLI window for the HSG80 Array Controllers or the Compaq StorageWorks Command Console (SWCC) to create, configure, and partition the logical disks. While the CLI window provides very detailed control over the MA8000/EMA12000 Storage Subsystem, the SWCC replicates most of the CLI window functions in a graphic form and provides a more user-friendly tool for executing CLI commands.
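As an illustration only, the following CLI fragment sketches the general shape of the commands used to bind six drives from one enclosure into a RAIDset and present it as a logical unit. The container name, unit number, and disk names are hypothetical, and command syntax varies by ACS version, so follow the StorageWorks documentation and the procedures in Chapter 5 rather than this sketch.

ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000
INITIALIZE R1
ADD UNIT D1 R1
SHOW UNITS

The extended and logical partitions are then created on the resulting logical drive from Windows 2000 Disk Management, as shown in Figure 4-5.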
IMPORTANT: To perform partitioning, all required drivers must already be installed for each server. These include the Compaq KGPSA driver for the host adapters, the HSG80 Array Controller (HSZDISK) driver, and the Secure Path (RAIDISK.SYS) driver.

Disk Drive Planning

Nonshared Disk Drives

Nonshared disk drives, or local storage, operate the same way in a cluster as they do in a single-server environment. These drives can be in the server drive bays or in an external storage enclosure. As long as they are not accessible by multiple servers, they are considered nonshared.
Treat non-shared drives in a clustered environment as you would in a non-clustered environment. In most cases, some form of RAID is used to protect the drives and aid in restoration of a failed drive. Since the Oracle Parallel Server application files are stored on these drives, it is recommended that you use hardware RAID.
Hardware RAID is the recommended solution for RAID configuration because of its superior performance. For the PDC/O5000, hardware RAID for the shared storage subsystem can be implemented using StorageWorks Command Console (SWCC) or the Command Line Interface (CLI) window for the array controllers.

Shared Disk Drives

Shared disk drives are contained in disk enclosures that are accessible to each node in a cluster. It is recommended that you use hardware RAID levels 1, 0+1, 3/5, or 5 on your shared disk drives.
If a logical drive is configured with a RAID level that does not support fault tolerance (for example, RAID 0), the failure of any shared disk drive in that logical drive will disrupt service to all Oracle databases that depend on that logical drive. See “Selecting the Appropriate RAID Level” earlier in this chapter.
The array controllers provide hardware RAID, which is configured with RAID storageset commands issued from the CLI window or from the SWCC.
The array controller software (ACS) monitors the status of the shared disk drives. If it detects a drive failure, it places that drive in the failed set. From the StorageWorks CLI or the SWCC GUI, you can monitor the status of shared storage disk drives and failed sets. A failed drive is displayed in the SWCC GUI with an “X” marked over it. For further information, refer to “Replacing a Failed Drive in a Storage Subsystem” in Chapter 6, “Cluster Management.”

Network Planning

Windows 2000 Advanced Server Host Files for an Ethernet Cluster Interconnect

When an Ethernet cluster interconnect is installed between cluster nodes, the Compaq operating system dependent modules (OSDs) require a unique entry in the hosts and lmhosts files located at %SystemRoot%\system32\drivers\etc for each network port on each node.
Each node needs to be identified by the IP address assigned to the Ethernet adapter port used by the Ethernet cluster interconnect and by the IP address assigned to the Ethernet adapter port used by the client LAN. The suffix _san stands for system area network.
The following list identifies the format of the hosts and lmhosts files for a four-node PDC/O5000 cluster with an Ethernet cluster interconnect:
IP address node1
IP address node1_san
IP address node2
IP address node2_san
IP address node3
IP address node3_san
IP address node4
IP address node4_san
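As a concrete illustration, the entries on each node might resemble the following; the IP addresses are hypothetical and must be replaced with the addresses actually assigned to your client LAN and cluster interconnect adapter ports.

# Sample hosts and lmhosts entries for a four-node cluster (hypothetical addresses)
192.168.1.1    node1
10.0.0.1       node1_san
192.168.1.2    node2
10.0.0.2       node2_san
192.168.1.3    node3
10.0.0.3       node3_san
192.168.1.4    node4
10.0.0.4       node4_san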

Client LAN

Physically, the structure of the client network is no different than that used for a nonclustered configuration.
To ensure continued access to the database when a cluster node is evicted from the cluster, each network client should have physical network access to all of the cluster nodes.
Software used by the client to communicate to the database must be able to reconnect to another cluster node in the event of a node eviction. For example, clients connected to cluster node1 need the ability to automatically reconnect to another node if cluster node1 fails.
Chapter 5
Installation and Configuration

Reference Materials for Installation

The Compaq Parallel Database Cluster PDC/O5000 (referred to here as the PDC/O5000) is a combination of several individually available products. As you set up your cluster, have the following materials available during installation. You will find references to them throughout this chapter.
User guides for the clustered Compaq ProLiant servers
Installation posters for the clustered ProLiant servers
Installation guides for the cluster interconnect and client LAN interconnect adapters
Compaq SmartStart Installation Poster
Compaq SmartStart and Support Software CD
Microsoft Windows 2000 Advanced Server Administrator’s Guide
Microsoft Windows 2000 Advanced Server CD with Service Pack 1 or later
StorageWorks documentation provided with the MA8000/EMA12000 Storage Subsystem
MA8000/EMA12000 Solution Software CD
KGPSA Windows 2000 Device Driver and Download Utility diskette
HSG80 Solution for Windows 2000 CD
StorageWorks Secure Path Software CD
Compaq Parallel Database Cluster Clustering Software for Oracle8i on Microsoft Windows 2000 CD
Oracle8i Enterprise Edition CD
Oracle8i Parallel Server Setup and Configuration Guide
Other documentation for Oracle8i

Installation Overview

Installation and setup of your PDC/O5000 involves this sequence of tasks.
Installing the hardware, including:
- Cluster nodes
- Host adapters
- Cluster interconnect and client LAN adapters
- Ethernet switches or hubs
- Fibre Channel SAN Switches or Storage Hubs
- MA8000/EMA12000 Storage Subsystem controller and disk enclosure components, including power supplies, I/O modules, disk drives, array controllers, and cache memory modules
Installing and configuring operating system software, including:
- SmartStart 4.9 or later
- Windows 2000 Advanced Server with Service Pack 1 or later
Setting up and configuring an MA8000/EMA12000 Storage Subsystem, including:
- Designating a server as a maintenance terminal
- Configuring a storage subsystem for a redundant Fibre Channel Fabric
- Configuring a storage subsystem for a redundant Fibre Channel Arbitrated Loop (FC-AL)
- Verifying array controller properties
- Configuring a storageset
Installing and configuring Secure Path for Windows 2000, including:
- Installing host adapter drivers
- Installing the Fibre Channel Software Setup (FCSS) utility
- Installing the Secure Path Driver, Secure Path Agent, and Secure Path Manager
- Installing the optional Compaq StorageWorks Large LUN utility
- Specifying preferred_path units
- Creating partitions
Installing and configuring the Compaq operating system dependent modules (OSDs), including:
- Using Oracle Universal Installer to install OSDs for an Ethernet cluster interconnect
Installing and configuring Oracle software, including:
- Oracle8i Enterprise Edition with Oracle8i Parallel Server Option
Installing Object Link Manager
Verifying the hardware and software installation, including:
- Cluster interconnect and client LAN communications
- Access to shared storage from all nodes
- Client access to the Oracle8i database
Power distribution and sequencing guidelines

Installing the Hardware

Setting Up the Nodes

Physically preparing the nodes (servers) for a cluster is not very different than preparing them for individual use. You will install all necessary adapters and insert all internal hard disks. You will attach network cables and plug in SCSI and Fibre Channel cables. The primary difference is in setting up the shared storage subsystem.
Set up the hardware on one node completely, then set up the rest of the nodes identically to the first one. Do not load any software on any cluster node until all the hardware has been installed in all cluster nodes. Before loading software, review the section in this chapter entitled “Installing Operating System Software” to understand the idiosyncrasies of configuring a cluster.
NOTE: Certain restrictions apply to the server models and server configurations that are supported by the Compaq Parallel Database Cluster. For a current list of PDC-certified servers and details on supported configurations, refer to the Compaq Parallel Database Cluster Model PDC/O5000 Certification Matrix for Windows 2000. This document is available on the Compaq website at
www.compaq.com/solutions/enterprise/ha-pdc.html
While setting up the physical hardware, follow the installation instructions in your Compaq ProLiant Server Setup and Installation Guide and in your Compaq ProLiant Server Installation Poster.

Installing the KGPSA-BC and KGPSA-CB Host Adapters

To install KGPSA-BC PCI-to-Optical Fibre Channel Host Adapters (KGPSA-BC Host Adapters) or KGPSA-CB PCI-to-Optical Fibre Channel Host Adapters (KGPSA-CB Host Adapters) in your servers, follow the installation instructions in your Compaq ProLiant Server Setup and Installation Guide and in your Compaq ProLiant Server Installation Poster.
Install the host adapters into the same PCI slots in each server. If possible, place the host adapters in PCI slots on different PCI buses (if the server has more than one PCI bus); this distributes the load from the storage subsystem. For example, in a ProLiant 6500 server, host adapters could be installed in PCI slots 4 and 7. You would then install two host adapters in PCI slots 4 and 7 in all other cluster nodes.

Installing GBIC-SW Modules for the Host Adapters

Each host adapter ships with two GBIC-SW modules. Verify that one GBIC-SW module is installed in each host adapter and another module is installed into the active port for that host adapter on its Storage Hub or Fibre Channel SAN Switch. Each end of the Fibre Channel cable connecting a host adapter to a Storage Hub or Fibre Channel SAN Switch plugs into a GBIC-SW module.