warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP
shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Windows Server 2003 is a trademark of
Microsoft Corporation. Intel, Pentium, and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United
States and other countries. UNIX is a registered trademark of The Open Group.
September 2006 (Second Edition)
Part Number 364775-002
Audience assumptions
This document is for the person who installs, administers, and troubleshoots servers and storage systems.
HP assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards
in products with hazardous energy levels.
Contents
Overview of the ProLiant Cluster F500 ............................................................................................ 5
HP ProLiant Cluster F500 overview .............................................................................................................. 5
Hardware and software components ........................................................................................................... 5
F500 for EVA configurations....................................................................................................................... 5
HP ProLiant Cluster F500 overview
The HP ProLiant Cluster F500 for Enterprise Virtual Array is a two-to-eight-node cluster solution (eight-node
clustering is supported by Microsoft® Windows® Server 2003, Enterprise Edition) composed of HP
ProLiant servers and the HP StorageWorks EVA storage system.
The HP ProLiant Cluster F500 for MA8000 is a two-node cluster solution composed of HP ProLiant servers
and HP StorageWorks storage components. These cluster solutions execute on a Microsoft® Windows®
Server 2003, Enterprise Edition platform or a Microsoft® Windows® 2000 Advanced Server platform
with Microsoft® Cluster Service (two-node).
Hardware and software components
For a current list of supported hardware and software components, refer to the High Availability website
(http://www.hp.com/servers/proliant/highavailability).
F500 for EVA configurations
The HP ProLiant Cluster F500 for EVA is a cluster with two Fibre Channel Adapters in each server, two
switches, and two storage controllers. In an F500 configuration, each storage controller pair can be
attached to a maximum of 240 drives.
Overview of the ProLiant Cluster F500 5
Cluster cross-cable configuration
A cluster cross-cable configuration has no single point of failure. To enable dual paths to the storage, the
HP StorageWorks Secure Path software must be installed on all servers. With Secure Path, data can flow
simultaneously over both FCAs to the storage subsystem, and you can perform load balancing over the
two paths to help maximize performance.
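The dual-path behavior can be sketched as follows. This is a conceptual illustration only, not Secure Path code; the path names (fca1, fca2) and the round-robin policy are assumptions made for the example.

```python
class DualPathRouter:
    """Conceptual sketch: balance I/O over two redundant paths and
    survive the failure of either one (not HP Secure Path itself)."""

    def __init__(self):
        self.paths = {"fca1": True, "fca2": True}  # path name -> healthy?
        self._next = 0

    def mark_failed(self, path):
        self.paths[path] = False

    def route(self):
        healthy = [p for p, ok in self.paths.items() if ok]
        if not healthy:
            raise RuntimeError("no path to storage")
        # Round-robin over whichever paths are still healthy
        path = healthy[self._next % len(healthy)]
        self._next += 1
        return path

router = DualPathRouter()
requests = [router.route() for _ in range(4)]   # alternates between both paths
router.mark_failed("fca1")
after_failure = [router.route() for _ in range(3)]  # all traffic on fca2
```

With both paths healthy, requests alternate between the two adapters; after one adapter is marked failed, all traffic flows over the survivor, which is the failover behavior the section describes.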
A component failure in this cluster results in a failover to a second component, and you can continue
using the cluster. Some typical failures and responses in the enhanced configuration include:
• A server failure causes Microsoft® cluster software to fail over to the other node.
• An HBA or FCA failure causes subsequent data requests intended for the failed adapter to be routed
over the remaining good adapter.
• A switch or cable failure is detected as an HBA or FCA failure, and a failover to the second adapter,
which is using the remaining good switch and good cables, occurs.
• A controller failure causes the second controller to take over for the failed controller. Secure Path
then routes the data requests to the second controller.
In all of the typical failures, interruptions to the user are minimal and, in some cases, might not even be
noticeable.
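One point in the list above is worth making concrete: the host cannot distinguish a switch or cable failure from an adapter failure, because either one simply makes the adapter's path go dead. The sketch below illustrates that idea; the component names are illustrative, not HP software.

```python
# Conceptual sketch only: each adapter's path depends on one switch, so a
# switch (or cable) failure surfaces to the host as a dead path, and the
# response is identical to an adapter failure.

PATHS = {
    "fca1": {"switch": "switch1", "up": True},
    "fca2": {"switch": "switch2", "up": True},
}

def fail(component):
    """Mark every path that depends on the failed component as down."""
    for path in PATHS.values():
        if component == path["switch"]:
            path["up"] = False   # switch/cable failure looks like a dead path
    if component in PATHS:
        PATHS[component]["up"] = False

def route_request():
    """Send the request over the first surviving adapter."""
    for name, path in PATHS.items():
        if path["up"]:
            return name
    raise RuntimeError("no surviving path to storage")

fail("switch1")          # a switch failure...
survivor = route_request()  # ...is handled exactly like an FCA failure
```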
F500 for MA8000 configurations
The HP ProLiant Cluster F500 for MA8000 configurations support Fibre Channel switches with the Array
Controller Software and disaster-tolerant configurations.
The F500 can be set up in several different configurations, involving servers, switches, and storage
subsystems connected through a Fibre Channel Switched Fabric:
• The enhanced configuration is a cluster with two HBAs in each server, two switches, and two storage
controllers, giving it increased availability over the basic configurations.
• Additionally, two to four clusters can be configured to use the same storage subsystems.
In an F500 configuration, a maximum of four storage controller pairs can be connected to a single
cluster. This limitation dictates how many storage subsystems can be used in the cluster (a maximum of
four storage subsystems or some combination of each type of storage unit).
Enhanced configuration
No single points of failure occur in an enhanced configuration. It improves on the basic configuration by
adding a second HBA to each server and a second switch. The combination of second adapter, switch,
and controller form a second independent path to the storage subsystem.
To enable dual paths to the storage, the Secure Path software must be installed on all servers. With
Secure Path, data can flow simultaneously over both HBAs to the storage subsystem, and you can perform
load balancing over the two paths to help maximize performance.
A component failure in this cluster results in a failover to a second component, and you can continue
using the cluster. Some typical failures and responses in the enhanced configuration include:
• A server failure causes Microsoft® cluster software to fail over to the other node.
• An HBA or FCA failure causes subsequent data requests intended for the failed adapter to be routed
over the remaining good adapter.
• A switch or cable failure is detected as an HBA or FCA failure, and a failover to the second adapter,
which is using the remaining good switch and good cables, occurs.
• A controller failure causes the second controller to take over for the failed controller. Secure Path
then routes the data requests to the second controller.
In all of the typical failures, interruptions to the user are minimal and, in some cases, might not even be
noticeable.
Multiple cluster configurations
Up to four clusters can be combined into a single F500 for MA8000 configuration with the clusters
accessing the same group of storage subsystems.
HP OpenView Storage Management Appliance
The HP OpenView Storage Management Appliance runs the HP StorageWorks Command View EVA
software and the HSG Element Manager software. The Command View EVA software is the
administrative interface to the EVA, and the HSG Element Manager software is the administrative
interface to the MA8000. The application is browser-based and can be used from any machine on the
same IP network as the management appliance.
Cluster interconnect
The cluster interconnect is a data path over which nodes of a cluster communicate. This type of
communication is termed intracluster communication. At a minimum, the interconnect consists of two
network adapters (one in each server) and a crossover cable connecting the adapters.
The cluster nodes use the interconnect data path to:
• Communicate individual resource and overall cluster status
• Send and receive heartbeat signals
• Update modified registry information
IMPORTANT: TCP/IP must be used as the cluster communication protocol. When configuring the
interconnects, be sure to enable TCP/IP.
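As an illustration of what the interconnect carries, the sketch below exchanges a single heartbeat datagram over UDP/IP. MSCS implements its own heartbeat mechanism; the loopback address, port, and timeout here are placeholders for the example, not the values MSCS uses.

```python
import socket

HEARTBEAT_ADDR = ("127.0.0.1", 50007)  # placeholder, not the real MSCS endpoint
TIMEOUT_S = 1.0                        # treat a missed beat as a peer failure

# "Node B" listens on the interconnect for its peer's heartbeat.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(HEARTBEAT_ADDR)
recv.settimeout(TIMEOUT_S)

# "Node A" periodically announces that it is alive (one beat shown here).
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"heartbeat", HEARTBEAT_ADDR)

try:
    data, _ = recv.recvfrom(64)
    peer_alive = (data == b"heartbeat")
except socket.timeout:
    peer_alive = False                 # missed beat: begin failover handling

send.close()
recv.close()
```

A real cluster repeats this exchange continuously and only declares a node down after several consecutive missed beats, which is why interruptions to users are usually brief.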
Redundant interconnects
To reduce potential disruptions of intracluster communication, use a redundant path over which
communication can continue if the primary path is disrupted.
HP recommends configuring the client LAN as a backup path for intracluster communication. This provides
a secondary path for the cluster heartbeat in case the dedicated primary path for intracluster
communications fails. This is configured when installing the cluster software, or it can be added later
using the MSCS Cluster Administrator.
HP offers a feature, called NIC Teaming, that configures two HP Ethernet adapters (or two ports on a
single adapter) so that one is a hot backup for the other. There are two ways to achieve this
configuration, and the method you choose depends on the hardware. One way is through the Redundant
NIC Utility available on all HP 10/100/1000 Fast Ethernet products. The other is through the Network
Fault Tolerance feature designed to operate with the HP 10/100/1000 Intel® silicon-based NICs.
For more information on recommended interconnect strategies, refer to the white paper, Best Practices
Checklist—Increasing Network Fault Tolerance in a Microsoft® Windows® Server 2003, Enterprise
Edition High Availability Server Cluster, available from the ProLiant High Availability website.
NOTE: Only use NIC Teaming with NICs that connect to the client LAN. Do not use this feature with NICs
used for the dedicated intracluster communication link. For detailed information about interconnect
redundancy, refer to the HP white paper, Increasing Network Availability in a Microsoft® Windows® Cluster, available from the High Availability website
(http://www.hp.com/servers/proliant/highavailability
).
Interconnect adapters
Ethernet adapters and switches are supported as interconnects in ProLiant clusters. Either a 10-Mb/s,
100-Mb/s, or 1000-Mb/s Ethernet adapter can be used.
NOTE: For a list of supported interconnect adapters, refer to the Microsoft® Windows® Server 2003,
Enterprise Edition, Windows® 2000 Advanced Server, and Microsoft® cluster software compatibility list
available from the Microsoft® website (http://www.microsoft.com). Be sure that the adapter you select is
on the list.
NOTE: An Ethernet crossover cable is provided in the HP ProLiant Cluster F500 for the Enterprise SAN kit.
The crossover cable is for a two-node configuration only.
Client network
Every client/server application requires a LAN over which client machines and servers communicate. The
components of the LAN are no different than with a stand-alone server configuration.
Because clients that take full advantage of the cluster connect to the cluster rather than to a specific
server, client connections are configured differently than for a stand-alone server. Clients connect to
virtual servers, which are cluster groups that contain their own IP addresses.
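The relationship can be illustrated as follows; the virtual server names and addresses below are hypothetical examples, not values from this kit.

```python
# Hypothetical virtual servers: each is a cluster group that owns its own
# IP address, hosted by whichever node currently runs the group.
VIRTUAL_SERVERS = {
    "SQL-VS":  "192.168.10.20",
    "FILE-VS": "192.168.10.21",
}

def client_target(virtual_server):
    """Clients address the virtual server, never a specific physical node."""
    return VIRTUAL_SERVERS[virtual_server]

# A failover moves the group (and its IP address) to the surviving node,
# so the client-visible mapping above does not change.
target = client_target("SQL-VS")
```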
Ethernet direct connection
A direct Ethernet connection uses only three components:
• Two interconnect adapters
• One Ethernet crossover cable
Connecting interconnect adapters directly to each other requires a special cable. If you are using
Ethernet, an Ethernet crossover cable (included in the HP ProLiant Cluster F500 for the Enterprise SAN kit)
must be used.
If you are using the Ethernet crossover cable supplied with your kit, the interconnect network might not
display during the cluster installation, because the connection displays only if it is active at the time of
installation. If the other cluster nodes are powered off when you install MSCS, the connection is
considered inactive by Windows® Server 2003, Enterprise Edition and Windows® 2000 Advanced
Server. In this case, assign the existing public network connection the "All communications" role during
the installation. After MSCS is configured on all nodes, the interconnect network automatically appears in
the networks group in Cluster Administrator.
To configure the networks for MSCS use after installing Windows® Server 2003, Enterprise Edition or
Windows® 2000 Advanced Server:
1. Right-click the cluster name in Cluster Administrator.
2. Select Properties.
3. Select the Network Priority tab from the dialog box.
4. Configure the network roles as necessary.
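The roles configured in step 4 can be modeled as below. The numeric values shown are, to the best of our knowledge, how the cluster software stores the network Role property; this is a sketch for orientation, so verify the values against your own cluster before depending on them.

```python
from enum import IntEnum

class NetworkRole(IntEnum):
    """Cluster network roles, as configured on the Network Priority tab.

    The numeric values are believed to mirror the Role property MSCS
    stores per network; confirm them on your own cluster.
    """
    NONE = 0           # network is not used by the cluster
    INTERNAL_ONLY = 1  # intracluster (heartbeat) communication only
    CLIENT_ONLY = 2    # client access only
    ALL = 3            # all communications (client and intracluster)

# A typical F500 assignment: dedicated interconnect plus the client LAN
# as a backup path for intracluster communication.
roles = {
    "Private": NetworkRole.INTERNAL_ONLY,
    "Public": NetworkRole.ALL,
}
```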
Cluster networking
For troubleshooting information on this topic, refer to the following Microsoft® articles and related
documentation on the Microsoft® website (http://www.microsoft.com/support):
• Q193890—Recommended WINS Configuration for Microsoft Cluster Server
• Q254101—Network Adapter Teaming and Server Cluster
• Q254651—Cluster Network Role Changes Automatically
• Q258750—Recommended Private "Heartbeat" Configuration on a Cluster Server
Setting up the ProLiant Cluster F500 for Enterprise Virtual Array
Hardware setup and configuration ........................................................................................................... 11
Preinstallation instructions
Before setting up the F500, verify that the hardware and software kits are appropriate for this installation.
For a current list of supported hardware and software components, refer to the High Availability website
(http://www.hp.com/servers/proliant/highavailability).
Hardware setup and configuration
Verify that you have all the necessary hardware (minimum setup):
• Two ProLiant servers
• Two FCA cards for each server
• Two NIC cards for each server
• One EVA storage system
• Two Fibre Channel switches
• One HP OpenView Storage Management Appliance
Verify that you have all the necessary software:
• Command View EVA
• SmartStart CD
• Microsoft® Windows® Server 2003, Enterprise Edition or Microsoft® Windows® 2000 Advanced
Server
• CD that came with the EVA platform kit
• HP StorageWorks Secure Path software
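As a quick sanity check before setup, the minimum hardware list above can be encoded and verified. The counts come from this document; the function itself is an illustrative sketch, not an HP tool.

```python
# Minimum F500 for EVA hardware, per the list above.
MINIMUM = {
    "proliant_servers": 2,
    "fca_per_server": 2,
    "nic_per_server": 2,
    "eva_storage_systems": 1,
    "fibre_channel_switches": 2,
    "management_appliances": 1,
}

def inventory_ok(inventory):
    """Return the components that fall short of the minimum setup."""
    return [item for item, needed in MINIMUM.items()
            if inventory.get(item, 0) < needed]

missing = inventory_ok({"proliant_servers": 2, "fca_per_server": 1})
# 'missing' now lists every component still below its minimum count
```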
Set up the cluster using the following procedures:
1. "Setting up the HP StorageWorks Enterprise Virtual Array (on page 12)"
2. "Setting up the HP OpenView storage management appliance (on page 12)"
3. "Setting up the HP ProLiant servers (on page 12)"
4. "Updating the FCA device driver (on page 13)"
5. "Setting IP addressing and zoning for Fibre Channel switches (on page 13)"
6. "Creating zones (on page 14)"
7. "Creating storage aliases (on page 14)"
Setting up the ProLiant Cluster F500 for Enterprise Virtual Array 11