Copyright 2007 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license required from HP for possession, use or
copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software,
Computer Software Documentation, and Technical Data for Commercial Items are
licensed to the U.S. Government under vendor’s standard commercial license.
The information contained herein is subject to change without notice. The only
warranties for HP products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be
construed as constituting additional warranty. HP shall not be liable for technical or
editorial errors or omissions contained herein.
Oracle is a registered US trademark of Oracle Corporation, Redwood City, California.
UNIX is a registered trademark of The Open Group.
The manual printing date and part number indicate its current edition. The printing
date will change when a new edition is printed. Minor changes may be made at reprint
without changing the printing date. The manual part number will change when
extensive changes are made.
Manual updates may be issued between editions to correct errors or document product
changes. To ensure that you receive the updated or new editions, you should subscribe to
the appropriate product support service. See your HP sales representative for details.
First Edition: March 1998
Second Edition: June 1998
Third Edition: August 1998
Fourth Edition: October 1998
Fifth Edition: December 1998
Sixth Edition: February 1999
Seventh Edition: April 1999
Eighth Edition: March 2000
Ninth Edition: June 2000
Tenth Edition: December 2000
Eleventh Edition: June 2001
Twelfth Edition: September 2002
Thirteenth Edition: March 2006
Fourteenth Edition: November 2006
Fifteenth Edition: February 2007
1 Overview
This chapter contains the following sections that give general information about
HyperFabric:
•“Overview” on page 15
•“HyperFabric Products” on page 16
•“HyperFabric Concepts” on page 18
Overview
HyperFabric is an HP high-speed, packet-based interconnect for node-to-node
communications. HyperFabric provides higher speed, lower network latency, and lower
CPU usage than other industry-standard interconnects (for example, Fibre Channel and
Gigabit Ethernet). Instead of using a traditional bus-based technology, HyperFabric is
built around a switched fabric architecture, providing the bandwidth necessary for
high-speed data transfer. This clustering solution delivers the performance, scalability,
and high availability required by:
•Client/Server Architecture Interconnects (for example, SAP)
•Multi-Server Batch Applications (for example, SAS Systems)
•Enterprise Resource Planning (ERP)
•Technical Computing Clusters
•OpenView Data Protector (earlier known as OmniBack)
•Network Backup
•Data Center Network Consolidation
•E-services
Notice of non-support for Oracle 10g RAC
The HyperFabric product suite was designed to optimize the performance of the Oracle
9i RAC database running on HP-UX clusters. With the industry moving to
standards-based networking technologies for database clustering solutions, HP and
Oracle have worked together to optimize the features and performance of the Oracle 10g
RAC database with standards-based interconnect technologies, including Gigabit
Ethernet, 10 Gigabit Ethernet, and InfiniBand.
To align with the market trend toward standards-based interconnects, the Oracle 10g
RAC database is not currently supported on configurations that include the HyperFabric
product suite, and it will not be supported in the future. As a result, customers must
switch to Gigabit Ethernet, 10 Gigabit Ethernet, or InfiniBand technology if they plan to
use Oracle 10g RAC.
Please note that configurations comprising HyperFabric and Oracle 9i continue to be
supported.
HyperFabric Products
HyperFabric hardware consists of host-based interface adapter cards, interconnect
cables and optional switches. HyperFabric software resides in Application Specific
Integrated Circuits (ASICs) and firmware on the adapter cards and includes user space
components and HP-UX drivers.
Currently, fibre-based HyperFabric hardware is available. In addition, a hybrid switch
with 8 fibre ports and 4 copper ports is available to support HF2 clusters.
The various HyperFabric products are described below. See the HP HyperFabric Release
Note for information about the HP 9000 systems on which these products are supported.
NOTE: The term HyperFabric (HF) is used in general in this document to refer to the
hardware and software that form the HyperFabric cluster interconnect product.
The term HyperFabric2 (HF2) refers to the fibre-based hardware components:
•The A6386A adapter.
•The A6384A switch chassis.
•The A6388A and A6389A switch modules. (Although the A6389A switch module has
4 copper ports, it is still considered an HF2 component because it can only be used
with the A6384A HF2 switch chassis.)
•The C7524A, C7525A, C7526A, and C7527A cables.
HyperFabric Adapters
The HyperFabric adapters include the following:
•A6386A HF2 PCI (4X) adapter with a fibre interface.
Switches and Switch Modules
The HyperFabric2 switches are as follows:
•A6384A HF2 fibre switch chassis with one integrated Ethernet management LAN
adapter card, one integrated 8-port fibre card, and one expansion slot. For the
chassis to be a functional switch, one of these two switch modules must be installed
in the expansion slot:
— The A6388A HF2 8-port fibre switch module. This gives the switch 16 fibre ports
(8 from the integrated fibre card and 8 from the A6388A).
— The A6389A HF2 4-port copper switch module. This gives the switch 12 ports—a
mixture of 8 fibre ports (from the integrated fibre card) and 4 copper ports (from
the A6389A module).
The A6384A HF2 switch chassis with either module installed is supported beginning
with the following HyperFabric software versions:
•HP-UX 11i v3: HyperFabric software version B.11.31.01
NOTE: In this manual, the terms HyperFabric2 switch and HF2 switch refer to the
functional switch (the A6384A switch chassis with one of the switch modules installed).
IMPORTANT: HF2 adapters and switches are not supported by software versions earlier
than those listed in “HyperFabric Adapters” on page 16 and “Switches and Switch
Modules” on page 16.
To determine the version of HyperFabric you have, issue this command:
swlist | grep -i hyperfabric
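For example, on a system running the HP-UX 11i v3 version of the product, the output
might resemble the following; the product name and description strings shown here are
illustrative and may differ on your system:

swlist | grep -i hyperfabric
  HyperFabric      B.11.31.01     HP HyperFabric Software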
Other Product Elements
The other elements of the HyperFabric product family are the following:
•The HyperFabric software: The software resides in ASICs and firmware on the
adapter cards and includes user space components and HP-UX drivers.
HyperFabric supports the IP network protocol stack, specifically TCP/IP and
UDP/IP.
HyperFabric software includes HyperMessaging Protocol (HMP). HMP provides
higher bandwidth, lower CPU overhead, and lower latency (the time it takes a
message to get from one point to another). However, these HMP benefits are only
available when applications that were developed on top of HMP are running.
HyperFabric Concepts
Some basic HyperFabric concepts and terms are briefly described below.
The fabric is the physical configuration that consists of all of the HyperFabric adapters,
the HyperFabric switches (if any are used) and the HyperFabric cables connecting them.
The network software controls data transfer over the fabric.
A HyperFabric configuration contains two or more HP 9000 systems and optional
HyperFabric switches. Each HP 9000 acts as a node in the configuration. Each node has
a minimum of one and a maximum of eight HyperFabric adapters installed in it. (See
Chapter 2, “Planning the Fabric,” on page 19 for information about the maximum
number of adapters that can be installed in each system). HyperFabric supports a
maximum of 4 HyperFabric switches. HyperFabric switches can be meshed, and
configurations with up to four levels of meshed switches are supported.
A HyperFabric cluster can be planned as a High Availability (HA) configuration, when
it is necessary to ensure that each node can always participate in the fabric. This is done
by using MC/ServiceGuard, MC/LockManager, and the Event Monitoring Service (EMS).
Configurations of up to eight nodes are supported under MC/ServiceGuard.
Relocatable IP addresses can be used as part of an HA configuration. Relocatable IP
addresses permit a client application to reroute through an adapter on a remote node,
allowing that application to continue processing without interruption. The rerouting is
transparent. This function is associated with MC/ServiceGuard (see “Configuring
ServiceGuard for HyperFabric Relocatable IP Addresses” on page 81). When the monitor
for HyperFabric detects a failure and the backup adapter takes over, the relocatable IP
address is transparently migrated to the backup adapter. Throughout this migration
process, the client application continues to execute normally.
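For example, after a relocatable address has migrated, you can confirm which interface
now carries it using a standard HP-UX networking command (a minimal sketch; the
interface names and addresses on your system will differ):

netstat -in

This lists each configured interface along with its IP address, so the relocatable address
can be matched to the adapter currently serving it.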
When you start HyperFabric (with the clic_start command, through SMH or by
booting the HP 9000 system), you start the management process. This process must
be active for HyperFabric to run. If the HyperFabric management process on a node
stops running for some reason (for example, if it is killed), all HyperFabric-related
communications on that node are stopped immediately. This makes the node
unreachable by other components in the fabric.
When you start HyperFabric, the fabric is, in effect, verified automatically. This is
because each node performs a self diagnosis and verification over each adapter installed
in the node. Also, the management process performs automatic routing and configuring
for each switch (if switches are part of the fabric). You can, if needed, run the clic_stat
command to get a textual map of the fabric, which can be used as another quick
verification.
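For example, a minimal start-and-verify sequence might look like this (the clic_stat
output itself is not shown because it depends on the fabric topology):

clic_start
clic_stat

clic_start launches the management process described above, and clic_stat prints the
textual map of the fabric that can be used as a quick verification.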
Notice that the commands you use to administer HyperFabric all have a prefix of clic_,
and some of the other components have CLIC as part of their name (for example, the
CLIC firmware and the CLIC software). CLIC stands for CLuster InterConnect, and it is
used to differentiate those HyperFabric commands/components from other
commands/components. For example, the HyperFabric command clic_init is different
from the HP-UX init command.
2 Planning the Fabric
This chapter contains the following sections, offering general guidelines and
protocol-specific considerations for planning HyperFabric clusters that will run
TCP/UDP/IP or HMP applications.
•“Preliminary Considerations” on page 21
•“HyperFabric Functionality for TCP/UDP/IP and HMP Applications” on page 22
•“TCP / UDP / IP” on page 23
•“Hyper Messaging Protocol (HMP)” on page 30
Preliminary Considerations
Before beginning to physically assemble a fabric, follow the steps below to be sure all
appropriate issues have been considered:
Step 1. Read Chapter 1, “Overview,” on page 13 to get a basic understanding of HyperFabric and
its components.
Step 2. Read this chapter, Planning the Fabric, to gain an understanding of protocol-specific
configuration guidelines for TCP/UDP/IP and HMP applications.
Step 3. Read “Configuration Overview” on page 63, “Information You Need” on page 64, and
“Configuration Information Example” on page 66, to gain an understanding of the
information that must be specified when the fabric is configured.
Step 4. Decide the number of nodes that will be interconnected in the fabric.
Step 5. Decide the type of HP 9000 system that each node will be (for a list of supported HP 9000
systems, see the HP HyperFabric Release Note).
Step 6. Determine the network bandwidth requirements for each node.
Step 7. Determine the number of adapters needed for each node (a worked sizing example
follows this list).
Step 8. Determine if a High Availability (ServiceGuard) configuration will be needed.
Remember, if MC/ServiceGuard is used there must be at least two adapters in each
node.
Step 9. Decide what the topology of the fabric will be.
Step 10. Determine how many switches will be used based on the number of nodes in the fabric.
Remember, the only configuration that can be supported without a switch is the
node-to-node configuration (HA or non-HA). HyperFabric supports meshed switches up
to a depth of four switches.
Step 11. Draw the cable connections from each node to the switches (if the fabric will contain
switches). If you use an HA configuration with switches, note that for full redundancy
and to avoid a single point of failure, your configuration will require more than one
switch. For example, each adapter can be connected to its own switch, or two switches
can be connected to four adapters.
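As a hedged illustration of Steps 6 and 7, using hypothetical numbers: each HF2 link
provides 2 Gbps in each direction (2 + 2 Gbps full duplex; see Table 2-1). A node that
must sustain roughly 6 Gbps of outbound traffic would therefore need at least
6 / 2 = 3 adapters, subject to slot availability and the maximum of eight adapters per
node described under “Configuration Parameters.” An HA configuration with
MC/ServiceGuard additionally requires at least two adapters per node.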
HyperFabric Functionality for TCP/UDP/IP and HMP Applications
The following sections in this chapter define HyperFabric features, parameters, and
supported configurations for TCP/UDP/IP applications and Hyper Messaging Protocol
(HMP) applications. There are distinct differences in supported hardware, available
features and performance, depending on which protocol is used by applications running
on the HyperFabric.
TCP / UDP / IP
TCP/UDP/IP applications are supported on all HF2 (fibre) hardware. Although all
HyperFabric adapter cards support HMP applications as well, this section focuses on
TCP/UDP/IP HyperFabric applications.
Application Availability
All applications that use the TCP/UDP/IP stack, such as Oracle 9i, are supported.
Features
•OnLine Addition and Replacement (OLAR): Supported
The OLAR feature allows the replacement or addition of HyperFabric adapter cards
while the system (node) is running. For a list of systems that support OLAR, see the
HyperFabric Release Notes (B6257-90056).
For more detailed information on OLAR, including instructions for implementing
this feature, see “Online Addition and Replacement” on page 42 in this manual, as
well as Interface Card OL* Support Guide (B2355-90698).
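Before adding or replacing an adapter online, you would typically identify the target
slot first. A minimal sketch, assuming the standard HP-UX olrad(1M) OLAR tool is
available (see the Interface Card OL* Support Guide cited above for the authoritative
procedure):

olrad -q

This queries the OLAR-capable slots and their current status, so you can confirm a slot
is eligible before proceeding.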
•Event Monitoring Service (EMS): Supported
In HyperFabric version B.11.31.01, the HyperFabric EMS monitor allows the
system administrator to separately monitor each HyperFabric adapter on every node
in the fabric, in addition to monitoring the entire HyperFabric subsystem. The
monitor can inform the user if the resource being monitored is UP or DOWN. The
administrator defines the condition to trigger a notification (usually a change in
interface status). Notification can be accomplished with a SNMP trap or by logging
into the syslog file with a choice of severity, or by email to a user defined email
address.
For more detailed information on EMS, including instructions for implementing this
feature, see “Configuring the HyperFabric EMS Monitor” on page 75 in this manual,
as well as the EMS Hardware Monitors User’s Guide (B6191-90028).
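EMS hardware monitoring is typically set up through the interactive monitoring
configuration utility. A hedged sketch, assuming the standard EMS installation path
(consult the EMS Hardware Monitors User’s Guide cited above for the authoritative
procedure):

/etc/opt/resmon/lbin/monconfig

From the menus this utility presents, you can add a monitoring request for the
HyperFabric resources and choose the notification method (SNMP trap, syslog, or
email) described above.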
•ServiceGuard: Supported
Within a cluster, ServiceGuard groups application services (individual HP-UX
processes) into packages. In the event of a single service failure (node, network, or
other resource), EMS provides notification and ServiceGuard transfers control of the
package to another node in the cluster, allowing services to remain available with
minimal interruption.
ServiceGuard, via EMS, directly monitors cluster nodes, LAN interfaces, and services
(the individual processes within an application). ServiceGuard uses a heartbeat LAN
to monitor the nodes in a cluster. It is not possible to use HyperFabric as a heartbeat
LAN. Instead a separate LAN must be used for the heartbeat.
For more detailed information on configuring ServiceGuard, see “Configuring
HyperFabric with ServiceGuard” on page 76 in this manual, as well as Managing
MC/ServiceGuard (Part Number B3936-90065, March 2002 Edition).
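For example, once the cluster is configured, node and package status can be checked
with the standard ServiceGuard command shown below (a sketch; see Managing
MC/ServiceGuard for details):

cmviewcl -v

The verbose output lists each node, package, and monitored service, which is a quick
way to confirm that failover resources are in place.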
•High Availability (HA): Supported
To create a highly available HyperFabric cluster, there cannot be any single point of
failure. Once the HP 9000 nodes and the HyperFabric hardware have been
configured with no single point of failure, ServiceGuard and EMS can be configured
to monitor and fail-over nodes and services using ServiceGuard packages.
If any HyperFabric resource in a cluster fails (adapter card, cable or switch port), the
HyperFabric driver transparently routes traffic over other available HyperFabric
resources with no disruption of service.
The ability of the HyperFabric driver to transparently fail-over traffic reduces the
complexity of configuring highly available clusters with ServiceGuard, because
ServiceGuard only has to take care of node and service failover.
A “heartbeat” is used by MC/ServiceGuard to monitor the cluster. The HyperFabric
links cannot be used for the heartbeat. Instead an alternate LAN connection
(Ethernet, Gigabit Ethernet, 10Gigabit Ethernet) must be made between the nodes
for use as a heartbeat link.
End To End HA: HyperFabric provides End to End HA on the entire cluster fabric
at the link level. If any of the available routes in the fabric fails, HyperFabric will
transparently redirect all the traffic to a functional route and, if configured, notify
ServiceGuard or other enterprise management tools.
Active-Active HA: In configurations where there are multiple routes between
nodes, the HyperFabric software will use a hashing function to determine which
particular adapter/route to send messages through. This is done on a
message-by-message basis. All of the available HyperFabric resources in the fabric
are used for communication.
In contrast to Active-Passive HA, where one set of resources is not utilized until
another set fails, Active-Active HA provides the best return on investment because
all of the resources are utilized simultaneously. MC/ServiceGuard is not required for
Active-Active HA operation.
For more information on setting up HA HyperFabric clusters, see Figure 2-3,
“TCP/UDP/IP High Availability Switched Configuration.”
•Dynamic Resource Utilization (DRU): Supported
When a new resource (node, adapter, cable, or switch) is added to a cluster, the
HyperFabric subsystem dynamically identifies the added resource and starts using it.
The same process takes place when a resource is removed from a cluster. The
difference between DRU and OLAR is that OLAR applies only to the addition or
replacement of adapter cards in nodes.
•Load Balancing: Supported
When an HP 9000 HyperFabric cluster is running TCP/UDP/IP applications, the
HyperFabric driver balances the load across all available resources in the cluster
including nodes, adapter cards, links, and multiple links between switches.
•Switch Management: Not Supported
Switch Management is not supported. Switch management will not operate properly
if it is enabled on a HyperFabric cluster.
•Diagnostics: Supported
Diagnostics can be run to obtain information on many of the HyperFabric
components via the clic_diag, clic_probe and clic_stat commands, as well as
the Support Tools Manager (STM).
For more detailed information on HyperFabric diagnostics, see “Running Diagnostics”
on page 115.
Configuration Parameters
This section details, in general, the maximum limits for TCP/UDP/IP HyperFabric
configurations. There are numerous variables that can impact the performance of any
particular HyperFabric configuration. See the “TCP/UDP/IP Supported Configurations”
section for guidance on specific HyperFabric configurations for TCP/UDP/IP
applications.
•HyperFabric is only supported on HP 9000 series UNIX servers.
•TCP/UDP/IP is supported for all HyperFabric hardware and software.
•Maximum Supported Nodes and Adapter Cards:
In point-to-point configurations, the complexity and performance limitations of
having a large number of nodes in a cluster make it necessary to include switching in
the fabric. Typically, point-to-point configurations consist of only 2 or 3 nodes.
In switched configurations, HyperFabric supports a maximum of 64 interconnected
adapter cards (see the worked example at the end of this section).
A maximum of 8 HyperFabric adapter cards are supported per instance of the
HP-UX operating system. The actual number of adapter cards a particular node is
able to accommodate also depends on slot availability and system resources. See
node specific documentation for details.
A maximum of 8 configured IP addresses are supported by the HyperFabric
subsystem per instance of the HP-UX operating system.
•Maximum Number of Switches:
You can interconnect (mesh) up to 4 switches (16-port fibre, or mixed with 8 fibre
ports and 4 copper ports) in a single HyperFabric cluster.
•Trunking Between Switches (multiple connections)
Trunking between switches can be used to increase bandwidth and cluster
throughput. Trunking is also a way to eliminate a possible single point of failure.
The number of trunked cables between switches is only limited by port availability. To
assess the effects of trunking on the performance of any particular HyperFabric
configuration, consult with your HP representative.
•Maximum Cable Lengths:
HF2 (fibre): The maximum distance is 200m (4 standard cable lengths are sold and
supported: 2m, 16m, 50m and 200m).
TCP/UDP/IP supports up to four HF2 switches connected in series with a maximum
cable length of 200m between the switches and 200m between switches and nodes.
TCP/UDP/IP supports up to 4 hybrid HF2 switches connected in series with a
maximum cable length of 200m between fibre ports.
•Speed and Latency:
Table 2-1 HF2 Speed and Latency with TCP/UDP/IP Applications

Server Class    Maximum Speed                      Latency
rp7420          2 + 2 Gbps full duplex per link    < 42 microseconds

For a list of HF2 hardware that supports TCP/UDP/IP applications (HP-UX 11i v3), see
HyperFabric Release Notes (B6257-90056).
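As a worked illustration of how the node and adapter limits in this section combine
(the node counts are hypothetical): with a maximum of 64 interconnected adapter cards
in a switched fabric and a maximum of 8 adapters per instance of HP-UX, a fully
populated switched cluster could consist of as few as 64 / 8 = 8 nodes with 8 adapters
each, or as many as 64 nodes with 1 adapter each.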
TCP/UDP/IP Supported Configurations
Multiple TCP/UDP/IP HyperFabric configurations are supported to match the cost,
scaling and performance requirements of each installation.
In the previous “Configuration Parameters” section, the maximum limits for
TCP/UDP/IP-enabled HyperFabric hardware configurations were outlined. This section
details the TCP/UDP/IP-enabled HyperFabric configurations that HP supports.
These recommended configurations offer an optimal mix of performance, availability and
practicality for a variety of operating environments.
There are many variables that can impact HyperFabric performance. If you are
considering a configuration that is beyond the scope of the following HP supported
configurations, contact your HP representative.
Point-to-Point Configurations
Large servers like HP’s Superdome can be interconnected to run Oracle 9i RAC and
enterprise resource planning applications. These applications are typically consolidated
on large servers.
Point to point connections between servers support the performance benefits of HMP
without investing in HyperFabric switches. This is a good solution in small
configurations where the benefits of a switched HyperFabric cluster might not be
required (see configuration A and configuration C in Figure 2-1).
If there are multiple point to point connections between two nodes, the traffic load will
be balanced over those links. If one link fails, the load will fail-over to the remaining
links (see configuration B in Figure 2-1).
Running applications using TCP/UDP/IP on a HyperFabric cluster provides major
performance benefits compared to other technologies (such as Ethernet). If a
HyperFabric cluster is originally set up to run enterprise applications using
TCP/UDP/IP and the computing environment stabilizes with a requirement for higher
performance, migration to HMP is always an option.
Figure 2-1 TCP/UDP/IP Point-To-Point Configurations
Switched
This configuration offers the same benefits as the point-to-point configurations
illustrated in Figure 2-1, but it has the added advantage of greater connectivity (see
Figure 2-2).
Figure 2-2 TCP/UDP/IP Basic Switched Configuration
High Availability Switched
This configuration has no single point of failure. The HyperFabric driver provides end to
end HA. If any HyperFabric resource in the cluster fails, traffic will be transparently
rerouted through other available resources. This configuration provides high
performance and high availability (see Figure 2-3).
Figure 2-3 TCP/UDP/IP High Availability Switched Configuration
Hyper Messaging Protocol (HMP)
Hyper Messaging Protocol (HMP) is an HP-patented, high-performance cluster
interconnect protocol. HMP provides a reliable, high-speed, low-latency,
low-CPU-overhead datagram service to applications running on HP-UX platforms.
HMP was jointly developed with Oracle Corp. The resulting feature set was tuned to
enhance the scalability of the Oracle Cache Fusion clustering technology. It is
implemented using Remote DMA (RDMA) paradigms.
HMP is integral to the HP-UX HyperFabric driver. It can be enabled or disabled at
HyperFabric initialization using clic_init or SMH. HMP functionality is used by the
applications listed in the “Application Availability” section below.
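A minimal sketch of the initialization step is shown below; no HMP-specific option is
given here because the exact clic_init syntax for enabling HMP is documented
elsewhere, and SMH provides an equivalent interactive path:

clic_init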
HMP significantly enhances the performance of parallel and technical computing
applications.
HMP firmware on HyperFabric adapter cards provides a “shortcut” that bypasses
several layers in the protocol stack, boosting link performance and lowering latency. By
avoiding interrupts and buffer copying in the protocol stack, communication task
processing is optimized.
Application Availability
Currently there are two families of applications that can use HMP over the HyperFabric
interface:
•Oracle 9i Database, Release 1 (9.0.1) and Release 2 (9.2.0.1.0).
HMP has been certified on both releases with HP-UX 11i v3.
•Technical Computing Applications
Features
•OnLine Addition and Replacement (OLAR)
The OLAR feature, which allows the replacement or addition of HyperFabric adapter
cards while the system (node) is running, is supported when applications use HMP
to communicate.