
Best Practices for HP BladeSystem Deployments using HP Serviceguard Solutions for HP-UX 11i
Technical white paper
May 2010
Table of contents
Executive Summary
BladeSystem Overview
Hardware Components
Eliminating Single Points of Failure in HP BladeSystem Configurations
Redundant Hardware Configurations within Blade Enclosures
Using Serviceguard in HP BladeSystem Configurations
Serviceguard Clustering within a Single Blade Enclosure
Clustering across Multiple Blade Enclosures or non-Blade Servers
Additional Considerations for using Serviceguard with the HP BladeSystem
Conclusion
For More Information
Call to Action
Executive Summary
HP continues to be tremendously successful in consolidating server hardware into HP BladeSystem environments. Improved control of power consumption and workload management, with HP Insight Dynamics - VSE software controlling the entire environment, brings distinct advantages. HP Virtual Connect facilitates rapid deployment and infrastructure flexibility, reducing wiring and the effort required to connect servers to network and SAN fabrics. These capabilities bring valuable benefits to customers in small, medium and large enterprises.
HP Serviceguard Solutions play an important role in these environments, ensuring mission-critical application availability on HP Integrity servers. Configuring highly available applications in the HP BladeSystem involves special considerations that differ from standard rack-mount server or HP Superdome deployments with HP Serviceguard. Knowledge of HP BladeSystem component placement, of HP Virtual Connect configuration, and of where cluster elements such as server nodes, quorum devices and storage should be located and configured within a cluster is critical to maximizing the high availability benefits of Serviceguard. The purpose of this white paper is to highlight the considerations and best practices for implementing HP BladeSystem solutions that are made highly available through the use of HP Serviceguard for HP-UX on HP Integrity BL860c and BL870c blade servers. Note that the concepts for designing highly available blade configurations presented in this white paper will also apply to future generations of HP Integrity blade server products when they are released.
BladeSystem Overview
The HP BladeSystem is the general name for HP's industry-standard blade server line. It consists of a number of hardware components, software and services that are designed to work together to provide a rack-mounted, integrated infrastructure for compute, network, storage and power elements.
This section briefly describes some of the HP BladeSystem hardware components that form the foundation of its integrated architecture. It serves as the basis for understanding how these components can be configured to maximize server and application availability by eliminating Single Points of Failure (SPOFs) within a BladeSystem deployment. As this section shows, many HP BladeSystem components are already designed with redundancy in mind to maximize availability and minimize downtime.
Hardware Components
The following are some of the major components of the HP BladeSystem.
Enclosures
The HP BladeSystem c-Class enclosures are the central component for joining computing resources into a consolidated, “wire-once” infrastructure. Two c-Class enclosures are available to best meet a customer’s business requirements, as shown in figure 1:
c3000 for remote sites & small to medium businesses (rack or tower configurations)
c7000 for enterprise data center applications
Figure 1: c-Class HP BladeSystem Enclosure Family (HP BladeSystem c7000 enclosure, HP BladeSystem c3000 enclosure, and HP BladeSystem c3000 Tower)
Both enclosures share common:
Half-height/full-height server blades
Interconnect modules
Mezzanine Host Bus Adapter (HBA) cards
Storage blades
Power supplies (hot-swappable and redundant)
Fans (hot-swappable and redundant)
A comparison between the c3000 and c7000 enclosures is shown in Table 1.
Table 1: HP BladeSystem c-Class Enclosure Comparison
c3000 enclosure (rack or tower)                    c7000 enclosure
6U height (rack) or tower                          10U height
Horizontal blade orientation (rack);               Vertical blade orientation
vertical blade orientation for tower
8 HH (half-height) blades, 4 FH                    16 HH (half-height) blades,
(full-height) blades, or 6 HH / 1 FH               8 FH (full-height) blades
4 interconnect bays                                8 interconnect bays
6 power supplies @ up to 1200 W each               6 power supplies @ up to 2250 W each
6 Active Cool fans                                 10 Active Cool fans
Device and Interconnect Bays
The interconnect bays for each enclosure can support a variety of Pass-Thru modules and switch technologies, including Ethernet, Fibre Channel, and InfiniBand. The enclosures support redundant I/O fabrics and can yield up to a 94% reduction in cables compared to traditional rack-mounted server configurations.
One of the major differences between the c3000 and c7000 is in the number of available interconnect bays; the c3000 has 4 while the c7000 has 8. The four additional interconnect bays in the c7000 offer additional I/O flexibility and the ability to use redundant interconnects to eliminate single points of failure (SPOFs), which is extremely important to help protect mission-critical applications in the data center. Using redundant interconnect modules in high availability HP BladeSystem configurations will be described in later sections of this white paper.
Figure 2 shows a side-view of a c7000 enclosure and the major component connections.
Figure 2: HP BladeSystem c-Class Enclosure Side View (showing fans, half-height server blades, switch modules, power supply modules, the AC input module, the signal midplane, and the power backplane)
The c7000 enclosure, as with the c3000, enables easy connection of embedded server device ports from the device bays to the interconnect bays.
The enclosure signal midplane transfers I/O signals (PCIe, Gigabit Ethernet, Fibre Channel) between the server blades (half-height or full-height) and the appropriate interconnects, and provides redundant signal paths between servers and interconnect modules. Because the connections between the device bays (in the front of the enclosure, where the blade servers reside) and the interconnect bays (in the back of the enclosure, containing the interconnect modules) are hard-wired through the signal midplane, the Mezzanine cards (host bus adapters, or HBAs, used to connect the blade servers to an interconnect module) must be matched to the appropriate type of interconnect module. For example, a Fibre Channel Mezzanine card must be placed in the Mezzanine connector that connects to an interconnect bay holding a Fibre Channel switch. For port mapping purposes, it does not matter in which bay you install a server blade; Mezzanine connectors in the blade expansion slots always connect to the same interconnect bays.
To simplify the installation of the various Mezzanine cards and interconnect modules, the Onboard Administrator, which manages the components within the enclosure, uses an “electronic keying” process to detect any mismatch between the Mezzanine cards and the interconnect modules.
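For illustration, the following minimal sketch models the idea behind this keying check: each adapter position is hard-wired to specific interconnect bays, and the fabric type of the installed Mezzanine card or LOM must match the module in those bays. The mappings, fabric names and function below are hypothetical examples, not the Onboard Administrator's actual data structures or interfaces.

```python
# Simplified illustration of the "electronic keying" idea: before powering on a
# blade, verify that each Mezzanine card's fabric type matches the interconnect
# module installed in the bay(s) it is hard-wired to. All values are examples.

# Fabric type presented by each interconnect bay (example enclosure population)
interconnect_fabric = {
    1: "Ethernet", 2: "Ethernet",          # embedded LOM NICs
    3: "FibreChannel", 4: "FibreChannel",  # Mezz-1
    5: "Ethernet", 6: "Ethernet",          # Mezz-2
}

# Which interconnect bays each adapter position is hard-wired to (full-height blade)
adapter_to_bays = {
    "LOM":    [1, 2],
    "Mezz-1": [3, 4],
    "Mezz-2": [5, 6],
}

# Fabric type of the adapters actually installed in this blade
installed_adapters = {"LOM": "Ethernet", "Mezz-1": "FibreChannel", "Mezz-2": "Ethernet"}

def check_keying(adapters, fabric_map, bay_map):
    """Return a list of (adapter, bay, card fabric, module fabric) mismatches."""
    mismatches = []
    for adapter, fabric in adapters.items():
        for bay in bay_map[adapter]:
            module = fabric_map.get(bay)
            if module is not None and module != fabric:
                mismatches.append((adapter, bay, fabric, module))
    return mismatches

print(check_keying(installed_adapters, interconnect_fabric, adapter_to_bays))  # -> []
```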
The power backplane provides 12V DC power to server blades, fans, and interconnects. Both the signal midplane and separate power backplane in the c7000 enclosure have no active components, thus improving reliability.
The AC input module, which provides power to the redundant power supply modules, can be configured to use a variety of power delivery modes, depending on customer availability requirements and cost constraints. The module can be configured to use either single-phase or three-phase AC for the Non-Redundant Power, Power Supply Redundant and AC Redundant power delivery modes. A detailed description of these power modes and enclosure power configuration options is given in the technology brief titled “Technologies in the HP BladeSystem c7000 Enclosure”, available at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00816246/c00816246.pdf. If possible, it is recommended to use both Power Supply Redundant and AC Redundant power delivery modes to achieve the highest levels of availability.
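As a rough illustration of how these modes trade usable capacity for redundancy, the sketch below applies a simplified model to the six 2250 W supplies listed for the c7000 in Table 1. The actual Onboard Administrator power sizing rules are more detailed; this is only an assumed approximation of the concept.

```python
# A minimal sketch of the three power delivery modes, assuming six 2250 W
# supplies in a c7000 (per Table 1). Real enclosure power sizing differs;
# this only shows how each mode holds capacity in reserve.

SUPPLY_WATTS = 2250
SUPPLIES = 6

def usable_capacity(mode, supplies=SUPPLIES, watts=SUPPLY_WATTS):
    if mode == "non-redundant":            # every supply counts toward the load
        return supplies * watts
    if mode == "power-supply-redundant":   # N+1: one supply held in reserve
        return (supplies - 1) * watts
    if mode == "ac-redundant":             # N+N: half the supplies on each AC feed
        return (supplies // 2) * watts
    raise ValueError("unknown power mode")

for mode in ("non-redundant", "power-supply-redundant", "ac-redundant"):
    print(f"{mode}: {usable_capacity(mode)} W usable")
```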
Figure 3 shows the interconnections between the device I/O from the server blades in the front of the c3000 enclosure and the interconnect switch module (SWM) ports located at the rear of the enclosure for data transfer. The color-coded symbols on the diagram are identical to the symbols used on the physical enclosure and Onboard Administrator port mapping displays to identify the interconnect bays (see figure 4). Each front device bay is connected through the signal midplane to each of the rear interconnect bays. Interconnect bays 1 (for the c3000 / c7000) and 2 (for the c7000) are dedicated to signals from the embedded NICs located on the server blade system board. The remaining interconnect bays are available to accept signals from Mezzanine HBA cards mounted directly on the server blades. The server blade Mezzanine card positions connect directly through the signal midplane to the interconnect bays. The interconnect bays are designed to accept single-wide or double-wide switch modules (SWMs) for interconnect bandwidth and form factor scalability.
HP Integrity BL860c and BL870c blade servers are full-height and provide connections for two 2-port embedded NICs, also known as “LAN on Motherboard” or LOM, and up to three 4-port Mezzanine HBAs (labeled Mezz-1 through Mezz-3), as shown in figures 3 and 4. PCIe (PCI Express) connectivity from the blade system board to the LOM and Mezzanine HBAs uses paired groups of full-duplex communication “lanes”. A single-wide lane provides a 1x transfer rate of 500 MB/s, and a double-wide connection of two lanes provides a 2x transfer rate of 1 GB/s. Mezzanine cards are categorized into “types” that describe their data transfer capabilities: Type I Mezzanine cards provide a 1x transfer rate, while Type II Mezzanine cards provide 1 Gb/s through a single lane. The embedded LOM and host bus adapters installed in Mezzanine card slot 1 support single-wide lane interconnects, while Mezzanine slots 2 and 3 support either single-wide or double-wide lane interconnects. Figures 3 and 5 show the PCIe lane connections available to the LOM and Mezzanine cards on the blade server.
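As a quick check of the lane arithmetic above (assuming the quoted 500 MB/s per lane), a double-wide, two-lane connection works out to roughly 1 GB/s:

```python
# Lane arithmetic from the text above: ~500 MB/s per PCIe lane (assumed figure),
# so a double-wide (2x) connection gives about 1 GB/s aggregate.
PER_LANE_MB_S = 500

for lanes in (1, 2):
    print(f"{lanes}x connection: {lanes * PER_LANE_MB_S} MB/s")
# 1x connection: 500 MB/s
# 2x connection: 1000 MB/s (~1 GB/s)
```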
The Integrity BL860c is a single-wide server blade. The designation “N” in the diagram is used to map single-wide server blade connections to the switch module bay ports. The BL870c is a double-wide server blade and follows a slightly different port mapping scheme in that:
If a BL870c server blade is in device bays 1 and 2, the value of "N" is 2
If a BL870c server blade is in device bays 3 and 4, the value of "N" is 4
Several points to note regarding the c3000 diagram are:
All four LOM ports on each server blade use the same interconnect switch module bay, SWM-1
All four ports of Mezzanine card 1 share the same interconnect switch module bay, SWM-2
Ports on Mezzanine cards 2 and 3 are divided between interconnect switch module bays SWM-3 and SWM-4
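To make the mapping concrete, here is a minimal sketch of the rule shown in Figure 3: a full-height blade in slot N presents its four adapter ports on switch-module ports N, N+4, N+8 and N+12 of the bay serving that adapter. The bay assignments and helper below are an illustrative reading of the diagram, not a substitute for the enclosure's official port-mapping documentation.

```python
# Illustrative c3000 port-mapping helper based on Figure 3 and the notes above.
# Bay assignments and port arithmetic are read from the diagram; verify against
# the enclosure's port-mapping documentation before relying on them.

C3000_ADAPTER_TO_SWM = {
    "LOM":    ["SWM-1"],            # all four embedded NIC ports
    "Mezz-1": ["SWM-2"],            # all four Mezz-1 ports
    "Mezz-2": ["SWM-3", "SWM-4"],   # split between the two bays
    "Mezz-3": ["SWM-3", "SWM-4"],
}

def c3000_switch_ports(blade_slot_n):
    """Switch-module port numbers used by a full-height blade in slot N (1-4)."""
    return [blade_slot_n + offset for offset in (0, 4, 8, 12)]

print(c3000_switch_ports(2))   # a BL870c in device bays 1 and 2 -> ports 2, 6, 10, 14
```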
Due to the limited number of available interconnect module slots in the c3000, it is not possible to configure the enclosure for complete redundancy, that is, to eliminate a Mezzanine card and interconnect module as a single point of failure between the server blade and connectivity to the outside system infrastructure. This is an important point to consider when deploying mission-critical environments and deciding whether this configuration will meet defined availability requirements.
Figure 3: HP BladeSystem c3000 Enclosure Interconnect Diagram (full-height server blade N, N = 1-4; embedded LOM NICs and Mezz-1 through Mezz-3 PCIe connections to interconnect switch module bays SWM-1 through SWM-4; blade slot N = 2 or 4 for an Integrity BL870c)
For visual reference, Figure 4 shows the c3000 enclosure rack and tower Interconnect bay numbering scheme.
Figure 4: HP BladeSystem c3000 Enclosure Rack and Tower Interconnect Bay Numbering

Server blade signal           Interconnect bay    Interconnect bay label
NIC 1, 2, 3, 4 (embedded)     1                   Orange hexagon
Mezzanine 1                   2                   Yellow square
Mezzanine 2                   3 and 4             Green circle
Mezzanine 3                   3 and 4             Blue diamond
Figure 5 shows the interconnections between the server blades and interconnect switch module (SWM) ports for the c7000 enclosure, with a similar physical interconnect bay color-coding scheme (see figure 6). The mapping of the BL860c and BL870c blade connections to the switch module bay ports is similar to the c3000 enclosure; however, since the c7000 enclosure has 8 available device bays:
If a BL870c server blade is in device bays 1 and 2, the value of "N" is 2
If a BL870c server blade is in device bays 3 and 4, the value of "N" is 4
If a BL870c server blade is in device bays 5 and 6, the value of "N" is 6
If a BL870c server blade is in device bays 7 and 8, the value of "N" is 8
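The "N" rule for a double-wide BL870c is therefore the same in both enclosures: the blade occupies an odd/even pair of device bays and takes N from the even, higher-numbered bay. A minimal helper capturing this rule (the function name is a hypothetical example):

```python
# Hypothetical helper for the BL870c "N" rule stated above: N is simply the
# even (higher-numbered) device bay of the pair the double-wide blade occupies.

def bl870c_port_mapping_n(device_bays):
    """device_bays: the pair of bays occupied, e.g. (1, 2) in a c3000 or (7, 8) in a c7000."""
    return max(device_bays)

print(bl870c_port_mapping_n((3, 4)))  # -> 4
print(bl870c_port_mapping_n((7, 8)))  # -> 8 (c7000 only)
```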
Figure 5: HP BladeSystem c7000 Enclosure Interconnect Diagram (full-height server blade N, N = 1-8; embedded LOM NICs and Mezz-1 through Mezz-3 PCIe connections to interconnect switch module bays SWM-1 through SWM-8; blade slot N = 2, 4, 6 or 8 for an Integrity BL870c)
Several points to note regarding the c7000 diagram are:
The two LOM modules, each with a dedicated PCIe bus and two ports on each blade server, are divided between interconnect switch module bays SWM-1 and SWM-2 (NIC port 1 on each LOM controller connects to SWM-1, and NIC port 2 on each LOM controller connects to SWM-2)
Ports on Mezzanine cards 1, 2 and 3 are divided between interconnect switch module bays SWM-3 through SWM-8
With the additional interconnect module slots in the c7000, it is now possible to configure the enclosure to eliminate both Mezzanine cards and interconnect modules as single points of failure between the server blade and connectivity to the outside system infrastructure. Therefore, deploying c7000 enclosures is a best practice recommendation for mission-critical environments.
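The following sketch illustrates the kind of check implied here: for each fabric a blade uses, confirm that at least two ports reach it through two different adapters and two different interconnect bays, so that no single Mezzanine card or interconnect module is a single point of failure. The port-to-bay assignments are hypothetical examples rather than a recorded c7000 configuration.

```python
# Hedged SPOF-check sketch: each fabric should be reachable through at least two
# different adapters (LOM/Mezz cards) and two different interconnect bays.
# The assignments below are hypothetical examples, not a real configuration.

paths = {
    "LAN": [("LOM", "SWM-1"), ("LOM", "SWM-2"),
            ("Mezz-2", "SWM-5"), ("Mezz-2", "SWM-6")],
    "SAN": [("Mezz-1", "SWM-3"), ("Mezz-1", "SWM-4")],
}

def spof_report(fabric_paths):
    report = {}
    for fabric, pairs in fabric_paths.items():
        adapters = {adapter for adapter, _ in pairs}
        bays = {bay for _, bay in pairs}
        report[fabric] = {
            "adapter_spof": len(adapters) < 2,       # only one card carries this fabric
            "interconnect_spof": len(bays) < 2,      # only one interconnect bay carries it
        }
    return report

print(spof_report(paths))
# The example LAN paths are fully redundant; the SAN paths still depend on the
# single Mezz-1 card, so a second Fibre Channel Mezzanine card would be needed.
```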
For visual reference, Figure 6 shows the c7000 enclosure Interconnect bay numbering layout.