Future-proof your infrastructure with 10 Gigabit networking
Interconnect infrastructure in the HP BladeSystem c7000 enclosure
Enhancing setup and usability in HP Systems Insight Manager 5.2
Using Virtual Connect and non-Virtual Connect modules in the same HP BladeSystem c-Class enclosure
Storage HBAs are determined by the storage array provider
ProLiant management tip of the month
Recently published industry standard server technology communications
Contact us
Future-proof your infrastructure with 10 Gigabit networking
Until recently, 10 Gigabit Ethernet (10GbE) was considered overkill for most corporate networking infrastructures: it offered more bandwidth than the average datacenter required. But advances in processing power, coupled with the emergence of the virtualized datacenter and networked storage, are driving the need for increased I/O bandwidth, making 10GbE more attractive. In response to this trend, HP is leading the way with 10GbE networking solutions for existing ProLiant servers, as presented in this article.
Scale up and scale out
Datacenters and large businesses are evaluating, and in some cases already installing, 10GbE infrastructure. Looking ahead, IT organizations will need to continually improve effectiveness, enhance scalability, and accommodate increasing performance needs while at the same time consolidating their datacenter infrastructure. As datacenters move toward next-generation architectures, the importance of 10GbE products, such as high-performance switches and multiple-use or multifunction network adapters, will keep growing.
ISS Technology Update Volume 7, Number 3
HP 10GbE products
Server adapters
The multifunction NC510C and its fiber-based counterpart, the NC510F, now on the market, are the first HP 10GbE adapters for ProLiant servers (see Figure 1-1).
Figure 1-1. HP NC510C and NC510F PCIe 10 Gigabit server adapters (low profile bracket included)
Multifunction NC510C Fiber-based NC510F
These server adapters are ideal for a host of demanding applications, including:
• High-performance computing
• Database clusters
• Network attached storage
• Storage backups
• Grid systems
• Virtualization
• Server input/output (I/O)
• Fabric consolidation
Virtualization
One of the hottest topics in datacenter discussions is that of virtualization. The ability to configure individual servers into many
independent virtual servers has tremendous benefit. Today’s CPUs are certainly capable of supporting many virtual machines
from a processing standpoint. However, the primary deployment obstacle has been how to supply enough I/O bandwidth to
each virtual machine to keep it at maximum efficiency. The answer has been to deploy additional multi-port Gigabit (Gb)
adapters, typically from two to four per server, providing 4-8Gb/sec of bandwidth to be shared by all the virtual machines
running on that server. As the number of cores increases over time, allowing more virtual machines to run per server, the I/O requirements increase linearly. The other problem is that as the number of ports increases, so does the number of CPU cycles required for TCP processing, which in turn takes CPU cycles away from each virtual machine. A ProLiant server with 10GbE support is built from the ground up with the virtualized datacenter in mind. As the virtual datacenter evolves, the ProLiant DL and ML series servers offer built-in investment protection through highly programmable 10GbE devices.
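As an illustration of this scaling pressure, here is a back-of-the-envelope sketch. The VM count and link speeds are hypothetical, chosen only to mirror the 4-8 Gb/sec figure above:

```python
def per_vm_bandwidth_gbps(total_link_gbps: float, vm_count: int) -> float:
    """Naive even split of a server's aggregate NIC bandwidth across its VMs."""
    if vm_count <= 0:
        raise ValueError("vm_count must be positive")
    return total_link_gbps / vm_count

# Four 1Gb ports shared by 16 VMs vs. a single 10GbE port:
print(per_vm_bandwidth_gbps(4 * 1.0, 16))   # 0.25 Gb/s per VM
print(per_vm_bandwidth_gbps(10.0, 16))      # 0.625 Gb/s per VM
```

Even this simplistic even-split model shows why adding 1Gb ports does not keep pace once core counts, and therefore VM counts, keep climbing.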
End-to-end solution
A true end-to-end HP 10GbE solution can be achieved by combining the ProLiant 10GbE server adapters with a ProCurve 5400zl switch, which supports 24 10GbE ports or 144 mini-GBICs, or a combination of 10/100/1000 and 10GbE ports.
Gain performance, simplification, and efficiency
The NC510C and NC510F are specifically designed for environments that require high-performance Ethernet networking,
corporate datacenters wanting to simplify their operations, or resource-constrained customers needing greater server
efficiencies.
Summary
10GbE will prove to be the core infrastructure technology upon which future datacenters are built. In summary, one can expect
the following benefits from adopting 10GbE technology:
• Fewer I/O connections to the server
• Reduced hardware and support needs
• More effective use of bandwidth
• Significant cost savings
Additional resources
For additional information on the topics discussed in this article, visit the following links:
Interconnect infrastructure in the HP BladeSystem c7000 enclosure
A key component of the c7000 enclosure is the I/O infrastructure—essentially, a NonStop signal midplane with internal wiring
between the server or storage blades and the interconnect modules. The midplane is an entirely passive board. The term
passive means there are no active electrical components on the board. On one side of the board are the sixteen connectors for
the server/storage blades. Internal traces in the printed circuit board link them to eight connectors on the other side of the
board for the interconnect modules (Figure 2-1).
Figure 2-1. See-through illustration showing both the server blade connectors and the interconnect module connectors
The NonStop signal midplane uses the same four-trace differential SerDes transmit and receive signals to support either network-semantic protocols (such as Ethernet, Fibre Channel, and InfiniBand) or memory-semantic protocols (PCI Express). Figure 2-2 illustrates how the physical lanes can be logically overlaid onto sets of four traces. Interfaces such as Gigabit Ethernet (1000BASE-KX) or Fibre Channel need only a 1x lane, or a single set of four traces. Higher-bandwidth interfaces, such as InfiniBand DDR, use up to four lanes.
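The trace arithmetic above can be stated concisely. This is a sketch; the lane widths come directly from the interfaces named in this section:

```python
TRACES_PER_LANE = 4  # one differential transmit pair plus one receive pair

# Lane width (number of 1x lanes) per interface, per the article.
LANE_WIDTH = {
    "1000BASE-KX": 1,      # Gigabit Ethernet: a single 1x lane
    "Fibre Channel": 1,    # also a single 1x lane
    "InfiniBand DDR": 4,   # up to four lanes (4x)
}

def traces_required(interface: str) -> int:
    """Number of midplane traces an interface occupies."""
    return LANE_WIDTH[interface] * TRACES_PER_LANE

print(traces_required("1000BASE-KX"))     # 4
print(traces_required("InfiniBand DDR"))  # 16
```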
Figure 2-2. Logically overlaying physical lanes (right) onto sets of four traces (left)
The NonStop signal midplane has eight 200-pin connectors to support eight individual switches, four double bay switches, or a
combination of the two. The eight interconnect bays at the rear of the enclosure accommodate eight single or four redundant
interconnect modules. All interconnect modules plug directly into these interconnect bays. Each HP BladeSystem c-Class
Enclosure requires two interconnect switches or two pass-thru modules, side-by-side, for a fully redundant configuration.
The Onboard Administrator is the terminating point for all interconnect bays. An interconnect module cannot use the connection
to the Onboard Administrator to communicate with another interconnect module. The signal midplane also carries the
management signals from each bay to the Onboard Administrator modules. However, the management signals are completely
isolated from the high-speed server-to-interconnect signals.
Fabric connectivity and port mapping
Because the connections between the device bays and the interconnect bays are hard-wired through the NonStop signal
midplane, the mezzanine cards must be matched to the appropriate type of interconnect module. For example, a Fibre Channel
mezzanine card must be placed in the mezzanine connector that connects to an interconnect bay holding a Fibre Channel
switch. To simplify the installation of the various mezzanine cards and interconnect modules, the Onboard Administrator uses
an “electronic keying” process to detect any mismatch between the mezzanine cards and the interconnect modules.
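In pseudocode terms, the keying check reduces to a fabric-type comparison. This is a deliberately simplified sketch; the real Onboard Administrator logic considers more attributes than fabric type alone:

```python
def keying_ok(mezzanine_fabric: str, interconnect_fabric: str) -> bool:
    """Electronic keying, reduced to its essence: a mezzanine card may only
    be paired with an interconnect module of the same fabric type."""
    return mezzanine_fabric == interconnect_fabric

# A Fibre Channel mezzanine card wired to a Fibre Channel switch passes;
# wired to an Ethernet switch, the mismatch would be flagged.
print(keying_ok("Fibre Channel", "Fibre Channel"))  # True
print(keying_ok("Fibre Channel", "Ethernet"))       # False
```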
Interconnect bays 1 and 2 are reserved for Ethernet switches or pass-thru modules supporting server LAN on Motherboard
(LOM) NIC connections to ports on the Ethernet switch or pass-thru module. Supported bays for additional Ethernet switch
modules include unpopulated interconnect bays 3/4, 5/6, or 7/8. Redundant switches must be configured adjacent to one
another in interconnect bays 3/4, 5/6, or 7/8.
Connecting the ports of embedded devices to the interconnect bays in the HP BladeSystem c7000 Enclosure is relatively simple.
For port mapping, it does not matter in which bay a server blade is installed. The mezzanine connectors always connect to the
same interconnect bays.
Port mapping differs slightly between full-height and half-height server blades because full-height blades support additional
mezzanine cards. HP has simplified mapping mezzanine ports to switch ports by providing intelligent management tools such
as the Onboard Administrator and HP Systems Insight Manager software. The HP BladeSystem Onboard Administrator User
Guide provides specific port mapping details.

Four-trace SerDes signals between adjacent bays in the c7000 midplane permit bay-to-bay communications. Pairs of single-wide interconnect modules installed in adjacent horizontal bays provide redundant connectivity for dual-port interfaces in each device bay. Adjacent interconnect modules also have high-speed cross-connect capability through the enclosure’s NonStop signal midplane. For double-wide interconnects such as DDR InfiniBand, two modules are installed in c7000 interconnect bays 5 and 7 to provide redundant high-bandwidth connectivity.
Device bay crosslinks
Device bay crosslinks are wired between adjacent horizontal device bay pairs as indicated by the arrows in Figure 2-3. For
half-height server blades, these signals are used for four-lane PCIe connection to a partner device such as a tape blade or PCI
expansion blade. For full-height server blades, these signals are used for PCIe connection to a partner device in the lower
adjacent bay and require a PCIe pass-thru mezzanine card installed in mezzanine connector 3. The Onboard Administrator
disables the device bay crosslinks in instances where they cannot be used, for example between two server blades residing in
adjacent device bays.
Figure 2-3. HP BladeSystem c7000, front view. The arrows indicate device bay crosslinks.
Interconnect bay crosslinks
Interconnect bay crosslinks are wired between adjacent interconnect bay pairs as indicated by the arrows in Figure 2-4. These
signals can be enabled to provide module-to-module connections, such as Ethernet crosslink ports between matching switches,
or to provide stacking links for Virtual Connect modules. The Onboard Administrator disables the interconnect bay crosslinks in
instances where they cannot be used, for example, if two different modules reside in adjacent horizontal interconnect bays.
Figure 2-4. HP BladeSystem c7000, rear view. The arrows indicate interconnect bay crosslinks.
Additional resources
Visit these resources for additional information on the topics discussed in this article:
Enhancing setup and usability in HP Systems Insight Manager 5.2
About HP SIM
HP Systems Insight Manager (HP SIM) 5.2, released in February 2008, represents a significant milestone in the continuing
evolution of the product. Although HP SIM 5.2 includes some new features and support, the primary focus for this release was
to greatly expand the product’s ease-of-use. This effort included making significant changes to the configuration process as well
as creating new organizational tools that make it easier for the user to perform important tasks such as managing
communications and launching tools.
Improving HP SIM configuration – enhancements to the First Time Wizard and to the
Configure and Repair Agents tool
With the introduction of HP SIM 5.2, the role of the First Time Wizard has been expanded beyond that of simple discovery to
include enhancements that help the user set up and configure many of the HP SIM options, including new options that
streamline the program’s interface:
• Selecting Managed Environment. This option allows the user to configure Insight Manager for only those operating systems that will be managed. The HP SIM interface is then streamlined to remove any menus, collections, or reports that are not needed.
• Configuring Managed Systems. The First Time Wizard now runs a subset of the full “Configure or Repair Agents” tool immediately after initial discovery, allowing configuration of event monitoring, SSH access, and trust relationships for the managed systems.
In addition to these changes, the full Configure and Repair Agents tool has also been enhanced. For Windows-based systems,
users can now use this tool to both install and configure the required software agents needed to manage the systems, rather
than having to perform these tasks separately as in the past.
New Manage Communications tool
HP SIM 5.2 also introduces the Manage Communications tool, which makes it easier to manage communications between
systems and the HP SIM central management server. As shown in Figure 3-1, Manage Communications provides a single
location that displays the status of key communication pathways to the selected systems being managed:
• Identification (WBEM, SNMP, and so on)
• Events subscriptions
• Run Tools (SSH and trust configuration)
• Version Control configuration (Windows and Linux only)
Figure 3-1. Manage Communications tool
Manage Communications also allows quick access to detailed information regarding communications status by selecting a
system and clicking on the “Advise and Repair” button.
Improved access to tools
HP SIM 5.2 provides several new features that make it easier to find, manage, and launch the various HP SIM tools available for a selected system or systems.
• Tool Search – Accessible from the search panel, Tool Search can quickly find and launch tools by searching through their names, menu locations, and descriptions.
• Quick Launch – Available from the System List and System pages, Quick Launch provides quick access to the tools that apply to the currently selected set of systems. Quick Launch is also user-configurable to include favorite tools.
• Select Targets by search – Part of the Select Target Systems function, this new predictive search capability allows the user to quickly search for target systems based on any of the common system attributes (OS name, system name, and so on).
Other new support
In addition to the significant new features described above, HP SIM 5.2 also incorporates support for the following:
• Support for WMI agents. HP SIM now supports the new WMI-based agents for ProLiant servers running Windows, while continuing to support SNMP-based agents for those devices (such as iLO) that do not yet have WMI-based providers.
• Discovery of VM hosts and guests. Beginning with 5.2, HP SIM no longer requires the VMM plug-in to be installed in order to discover VM hosts and guests for VMware ESX. The VMM plug-in is, however, still required in order to manage them in Insight Manager.
• Improved HP-UX support. Users can now search on software and firmware versions for HP-UX machines as well as for Windows-based machines being managed by HP SIM. Additionally, property pages for HP-UX systems now include information on cooling, power, disk drives, HBAs, and multi-processing.
Additional resources
For more information on Systems Insight Manager 5.2 and HP management software in general, please visit the following links:
• Main page for HP Systems Insight Manager: www.hp.com/go/hpsim
• New HP Infrastructure Software blog: www.hp.com/blogs/managementsoftware
• HP Software Customer Connection, a consolidated view of tips and best practices, events, and additional tools regarding HP software: www.hp.com/go/swcustomerconnection
Using Virtual Connect and non-Virtual Connect modules in the same HP
BladeSystem c-Class enclosure
Virtual Connect technology simplifies the setup and administration of server connections to the network. However, there may be
some cases in which customers want to install both Virtual Connect and non-Virtual Connect interconnect modules in the same
enclosure. Non-Virtual Connect interconnect modules can be installed in the BladeSystem c-Class enclosures, but these modules
and corresponding server connections do not inherit the benefits of Virtual Connect. Switches and Pass-Thru modules will
operate according to their default configurations. If server network interface controllers or host bus adapters are connected to
non-Virtual Connect modules, any changes to the server connectivity will require reconfiguration on the local area network
(LAN) and/or storage area network (SAN). Such changes include migrating existing servers and deploying new servers. The
Virtual Connect Manager will only manage server connections that are attached to the Virtual Connect modules.
To combine Virtual Connect (VC) and non-Virtual Connect interconnect modules, customers should follow these general
guidelines:
• In all Virtual Connect configurations, an HP 1/10Gb VC-Ethernet Module or an HP 1/10Gb-F VC-Ethernet Module must be included in interconnect bay 1, because the Virtual Connect Manager software runs on a processor that resides on these VC-Ethernet modules.
• To support redundancy of the Virtual Connect environment, HP recommends that VC-Ethernet Modules be used in interconnect bays 1 and 2.
• If a VC-Ethernet Module is installed in an interconnect bay, the only module that can be installed in the adjacent interconnect
bay is another VC-Ethernet Module. A non-Virtual Connect module placed next to a Virtual Connect module will be marked
as incompatible and will not be managed by Virtual Connect.
• If an HP 4Gb VC-Fibre Channel module is installed in an interconnect bay, the only module that can be installed in the
adjacent interconnect bay is another VC-Fibre Channel module.
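The placement rules above can be sketched as a validation routine. This is illustrative only: the function and module names are hypothetical, and only the bay-1 and same-type-adjacency rules from this list are encoded:

```python
def vc_placement_issues(bays):
    """Check a bay-number -> module-type mapping against the guidelines above."""
    issues = []
    if bays.get(1) != "VC-Ethernet":
        issues.append("Interconnect bay 1 must hold a VC-Ethernet module "
                      "(it runs the Virtual Connect Manager).")
    # A VC module's horizontal neighbor must be another module of the same type.
    for left in (1, 3, 5, 7):
        a, b = bays.get(left), bays.get(left + 1)
        if a and b and a.startswith("VC-") and a != b:
            issues.append(f"Bays {left}/{left + 1}: {b!r} is incompatible next to {a!r}.")
    return issues

print(vc_placement_issues({1: "VC-Ethernet", 2: "VC-Ethernet"}))   # no issues
print(vc_placement_issues({1: "VC-Ethernet", 2: "GbE2c switch"}))  # one issue
```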
NOTE
The VC Manager creates bay-specific I/O profiles, assigns unique Media Access Control
(MAC) addresses and World-Wide Names (WWNs) to these profiles, and administers them
locally. The VC Manager will not provision VC-assigned MACs and WWNs to the server
connections that are linked to non-Virtual Connect modules.
Sample Supported Configurations
The Virtual Connect User Guide (http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00865618/c00865618.pdf) has several pages of sample configurations supported by Virtual Connect. It is not an exhaustive list, but it shows typical supported configurations.
Tables 4-1 and 4-2 give examples of supported configurations for the BladeSystem c7000 Enclosure.
Table 4-1. HP BladeSystem c7000 Enclosure example of a typical supported configuration
Storage HBAs are determined by the storage array provider
Storage array vendors work with server and host bus adapter (HBA) manufacturers, such as QLogic and Emulex, to qualify their
storage arrays with particular server/HBA configurations. This means that each server/HBA combination is qualified with a
particular storage array, including software, firmware, and HBA configuration. For this reason, it is not possible to substitute an
“equivalent” HP HBA to work with a storage array if the HBA and driver have not been qualified by the array vendor.
This is true for all servers. HP ISS benefits from this industry practice because it allows HP servers and HBAs to be attached to other vendors’ storage arrays. The bottom line: if you need to find an HBA for a particular HP server connected to another vendor’s storage array, it is always the responsibility of that vendor to designate the HBA and driver.
If an HP ISS customer uses an HP storage array, then HP can assist in determining support for the configuration. If an HP ISS
customer uses a third-party storage array, then that vendor’s support organization should be consulted to determine support and
any special configuration requirements.
ProLiant management tip of the month
Statement regarding HP Systems Insight Manager (SIM) 5.2 and HP Insight Control
Environment (ICE) 2.21 and Microsoft® Windows® host operating system compatibility
In the last several years, the makers of x86-compatible processors have added 64-bit extensions, primarily for memory addressing. These extensions, Intel®64 (formerly known as Intel Extended Memory 64 Technology) and AMD64, are a superset of the 32-bit Intel architecture (known as IA-32). As such, IA-32 applications run without modification, and natively, on processors with the 64-bit extensions; there is no performance penalty. This is in contrast with the 64-bit Intel® Itanium® processor family (IA-64), which uses the EPIC (Explicitly Parallel Instruction Computing) architecture and must use emulation (with the associated performance degradation) to run IA-32 code.
Microsoft® Windows Server 2003 and Windows Server 2008 are available in versions for x86 and x64 (the Microsoft
nomenclature indicating support of both Intel64 and AMD64). When running the x64 version of the operating system, all
device drivers must be 64-bit in nature. HP provides 64-bit drivers for HP ProLiant servers. Applications can be either 32-bit or
64-bit, and the two types can execute concurrently. Both Intel and AMD use the term “Compatibility Mode” to indicate running
32-bit applications on a 64-bit operating system. This is implemented in Windows with what is called the “Windows on
Windows subsystem” (WoW). The WOW.DLL converts function arguments from 32-bit to 64-bit and return values from 64-bit to
32-bit. This conversion is very low overhead as most of the calculations involve adding leading zeros. Address translation is
from flat 32-bit to flat 64-bit.
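The “adding leading zeros” step can be made concrete. This is a sketch of the idea, not Microsoft’s actual WoW implementation:

```python
import struct

def zero_extend_32_to_64(value_32: int) -> int:
    """Widen an unsigned 32-bit argument to 64 bits, as the WoW thunking
    layer does: the high 32 bits are simply filled with zeros."""
    packed = struct.pack("<I", value_32)                   # 4 little-endian bytes
    widened, = struct.unpack("<Q", packed + b"\x00" * 4)   # pad high bytes with zeros
    return widened

# 0xDEADBEEF (32-bit) becomes 0x00000000DEADBEEF (64-bit): same value.
print(hex(zero_extend_32_to_64(0xDEADBEEF)))
```

Because the operation is just padding, not recomputation, the conversion cost the text describes really is minimal.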
Currently, according to International Data Corporation (IDC), the overwhelming majority of the installed base of servers is running x86-32 versions of the Windows operating system. Even for new installations, many customers have chosen the x86-32 version of the operating system, even on 64-bit-capable processors. IDC expects 32-bit OS versions to remain the majority for new installations until after 2009, when 64-bit OS deployments start to become the norm.
In order to service the broadest array of customers, HP is committed to providing versions of our infrastructure management
software applications that are compatible with both the x86 and x64 versions of the appropriate Windows operating systems.
This process has just begun, and only a few applications provide that compatibility to date. Because there is no performance
penalty to running 32-bit code in an x64 operating system, the simplest way to achieve both HP and customer goals is to
provide x86-32 code which runs on both, and not provide separate versions of the applications for 32-bit and 64-bit
environments. Additionally, since our infrastructure software is resource-intensive and not particularly processor-intensive, we do
not expect there to be any gains in performance if we were to offer the applications in x64 re-compiled versions. It is possible
that in the future, if the installed base has successfully migrated to predominantly x64 deployments of the operating system, we
will cease providing 32-bit versions of the applications and will re-compile for the x64 environment.
Note that this discussion is regarding ProLiant servers with x86 processors from Intel (Intel®64) and AMD (AMD64) and does
not pertain to HP Integrity servers with the Intel® Itanium® processor (IA-64). HP does not provide versions of HP SIM for IA-64
versions of Windows.
Table 5-1. Product support information

Product | Hosting CMS on x64 OS | Manages targets with x64 OS | Notes
HP SIM 5.2 | Yes | Yes | Documented in QuickSpecs
Extensions for HP SIM on Microsoft® Windows® | Yes | Yes | Not applicable (N/A)
HP BladeSystem Integrated Manager | Yes | Yes | N/A
HP Insight Power Manager (IPM) | Yes | N/A | Target is iLO management processor
HP Service Essentials Remote Support Pack | Yes | Yes |
HP Vulnerability and Patch Management Pack (VPM) | No | Limited | VPM can scan systems running x64 OS but does not acquire and distribute patches for them
Insight Control Environment for BladeSystem and ProLiant ML/DL | No | Limited | While some components of ICE can be installed on x64, ICE as a whole cannot; limitations documented in the “HP Insight Control Management Support Matrix”
HP Insight Control Management Integrated Installer | No | N/A | Installer does not apply to targets
HP Insight Control Environment Advisor | Yes | N/A | The advisor will fail the installation environment, stating that it is an unsupported OS
HP Insight Control Management Services | No | N/A | Function only applies to CMS
HP Integrated Lights-Out 2 (iLO) Advanced Pack | N/A | N/A | No CMS component; target is iLO management processor
HP Integrated Lights-Out 2 Select Pack | N/A | N/A | No CMS component; target is iLO management processor
HP Rapid Deployment Pack (RDP) | No | Some limitations | Limitations documented in the “HP Insight Control Management Support Matrix”
HP Performance Management Pack (PMP) | Yes | Some limitations | Limitations documented in the “HP Insight Control Management Support Matrix”
HP Virtual Machine Management Pack (VMM) | Yes | Yes | Documented in QuickSpecs
HP Virtual Connect Enterprise Manager (VCEM) | No | N/A | Target is Virtual Connect interconnect modules
HP Insight Dynamics VSE | No | Yes | N/A
Additional resources
For additional information on the topics discussed in this article, visit:
• QuickSpecs and related product information: www.hp.com/go/insightcontrol
AMD and AMD Opteron are trademarks of Advanced Micro Devices, Inc.
Intel, Intel Xeon, and Intel Itanium are trademarks of Intel Corporation in the United States and other countries.
Microsoft and Windows are US registered trademarks of Microsoft Corporation.
TC080305NL
March 2008