
QuickSpecs
HPE InfiniBand Options for HPE BladeSystems c-Class

Overview

Page 1
HPE InfiniBand Options for HPE BladeSystems c-Class
Hewlett Packard Enterprise supports 56Gbps Fourteen Data Rate (FDR) and 40Gbps Quad Data Rate (QDR) InfiniBand (IB) products that include mezzanine Host Channel Adapters (HCAs) for server blades, switch blades for c-Class enclosures, and rack switches and cables for building scale-out solutions. This QuickSpecs focuses on mezzanine adapters for server blades and InfiniBand switch blades for c-Class enclosures; for details on the InfiniBand rack switches, standup PCI Express HCAs and FlexLOM adapters for HPE ProLiant and Apollo servers, as well as InfiniBand cables, please refer to the HPE InfiniBand for HPE ProLiant and Apollo servers QuickSpecs at:
https://www.hpe.com/h20195/v2/GetHTML.aspx?docname=c04154440

The following InfiniBand products based on Mellanox technology are available for the HPE BladeSystem c-Class from HPE:
HPE InfiniBand FDR/EN 10/40Gb 2-Port 544+M Adapter for HPE BladeSystem c-Class (Gen 9)
HPE InfiniBand QDR/EN 10Gb 2-Port 544+M Adapter for HPE BladeSystem c-Class (Gen 9)
HPE InfiniBand FDR 2-Port 545M Adapter for HPE BladeSystem c-Class (Gen8 and Gen9)
HPE BLc FDR IB Managed Switch for HPE BladeSystem c-Class
HPE BLc FDR IB Switch for HPE BladeSystem c-Class

The 544+M Mezzanine adapters for HPE BladeSystem c-Class are based on the Mellanox ConnectX-3 Pro technology and utilize the PCI Express 3.0 x8 interface. The 544+M Mezzanine adapters are dual function adapters capable of supporting dual InfiniBand ports or dual Ethernet ports when connected to supported Ethernet switches in c7000 enclosures (a mixed mode of one InfiniBand port and one Ethernet port is not supported on 544+M Mezzanine adapters). The 544+M Mezzanine adapters are supported on HPE BladeSystem c-Class Gen 9 blade servers.

The 545M Mezzanine Adapter for HPE BladeSystem c-Class is based on the Mellanox Connect-IB technology and utilizes the PCI Express 3.0 x16 interface. Two ports provide more than 100Gb/s of throughput in dual port FDR InfiniBand mode, with low latency for performance-driven server and storage clustering applications. Additional application acceleration is achieved through an improvement in message rate compared with previous generations of QDR InfiniBand cards.

NOTE: The 544+M and 545M Mezzanine Adapters are only supported in the HPE BLc7000 Platinum Enclosures.

InfiniBand host stack software (driver) is required to run on servers connected to the InfiniBand network. For mezzanine HCAs based on Mellanox technologies, Hewlett Packard Enterprise supports the Mellanox OFED driver on Linux 64-bit operating systems, and Mellanox WinOF on Microsoft Windows. For the latest information on Mellanox OFED software, please refer to:
http://www.mellanox.com/content/pages.php?pg=software_overview_ib&menu_section=34

The HPE BLc FDR IB Switch for HPE BladeSystem c-Class is a double wide switch for the HPE BladeSystem c7000 Platinum enclosure. It is based on the Mellanox SwitchX technology. The FDR IB switch blade has 16 downlink ports to connect up to 16 server blades in the c7000 enclosure, and 18 QSFP uplink ports for inter-switch links or to connect to external servers. All ports are capable of supporting 56Gbps (FDR) bandwidth. A subnet manager has to be provided; see the paragraph on subnet managers below for more details.

The HPE BLc FDR Managed IB Switch for HPE BladeSystem c-Class is a double wide switch for the HPE BladeSystem c7000 Platinum enclosure. It is based on the Mellanox SwitchX technology. The FDR IB switch blade has 16 downlink ports to connect up to 16 server blades in the c7000 enclosure, and 18 QSFP uplink ports for inter-switch links or to connect to external servers. All ports are capable of supporting 56Gbps (FDR) bandwidth. The switch contains an integrated management module that is capable of managing a cluster of up to 648 nodes.

An InfiniBand fabric consists of one or more InfiniBand switches connected via inter-switch links. The most commonly deployed fabric topology is a fat tree or its variations. A subnet manager is required to manage and control an InfiniBand fabric. The subnet manager functionality can be provided by either the HPE BLc FDR Managed IB Switch, a rack-mount InfiniBand switch with an embedded fabric manager (aka internally managed switch), or host-based subnet manager software on a server connected to the fabric.
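The 648-node management capacity quoted for the managed switch matches the size of the largest non-blocking two-tier fat tree that can be built from 36-port switch elements. As an illustration only (the sizing rule is an assumption, not text from this QuickSpecs), the arithmetic can be sketched in Python:

```python
def fat_tree_max_nodes(radix: int) -> int:
    """Max hosts in a non-blocking two-tier fat tree of radix-port switches.

    Each leaf switch devotes half its ports to hosts and half to spine
    uplinks; each spine switch connects once to every leaf, so there can
    be at most `radix` leaves. Capacity is therefore radix**2 / 2.
    """
    hosts_per_leaf = radix // 2   # half the leaf ports face hosts
    max_leaves = radix            # one spine port per leaf limits leaf count
    return max_leaves * hosts_per_leaf

print(fat_tree_max_nodes(36))  # 36-port FDR switch elements -> 648
```

With 36-port building blocks this yields exactly 648 nodes, consistent with the managed switch's stated cluster limit.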
OpenSM is a host-based subnet manager that runs on a server connected to the InfiniBand fabric. The Mellanox OFED software stack includes OpenSM for Linux, and Mellanox WinOF includes OpenSM for Windows. For comprehensive management and monitoring capabilities, Mellanox Unified Fabric Manager™ Advanced (UFM) is recommended for managing an InfiniBand fabric based on Mellanox/HPE InfiniBand switch products and Mellanox based mezzanine HCAs. An embedded fabric manager is available on the Mellanox internally managed 36-port FDR switch, and on modular FDR and QDR switches. Please refer to:
https://www.hpe.com/h20195/v2/GetHTML.aspx?docname=c04154440 for information about InfiniBand rack switches.

Hewlett Packard Enterprise supports InfiniBand copper and fiber optic cables with QSFP to QSFP connectors. The QSFP to QSFP QDR copper cables range from 1M to 7M for HCA to switch or inter-switch links at QDR speed. The QSFP to QSFP FDR copper cables range from 0.5M to 3M for HCA to switch or inter-switch links at FDR speed. The QSFP to QSFP FDR optical cables range from 3M to 30M for HCA to switch or inter-switch links at FDR speed. Please refer to:
https://www.hpe.com/h20195/v2/GetHTML.aspx?docname=c04154440 for more details on supported InfiniBand cables.

What's New
HPE InfiniBand FDR/EN 10/40Gb 2-Port 544+M Mezzanine Adapter for HPE BladeSystem c-Class
Mellanox Dual Function ConnectX-3 Pro technology
Dual port FDR InfiniBand or 10/40G Ethernet
PCI Express 3.0 x8
HPE InfiniBand QDR/EN 10Gb 2-Port 544+M Mezzanine Adapter for HPE BladeSystem c-Class
Mellanox Dual Function ConnectX-3 Pro technology
Dual port QDR InfiniBand or 10Gb Ethernet
PCI Express 3.0 x8
Models
Mellanox CX-3 Pro Dual Function Mezzanine Adapters
HP InfiniBand QDR/Ethernet 10Gb 2-port 544+M Adapter
764282-B21
HP InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+M Adapter
764283-B21
NOTE: Supported on HPE ProLiant BL460c Gen 9.
NOTE: Designed for use with HPE BLc7000 Platinum Enclosures. If used with HPE BLc7000 enclosure part numbers 507014-B21, 507015-B21, 507016-B21, 507017-B21 and 507019-B21, the adapter must be installed in mezz slot 1. Not supported on earlier enclosures.
NOTE: For firmware release notes of the HPE InfiniBand QDR/EN 10Gb 2-Port 544+M Adapter, please visit the support website, then select your operating system. For firmware release notes of the HPE InfiniBand FDR/EN 10/40Gb 2-Port 544+M Adapter, please visit the support website, then select your operating system.
Mellanox Connect-IB InfiniBand Mezzanine Adapter
HP InfiniBand FDR 2-port 545M Adapter
702213-B21
NOTE: Supported on HPE ProLiant BL460c Gen 8 & Gen 9, HPE ProLiant BL465c Gen 8, HPE ProLiant BL660c Gen 8 & Gen 9, and HPE ProLiant WS460c Gen 8 & Gen 9.
NOTE: Designed for use with HPE BLc7000 Platinum Enclosures. If used with HPE BLc7000 enclosure part numbers 507014-B21, 507015-B21, 507016-B21, 507017-B21 and 507019-B21, the adapter must be installed in mezz slot 1. Not supported on earlier enclosures. For firmware release notes of the HPE InfiniBand FDR 2-Port 545M Adapter, please visit the support website, then select your operating system.
Mellanox Software License
Mellanox Unified Fabric Manager Advanced 1yr 24x7 Updates and Technical Support Flex License
BD571A
Mellanox Unified Fabric Manager Advanced 3yr 24x7 Updates and Technical Support Flex License
BD572A
Mellanox IB switch blades and options
HP 4X FDR InfiniBand Managed Switch Module for c-Class BladeSystem
648311-B21
HP 4X FDR InfiniBand Switch Module for c-Class BladeSystem
648312-B21
NOTE: The HPE BLc FDR IB Switches are only supported on the HPE BladeSystem c7000 Platinum Enclosures, P/N's: 681840-B21, 681842-B21, 681844-B21, 507014-B21, 507015-B21, 507016-B21, 507017-B21 and 507019-B21.
NOTE: Please see the HPE BladeSystem c7000 Enclosure QuickSpecs for additional information at:
http://h18000.www1.hp.com/products/QuickSpecs/12810_div/12810_div.html (Worldwide)
Standard Features
Product Features
InfiniBand mezzanine HCAs based on Mellanox ConnectX-3 Pro technology for Gen 9 blades
HPE InfiniBand QDR/EN 10Gb 2-Port 544+M Adapter (764282-B21)
Mellanox ConnectX-3 Pro technology
PCI Express 3.0 x8
Dual-port QDR InfiniBand
Dual-port 10 Gb Ethernet
HPE InfiniBand FDR/EN 10/40Gb 2-Port 544+M Adapter (764283-B21)
Mellanox ConnectX-3 Pro technology
PCI Express 3.0 x8
Dual port FDR InfiniBand in HPE BLc7000 Platinum Enclosures
Dual-port QDR InfiniBand
Dual-port 40 Gb Ethernet
Dual-port 10 Gb Ethernet
Both adapters support the Mellanox OFED Linux driver stack and Microsoft Windows Server. For firmware release notes, please visit the support website for part number 764282-B21 and the support website for part number 764283-B21, then select your operating system.

InfiniBand mezzanine HCAs based on Mellanox Connect-IB technology for Gen 8 & Gen 9 blades
HPE InfiniBand FDR 2-Port 545M Adapter (702213-B21)
Mellanox Connect-IB technology
Dual port FDR InfiniBand
PCI Express 3.0 x16 interface for increased throughput
Improved message rate
Designed for use with HPE BLc7000 Platinum enclosures
Supports the Mellanox OFED Linux driver stack. For firmware release notes, please visit the support website, then select your operating system.

InfiniBand switch blades based on Mellanox technology
HPE BLc FDR IB Switch for HPE BladeSystem c-Class
Double wide switch blade for the Platinum c7000 enclosure with 16 downlink ports to connect server blades via the midplane, and 18 QSFP uplink ports for inter-switch links or to connect to external servers. All ports are capable of supporting 56Gbps (FDR) bandwidth. All uplink ports support copper and fiber optic cables. One version of the switch (648312-B21) is externally managed, i.e. a subnet manager has to be provided on the fabric (see the subnet manager discussion above). The other version of the switch (648311-B21) contains a management module and can run the subnet manager.

Performance
Bandwidth:
All ports on FDR HCA cards and switches are capable of supporting a 56Gbps signaling rate, with a peak data rate of 52Gbps in each direction. All ports on QDR HCA cards and switches are capable of supporting a 40Gbps signaling rate, with a peak data rate of 32Gbps in each direction.
Scalability and Reliability:
Standards support:
The 544+M Mezzanine Adapters: PCI Express revision 3.0 x8 compliant
The 545M Mezzanine Adapter: PCI Express revision 3.0 x16 compliant
IBTA version 1.2 compatible
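The signaling-rate versus peak-data-rate figures quoted under Performance follow largely from the link encodings: QDR links use 8b/10b encoding (20% overhead), while FDR links use the more efficient 64b/66b encoding. The sketch below is an illustration only (the encoding schemes are standard InfiniBand facts, not text from this QuickSpecs); note the FDR peak of 52Gbps quoted above is slightly below the raw encoded rate, reflecting additional protocol overhead:

```python
def encoded_data_rate_gbps(signaling_gbps: float,
                           payload_bits: int,
                           coded_bits: int) -> float:
    """Usable line rate after subtracting line-encoding overhead."""
    return signaling_gbps * payload_bits / coded_bits

qdr = encoded_data_rate_gbps(40, 8, 10)    # 8b/10b -> 32.0, matching the QDR figure
fdr = encoded_data_rate_gbps(56, 64, 66)   # 64b/66b -> ~54.3 before protocol overhead
print(qdr, round(fdr, 1))
```

For QDR the encoding alone accounts for the full 40Gbps-to-32Gbps reduction; for FDR the remaining gap down to the quoted 52Gbps comes from protocol-level overhead.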