HPE InfiniBand Options for HPE ProLiant and Apollo Servers
Overview
Hewlett Packard Enterprise supports InfiniBand (IB) products that include Host Channel Adapters (HCAs), HPE FlexLOM Adapters, switches, and cables for HPE ProLiant servers and HPE Apollo Servers. This QuickSpecs covers 56Gbps FDR InfiniBand products. For 100Gbps EDR InfiniBand adapter products, please refer to catalog/servers/server-adapters.html. For details on InfiniBand support for HPE BladeSystem servers, please refer to the corresponding HPE BladeSystem QuickSpecs.

Hewlett Packard Enterprise supports the following InfiniBand adapters based on Mellanox technologies:
• HPE IB FDR/EN 40/50Gb 2P 547FLR-QSFP Adapter (Based on Mellanox ConnectX-5 technology)
• HPE IB FDR/EN 10/40Gb 2P 544+FLR-QSFP Adapter (Based on Mellanox ConnectX-3 Pro technology)
• HPE IB FDR/EN 10/40Gb 2P 544+QSFP Adapter (Based on Mellanox ConnectX-3 Pro technology)
The 547 adapter is based on the Mellanox ConnectX-5 technology. This adapter utilizes the PCI Express 3.0 x8 interface. It is a dual function adapter that is capable of supporting two Ethernet ports, a mix of InfiniBand on Port 1 and Ethernet on Port 2, or two InfiniBand ports. This adapter is designed for PCI Express 3.0 x8 FlexibleLOM expansion slots on HPE Gen 9 and Gen 10 servers.

The 544+ adapters are based on the Mellanox ConnectX-3 Pro technology. The 544+ adapters utilize the PCI Express 3.0 x8 interface. The 544+ adapters are dual function adapters that are also capable of supporting single or dual Ethernet ports and a mix of InfiniBand on Port 1 and Ethernet on Port 2. The 544+QSFP adapter is designed for PCI Express 3.0 x8 expansion slots on HPE Gen 9 and Gen 10 servers. The 544+FLR-QSFP adapters are designed for PCI Express 3.0 x8 FlexibleLOM expansion slots on HPE Gen 9 and Gen 10 servers.
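NOTE: Because each port on these dual function adapters can present itself as either InfiniBand or Ethernet, it can be useful to confirm the active personality from the operating system after changing it (for example, via UEFI on the 547 adapter). The following is a minimal sketch, not an HPE-provided tool, assuming a Linux host with the ibv_devinfo utility from libibverbs installed:

    # Minimal sketch: report each adapter port's active personality
    # (InfiniBand or Ethernet) as seen by the OS. Assumes ibv_devinfo
    # from libibverbs is installed on a Linux host.
    import subprocess

    out = subprocess.run(["ibv_devinfo"], capture_output=True,
                         text=True, check=True).stdout

    device = port = None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("hca_id:"):
            device = line.split()[-1]
        elif line.startswith("port:"):
            port = line.split()[-1]
        elif line.startswith("link_layer:"):
            # link_layer reads "InfiniBand" or "Ethernet" for each port.
            print(f"{device} port {port}: {line.split()[-1]}")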
InfiniBand host stack software (driver) is required to run on servers connected to the InfiniBand fabric. For HCAs based on Mellanox technologies, Hewlett Packard Enterprise supports the Mellanox OFED driver stack on Linux 64-bit operating systems and the Mellanox WinOF driver stack on the Microsoft Windows operating system.
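NOTE: After installing the driver stack, the installed OFED version and the HCA port state can be verified from the command line. The sketch below is an illustration only, assuming a Linux host with the Mellanox OFED utilities ofed_info and ibstat on the PATH:

    # Minimal sketch: verify the Mellanox OFED stack and HCA port state.
    # Assumes the OFED utilities ofed_info and ibstat are installed.
    import subprocess

    def run(cmd):
        # Run a command and return its stdout as text.
        return subprocess.run(cmd, capture_output=True,
                              text=True, check=True).stdout

    print("OFED version:", run(["ofed_info", "-s"]).strip())

    # ibstat prints each HCA port with lines such as "State: Active"
    # and "Rate: 56" (56 Gb/s corresponds to 4x FDR).
    for line in run(["ibstat"]).splitlines():
        line = line.strip()
        if line.startswith(("CA '", "State:", "Rate:")):
            print(line)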
An InfiniBand fabric is constructed with one or more InfiniBand switches connected via inter-switch links. A subnet manager is required to manage an InfiniBand fabric. OpenSM is a host-based subnet manager that runs on a server connected to the InfiniBand fabric.
The Mellanox OFED software stack includes OpenSM for Linux. For comprehensive management and monitoring capabilities, Mellanox Unified Fabric Manager Advanced is recommended for managing the InfiniBand fabric based on Mellanox InfiniBand products.
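NOTE: Whether the subnet manager is OpenSM on a host or the management module of a managed switch, its presence on the fabric can be confirmed from any connected server. A minimal sketch, assuming the sminfo utility from infiniband-diags is installed:

    # Minimal sketch: confirm that a master subnet manager is answering
    # on the fabric. Assumes sminfo from infiniband-diags is installed.
    import subprocess

    result = subprocess.run(["sminfo"], capture_output=True, text=True)
    if result.returncode == 0:
        # sminfo prints the master SM's LID, GUID, priority, and state.
        print("Subnet manager found:", result.stdout.strip())
    else:
        # No master SM answered: start OpenSM on a host or enable the
        # management module on a managed switch.
        print("No subnet manager detected:", result.stderr.strip())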
The following InfiniBand FDR switch products based on Mellanox technologies are available from HPE:
• Mellanox IB FDR 36-port Managed switch
• Mellanox IB FDR 36-port switch
• Mellanox IB QDR/FDR 648, 324 and 216 port Modular Switches
The switch air flow for cooling is from the power side (front of the rack) to the ports side (rear of the rack).
The following Mellanox software for InfiniBand switches and adapters is available from HPE:
• Mellanox Unified Fabric Manager Advanced (UFM)
Mellanox Unified Fabric Manager™ Advanced (UFM™) is a powerful platform for managing scale-out computing environments. UFM enables data center operators to efficiently provision, monitor, and operate the modern data center fabric. UFM runs on a server and is used to monitor and analyze the health and performance of Mellanox fabrics. UFM can also be used to automate provisioning and device management tasks; for example, UFM can communicate with devices to reset or shut down ports or devices, perform firmware and software upgrades, and so on. UFM's extensive API enables it to integrate easily with existing management tools for a unified cluster view. UFM also includes the ability to save historical information, to send alerts to external systems, and to activate user-made scripts based on system events.
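NOTE: As one illustration of the integration pattern, an external tool can poll UFM over HTTP and feed fabric data into an existing dashboard. The sketch below is hypothetical: the host name, credentials, endpoint path, and response fields are assumptions made for illustration, not documented API details; consult the UFM documentation for the actual REST interface.

    # Hypothetical sketch of polling a UFM server's REST API from an
    # external tool. Host, credentials, endpoint, and field names are
    # illustrative assumptions; see the UFM documentation for the real API.
    import base64
    import json
    import urllib.request

    UFM_HOST = "ufm.example.com"                           # hypothetical server
    URL = f"https://{UFM_HOST}/ufmRest/resources/systems"  # assumed path

    request = urllib.request.Request(URL)
    token = base64.b64encode(b"admin:password").decode()   # placeholder
    request.add_header("Authorization", f"Basic {token}")

    with urllib.request.urlopen(request) as response:
        systems = json.load(response)

    # Print one line per discovered system (field names assumed).
    for system in systems:
        print(system.get("name"), system.get("severity"))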
Hewlett Packard Enterprise supports InfiniBand copper and fiber optic cables with QSFP to QSFP connectors.
• QSFP to QSFP FDR copper cables range from 0.5M to 3M for HCA to switch, or inter-switch links at FDR speed.
• QSFP to QSFP FDR fiber optic cables range from 3M to 30M for HCA to switch, or inter-switch links at FDR speed.
What's New
• Support for HPE IB FDR/EN 40/50Gb 2P 547FLR-QSFP Adapter
At A Glance
• Dual function InfiniBand/Ethernet cards based on Mellanox ConnectX-5 technology
  o HPE IB FDR/EN 40/50Gb 2P 547FLR-QSFP Adapter
    - Dual QSFP ports PCI Express 3.0 card based on ConnectX-5 technology
    - Support PCI Express 3.0 x8
    - Single or dual ports FDR InfiniBand
    - Single or dual ports 10/40/50Gb Ethernet
    - A mix of InfiniBand on Port 1 and Ethernet on Port 2 (Default)
    - Use UEFI to change the port personality
    - NCSI Shared-Network-Port (SNP) and Wake-on-LAN are supported on Port 2 only
    - InfiniBand features: Improved message rate, Tag matching, Adaptive routing, Atomic operations
    - Additional features: RoCE, DPDK, SRIOV, VXLAN, NVGRE
    - Support the following Hewlett Packard Enterprise servers: Apollo 2000/XL170r Gen10, Apollo 2000/XL190r Gen10, Apollo 4200 Gen9, Apollo 4500/XL450 Gen10, DL360 Gen10, DL380 Gen10, DL385 Gen10, DL560 Gen10, DL580 Gen10
    - For firmware release notes, please visit the support website and search by part number and operating system. Look for the most recent release.
• Dual function InfiniBand/Ethernet cards based on Mellanox ConnectX-3 Pro technology
  o HPE IB FDR/EN 10/40Gb 2P 544+QSFP Adapter
    - Dual QSFP ports PCI Express 3.0 card based on ConnectX-3 Pro technology
    - Support PCI Express 3.0 x8
    - Single or dual ports FDR InfiniBand
    - Single or dual ports 10/40Gb Ethernet
    - FDR IB on port 1 and Ethernet on port 2
    - Support the following Hewlett Packard Enterprise servers: DL120 Gen9, DL160 Gen9, DL180 Gen9, Apollo 2000/XL1x0 Node, Apollo 4200, Apollo 4500/XL450 Gen9, Apollo 6000/XL230b Gen9
    - For firmware release notes, please visit the support website and search by part number and operating system. Look for the most recent release.
  o HPE IB FDR/EN 10/40Gb 2P 544+FLR-QSFP Adapter
    - Dual QSFP ports HPE Flexible LOM Adapter based on ConnectX-3 Pro technology
    - Support PCI Express 3.0 x8
    - Single or dual port FDR InfiniBand
    - Single or dual port 10/40Gb Ethernet
    - FDR IB on port 1 and Ethernet on port 2
    - Support the following Hewlett Packard Enterprise servers: DL180 Gen9, DL360 Gen9, DL380 Gen9, DL560 Gen9, DL580 Gen8, DL580 Gen9, Apollo 2000/XL1x0 Node, Apollo 4200, Apollo 4500/XL450 Gen9, Apollo 6000/XL230a Gen9, Apollo 6000/XL230b Gen9, Apollo 6000/XL250a Gen9
    - For firmware release notes, please visit the support website and search by part number and operating system. Look for the most recent release.
• Mellanox Software
  o Mellanox Unified Fabric Manager Advanced (UFM)
    - Can run on one server, or on two servers for high availability.
    - NOTE: Supported on 64-bit Linux operating systems only.
    - Activates user-made scripts based on system events
    - Advanced multicast optimizations through multicast tree management
    - User management with control and authorization groups
• InfiniBand switches based on Mellanox SwitchX-2 technology
  o Mellanox IB FDR 36-port unmanaged switch (part number: 670767-B21)
    - 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables, with the front-to-rear cooling fan that has air flow from the front to the rear (ports side).
    - Dual power supplies for redundancy.
  o Mellanox IB FDR 36-port unmanaged switch with reversed airflow fan unit (part number: 670768-B21)
    - 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
    - Dual power supplies for redundancy.
  o Mellanox IB FDR 36-port managed switch (part number: 670769-B21)
    - 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables
    - Integrated management module for Fabric Management
    - Dual power supplies for redundancy.
  o Mellanox IB FDR 36-port managed switch with reversed airflow fan unit (part number: 670770-B21)
    - 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
    - Integrated management module for Fabric Management
    - Dual power supplies for redundancy.
• InfiniBand cables
  o For InfiniBand cables, please see the cables section below. Please also consult the product's firmware release notes for the list of supported cables.
• Ethernet cables
  o The Dual Function cards support most HPE Ethernet cables. Consult the product's firmware release notes for the list of supported cables.
  o 10 Gb Ethernet requires the use of a QSFP to SFP+ Adapter 655874-B21.
Models
HPE Dual Function Adapters
InfiniBand/Ethernet based on Mellanox ConnectX-5 technology