About This Manual
This User Manual describes NVIDIA® Mellanox® ConnectX®-6 VPI adapter cards for Open Compute
Project (OCP), Spec 3.0. It provides details on the interfaces of the board, specifications, required
software and firmware for operating the board, and relevant documentation.
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-6 VPI adapter
cards for OCP Spec 3.0.
OPN: Marketing Description

MCX653435A-HDAI: ConnectX®-6 VPI adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 4.0 x16, Internal Lock

MCX653436A-HDAI: ConnectX®-6 VPI adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Dual-port QSFP56, PCIe 4.0 x16, Internal Lock

MCX653435A-EDAI: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 3.0/4.0 x16, Internal Lock

MCX653435A-HDAE: ConnectX®-6 VPI adapter card, 200Gb/s (HDR IB and 200GbE) for OCP 3.0, with host management, Single-port QSFP56, PCIe 4.0 x16, Ejector Latch
Intended Audience
This manual is intended for the installer and user of these cards. The manual assumes basic familiarity
with InfiniBand/VPI network and architecture specifications.
Technical Support
Customers who purchased Mellanox products directly from Mellanox are invited to contact us through
the following methods:
• URL: http://www.mellanox.com > Support
• E-mail: support@mellanox.com
• Tel: +1.408.916.0055
Customers who purchased Mellanox M-1 Global Support Services, please see your contract for details
regarding Technical Support.
Customers who purchased Mellanox products through a Mellanox-approved reseller should first
seek assistance through their reseller.
Related Documentation
Mellanox OFED for Linux User Manual and Release Notes: User Manual describing OFED features, performance, diagnostics, tools content and configuration. See Mellanox OFED for Linux Documentation.

WinOF-2 for Windows User Manual and Release Notes: User Manual describing WinOF-2 features, performance, Ethernet diagnostics, tools content and configuration. See WinOF-2 for Windows Documentation.

Mellanox VMware for Ethernet User Manual and Release Notes: User Manual describing the various components of the Mellanox ConnectX® NATIVE ESXi stack. See http://www.mellanox.com > Products > Software > Ethernet Drivers > VMware Driver.

Mellanox Firmware Utility (mlxup) User Manual and Release Notes: Mellanox firmware update and query utility used to update the firmware.

MFT User Manual and Release Notes: User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual.

IEEE Std 802.3 Specification: IEEE Ethernet specification at http://standards.ieee.org/

PCI Express Specifications: Industry Standard PCI Express Base and Card Electromechanical Specifications at https://pcisig.com/specifications

Open Compute Project Specifications: https://www.opencompute.org/

Mellanox LinkX Interconnect Solutions: Mellanox LinkX InfiniBand cables and transceivers are designed to maximize the performance of High-Performance Computing networks, requiring high-bandwidth, low-latency connections between compute nodes and switch nodes. Mellanox offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s) and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs) and transceivers, in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, Mellanox tests every product in an end-to-end environment, ensuring a Bit Error Rate of less than 1E-15. Read more at https://www.mellanox.com/products/interconnect/infiniband-overview.php
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes.
The use of Mb or Mbits (small b) indicates size in megabits. In this document, PCIe is used to
mean PCI Express.
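These conventions matter when reading the capacity figures later in this manual (for example, the flash sizes under Memory Components are given in bits). A minimal sketch of the conversion; the helper name is illustrative, not from this manual:

```python
def mbits_to_mbytes(mbits: float) -> float:
    """Convert megabits (Mb) to megabytes (MB): 1 byte = 8 bits."""
    return mbits / 8

# A 256 Mbit SPI flash device holds 32 MBytes.
print(mbits_to_mbytes(256))  # 32.0
```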
Introduction
This is the User Guide for Mellanox Technologies VPI adapter cards based on the ConnectX®-6
integrated circuit device for OCP Spec 3.0. These adapters provide the highest-performing,
lowest-latency, and most flexible interconnect solution for PCI Express Gen 3.0/4.0 servers
used in Enterprise Data Centers and High-Performance Computing environments.
The following provides the ordering part number, port speed, number of ports, and PCI Express speed.
Important Note:
ConnectX-6 OCP 3.0 cards were tested for Shock & Vibe in accordance with Mellanox specifications
and setups as defined in document XXX, as the OCP Spec 3.0 available at that time did not contain any
S&V definitions. A newer version of OCP Spec 3.0 defines S&V specifications, and Mellanox is in
the midst of retesting these cards to comply with it.
ConnectX-6 offers the highest-throughput VPI adapter, supporting HDR 200Gb/s InfiniBand and 200Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.

ConnectX-6 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. ConnectX-6 is InfiniBand Architecture Specification v1.3 compliant.

InfiniBand HDR: A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 53.125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 200Gb/s.

InfiniBand HDR100: A standard InfiniBand data rate, where each lane of a 2X port runs a bit rate of 53.125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.

InfiniBand EDR: A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 25.78125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.

Up to 200 Gigabit Ethernet: Mellanox adapters comply with the IEEE 802.3 standards listed in Features and Benefits.
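The EDR figure above can be checked directly from the per-lane rate and the 64b/66b encoding overhead; note that the same arithmetic for the HDR lane rate lands slightly above the quoted nominal 200Gb/s and 100Gb/s figures. A quick sketch:

```python
def effective_gbps(lane_rate_gbps: float, lanes: int) -> float:
    """Aggregate payload rate: per-lane bit rate x 64/66 encoding efficiency x lane count."""
    return lane_rate_gbps * 64 / 66 * lanes

print(effective_gbps(25.78125, 4))  # EDR 4X: exactly 100.0 Gb/s
print(effective_gbps(53.125, 4))    # HDR 4X: ~206.1 Gb/s, quoted nominally as 200Gb/s
```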
Memory Components:
• EEPROM - The EEPROM capacity is 32Kbit.
• SPI Quad - Includes a 256Mbit SPI Quad Flash device (MX25L25645GXDI-08G device by Macronix).
Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
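To make the encapsulation concrete, the sketch below parses the 8-byte VXLAN header defined in RFC 7348 (a flags byte plus a 24-bit VNI). This illustrates the packet format only; it is not the ConnectX-6 offload API:

```python
def parse_vxlan_header(hdr: bytes) -> tuple[int, int]:
    """Parse an 8-byte VXLAN header (RFC 7348): flags byte, then a 24-bit VNI at bytes 4-6."""
    if len(hdr) != 8:
        raise ValueError("VXLAN header is exactly 8 bytes")
    flags = hdr[0]                          # 0x08 means the VNI field is valid
    vni = int.from_bytes(hdr[4:7], "big")   # 24-bit network identifier
    return flags, vni

# Header carrying VNI 42 with the valid-VNI flag set
hdr = bytes([0x08, 0, 0, 0]) + (42).to_bytes(3, "big") + b"\x00"
print(parse_vxlan_header(hdr))  # (8, 42)
```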
RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
Mellanox PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
CPU Offload
Adapter functionality enabling reduced CPU overhead, leaving more CPU available for computation tasks.
• Open vSwitch (OVS) offload using ASAP²™
• Flexible match-action flow tables
• Tunneling encapsulation/de-capsulation
Quality of Service (QoS)
Support for port-based Quality of Service, enabling various application requirements for latency and SLA.

Hardware-based I/O Virtualization
ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabrics offloads for the target machine
• Erasure Coding
• T10-DIF Signature Handover

SR-IOV
ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
NC-SI
The adapter supports a Network Controller Sideband Interface (NC-SI), MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller interface.

High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing

Wake-on-LAN (WoL)
The adapter supports Wake-on-LAN (WoL), a computer networking standard that allows an adapter to be turned on or awakened by a network message. TBD: In STBY mode, only port0 is available.

Reset-on-LAN (RoL)
Supported.
Operating Systems/Distributions
• RHEL/CentOS
• Windows
• FreeBSD
• VMware
• OpenFabrics Enterprise Distribution (OFED)
• OpenFabrics Windows Distribution (WinOF-2)
Connectivity
• Interoperable with 1/10/25/40/50/100/200 Gb/s Ethernet switches
• Passive copper cable with ESD protection
• Powered connectors for optical and active cable support
Interfaces
InfiniBand Interface
The network ports of the ConnectX®-6 adapter cards are compliant with the InfiniBand Architecture
Specification, Release 1.3. InfiniBand traffic is transmitted through the cards' QSFP56 connectors.
Ethernet QSFP56 Interfaces
The network ports of the ConnectX®-6 adapter card are compliant with the IEEE 802.3 Ethernet
standards listed in Features and Benefits. Ethernet traffic is transmitted through the QSFP56
connectors on the adapter card.
The adapter card includes special circuits to protect from ESD shocks to the card/server when
plugging copper cables.
PCI Express Interface
The table below describes the supported PCIe interface in ConnectX-6 adapter cards.
Supported PCIe Interface: PCIe Gen 3.0/4.0 (1.1 and 3.0 compatible) through x16 edge connectors

Features:
• Link rates: 2.5, 5.0, 8.0, or 16 GT/s
• Auto-negotiation to x16, x8, x4, x2, or x1
• Support for MSI/MSI-X mechanisms
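As a rough sanity check on these link rates, raw PCIe throughput is the per-lane transfer rate times the encoding efficiency times the lane count (PCIe Gen 3/4 use 128b/130b encoding; Gen 1/2 use 8b/10b). A sketch, before any protocol overhead:

```python
def pcie_raw_gbps(gts: float, lanes: int, enc_num: int, enc_den: int) -> float:
    """Raw link throughput in Gb/s: GT/s per lane x encoding efficiency x lane count."""
    return gts * enc_num / enc_den * lanes

print(pcie_raw_gbps(16.0, 16, 128, 130))  # Gen4 x16: ~252.1 Gb/s (~31.5 GB/s)
print(pcie_raw_gbps(8.0, 16, 128, 130))   # Gen3 x16: ~126.0 Gb/s
```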
LED Interface
There are two I/O LEDs, LED0 and LED1, per port to indicate speed and link status. LED0 is a bicolor
(yellow and green) LED, and LED1 is a single-color (green) LED.
Link Indications
LED and State: Description

1Hz blinking Yellow: Beacon command for locating the adapter card.

4Hz blinking Yellow: Indicates an error with the link. The error can be one of the following:
• I2C - I2C access to the networking ports fails. Blinks until the error is fixed.
• Over-current - Over-current condition of the networking ports. Blinks until the error is fixed.

LED0 (Link Speed):
• A constant Green indicates a link with the maximum networking speed.
• A constant Yellow indicates a link with less than the maximum networking speed.
• If LED0 is off, the link has not been established.

LED1 (Activity):
• A blinking Green indicates a valid link with data transfer.
• If LED1 is off, there is no activity.
FRU EEPROM
FRU EEPROM allows the baseboard to identify different types of Mezzanine cards. FRU EEPROM is
accessible through SMCLK and SMDATA. The FRU EEPROM address is defined according to SLOT_ID0 and
SLOT_ID1, and its capacity is 4Kb.
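The per-slot addressing can be sketched as follows. The 0x50 base address and the bit positions of SLOT_ID0/SLOT_ID1 are assumptions for illustration only; consult the OCP 3.0 specification for the actual encoding used by your baseboard:

```python
def fru_eeprom_addr(slot_id1: int, slot_id0: int, base: int = 0x50) -> int:
    """Hypothetical 7-bit SMBus address: base OR'ed with the two slot-ID strap bits."""
    return base | (slot_id1 << 1) | slot_id0

# Slot straps SLOT_ID1=0, SLOT_ID0=1 would select address 0x51 under this assumed scheme
print(hex(fru_eeprom_addr(0, 1)))  # 0x51
```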
Heat Sink Interface
A heatsink is attached to the ConnectX-6 IC in order to dissipate the heat from the ConnectX-6 IC. It is
attached using four spring-loaded push pins that insert into four mounting holes.
The ConnectX-6 IC has a thermal shutdown safety mechanism that automatically shuts down the
ConnectX-6 card in case of a high-temperature event, improper thermal coupling, or heatsink removal.
SMBus Interface
ConnectX-6 technology maintains support for manageability through a BMC. The ConnectX-6 OCP 3.0
adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols, like any
standard Mellanox OCP 3.0 adapter. For configuring the adapter for the specific manageability solution
in use by the server, please contact Mellanox Support.
Voltage Regulators
The voltage regulator power is derived from the OCP 3.0 edge connector 12V and 3.3V supply pins.
These voltage supply pins feed onboard regulators that provide the necessary power to the various
components on the card.
CPLD Interface
The adapter card incorporates a CPLD device that controls the networking port LEDs and the scan
chain. It draws its power supply from 3.3V_EDGE.
Hardware Installation
Installation and initialization of ConnectX-6 adapter cards for OCP Spec 3.0 require attention to the
mechanical attributes, power specification, and precautions for electronic equipment.
Safety Warnings
Safety warnings are provided here in the English language. For safety warnings in other
languages, refer to the Adapter Installation Safety Instructions document available on
mellanox.com.
Installation Procedure Overview
The installation procedure of ConnectX-6 adapter cards for OCP Spec 3.0 involves the following steps:
Step 1: Check the system's hardware and software requirements. Refer to System Requirements.
Step 2: Pay attention to the airflow consideration within the host system. Refer to Airflow Requirements.
Step 3: Follow the safety precautions. Refer to Safety Precautions.
Step 4: Follow the pre-installation checklist. Refer to Pre-Installation Checklist.
Step 5: (Optional) Replace the assembled OCP 3.0 bracket with the desired form factor bracket. Refer to OCP 3.0 Bracket Replacement Instructions.
Step 6: Install the ConnectX-6 adapter card for OCP Spec 3.0 in the system. Refer to Installation Instructions.
Step 7: Connect cables or modules to the card. Refer to Cables and Modules.
Step 8: Identify the ConnectX-6 adapter card in the system. Refer to Identify the Card in Your System.
System Requirements
Unless otherwise specified, Mellanox products are designed to work in an environmentally
controlled data center with low levels of gaseous and dust (particulate) contamination.
The operating environment should meet severity level G1 as per ISA 71.04 for gaseous
contamination and ISO 14644-1 class 8 for cleanliness level.
Hardware Requirements
For proper operation and performance, please make sure to use a PCIe slot with a
corresponding bus width that can supply sufficient power to your card. Refer to the
Specifications section of the manual for detailed power requirements.
A system with a PCI Express x16 slot for OCP spec 3.0 is required for installing the card.
Airflow Requirements
ConnectX-6 adapter cards are offered with two airflow patterns: from the heatsink to the network
ports, and vice versa, as shown below.
Please refer to the "Specifications" chapter for airflow numbers for each specific card model.
All cards in the system should be planned with the same airflow direction.
Hot Aisle Cooling: Heatsink-to-Port Airflow Direction
Software Requirements
• See the Operating Systems/Distributions section under the Introduction section.
• Software Stacks - Mellanox OpenFabric software package MLNX_OFED for Linux, WinOF-2 for Windows, and VMware. See the Driver Installation section.
Cold Aisle Cooling: Port-to-Heatsink Airflow Direction
Safety Precautions
The adapter is being installed in a system that operates with voltages that can be lethal.
Before opening the case of the system, observe the following precautions to avoid injury and
prevent damage to system components.
• Remove any metallic objects from your hands and wrists.
• Make sure to use only insulated tools.
• Verify that the system is powered off and is unplugged.
• It is strongly recommended to use an ESD strap or other antistatic devices.
Pre-Installation Checklist
1. Unpack the ConnectX-6 adapter card.
Unpack and remove the ConnectX-6 card. Check the parts for visible damage that may have
occurred during shipping. Please note that the card must be placed on an antistatic surface.
Please note that if the card is removed hastily from the antistatic bag, the plastic
ziplock may harm the EMI fingers on the networking connector. Carefully remove the
card from the antistatic bag to avoid damaging the EMI fingers.
2. Shut down your system if active.
Turn off the power to the system, and disconnect the power cord. Refer to the system
documentation for instructions. Before you install the ConnectX-6 card, make sure that the
system is disconnected from power.
OCP 3.0 Bracket Replacement Instructions
OCP 3.0 Adapter Card Installation Instructions
This section provides detailed instructions on how to install your adapter card in a system. The below
table lists the different ConnectX-6 OCP 3.0 retention mechanisms and provides direct links to
installation instructions per bracket type.
Retention Mechanism: Installation Instructions
• Thumbscrew (Pull-tab) Bracket: Installation Instructions for Cards with Thumbscrew (Pull-tab) Bracket
• Internal-Lock Bracket: Installation Instructions for Cards with Internal Lock
• Ejector-Latch Bracket: Installation Instructions for Cards with Ejector Latch
Cards with Thumbscrew (Pull-tab) Brackets
Please note that the following figures are for illustration purposes only.
1. Before installing the card, make sure that the system is off and the power cord is not connected
to the server. Please follow proper electrical grounding procedures.
2. Open the system case.
3. Align the card with the system rails.
4. Push the card until the connectors are fully mated.
5. Turn the captive screw clockwise until firmly locked.
Cards with Ejector Latch
This section applies to MCX653435A-HDAE.
1. Before installing the card, make sure that the system is off and the power cord is not connected
to the server. Please follow proper electrical grounding procedures.
2. Open the system case.
3. Align the card with the system rails while making sure the ejector latch is open.
4. Push the card until the connectors are fully mated.
Make sure the ejector latch is open before inserting the card.
5. To secure the card, close the ejector latch.
Cards with Internal Lock
This section applies to MCX653435A-HDAI, MCX653436A-HDAI, and MCX653435A-EDAI.
1. Before installing the card, make sure that the system is off and the power cord is not connected
to the server. Please follow proper electrical grounding procedures.
2. Open the system case.
3. Align the card with the system rails.
4. Push the card until the connectors are fully mated and a clicking sound is heard.
To uninstall the adapter card, see Uninstalling the Card.
Cables and Modules
To obtain the list of supported Mellanox cables for your adapter, please refer to the Cables Reference
Table at http://www.mellanox.com/products/interconnect/cables-configurator.php.
Cable Installation
1. All cables can be inserted or removed with the unit powered on.
2. To insert a cable, press the connector into the port receptacle until the connector is firmly
seated.
a. Support the weight of the cable before connecting the cable to the adapter card. Do this
by using a cable holder or tying the cable to the rack.
b. Determine the correct orientation of the connector to the card before inserting the
connector. Do not try to insert the connector upside down; this may damage the
adapter card.
c. Insert the connector into the adapter card. Be careful to insert the connector straight into
the cage. Do not apply any torque, up or down, to the connector cage in the adapter card.
d. Make sure that the connector locks in place.