Short Bracket ...................................................................................................................84
PCIe Express Pinouts Description for Single-Slot Socket Direct Card....................85
Finding the GUID/MAC on the Adapter Card ............................................................86
Document Revision History ......................................................................................88
About This Manual
This User Manual describes NVIDIA® Mellanox® ConnectX®-6 InfiniBand/VPI adapter cards. It provides details on the interfaces of the board, its specifications, the software and firmware required for operating the board, and relevant documentation.
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-6 InfiniBand/VPI
adapter cards.
MCX653105A-HDAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, single-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
MCX653106A-HDAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
MCX654105A-HCAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, single-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
MCX654106A-HCAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
Intended Audience
This manual is intended for the installer and user of these cards.
The manual assumes basic familiarity with InfiniBand and Ethernet networks and architecture specifications.
Technical Support
Customers who purchased Mellanox products directly from Mellanox are invited to contact us through the following methods:
• URL: http://www.mellanox.com > Support
• E-mail: support@mellanox.com
• Tel: +1.408.916.0055
Customers who purchased Mellanox M-1 Global Support Services, please see your contract for details regarding Technical Support.
Customers who purchased Mellanox products through a Mellanox-approved reseller should first seek assistance through their reseller.
Related Documentation
Mellanox OFED for Linux User Manual and Release Notes: User Manual describing OFED features, performance, InfiniBand diagnostics, tools content and configuration. See Mellanox OFED for Linux Documentation.

WinOF-2 for Windows User Manual and Release Notes: User Manual describing WinOF-2 features, performance, Ethernet diagnostics, tools content and configuration. See WinOF-2 for Windows Documentation.

Mellanox VMware for Ethernet User Manual: User Manual describing the various components of the Mellanox ConnectX® NATIVE ESXi stack. See http://www.mellanox.com > Products > Software > Ethernet Drivers > VMware Driver > User Manual.

Mellanox VMware for Ethernet Release Notes: Release notes for the Mellanox ConnectX® NATIVE ESXi stack. See http://

Mellanox Firmware Utility (mlxup) User Manual and Release Notes: User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual.

IEEE Ethernet specification at http://standards.ieee.org

Industry Standard PCI Express Base and Card Electromechanical Specifications at https://pcisig.com/specifications

Mellanox LinkX Interconnect Solutions: Mellanox LinkX InfiniBand cables and transceivers are designed to maximize the performance of High-Performance Computing networks, requiring high-bandwidth, low-latency connections between compute nodes and switch nodes. Mellanox offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s) and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs) and transceivers, in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, Mellanox tests every product in an end-to-end environment, ensuring a Bit Error Rate of less than 1E-15. Read more at https://www.mellanox.com/products/interconnect/infiniband-overview.php
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes. The use of Mb or Mbits (small b) indicates size in megabits. IB is used in this document to mean InfiniBand. In this document, PCIe is used to mean PCI Express.
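These conventions can be sanity-checked with a one-line conversion; the sketch below is illustrative only, and the 16-MByte figure is an arbitrary sample value:

```shell
# 1 Byte = 8 bits, so a size in MBytes is 8x the same number in Mbits.
size_mbytes=16                    # sample size in MBytes (capital B)
size_mbits=$((size_mbytes * 8))   # same size in Mbits (small b)
echo "${size_mbytes} MBytes = ${size_mbits} Mbits"
```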
Revision History
A list of the changes made to this document is provided in Document Revision History.
Introduction
Product Overview
This is the user guide for Mellanox Technologies VPI adapter cards based on the ConnectX®-6 integrated circuit device. ConnectX-6 connectivity provides the highest-performing, lowest-latency, and most flexible interconnect solution for PCI Express Gen 3.0/4.0 servers used in enterprise datacenters and high-performance computing environments.
ConnectX-6 Virtual Protocol Interconnect® adapter cards provide up to two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 200 million messages per second, enabling the highest performance and most flexible solution for the most demanding High-Performance Computing (HPC), storage, and datacenter applications.
ConnectX-6 is a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter
cards. In addition to all the existing innovative features of past ConnectX versions, ConnectX-6 offers a
number of enhancements that further improve the performance and scalability of datacenter
applications.
ConnectX-6 adapter cards are offered in a variety of PCIe configurations, as described in the below
table.
Make sure to use a PCIe slot that is capable of supplying the required power and airflow to the card.
The Socket Direct technology offers improved performance to dual-socket servers by enabling
direct access from each CPU in a dual-socket server to the network through its dedicated PCIe
interface.
Please note that ConnectX-6 Socket Direct cards do not support Multi-Host functionality (i.e. connectivity to two independent CPUs). For a ConnectX-6 Socket Direct card with Multi-Host functionality, please contact Mellanox.
ConnectX-6 Socket Direct cards are available in two configurations: Dual-slot Configuration (2x PCIe
x16) and Single-slot Configuration (2x PCIe x8).
ConnectX-6 Dual-slot Socket Direct Cards (2x PCIe x16)
In order to obtain 200Gb/s speed, Mellanox offers ConnectX-6 Socket Direct cards that enable 200Gb/s connectivity also for servers with PCIe Gen 3.0 capability. The adapter's 32-lane PCIe bus is split into two 16-lane buses, with one bus accessible through a PCIe x16 edge connector and the other bus through an x16 Auxiliary PCIe Connection card. The two cards should be installed into two PCIe x16 slots and connected using two Cabline CA-II Plus harnesses, as shown in the below figure.
Part Number: MCX654105A-HCAT
Form Factor/Dimensions: Adapter Card: PCIe Half Height, Half Length / 167.65mm x 68.90mm; Auxiliary PCIe Connection Card: 5.09 in. x 2.32 in. (129.30mm x 59.00mm); Two 35cm Cabline CA-II Plus harnesses
Data Transmission Rate: Ethernet: 10/25/40/50/100/200 Gb/s
ConnectX-6 Single-slot Socket Direct Cards (2x PCIe x8 in a row)
The PCIe x16 interface comprises two PCIe x8 in a row, such that each of the PCIe x8 lanes can be
connected to a dedicated CPU in a dual-socket server. In such a configuration, Socket Direct brings
lower latency and lower CPU utilization as the direct connection from each CPU to the network means
the interconnect can bypass a QPI (UPI) and the other CPU, optimizing performance and improving
latency. CPU utilization is improved as each CPU handles only its own traffic and not traffic from the
other CPU.
A system with a custom PCI Express x16 slot that includes special signals is required for installing the card. Please refer to PCIe Express Pinouts Description for Single-Slot Socket Direct Card for pinout definitions.
Part Number: MCX653105A-EFAT / MCX653106A-EFAT
Form Factor/Dimensions: PCIe Half Height, Half Length / 167.65mm x 68.90mm
Data Transmission Rate: Ethernet: 10/25/40/50/100 Gb/s
InfiniBand Architecture Specification v1.3 compliant: ConnectX-6 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. ConnectX-6 is InfiniBand Architecture Specification v1.3 compliant.

Up to 200 Gigabit Ethernet: ConnectX-6 offers the highest throughput VPI adapter, supporting HDR 200Gb/s InfiniBand and 200Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.

Uses the following PCIe interfaces:
• PCIe x8/x16 configurations: PCIe Gen 3.0 (8GT/s) and Gen 4.0 (16GT/s) through an x8/x16 edge connector.
• 2x PCIe x16 configurations: PCIe Gen 3.0/4.0 SERDES @ 8.0/16.0 GT/s through the edge connector, and PCIe Gen 3.0 SERDES @ 8.0GT/s through the PCIe Auxiliary Connection Card.
Mellanox adapters comply with the following IEEE 802.3 standards:
200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
- IEEE 802.3ap based auto-negotiation and KR startup
- IEEE 802.3ad, 802.1AX Link Aggregation
- IEEE 802.1Q, 802.1P VLAN tags and priority
- IEEE 802.1Qau (QCN) - Congestion Notification
- IEEE 802.1Qaz (ETS)
- IEEE 802.1Qbb (PFC)
- IEEE 802.1Qbg
- IEEE 1588v2
- Jumbo frame support (9.6KB)
InfiniBand HDR100: A standard InfiniBand data rate, where each lane of a 2X port runs a bit rate of 53.125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.

InfiniBand HDR: A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 53.125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 200Gb/s.

Memory Components:
• SPI Quad - includes a 256Mbit SPI Quad Flash device (MX25L25645GXDI-08G device by Macronix)
• FRU EEPROM - Stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50 and is accessible through the PCIe SMBus.
Overlay Networks: In order to better scale their networks, datacenter operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
RDMA and RDMA over Converged Ethernet (RoCE): ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over InfiniBand and Ethernet networks. Leveraging datacenter bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

Mellanox PeerDirect™: PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

CPU Offload: Adapter functionality enabling reduced CPU overhead, leaving more CPU available for computation tasks.
• Flexible match-action flow tables
• Open VSwitch (OVS) offload using ASAP2™
• Tunneling encapsulation/decapsulation

Quality of Service (QoS): Support for port-based Quality of Service enabling various application requirements for latency and SLA.

Hardware-based I/O Virtualization: ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration: A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
• RDMA for high-performance storage access
• NVMe over Fabric offloads for the target machine
• Erasure Coding
• T10-DIF Signature Handover

SR-IOV: ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.

High-Performance Accelerations:
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
Operating Systems/Distributions
ConnectX-6 Socket Direct cards 2x PCIe x16 (OPNs: MCX654105A-HCAT, MCX654106A-HCAT
and MCX654106A-ECAT) are not supported in Windows and WinOF-2.
• OpenFabrics Enterprise Distribution (OFED)
• RHEL/CentOS
• Windows
• FreeBSD
• VMware
• OpenFabrics Windows Distribution (WinOF-2)
Connectivity
• Interoperable with 1/10/25/40/50/100/200 Gb/s InfiniBand/VPI and Ethernet switches
• Passive copper cable with ESD protection
• Powered connectors for optical and active cable support
Manageability
ConnectX-6 technology maintains support for manageability through a BMC. ConnectX-6 PCIe stand-up
adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols as if it is a
standard Mellanox PCIe stand-up adapter. For configuring the adapter for the specific manageability
solution in use by the server, please contact Mellanox Support.
Interfaces
InfiniBand Interface
The network ports of the ConnectX®-6 adapter cards are compliant with the InfiniBand Architecture Specification, Release 1.3.
InfiniBand traffic is transmitted through the cards' QSFP56 connectors.
Ethernet QSFP56 Interfaces
The adapter card includes special circuits to protect from ESD shocks to the card/server when
plugging copper cables.
The network ports of the ConnectX-6 adapter card are compliant with the IEEE 802.3 Ethernet standards listed in Features and Benefits. Ethernet traffic is transmitted through the QSFP56 connectors on the adapter card.
PCI Express Interface
ConnectX®-6 adapter cards support PCI Express Gen 3.0/4.0 (1.1 and 2.0 compatible) through x8/x16
edge connectors. The device can be either a master initiating the PCI Express bus operations, or a
slave responding to PCI bus operations.
The following lists PCIe interface features:
• PCIe Gen 3.0 and 4.0 compliant, 2.0 and 1.1 compatible
• 2.5, 5.0, 8.0, or 16.0 GT/s link rate x16/x32
• Auto-negotiates to x32, x16, x8, x4, x2, or x1
• Support for MSI/MSI-X mechanisms
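On a Linux host, the rate and width actually negotiated from the options above can be confirmed from the LnkSta line printed by `sudo lspci -s <bus:dev.fn> -vv`. The sketch below parses a captured sample line rather than querying hardware; the sample values (Gen 4.0 x16) are assumptions for illustration:

```shell
# Parse the LnkSta line from `lspci -vv` output to confirm the trained
# PCIe link rate and width. The sample line below is illustrative.
lnksta='LnkSta: Speed 16GT/s, Width x16, TrErr- Train- SlotClk+'
echo "$lnksta" | grep -o 'Speed [^,]*'    # negotiated link rate
echo "$lnksta" | grep -o 'Width x[0-9]*'  # negotiated link width
```

A Gen 4.0-capable card installed in a Gen 3.0 slot would report Speed 8GT/s on this line.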
LED Interface
There are two I/O LEDs per port:
• LED 1 and 2: Bi-color I/O LED which indicates link status. LED behavior is described below for Ethernet and InfiniBand port configurations.
• LED 3 and 4: Reserved for future use.
LED1 and LED2 Link Status Indications (Physical and Logical) - Ethernet Protocol:

Off: A link has not been established.
Blinking amber: 1 Hz blinking amber occurs due to running a beacon command for locating the adapter card. 4 Hz blinking amber indicates a problem with the physical link.
Solid green: Indicates a valid link with no active traffic.
Blinking green: Indicates a valid logical link with active traffic.

LED1 and LED2 Link Status Indications (Physical and Logical) - InfiniBand Protocol:

Off: A physical link has not been established.
Solid amber: Indicates an active physical link.
Blinking amber: 1 Hz blinking amber occurs due to running a beacon command for locating the adapter card. 4 Hz blinking amber indicates a problem with the physical link.
Solid green: Indicates a valid logical (data activity) link with no active traffic.
Blinking green: Indicates a valid logical link with active traffic.
Heat Sink Interface
The heatsink is attached to the ConnectX-6 IC in order to dissipate its heat. It is attached either by using four spring-loaded push pins that insert into four mounting holes, or by screws.
The ConnectX-6 IC has a thermal shutdown safety mechanism which automatically shuts down the ConnectX-6 card in case of a high-temperature event, improper thermal coupling, or heatsink removal.
For the required airflow (LFM) per OPN, please refer to Specifications.
SMBus Interface
ConnectX-6 technology maintains support for manageability through a BMC. ConnectX-6 PCIe stand-up
adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols as if it is a
standard Mellanox PCIe stand-up adapter. For configuring the adapter for the specific manageability
solution in use by the server, please contact Mellanox Support.
Voltage Regulators
The voltage regulator power is derived from the PCI Express edge connector 12V supply pins. These
voltage supply pins feed on-board regulators that provide the necessary power to the various
components on the card.
Hardware Installation
Installation and initialization of ConnectX-6 adapter cards require attention to the mechanical
attributes, power specification, and precautions for electronic equipment.
Safety Warnings
Safety warnings are provided here in the English language. For safety warnings in other
languages, refer to the Adapter Installation Safety Instructions document available on
mellanox.com.
Please observe all safety warnings to avoid injury and prevent damage to system components. Note that not all warnings are relevant to all models.
Installation Procedure Overview
The installation procedure of ConnectX-6 adapter cards involves the following steps:
1. Check the system's hardware and software requirements.
2. Pay attention to the airflow consideration within the host system.
3. Follow the safety precautions. See Safety Precautions.
4. Unpack the package. See Unpack the Package.
5. Follow the pre-installation checklist. See Pre-Installation Checklist.
6. (Optional) Replace the full-height mounting bracket with the supplied short bracket.
7. Install the ConnectX-6 PCIe x8/x16 adapter card in the system, or install the ConnectX-6 2x PCIe x16 Socket Direct adapter card in the system. See ConnectX-6 Socket Direct (2x PCIe x16) Installation Instructions.
8. Connect cables or modules to the card. See Cables and Modules.
9. Identify ConnectX-6 in the system. See Identifying Your Card.
System Requirements
Hardware Requirements
Unless otherwise specified, Mellanox products are designed to work in an environmentally controlled data center with low levels of gaseous and dust (particulate) contamination.
The operating environment should meet severity level G1 as per ISA 71.04 for gaseous contamination and ISO 14644-1 class 8 for cleanliness level.
For proper operation and performance, please make sure to use a PCIe slot with a corresponding bus width that can supply sufficient power to your card. Refer to the Specifications section of the manual for power requirements.
Please make sure to install the ConnectX-6 cards in a PCIe slot that is capable of supplying the required power as stated in Specifications.
PCIe x8/x16: A system with a PCI Express x8/x16 slot is required for installing the card.
Socket Direct 2x PCIe x8 in a row (single slot): A system with a custom PCI Express x16 slot (four special pins) is required for installing the card. Please refer to PCIe Express Pinouts Description for Single-Slot Socket Direct Card for pinout definitions.
Socket Direct 2x PCIe x16 (dual slots): A system with two PCIe x16 slots is required for installing the cards.
Airflow Requirements
ConnectX-6 adapter cards are offered with two airflow patterns: from the heatsink to the network
ports, and vice versa, as shown below.
Please refer to the Specifications section for airflow numbers for each specific card model.
• Airflow from the heatsink to the network ports
• Airflow from the network ports to the heatsink
All cards in the system should be planned with the same airflow direction.
Software Requirements
• See the Operating Systems/Distributions section under the Introduction section.
• Software Stacks - Mellanox OpenFabric software package MLNX_OFED for Linux, WinOF-2 for Windows, and VMware. See the Driver Installation section.
Safety Precautions
The adapter is being installed in a system that operates with voltages that can be lethal. Before
opening the case of the system, observe the following precautions to avoid injury and prevent damage
to system components.
• Remove any metallic objects from your hands and wrists.
• Make sure to use only insulated tools.
• Verify that the system is powered off and is unplugged.
• It is strongly recommended to use an ESD strap or other antistatic devices.
Pre-Installation Checklist
• Unpack the ConnectX-6 Card: Unpack and remove the ConnectX-6 card. Check against the package contents list that all the parts have been sent. Check the parts for visible damage that may have occurred during shipping. Please note that the cards must be placed on an antistatic surface. For package contents, please refer to Package Contents.
  Please note that if the card is removed hastily from the antistatic bag, the plastic ziplock may harm the EMI fingers on the networking connector. Carefully remove the card from the antistatic bag to avoid damaging the EMI fingers.
• Shut down your system if active: Turn off the power to the system, and disconnect the power cord. Refer to the system documentation for instructions. Before you install the ConnectX-6 card, make sure that the system is disconnected from power.
• (Optional) Check the mounting bracket on the ConnectX-6 or PCIe Auxiliary Connection Card: If required for your system, replace the full-height mounting bracket that is shipped mounted on the card with the supplied low-profile bracket. Refer to Bracket Replacement Instructions.
Bracket Replacement Instructions
The ConnectX-6 card and PCIe Auxiliary Connection card are usually shipped with an assembled high-profile bracket. If this form factor is suitable for your requirements, you can skip the remainder of this section and move to Installation Instructions. If you need to replace the high-profile bracket with the short bracket that is included in the shipping box, please follow the instructions in this section.
Due to the risk of damaging the EMI gasket, it is not recommended to replace the bracket more than three times.
To replace the bracket you will need the following parts:
• The new bracket of the proper height
• The 2 screws saved from the removal of the bracket
Removing the Existing Bracket
1. Using a torque driver, remove the two screws holding the bracket in place.
2. Separate the bracket from the ConnectX-6 card. Be careful not to put stress on the LEDs on the adapter card.
3. Save the two screws.

Installing the New Bracket
1. Place the bracket onto the card until the screw holes line up. Do not force the bracket onto the adapter card.
2. Screw on the bracket using the screws saved from the bracket removal procedure above. Use a torque driver to apply up to 2 lbs-in torque on the screws.
Installation Instructions
This section provides detailed instructions on how to install your adapter card in a system.
Choose the installation instructions according to the ConnectX-6 configuration you have purchased.
ConnectX-6 Socket Direct (2x PCIe x16) Adapter Card
Cables and Modules
To obtain the list of supported Mellanox cables for your adapter, please refer to the Cables Reference Table at http://www.mellanox.com/products/interconnect/cables-configurator.php.
Cable Installation
1. All cables can be inserted or removed with the unit powered on.
2. To insert a cable, press the connector into the port receptacle until the connector is firmly seated.
   a. Support the weight of the cable before connecting the cable to the adapter card. Do this by using a cable holder or tying the cable to the rack.
   b. Determine the correct orientation of the connector to the card before inserting the connector. Do not try and insert the connector upside down. This may damage the adapter card.
   c. Insert the connector into the adapter card. Be careful to insert the connector straight into the cage. Do not apply any torque, up or down, to the connector cage in the adapter card.
   d. Make sure that the connector locks in place.
   When installing cables make sure that the latches engage.
   Always install and remove cables by pushing or pulling the cable and connector in a straight line with the card.
3. After inserting a cable into a port, the Green LED indicator will light when the physical connection is established (that is, when the unit is powered on and a cable is plugged into the port with the other end of the connector plugged into a functioning port). See LED Interface under the Interfaces section.
4. After plugging in a cable, lock the connector using the latching mechanism particular to the cable vendor. When data is being transferred the Green LED will blink. See LED Interface under the Interfaces section.
5. Care should be taken so as not to impede the air exhaust flow through the ventilation holes. Use cable lengths which allow for routing horizontally around to the side of the chassis before bending upward or downward in the rack.
6. To remove a cable, disengage the locks and slowly pull the connector away from the port receptacle. The LED indicator will turn off when the cable is unseated.
Identifying the Card in Your System
On Linux
Get the device location on the PCI bus by running lspci and locating lines with the string "Mellanox Technologies":
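The lspci check can be sketched as follows; the PCI address and device string in the sample line are hypothetical placeholders, not values taken from this manual:

```shell
# On a live system, run:  lspci | grep "Mellanox Technologies"
# Here a hypothetical sample output line stands in for real hardware.
sample='af:00.0 Infiniband controller: Mellanox Technologies MT28908 Family [ConnectX-6]'
echo "$sample" | grep "Mellanox Technologies"
```

The leading field (af:00.0 in the sample) is the device location on the PCI bus.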
On Windows
1. Open Device Manager on the server. Click Start => Run, and then enter devmgmt.msc.
2. Expand System Devices and locate your Mellanox ConnectX-6 adapter card.
3. Right-click your adapter's row and select Properties to display the adapter card properties window.
4. Click the Details tab and select Hardware Ids (Windows 2012/R2/2016) from the Property pull-down menu.
PCI Device (Example)
5. In the Value display box, check the fields VEN and DEV (fields are separated by '&'). In the display example above, notice the sub-string "PCI\VEN_15B3&DEV_1003": VEN is equal to 0x15B3 – this is the Vendor ID of Mellanox Technologies; and DEV is equal to 1018 (for ConnectX-6) – this is a valid Mellanox Technologies PCI Device ID.
If the PCI device does not have a Mellanox adapter ID, return to Step 2 to check
another device.
The list of Mellanox Technologies PCI Device IDs can be found in the PCI ID
repository at http://pci-ids.ucw.cz/read/PC/15b3.
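As an illustrative aside, the VEN and DEV fields can also be split out of a Hardware Ids string with plain shell parameter expansion; the sample string below, including its DEV value, is hypothetical:

```shell
# Extract the vendor (VEN) and device (DEV) fields from a Windows-style
# hardware ID string. The sample value is hypothetical; read the real
# string from the Value box in Device Manager.
hwid='PCI\VEN_15B3&DEV_101B&SUBSYS_002315B3'
ven=${hwid#*VEN_}; ven=${ven%%&*}   # vendor ID field
dev=${hwid#*DEV_}; dev=${dev%%&*}   # device ID field
echo "VEN=$ven DEV=$dev"
```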
ConnectX-6 PCIe x8/x16 Installation Instructions
Installing the Card
Applies to MCX651105A-EDAT, MCX654105A-HCAT, MCX654106A-HCAT and MCX654106A-
ECAT.