All other trademarks are property of their respective owners.
For the most updated list of Mellanox trademarks, visit http://www.mellanox.com/page/trademarks
NOTE:
THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED
DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY
KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT
USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST
ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY
QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES
CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE
HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR
ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE
PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
Table 1 - Revision History

October 2018, Rev 1.5:
• Added MCX545B-ECAN support across document
• Updated MCX545M-ECAN Specification Table on page 62

April 2018, Rev 1.4:
• Updated MCX545A-ECAN/MCX545B-ECAN Specification Table on page 61

July 2017, Rev 1.3:
• Updated Product Overview on page 11
• Updated Features and Benefits on page 13
• Added FRU EEPROM on page 18
• Updated MCX545A-ECAN Specifications on page 69
• Added Airflow Specifications on page 70

June 2017, Rev 1.2:
• Removed MCX545M-ECAN support across document
• Updated MCX545A-ECAN Specifications on page 69

May 2017, Rev 1.1:
• Updated “OCP Spec 2.0 Type Stacking Heights” on page 12

March 2017, Rev 1.0:
• First release
About This Manual
This User Manual describes Mellanox Technologies ConnectX®-5 Single QSFP28 port PCI Express x16 adapter cards for Open Compute Project, Spec 2.0. It provides details as to the interfaces of the board, specifications, required software and firmware for operating the board, and relevant documentation.
Intended Audience
This manual is intended for the installer and user of these cards.
The manual assumes basic familiarity with InfiniBand and Ethernet network architectures and specifications.
Related Documentation
Table 2 - Documents List

Mellanox Firmware Tools (MFT) User Manual (Document no. 2204UG)
  User Manual describing the set of MFT firmware management tools for a single node.
  See http://www.mellanox.com => Products => Software => Firmware Tools

Mellanox Firmware Utility (mlxup) User Manual and Release Notes
  Mellanox firmware update and query utility used to update the firmware.
  See http://www.mellanox.com => Products => Software => Firmware Tools => mlxup Firmware Utility

Mellanox OFED for Linux User Manual (Document no. 2877)
  User Manual describing OFED features, performance, InfiniBand diagnostic, tools content and configuration.
  See http://www.mellanox.com => Products => Software => InfiniBand/VPI Drivers => Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED)

Mellanox OFED for Linux Release Notes (Document no. 2877)
  Release Notes for the Mellanox OFED for Linux driver kit for Mellanox adapter cards.
  See http://www.mellanox.com => Products => Software => InfiniBand/VPI Drivers => Linux SW/Drivers => Release Notes

WinOF-2 for Windows User Manual (Document no. MLX-15-3280)
  User Manual describing WinOF-2 features, performance, Ethernet diagnostic, tools content and configuration.
  See http://www.mellanox.com => Products => Software => Windows SW/Drivers

Mellanox OFED for Windows Driver Release Notes
  Release notes for Mellanox Technologies' WinOF-2 driver kit for Mellanox adapter cards.
  See http://www.mellanox.com => Products => Software => Ethernet Drivers => Mellanox OFED for Windows => WinOF-2 Release Notes

OCP Spec 2.0
  Open Compute Project 2.0 Specification
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in MegaBytes. The use of Mb or Mbits (small b) indicates size in Megabits. IB is used in this document to mean InfiniBand. In this document PCIe is used to mean PCI Express.
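As a quick illustration of the Mb/MB convention, the minimal Python sketch below converts the 128 Mb SPI Flash size quoted later in this manual (see the Features table) into MBytes; the constant is taken from that table.

```python
# Minimal sketch of the Mb (Megabits) vs. MB (MegaBytes) convention used in
# this manual: the 128 Mb SPI Flash device from the Features table is 16 MB.

SPI_FLASH_MBITS = 128                    # size in Mb (Megabits)
spi_flash_mbytes = SPI_FLASH_MBITS / 8   # 8 bits per byte

print(f"{SPI_FLASH_MBITS} Mb = {spi_flash_mbytes:.0f} MB")  # 128 Mb = 16 MB
```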
Technical Support
Customers who purchased Mellanox products directly from Mellanox are invited to contact us
through the following methods.
• URL: http://www.mellanox.com => Support
• E-mail: support@mellanox.com
• Tel: +1.408.916.0055
Customers who purchased Mellanox M-1 Global Support Services should refer to their contract for details regarding Technical Support.
Customers who purchased Mellanox products through a Mellanox approved reseller should first seek
assistance through their reseller.
Firmware Updates
The Mellanox support downloader contains software, firmware and knowledge database information for Mellanox products. Access the database from the Mellanox Support web page:
http://www.mellanox.com => Support
Or use the following link to go directly to the Mellanox Support Download Assistant page:
http://www.mellanox.com/supportdownloader/
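As a hedged sketch (not part of the original manual), the Python snippet below shows one way to drive the mlxup utility mentioned above from a script. It assumes mlxup has been downloaded from the Mellanox Support page, is on the PATH, and runs with sufficient privileges.

```python
# Hedged sketch: invoke the mlxup firmware update/query utility from Python.
# Assumes mlxup is installed on PATH and the script has sufficient privileges.
import subprocess

def query_firmware() -> str:
    """Return mlxup's report of detected adapters and firmware versions."""
    result = subprocess.run(
        ["mlxup", "--query"],        # query only; performs no update
        capture_output=True,
        text=True,
        check=True,                  # raise CalledProcessError on failure
    )
    return result.stdout

if __name__ == "__main__":
    print(query_firmware())
```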
1 Introduction
This is the User Guide for Mellanox Technologies VPI adapter cards based on the ConnectX®-5 integrated circuit device for Open Compute Project. These adapters provide the highest-performing, lowest-latency and most flexible interconnect solution for PCI Express Gen 3.0 servers used in Enterprise Data Centers and High-Performance Computing environments.
This chapter covers the following topics:
• Section 1.1, “Product Overview”, on page 11
• Section 1.3, “Features and Benefits”, on page 13
• Section 1.5, “Operating Systems/Distributions”, on page 15
• Section 1.6, “Connectivity”, on page 16
1.1 Product Overview
The following section provides the ordering part number, port speed, number of ports, and PCI Express speed.

Table 3 - Single-Port VPI Adapter Card

Ordering Part Number (OPN): Single-host cards with host management:
• MCX545A-ECAN - OCP Spec 2.0 Type 2 (a)
• MCX545B-ECAN - OCP Spec 2.0 Type 1 (b)

Data Transmission Rate: InfiniBand: EDR 100Gb/s; Ethernet: up to 100Gb/s
Network Connector Types: Single QSFP28
PCI Express (PCIe) SerDes Speed: PCIe 3.0 x16 8GT/s (through two x8 B2B FCI connectors)
RoHS: RoHS Compliant
Adapter IC Part Number: MT27808A0-FCCF-EV
Device ID (decimal): 4119 for Physical Function (PF); 4120 for Virtual Function (VF)

a. See “OCP Spec 2.0 Type 2 Stacking Height - Single-port Card”
b. See “OCP Spec 2.0 Type 1 Stacking Height - Single-port Card”
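The Device IDs above can be checked against a running Linux host. The following hedged Python sketch (not from the manual) scans sysfs for functions whose PCI vendor ID is Mellanox's 0x15b3 and whose device ID matches 4119 (0x1017, PF) or 4120 (0x1018, VF); the paths are standard Linux sysfs and need no extra libraries.

```python
# Hedged sketch: find ConnectX-5 PCI functions on a Linux host using the
# Device IDs from Table 3 (4119 = 0x1017 PF, 4120 = 0x1018 VF).
# Assumes Mellanox's PCI vendor ID 0x15b3 and standard sysfs paths.
from pathlib import Path

MELLANOX_VENDOR = 0x15B3
CONNECTX5_IDS = {0x1017: "Physical Function (4119)",
                 0x1018: "Virtual Function (4120)"}

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = int((dev / "vendor").read_text(), 16)   # e.g. "0x15b3"
    device = int((dev / "device").read_text(), 16)   # e.g. "0x1017"
    if vendor == MELLANOX_VENDOR and device in CONNECTX5_IDS:
        print(f"{dev.name}: ConnectX-5 {CONNECTX5_IDS[device]}")
```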
1.2 OCP Spec 2.0 Type Stacking Heights

1.2.1 OCP Spec 2.0 Type 2 Stacking Height - Single-port Card
Applies to MCX545A-ECAN, MCX545M-ECAN.
The single-port adapter card follows OCP Spec 2.0 Type 2 with a 12mm stacking height. See Figure 1.

Figure 1: Type 2 Vertical Stack Front View - Single-port Card

1.2.2 OCP Spec 2.0 Type 1 Stacking Height - Single-port Card
Applies to MCX545B-ECAN only.
The single-port adapter card complies with OCP Spec 2.0 Type 1 with an 8mm stacking height. See Figure 2 for the single-port front view.

Figure 2: Type 1 Vertical Stack Front View - Single-port Card
1.3 Features and Benefits

Table 4 - Features (a)

100Gb/s Virtual Protocol Interconnect (VPI) Adapter
  ConnectX-5 offers the highest throughput VPI adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.

PCI Express (PCIe)
  Uses Gen 3.0 (8GT/s) through an x16 edge connector (two B2B FCI x8 connectors).

Up to 100 Gigabit Ethernet
  Interoperable with 1/10/25/40/50/100 Gb/s Ethernet switches (see Section 1.6, “Connectivity”).

InfiniBand EDR
  A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 25.78125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.

Memory
  PCI Express - stores and accesses InfiniBand and/or Ethernet fabric connection information and packet data.
  SPI - includes a 128Mb SPI Flash device (W25Q128FVSIG by Winbond).
  FRU EEPROM capacity is 2Kb.

Overlay Networks
  In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-5 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.

RDMA and RDMA over Converged Ethernet (RoCE)
  ConnectX-5, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-5 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
Table 4 - Features (continued)

Mellanox PeerDirect™
  PeerDirect™ communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-5 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

CPU Offload
  Adapter functionality enabling reduced CPU overhead allowing more available CPU for computation tasks.
  Open vSwitch (OVS) offload using ASAP²™:
  • Flexible match-action flow tables
  • Tunneling encapsulation / decapsulation

Quality of Service (QoS)
  Support for port-based Quality of Service enabling various application requirements for latency and SLA.

Hardware-based I/O Virtualization
  ConnectX-5 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration
  A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access.
  • NVMe over Fabric offloads for target machine
  • Erasure Coding
  • T10-DIF Signature Handover

SR-IOV
  ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server.

NC-SI
  The adapter supports a slave Network Controller Sideband Interface (NC-SI) that can be connected to a BMC, as well as MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface.

High-Performance Accelerations
  • Tag Matching and Rendezvous Offloads
  • Adaptive Routing on Reliable Transport
  • Burst Buffer Offloads for Background Checkpointing

Wake-on-LAN (WoL)
  Supported

Reset-on-LAN (RoL)
  Supported

a. This section describes hardware features and capabilities. Please refer to the driver release notes for feature availability. See “Related Documentation” on page 9.
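The InfiniBand EDR figure in Table 4 can be verified with a short calculation: four lanes at 25.78125 Gb/s each, scaled by the 64b/66b line-code efficiency, give exactly 100 Gb/s. A minimal Python check:

```python
# Worked check of the InfiniBand EDR entry in Table 4:
# 4 lanes x 25.78125 Gb/s x (64/66 encoding efficiency) = 100 Gb/s.

LANES = 4                   # lanes in a 4X port
LANE_RATE_GBPS = 25.78125   # raw bit rate per lane
ENCODING = 64 / 66          # 64b/66b line-code efficiency

effective_gbps = LANES * LANE_RATE_GBPS * ENCODING
print(f"Effective bandwidth: {effective_gbps:.1f} Gb/s")  # -> 100.0 Gb/s
```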
1.4 Multi-Host Technology
The ConnectX®-5 adapter card, specifically designed for supported servers (as described in Section 3.1), implements Multi-Host technology to deliver direct and independent PCIe connections to each of the four CPUs in the server.
The ConnectX-5 PCIe x16 interface is separated into four independent PCIe x4 interfaces. Each interface is connected to a separate host with no performance degradation.
Connecting server CPUs directly to the network delivers a performance gain, as each CPU can send and receive network traffic independently without the need to send network data to other CPUs over the QPI bus.
Figure 3: Multi-Host Technology
1.5 Operating Systems/Distributions
• RHEL/CentOS
• Windows
• FreeBSD
• VMware
• OpenFabrics Enterprise Distribution (OFED)
• OpenFabrics Windows Distribution (WinOF-2)
1.6 Connectivity
• Interoperable with 1/10/25/40/50/100 Gb/s Ethernet switches
• Passive copper cable with ESD protection
• Powered connectors for optical and active cable support
2 Interfaces
The adapter card includes special circuits to protect the card/server from ESD shocks when plugging in copper cables.
Each adapter card includes the following interfaces:
• “InfiniBand Interface”
• “Ethernet QSFP28 Interface”
• “PCI Express Interface”
• “LED Interface”
2.1 InfiniBand Interface
The network ports of the ConnectX®-5 adapter cards are compliant with the InfiniBand Architecture Specification, Release 1.3. InfiniBand traffic is transmitted through the cards' QSFP28 connectors.
2.2 Ethernet QSFP28 Interface
The network ports of the ConnectX®-5 adapter card are compliant with the IEEE 802.3 Ethernet standards listed in Table 4. Ethernet traffic is transmitted through the cards' QSFP28 connectors.
2.3 PCI Express Interface
The ConnectX®-5 adapter card supports PCI Express Gen 3.0 (1.1 and 2.0 compatible) through two x8 FCI B2B connectors: connector A and connector B. The device can be either a master initiating the PCI Express bus operations, or a slave responding to PCI bus operations.
The following lists the PCIe interface features:
• PCIe Gen 3.0 compliant, 2.0 and 1.1 compatible
• 2.5, 5.0, or 8.0GT/s link rate x16
• Auto-negotiates to x16, x8, x4, x2, or x1
• Support for MSI/MSI-X mechanisms
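After installation, the negotiated link can be confirmed from standard Linux sysfs attributes, as in the hedged sketch below. The PCI address 0000:03:00.0 is a placeholder; substitute the address reported by lspci for your adapter.

```python
# Hedged sketch: read the negotiated PCIe link speed and width of the
# adapter from standard Linux sysfs attributes. The device address below
# is a placeholder; use the address lspci reports for your card.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:03:00.0")   # placeholder BDF

speed = (dev / "current_link_speed").read_text().strip()  # e.g. "8.0 GT/s PCIe"
width = (dev / "current_link_width").read_text().strip()  # e.g. "16"

print(f"Negotiated link: {speed}, x{width}")  # expect 8GT/s x16 for Gen 3.0
```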
2.4 LED Interface
There are two I/O LEDs per port located on the adapter card. For LED specifications, please refer to Section 7.3, “Adapter Card LED Operations”, on page 64.
2.5 FRU EEPROM
The FRU EEPROM allows the baseboard to identify different types of Mezzanine cards. The MEZZ FRU EEPROM is accessible through MEZZ_SMCLK and MEZZ_SMDATA (Connector pins A18 and A19). The MEZZ FRU EEPROM address is 0xA2 and its capacity is 2Kb.
3 Hardware Installation

3.1 System Requirements

3.1.1 Hardware
Unless otherwise specified, Mellanox products are designed to work in an environmentally controlled data center with low levels of gaseous and dust (particulate) contamination.
The operating environment should meet severity level G1 as per ISA 71.04 for gaseous contamination and ISO 14644-1 class 8 for cleanliness level.
A system with a PCI Express x16 slot (two FCI B2B x8 connectors) is required for installing the card.
3.1.2 Operating Systems/Distributions
Please refer to Section 1.5, “Operating Systems/Distributions”, on page 15.

3.1.3 Software Stacks
Mellanox OpenFabrics software packages: MLNX_OFED for Linux and WinOF-2 for Windows. See Chapter 4, “Driver Installation”.
3.2 Safety Precautions
The adapter is being installed in a system that operates with voltages that can be lethal. Before opening the case of the system, observe the following precautions to avoid injury and prevent damage to system components.
1. Remove any metallic objects from your hands and wrists.
2. Make sure to use only insulated tools.
3. Verify that the system is powered off and is unplugged.
4. It is strongly recommended to use an ESD strap or other antistatic devices.
3.3 Pre-Installation Checklist
1. Verify that your system meets the hardware and software requirements stated above.
2. Shut down your system if active.
3. After shutting down the system, turn off the power and unplug the cord.
4. Remove the card from its package. Please note that if the card is removed hastily from the antistatic bag, the plastic ziplock may harm the EMI fingers on the QSFP connector. Carefully remove the card from the antistatic bag to avoid damaging the EMI fingers. See Figure 4 and Figure 6.

Figure 4: EMI Fingers on QSFP28 Cage

Figure 5: EMI Fingers on QSFP Connector

Figure 6: Plastic Ziplock

5. Please note that the card must be placed on an antistatic surface.
6. Check the card for visible signs of damage. Do not attempt to install the card if damaged.
3.4 Card Installation Instructions
Please note that the following figures are for illustration purposes only.
1. Before installing the card, make sure that the system is off and the power cord is not connected to the server. Please follow proper electrical grounding procedures.
2. Open the system case.
3. Make sure the adapter clips or screws are open.
Rev 1.5
21Mellanox Technologies
4. Place the adapter card on the clips without applying any pressure.
5. Applying even pressure at the four corners of the card (as shown in the picture below), insert the adapter card into the PCI Express slot until firmly seated.
6. Secure the adapter with the adapter clip or screw.
7. Close the system case.

Do not use excessive force when seating the card, as this may damage the system or the adapter.
3.5 Cables and Modules
To obtain the list of supported Mellanox cables for your adapter, please refer to the Cables Reference Table.

3.5.1 Cable Installation
1. All cables can be inserted or removed with the unit powered on.
2. To insert a cable, press the connector into the port receptacle until the connector is firmly seated.
a. Support the weight of the cable before connecting the cable to the adapter card. Do this by using a cable holder or tying the cable to the rack.
b. Determine the correct orientation of the connector to the card before inserting the connector. Do not try to insert the connector upside down. This may damage the adapter card.
c. Insert the connector into the adapter card. Be careful to insert the connector straight into the cage. Do not apply any torque, up or down, to the connector cage in the adapter card.