ConnectX®-6 EN Card
200GbE Ethernet Adapter Card
ADAPTER CARD
PRODUCT BRIEF
World’s first 200GbE Ethernet network interface card, enabling
industry-leading performance, smart offloads and in-network
computing for Cloud, Web 2.0, Big Data, Storage and Machine
Learning applications
ConnectX-6 EN provides up to two ports of 200GbE connectivity, sub 0.8usec latency and 215 million
messages per second, enabling the highest performance and most flexible solution for the most
demanding data center applications.
ConnectX-6 is a groundbreaking addition to the Mellanox ConnectX series of industry-leading
adapter cards. In addition to all the existing innovative features of past versions, ConnectX-6 offers
a number of enhancements to further improve performance and scalability, such as support for
200/100/50/40/25/10/1 GbE Ethernet speeds and PCIe Gen 4.0. Moreover, ConnectX-6 Ethernet cards
can connect up to 32-lanes of PCIe to achieve 200Gb/s of bandwidth, even on Gen 3.0 PCIe systems.
Cloud and Web 2.0 Environments
Telco, Cloud and Web 2.0 customers developing their platforms on Software Defined Network (SDN)
environments are leveraging the Virtual Switching capabilities of the Operating Systems on their servers
to enable maximum flexibility in the management and routing protocols of their networks.
Open vSwitch (OVS) is an example of a virtual switch that allows Virtual Machines to communicate
among themselves and with the outside world. Software-based virtual switches, traditionally residing
in the hypervisor, are CPU intensive, affecting system performance and preventing full utilization of
available CPU for compute functions.
To address this, ConnectX-6 offers ASAP2 - Mellanox Accelerated Switch and Packet Processing®
technology to offload the vSwitch/vRouter by handling the data plane in the NIC hardware while
maintaining the control plane unmodified. As a result, significantly higher vSwitch/vRouter performance
is achieved without the associated CPU load.
The vSwitch/vRouter offload functions supported by ConnectX-5 and ConnectX-6 include encapsulation
and de-capsulation of overlay network headers, as well as stateless offloads of inner packets, packet
header re-write (enabling NAT functionality), hairpin, and more.
In addition, ConnectX-6 offers intelligent flexible pipeline capabilities, including a programmable flexible
parser and flexible match-action tables, which enable hardware offloads for future protocols.
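As a hedged illustration (not taken from this brief), the Python sketch below shows one common way such a vSwitch offload is enabled on a Linux host running Open vSwitch over a ConnectX NIC; the interface name is a placeholder, and the exact service name and configuration keys can vary by distribution, OVS and driver version.

    import subprocess

    IFACE = "ens1f0"  # placeholder name for the ConnectX uplink port

    # Enable TC-based hardware offload on the NIC port (standard ethtool feature flag).
    subprocess.run(["ethtool", "-K", IFACE, "hw-tc-offload", "on"], check=True)

    # Ask Open vSwitch to push datapath flows down to the NIC hardware.
    subprocess.run(["ovs-vsctl", "set", "Open_vSwitch", ".",
                    "other_config:hw-offload=true"], check=True)

    # Restarting the OVS service is typically required for the setting to take effect
    # (the service may be named "openvswitch" on some distributions).
    subprocess.run(["systemctl", "restart", "openvswitch-switch"], check=True)

Once enabled, flows that have been pushed to the hardware can usually be inspected with "ovs-appctl dpctl/dump-flows type=offloaded".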
HIGHLIGHTS
FEATURES
– Up to 200GbE connectivity per port
– Maximum bandwidth of 200Gb/s
– Up to 215 million messages/sec
– Sub 0.8usec latency
– Block-level XTS-AES mode hardware
encryption
– Optional FIPS-compliant adapter card
– Support for both 50G SerDes (PAM4) and
25G SerDes (NRZ) based ports
– Best-in-class packet pacing with
sub-nanosecond accuracy
– PCIe Gen4/Gen3 with up to x32 lanes
– RoHS compliant
– ODCC compatible
BENEFITS
– Most intelligent, highest performance
fabric for compute and storage
infrastructures
– Cutting-edge performance in
virtualized HPC networks including
Network Function Virtualization (NFV)
– Advanced storage capabilities
including block-level encryption and
checksum offloads
– Host Chaining technology for
economical rack design
– Smart interconnect for x86, Power,
Arm, GPU and FPGA-based platforms
– Flexible programmable pipeline for
new network flows
– Enabler for efficient service chaining
– Efficient I/O consolidation, lowering
data center costs and complexity
Storage Environments
NVMe storage devices are gaining momentum, offering very fast
access to storage media. The evolving NVMe over Fabrics (NVMe-oF)
protocol leverages RDMA connectivity to access remote NVMe
storage devices efficiently, while keeping the end-to-end NVMe model
at the lowest latency. With its NVMe-oF target and initiator offloads,
ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU
utilization and scalability.
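As an illustrative sketch only (not part of this brief), the following Python snippet drives the standard nvme-cli tool to attach a remote NVMe-oF subsystem over RDMA; the address, port and NQN are hypothetical placeholders.

    import subprocess

    # Hypothetical NVMe-oF target reachable over an RDMA-capable 200GbE port.
    TARGET_ADDR = "192.168.1.10"
    TARGET_PORT = "4420"   # conventional NVMe-oF service id
    TARGET_NQN = "nqn.2020-01.io.example:nvme-subsystem"

    # Connect to the remote subsystem using the RDMA transport.
    subprocess.run(["nvme", "connect",
                    "-t", "rdma",
                    "-a", TARGET_ADDR,
                    "-s", TARGET_PORT,
                    "-n", TARGET_NQN], check=True)

    # List the namespaces that now appear as local block devices.
    subprocess.run(["nvme", "list"], check=True)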
Security
ConnectX-6 block-level encryption offers a critical innovation in
network security. As data in transit is stored or retrieved, it undergoes
encryption and decryption. The ConnectX-6 hardware offloads IEEE
AES-XTS encryption/decryption from the CPU, reducing latency and CPU
utilization. It also guarantees protection for users sharing the same
resources through the use of dedicated encryption keys.
By performing block-storage encryption in the adapter, ConnectX-6
eliminates the need for self-encrypted disks. This gives customers the
freedom to choose their preferred storage device, including byte-addressable
and NVDIMM devices that traditionally do not provide
encryption. Moreover, ConnectX-6 can support Federal Information
Processing Standards (FIPS) compliance.
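For readers unfamiliar with the XTS-AES mode referenced above, the short Python sketch below performs the equivalent operation in software with the cryptography package, purely to illustrate what the adapter offloads; the 64-byte key and the 16-byte tweak (which in storage encryption normally encodes the logical block address) are random placeholders.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)     # AES-256-XTS uses a double-length (2 x 256-bit) key
    tweak = os.urandom(16)   # per-block tweak, normally derived from the block address
    block = os.urandom(512)  # one 512-byte storage block

    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))

    encryptor = cipher.encryptor()
    ciphertext = encryptor.update(block) + encryptor.finalize()

    decryptor = cipher.decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == block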
Mellanox Socket Direct®
Mellanox Socket Direct technology improves the performance of dual-socket
servers by enabling each of their CPUs to access the network
through its own dedicated PCIe interface. As the connection from
each CPU to the network bypasses the QPI (UPI) and the second CPU,
Socket Direct reduces latency and CPU utilization. Moreover, each
CPU handles only its own traffic (and not that of the second CPU), thus
optimizing CPU utilization even further.
Mellanox Socket Direct also enables GPUDirect® RDMA for all CPU/
GPU pairs by ensuring that GPUs are linked to the CPUs closest to the
adapter card. Mellanox Socket Direct enables Intel® DDIO optimization
on both sockets by creating a direct connection between the sockets
and the adapter card.
Mellanox Socket Direct technology is enabled by a main card that
houses the ConnectX-6 adapter card and an auxiliary PCIe card
bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct
card is installed into two PCIe x16 slots and connected using a 350mm
long harness. The two PCIe x16 slots may also be connected to the
same CPU. In this case the main advantage of the technology lies in
delivering 200GbE to servers with PCIe Gen3-only support.
Please note that when using Mellanox Socket Direct in virtualization
or dual-port use cases, some restrictions may apply. For further details,
contact Mellanox Customer Support.
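As a small, hedged example (not from the brief itself), the Python sketch below reads standard Linux sysfs attributes to confirm which NUMA node each half of a Socket Direct installation is attached to; the interface names are placeholders.

    from pathlib import Path

    # Placeholder names for the two PCIe x16 halves of a Socket Direct adapter.
    for iface in ("ens1f0", "ens4f0"):
        numa_file = Path(f"/sys/class/net/{iface}/device/numa_node")
        if numa_file.exists():
            print(f"{iface}: NUMA node {numa_file.read_text().strip()}")
        else:
            print(f"{iface}: interface not present on this host")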
Machine Learning and Big Data Environments
Data analytics has become an essential function within many
enterprise data centers, clouds and hyperscale platforms. Machine
learning relies on especially high throughput and low latency to train
deep neural networks and to improve recognition and classification
accuracy. As the first adapter card to deliver 200GbE throughput,
ConnectX-6 is the perfect solution to provide machine learning
applications with the levels of performance and scalability that
they require. ConnectX-6 utilizes RDMA technology to deliver low
latency and high performance, and enhances RDMA network capabilities
even further by delivering end-to-end packet-level flow control.
Compatibility
PCI Express Interface
– PCIe Gen 4.0, 3.0, 2.0, 1.1 compatible
– 2.5, 5.0, 8, 16 GT/s link rate
– 32 lanes as 2x 16-lanes of PCIe
– Support for PCIe x1, x2, x4, x8, and
x16 configurations
– PCIe Atomic
– TLP (Transaction Layer Packet)
Processing Hints (TPH)
– PCIe switch Downstream Port
Containment (DPC) enablement for
PCIe hot-plug
– Advanced Error Reporting (AER)
– Access Control Service (ACS) for
peer-to-peer secure communication
– Process Address Space ID (PASID)
Address Translation Services (ATS)
– IBM CAPIv2 (Coherent Accelerator
Processor Interface)
– Support for MSI/MSI-X mechanisms
Host Management
Mellanox host management and control capabilities include a
Baseboard Management Controller (BMC) interface via NC-SI over
MCTP over SMBus and MCTP over PCIe, as well as PLDM for Monitor
and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
Operating Systems/Distributions*
– RHEL, SLES, Ubuntu and other major
Linux distributions
– Windows
– FreeBSD
– VMware
– OpenFabrics Enterprise Distribution
(OFED)
– OpenFabrics Windows Distribution
(WinOF-2)
Connectivity
– Up to two network ports
– Interoperability with Ethernet
switches (up to 200GbE, as 4 lanes
of 50GbE data rate)
– Passive copper cable with ESD
protection
– Powered connectors for optical and
active cable support
©2020 Mellanox Technologies. All rights reserved.