
ConnectX®-5 EN Adapter Card
PRODUCT BRIEF
100Gb/s Ethernet Adapter Card
Intelligent RDMA-enabled network adapter card with advanced application offload capabilities for High-Performance Computing, Web2.0, Cloud and Storage platforms
ConnectX-5 EN supports two ports of 100Gb Ethernet connectivity while delivering sub-600ns latency, extremely high message rates, and PCIe switch and NVMe over Fabric offloads. ConnectX-5 provides the highest performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.
ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive and scalable compute and storage platforms. ConnectX-5 enhances HPC infrastructures by providing MPI and SHMEM/PGAS Tag Matching and Rendezvous offloads, hardware support for out-of-order RDMA Write and Read operations, as well as additional Network Atomic and PCIe Atomic operations support.
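The tag matching and rendezvous offloads are transparent to applications; they accelerate the (source, tag) matching that an MPI library performs against posted receives. The following minimal MPI sketch is a generic illustration rather than Mellanox-specific code: the tags, buffer sizes, and two-rank layout are arbitrary assumptions, chosen only to show the matching semantics the hardware accelerates (compile with mpicc, run with two ranks).

    #include <mpi.h>
    #include <stdio.h>

    static double large[1 << 20];   /* ~8 MB payload, large enough to use the rendezvous protocol */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int TAG_SMALL = 1, TAG_LARGE = 2;
        double small[8];

        if (rank == 0) {
            MPI_Request reqs[2];
            /* Post receives first; matching arriving messages to these (source, tag)
             * pairs is the work a tag-matching offload can move into the adapter. */
            MPI_Irecv(small, 8, MPI_DOUBLE, 1, TAG_SMALL, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(large, 1 << 20, MPI_DOUBLE, 1, TAG_LARGE, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
            printf("rank 0: both tagged messages received\n");
        } else if (rank == 1) {
            for (int i = 0; i < 8; i++) small[i] = i;
            for (int i = 0; i < (1 << 20); i++) large[i] = i;
            MPI_Send(small, 8, MPI_DOUBLE, 0, TAG_SMALL, MPI_COMM_WORLD);
            MPI_Send(large, 1 << 20, MPI_DOUBLE, 0, TAG_LARGE, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }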
ConnectX-5 EN utilizes RoCE (RDMA over Converged Ethernet) technology, delivering low latency and high performance. ConnectX-5 enhances RDMA network capabilities by completing the switch Adaptive-Routing capabilities and supporting data delivered out of order while maintaining ordered completion semantics, providing multipath reliability and efficient support for all network topologies, including DragonFly and DragonFly+.
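Applications reach RoCE through the standard RDMA verbs interface. The sketch below assumes a Linux host with libibverbs installed (link with -libverbs); it simply enumerates RDMA devices and flags ports whose link layer is Ethernet, i.e. ports on which RDMA traffic runs as RoCE.

    /* Minimal sketch, assuming libibverbs; lists ports with an Ethernet link layer (RoCE). */
    #include <stdio.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs)
            return 1;

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dev_attr;
            if (ibv_query_device(ctx, &dev_attr) == 0) {
                for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                    struct ibv_port_attr port_attr;
                    if (ibv_query_port(ctx, port, &port_attr) == 0 &&
                        port_attr.link_layer == IBV_LINK_LAYER_ETHERNET)
                        printf("%s port %u: RoCE (Ethernet link layer)\n",
                               ibv_get_device_name(devs[i]), port);
                }
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }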
ConnectX-5 also supports Burst Buffer offload for background checkpointing without interfering with the main CPU operations, and the innovative Dynamic Connected Transport (DCT) service to ensure extreme scalability for compute and storage systems.
STORAGE ENVIRONMENTS
NVMe storage devices are gaining popularity, offering very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.
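From the host's point of view, an NVMe-oF subsystem is addressed with the same NVMe admin and I/O commands as a local drive; the target offload is implemented on the target side. The sketch below assumes a Linux host where a controller (local or fabric-attached) is visible as /dev/nvme0; the device path and the need for root privileges are assumptions. It issues an Identify Controller command through the kernel's NVMe passthrough ioctl.

    /* Hedged sketch: Identify Controller via the Linux NVMe admin passthrough ioctl.
     * /dev/nvme0 is an assumed device path; typically requires root. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/nvme_ioctl.h>

    int main(void)
    {
        int fd = open("/dev/nvme0", O_RDONLY);
        if (fd < 0) { perror("open /dev/nvme0"); return 1; }

        unsigned char id[4096];                    /* Identify Controller data structure */
        struct nvme_admin_cmd cmd = {0};
        cmd.opcode   = 0x06;                       /* Identify */
        cmd.addr     = (unsigned long long)(uintptr_t)id;
        cmd.data_len = sizeof(id);
        cmd.cdw10    = 1;                          /* CNS=1: Identify Controller */

        if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
            perror("identify");
            close(fd);
            return 1;
        }

        char model[41] = {0};
        memcpy(model, id + 24, 40);                /* bytes 24..63: model number (MN) */
        printf("controller model: %s\n", model);
        close(fd);
        return 0;
    }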
Moreover, the embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with the earlier generations of ConnectX adapters, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
HIGHLIGHTS
NEW FEATURES
– Tag matching and rendezvous offloads
– Adaptive routing on reliable transport
– Burst buffer offloads for background checkpointing
– NVMe over Fabric (NVMe-oF) offloads
– Back-end switch elimination by host chaining
– Embedded PCIe switch
– Enhanced vSwitch/vRouter offloads
– Flexible pipeline
– RoCE for overlay networks
– PCIe Gen 4 support
BENEFITS
– Up to 100Gb/s connectivity per port
– Industry-leading throughput, low latency, low CPU utilization and high message rate
– Maximizes data center ROI with Multi-Host technology
– Innovative rack design for storage and Machine Learning based on Host Chaining technology
– Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
– Advanced storage capabilities including NVMe over Fabric offloads
– Intelligent network adapter supporting flexible pipeline programmability
– Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
– Enabler for efficient service chaining capabilities
– Efficient I/O consolidation, lowering data center costs and complexity
[Figure: NVMe-oF target offload. A block device or native application reaches an NVMe device either locally, through an NVMe-oF fabric initiator and fabric target over an RDMA fabric, or through an iSCSI/iSER target stack; the control path (init, login, etc.) remains in software while the data path (I/O commands, data fetch) is offloaded.]
ConnectX-5 enables an innovative storage rack design, Host Chaining, by which different servers can interconnect directly without involving the Top of Rack (ToR) switch. Alternatively, the Multi-Host technology first introduced with ConnectX-4 can be used. Mellanox Multi-Host™ technology, when enabled, allows multiple hosts to be connected to a single adapter by separating the PCIe interface into multiple independent interfaces. With these new rack design alternatives, ConnectX-5 lowers the total cost of ownership (TCO) in the data center by reducing CAPEX (cables, NICs, and switch port expenses) and by reducing OPEX through fewer switch ports to manage and lower overall power usage.
Mellanox Accelerated Switching And Packet Processing (ASAP²) Direct technology offloads the vSwitch/vRouter by handling the data plane in the NIC hardware while leaving the control plane unmodified. As a result, vSwitch/vRouter performance is significantly higher without the associated CPU load.
The vSwitch/vRouter offload functions supported by ConnectX-5 include Overlay Network (for example, VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation and de-capsulation, stateless offloads of inner packets, packet header re-write enabling NAT functionality, and more.
Moreover, the intelligent ConnectX-5 flexible pipeline capabilities, which include a flexible parser and flexible match-action tables, can be programmed to enable hardware offloads for future protocols.
ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. Moreover, with ConnectX-5 Network Function Virtualization (NFV), a VM can be used as a virtual appliance. With full data-path operation offloads as well as hairpin hardware capability and service chaining, data can be handled by the Virtual Appliance with minimal CPU utilization. With these capabilities, data center administrators benefit from better server utilization while reducing cost, power, and cable complexity, allowing more Virtual Appliances, Virtual Machines, and more tenants on the same hardware.
[Figure: Eliminating the backend switch: Host Chaining for the storage backend versus traditional storage connectivity.]
CLOUD AND WEB2.0 ENVIRONMENTS
Cloud and Web2.0 customers developing their platforms on Software Defined Network (SDN) environments leverage their servers' Operating System Virtual-Switching capabilities to enable maximum flexibility.
Open vSwitch (OVS) is an example of a virtual switch that allows Virtual Machines to communicate with each other and with the outside world. A virtual switch traditionally resides in the hypervisor, and switching is based on twelve-tuple matching on flows. The software-based virtual switch or virtual router solution is CPU intensive, affecting system performance and preventing full utilization of available bandwidth.
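As a rough illustration of why this is costly, consider the per-packet work a software switch performs: extract a flow key of roughly twelve header fields and look it up in a flow table. The toy sketch below is not Open vSwitch code; the field layout and hash are simplifications chosen only to show the kind of per-packet key extraction and lookup that ASAP² moves into the NIC hardware.

    /* Conceptual sketch only: a simplified twelve-tuple flow key and a toy table lookup. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    struct flow_key {
        uint32_t in_port;
        uint8_t  eth_src[6], eth_dst[6];
        uint16_t eth_type, vlan_id;
        uint8_t  vlan_pcp;
        uint32_t ipv4_src, ipv4_dst;
        uint8_t  ip_proto, ip_tos;
        uint16_t l4_src, l4_dst;
    };

    /* Toy FNV-1a hash used to pick a flow-table bucket; real switches use richer tables. */
    static uint32_t flow_hash(const struct flow_key *k)
    {
        const uint8_t *p = (const uint8_t *)k;
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < sizeof(*k); i++) { h ^= p[i]; h *= 16777619u; }
        return h;
    }

    int main(void)
    {
        /* Example key for a TCP flow arriving on port 1 (values are arbitrary). */
        struct flow_key k = { .in_port = 1, .eth_type = 0x0800, .ip_proto = 6,
                              .ipv4_src = 0x0a000001, .ipv4_dst = 0x0a000002,
                              .l4_src = 49152, .l4_dst = 80 };
        printf("flow bucket: %u\n", flow_hash(&k) % 1024);
        return 0;
    }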
[Figure: Para-virtualized versus SR-IOV networking. In the para-virtualized model, VMs share the hypervisor vSwitch in front of the NIC; with SR-IOV, VMs attach directly to Virtual Functions (VFs) of the NIC's Physical Function (PF) and are switched by the embedded eSwitch.]
HOST MANAGEMENT
Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe for the Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
©2018 Mellanox Technologies. All rights reserved.