- Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
- PCI Express 3.0 x8 host interface (PCIe 2.0 and 1.1 compatible)
- Support for InfiniBand FDR speeds of up to 56 Gbps (auto-negotiation to FDR-10, DDR, and SDR)
- Support for Virtual Protocol Interconnect (VPI), which enables one adapter to serve both InfiniBand and 10/40 Gb Ethernet (see the sketch after this list). Supports three configurations:
  - 2 ports InfiniBand
  - 2 ports Ethernet
  - 1 port InfiniBand and 1 port Ethernet
- SR-IOV support; 16 virtual functions supported by KVM and Hyper-V (OS dependent), up to a maximum of 127 virtual functions supported by the adapter
- Low-latency RDMA over 40 Gb Ethernet (supported on both non-virtualized and SR-IOV-enabled virtualized servers), with latency as low as 1 µs
- Microsoft VMQ / VMware NetQueue support
- Sub-1 µs InfiniBand MPI ping latency
- Support for QSFP to SFP+ conversion for 10 GbE connectivity
- Traffic steering across multiple cores
- Legacy and UEFI PXE network boot support (Ethernet mode only)
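The per-port InfiniBand or Ethernet personality of a VPI adapter is visible to applications through the RDMA verbs interface. The following sketch (not taken from this product guide) uses libibverbs to list the locally installed RDMA devices and report the link layer of each port; device names and port counts depend on the system.

/* Minimal sketch: list RDMA devices with libibverbs and report whether
 * each port is currently running InfiniBand or Ethernet, which is how a
 * VPI adapter's per-port personality appears to applications.
 * Build with: gcc vpi_ports.c -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

static const char *link_layer_name(uint8_t ll)
{
    switch (ll) {
    case IBV_LINK_LAYER_INFINIBAND: return "InfiniBand";
    case IBV_LINK_LAYER_ETHERNET:   return "Ethernet";
    default:                        return "unspecified";
    }
}

int main(void)
{
    int num_devices;
    struct ibv_device **list = ibv_get_device_list(&num_devices);

    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        struct ibv_device_attr dev_attr;

        if (!ctx)
            continue;
        if (ibv_query_device(ctx, &dev_attr)) {
            ibv_close_device(ctx);
            continue;
        }

        printf("%s: %u port(s)\n",
               ibv_get_device_name(list[i]), dev_attr.phys_port_cnt);

        for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
            struct ibv_port_attr port_attr;

            if (ibv_query_port(ctx, port, &port_attr))
                continue;
            printf("  port %d: link layer %s\n",
                   port, link_layer_name(port_attr.link_layer));
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}

On a ConnectX-3 VPI adapter configured with one InfiniBand and one Ethernet port, the two ports would report different link layers.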
The Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter and ThinkSystem Mellanox ConnectX-3
Pro ML2 FDR 2-Port QSFP VPI Adapter have the same features as the ConnectX-3 40GbE / FDR IB VPI
Adapter with these additions:
- Mezzanine LOM Generation 2 (ML2) form factor
- Offers NVGRE hardware offloads
- Offers VXLAN hardware offloads
Performance
Based on Mellanox's ConnectX-3 technology, these adapters deliver high throughput in any network environment by removing the I/O bottlenecks in mainstream servers that limit application performance. With the FDR VPI IB/E Adapter, servers can achieve up to 56 Gbps of transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless offload engines handle the segmentation, reassembly, and checksum calculations that would otherwise burden the host processor.
RDMA over InfiniBand and RDMA over Ethernet further accelerate application run time while reducing CPU utilization. RDMA benefits the very high-volume, transaction-intensive applications typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount. With the ConnectX-3-based adapter, highly compute-intensive tasks running on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times.
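As an illustration of why RDMA reduces CPU involvement, the fragment below (an assumed example, not from this guide) posts a one-sided RDMA write with libibverbs: the adapter moves the data directly into the remote node's registered memory, so neither side copies the payload through its kernel network stack. Connection setup, memory registration, and completion polling are assumed to exist elsewhere; post_rdma_write and its parameters are illustrative names.

/* Illustrative fragment: post a one-sided RDMA write on an already
 * connected reliable queue pair.  The adapter places the data directly
 * into the peer's registered buffer with no receive-side CPU work.
 */
#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp,          /* connected RC queue pair   */
                    struct ibv_mr *local_mr,    /* locally registered buffer */
                    void *local_buf, uint32_t len,
                    uint64_t remote_addr,       /* peer's buffer address     */
                    uint32_t rkey)              /* peer's remote key         */
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = local_mr->lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* Hand the work request to the adapter; the transfer itself is done
     * in hardware without copying through the kernel network stack. */
    return ibv_post_send(qp, &wr, &bad_wr);
}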
In data mining and web crawl applications, RDMA provides the performance boost needed for faster search by removing the network latency bottleneck associated with I/O cards and the corresponding transport technology in the cloud. Other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (content delivery networks), business intelligence, database transactions, and various cloud computing workloads. The low power consumption of ConnectX-3 delivers this high bandwidth and low latency at a low total cost of ownership.
TCP/UDP/IP acceleration
Applications that use TCP/UDP/IP transport can achieve industry-leading data throughput. The hardware-based stateless offload engines in ConnectX-3 reduce the CPU overhead of IP packet transport, leaving more processor cycles for the application.
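One way to see these stateless offloads on a Linux host is to query the interface's offload settings, for example with ethtool -k. The sketch below (illustrative, not from this guide) reads the TCP segmentation offload and checksum offload flags through the legacy SIOCETHTOOL ioctl; the interface name eth0 is a placeholder, and newer kernels expose the same information through the feature-based ethtool interfaces as well.

/* Minimal sketch: query TSO and checksum offload state for a network
 * interface via the legacy SIOCETHTOOL ioctl.  Pass the interface name
 * as the first argument; "eth0" below is only a placeholder.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static void query_flag(int fd, struct ifreq *ifr, __u32 cmd, const char *name)
{
    struct ethtool_value val = { .cmd = cmd };

    ifr->ifr_data = (char *)&val;
    if (ioctl(fd, SIOCETHTOOL, ifr) < 0) {
        perror(name);
        return;
    }
    printf("%s: %s\n", name, val.data ? "on" : "off");
}

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth0";  /* placeholder name */
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    query_flag(fd, &ifr, ETHTOOL_GTSO, "tcp-segmentation-offload");
    query_flag(fd, &ifr, ETHTOOL_GTXCSUM, "tx-checksum");
    query_flag(fd, &ifr, ETHTOOL_GRXCSUM, "rx-checksum");

    close(fd);
    return 0;
}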