QLogic InfiniPath QLE7140 Specifications

InfiniPath® QLE7140
PCI Express x8 to InfiniBand 4X Adapter
The InfiniPath InfiniBand adapter delivers industry-leading performance in a cluster interconnect, allowing organizations to gain maximum advantage and return from their investment in clustered systems.
The InfiniPath adapter yields the lowest latency, the highest
message rate and highest effective bandwidth of any cluster
interconnect available. As a result, organizations relying on
clustered systems for critical computing tasks will experience
a significant increase in productivity.
New applications being developed or deployed on very large
clusters now can avoid the bandwidth, latency, or message-rate
limitations imposed by traditional interconnects. By allowing
you to drive up the utilization of your computing infrastructure,
InfiniPath adapters increase the ROI of your computing assets.
Benefits
Increases cluster efficiency and application productivity
Provides superior application scaling to 1000s of CPUs
Enables faster application run times for faster time-to-solution
Increases utilization of computing infrastructure and increases ROI of computing assets
Features
PCI Express x8 to InfiniBand 4X adapter
PCI half-height short form factor
1.6 µs one-way MPI latency through an InfiniBand switch¹
954 MB/s uni-directional bandwidth¹
385 byte n½ streaming message size (1 CPU core)¹
3 year hardware warranty
Superior Application Performance. The InfiniPath adapter’s low latency and high
message rates result in superior real-world application scalability across nearly all
modeling and simulation applications.
Well-known applications that have demonstrated superior scaling and outstanding
performance when running on clusters with the InfiniPath interconnect include: NAMD,
Amber8, PETSc, Star-CD, Fluent, NWChem, DL_POLY, LS-DYNA, WRF, POP, MM5,
LAMMPS, GAMESS, CPMD, AM2, CHARMM, GROMACS and many others.
Highest Effective Bandwidth and Message Rate. The InfiniPath PCI Express adapter delivers significantly more bandwidth at message sizes typical of real-world HPC applications and many enterprise applications.
The InfiniPath InfiniBand adapter also delivers the highest effective bandwidth of any cluster interconnect because it achieves half its peak bandwidth (n½)² at a message size of just 385 bytes, the lowest in the industry. This means that applications run faster on the InfiniPath adapter than on any other interconnect.
Such superior performance is a benefit of the unique, highly-pipelined, cut-through design that initiates a new message much faster than competitive alternatives. This approach allows application message transmission to scale close to linearly when additional CPU cores are added to a system, dramatically reducing application run times. Other less effective interconnects can become a performance bottleneck, lowering the return on investment of your computing resources.
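To see what the n½ figure implies, the short sketch below uses the simple Hockney latency/bandwidth model, which is not part of this datasheet but is a common way to relate the two numbers: with a transfer time of T(n) = t0 + n/B_peak, the effective bandwidth is n/T(n), and half of peak bandwidth is reached at n½ = t0 x B_peak. The 385 byte and 954 MB/s values come from the specifications above; the model, the derived t0, and the message sizes swept are illustrative assumptions only.

/*
 * Illustrative sketch (not from the datasheet): Hockney model
 *   T(n)      = t0 + n / B_peak        transfer time for n bytes
 *   B_eff(n)  = n / T(n)               effective bandwidth
 *   n_1/2     = t0 * B_peak            message size giving B_peak / 2
 * A small n_1/2 (385 bytes here) means short, real-world message sizes
 * already see a large fraction of the link's peak bandwidth.
 */
#include <stdio.h>

int main(void)
{
    const double n_half = 385.0;        /* datasheet n_1/2 in bytes          */
    const double b_peak = 954e6;        /* ~954 MB/s peak uni-directional    */
    const double t0 = n_half / b_peak;  /* startup time implied by the model */

    for (double n = 64; n <= 65536; n *= 2) {
        double b_eff = n / (t0 + n / b_peak);   /* effective bandwidth */
        printf("%8.0f bytes: %6.1f MB/s (%.0f%% of peak)\n",
               n, b_eff / 1e6, 100.0 * b_eff / b_peak);
    }
    return 0;
}

Compiling and running this (for example, cc hockney.c && ./a.out) prints how modeled effective bandwidth climbs toward the peak as message size grows, with the 50% point at 385 bytes.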
Lowest MPI & TCP Latency. The InfiniPath industry-leading MPI ping-pong latency of 1.6 microseconds (µs)² is less than half of the latency of other InfiniBand adapters. Because of its high messaging rate, the InfiniPath adapter's latency, as measured by the HPC Challenge Benchmark Suite, is nearly identical to its ping-pong latency, even as you increase the number of nodes.
The InfiniPath adapter, using a standard Linux distribution, also achieves the lowest TCP/IP latency and outstanding bandwidth.³ Eliminating the excess latency found in traditional interconnects reduces communications wait time and allows processors to spend more time computing, which results in applications that run faster and scale higher.
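The footnotes credit the Ohio State University ping-pong test for the MPI latency and bandwidth figures. As a rough, hedged illustration of how such a number is obtained, the sketch below times a two-rank MPI ping-pong and reports half the round-trip time as the one-way latency; it is not the OSU benchmark itself, and the message size, iteration count, and output format are arbitrary choices.

/*
 * Minimal MPI ping-pong latency sketch (illustrative only; the published
 * 1.6 µs figure comes from the OSU benchmark, not from this code).
 * Rank 0 sends a small message to rank 1, which echoes it back; half the
 * averaged round-trip time approximates the one-way latency.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[8] = {0};                  /* small message: latency-bound */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               elapsed / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

Run it with two ranks placed on different nodes (for example, mpirun -np 2 ./pingpong). The TCP/IP figures cited in footnote 3 are measured separately with Netperf over the standard Linux TCP/IP stack.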
Lowest CPU Utilization. The InfiniPath connectionless environment eliminates overhead that wastes valuable CPU cycles. It provides reliable data transmission without the vast resources required by connection-oriented adapters, thus increasing the efficiency of your clustered systems.
PCI Express Interface
PCIe v1.1 x8 compliant
PCIe slot compliant (fits into x8 or x16 slot)
InfiniPath Interfaces and Specifications
4X speed (10+10 Gbps)
Uses standard IBTA 1.2 compliant fabric and cables;
Link layer compatible
Connectivity
Single InfiniBand 4X port (10+10 Gbps) - Copper
External fiber optic media adapter module support
Compatible with InfiniBand switches from Cisco®, SilverStorm™, Mellanox®, Microway, and Voltaire®
Interoperable with host channel adapters (HCAs) from Cisco, SilverStorm, Mellanox and Voltaire running the OpenIB software stack
Configurable MTU size (4096 maximum)
Integrated SERDES
Management Support
Includes InfiniBand 1.1 compliant SMA (Subnet Management Agent)
Interoperable with management solutions from Cisco, SilverStorm, and Voltaire
OpenSM
Host Driver/Upper Level Protocol (ULP) Support
MPICH version 1.2.6 with MPI 2.0 ROMIO I/O
TCP, NFS, UDP, SOCKETS through Ethernet driver
emulation
Optimized MPI protocol stack supplied
32- and 64-bit application ready
IPoIB, SDP, UDP using OpenIB stack
Regulatory Compliance
FCC Part 15, Subpart B, Class A
ICES-003, Class A
EN 55022, Class A
VCCI V-3/2004.4, Class A
Built On Industry Standards. The InfiniPath adapter supports a rich combination of open standards to achieve industry-leading performance. The InfiniPath OpenIB software stack has been proven to be the highest performance implementation of the OpenIB Verbs layer, which yields both superior latency and bandwidth compared to other InfiniBand alternatives.
InfiniBand 1.1 4X Compliant
Standard InfiniBand fabric management
MPI 1.2 with MPICH 1.2.6
OpenIB supporting IPoIB, SDP, UDP and SRP
PCI Express x8 Expansion Slot Compatible
Supports SUSE, Red Hat, and Fedora Core Linux
Operating Environments
Red Hat Enterprise Linux 4.x
SUSE Linux 9.3 & 10.0
Fedora Core 3 & 4
InfiniPath Adapter Specifications
Typical Power Consumption: 5 Watts
Available in PCI half height, short-form factors
Operating Temperature: 10 to 45°C at 0-3 km altitude (Operating); -30 to 60°C (Non-operating)
Humidity: 20% to 80% (Non-condensing, Operating); 5% to 90% (Non-operating)
InfiniPath PCIe ASIC Specifications
HSBGA package, 484 pin, 23.0 mm x 23.0 mm, 1 mm ball pitch
2.6 Watts (typical)
Requires 1.0V and 3.3V supplies, plus InfiniBand interface reference voltages
1. Ping-pong latency and uni-directional bandwidth are based on the Ohio State University ping-pong latency test.
2. The n½ measurement was done with a single processor node communicating to a single processor node through a single level of switch.
3. TCP/IP bandwidth and latency are based on using Netperf and a standard Linux TCP/IP software stack.
Note: Actual performance measurements may differ from the data published in this document. All current performance data is available at www.pathscale.com/infinipath.php
Corporate Headquarters: QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949.389.6000
Europe Headquarters: QLogic (UK) LTD., Surrey Technology Centre, 40 Occam Road, Guildford, Surrey GU2 7YG, UK, +44 (0)1483 295825
© 2006 QLogic Corporation. All rights reserved. QLogic, the QLogic logo, and InfiniPath are trademarks or registered trademarks of QLogic Corporation. Other trademarks are the property of their respective owners.
SN0058044-00 Rev E 11/06