Using InfiniBand for high performance computing
technology brief, 2nd edition
Abstract
Introduction
InfiniBand technology
   InfiniBand components
   InfiniBand software architecture
      MPI
      IPoIB
      DAPL
      SDP
      SRP
      iSER
   InfiniBand physical architecture
   Link operation
   InfiniBand summary
InfiniBand and HP BladeSystem c-Class products
Conclusion
For more information
Call to action

Abstract

With business models constantly changing to keep pace with today’s Internet-based, global economy, IT organizations are continually challenged to provide customers with high performance platforms while controlling cost. With broader adoption of high performance computing (HPC) across industry segments, more enterprise businesses are implementing parallel compute cluster architectures as a cost-effective approach to scalable HPC platforms.
InfiniBand is one of the most important technologies that enable the adoption of cluster computing. This technology brief describes InfiniBand as an interconnect technology used in cluster computing, provides basic technical information, and explains the advantages of implementing the InfiniBand architecture.

Introduction

The overall performance of enterprise servers is determined by the synergistic relationship among three main subsystems: processing, memory, and input/output (Figure 1). By using multiple processing cores that share a common memory space, the multiprocessor architecture of a single server provides a high degree of parallel processing capability.
Figure 1. Single server (node) architecture (diagram: the CPU/memory subsystem connects over the frontside bus, at more than 1066 MB/s, to the I/O subsystem; I/O interfaces run at 320 MB/s and 400 MB/s, and PCIe ports provide roughly 1 GB/s)
However, multiprocessor server architecture cannot scale cost-effectively to a large number of processing cores. Cluster computing, which builds a complete system by connecting stand-alone systems with an interconnect technology, has become widely implemented in HPC and enterprise data centers around the world.
Figure 2 shows an example of a cluster architecture that integrates computing, storage, and visualization functions into a single system. Applications are usually distributed to compute nodes through job scheduling tools.
Figure 2. Sample clustering architecture
Clustered systems allow infrastructure architects to meet performance and reliability goals, but interconnect performance, scalability, and cost are key areas that must be carefully considered. A cluster infrastructure works best when built with an interconnect technology that scales easily and economically with system expansion.
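To make the traffic pattern concrete, the following minimal sketch (an illustration, not taken from this brief) shows the kind of message-passing program that a job scheduler distributes across a cluster's compute nodes. It uses the standard MPI C API; each MPI rank typically runs on a separate node and exchanges data over the cluster interconnect.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes across the cluster */
    printf("compute node rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

When such a program runs over InfiniBand, the MPI library maps its message passing onto the interconnect, so per-message latency and bandwidth directly determine how well the application scales as nodes are added.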

InfiniBand technology

InfiniBand (IB) is an industry-standard, channel-based architecture that features high-speed, low-latency interconnects for cluster computing infrastructures. InfiniBand uses a multiple-layer architecture to transfer data from one node to another. In the InfiniBand layer model (Figure 3), separate layers perform different tasks in the message passing process.
The upper layer protocol (ULP) layer works closest to the operating system and application; it determines how much software overhead the data transfer requires. The InfiniBand transport layer is responsible for communication between applications. Each message can be up to 2 GB in size. The transport layer splits a message into data payloads and encapsulates each payload, together with an identifier of the destination node, into one or more packets. Packets can carry data payloads of up to 4 KB, although 1 to 2 KB is typical depending on the IB adapter and the type of transport protocol being used.

The packets are passed to the network layer, which selects a route to the destination node and attaches the route information to the packets. The data link layer attaches a local identifier (LID) to each packet for communication at the subnet level. The physical layer transforms the packet into an electromagnetic signal based on the type of network media (copper or fiber).
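As a rough sketch of the transport layer's segmentation arithmetic, the C fragment below computes how many packets a message of a given size requires. The 2 KB payload size is an assumed value for illustration only, since the actual payload size depends on the adapter and transport protocol.

#include <stdio.h>
#include <stdint.h>

/* Sketch of transport-layer segmentation: a message is split into
 * payloads no larger than the negotiated maximum (assumed 2 KB here). */
static uint64_t packets_needed(uint64_t message_bytes, uint64_t payload_bytes)
{
    /* Ceiling division: a final partial payload still needs its own packet. */
    return (message_bytes + payload_bytes - 1) / payload_bytes;
}

int main(void)
{
    uint64_t message = 2ULL << 30;   /* 2 GB, the maximum message size */
    uint64_t payload = 2048;         /* assumed 2 KB payload per packet */
    printf("%llu packets\n",
           (unsigned long long)packets_needed(message, payload));
    return 0;
}

At a 2 KB payload, a maximum-size 2 GB message segments into roughly one million packets, which is why the hardware rather than the host CPU handles packetization.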
Figure 3. Distributed computing using InfiniBand architecture
InfiniBand Link
NOTE:
While InfiniBand infrastructures usually include switches, direct host channel adapter to host channel adapter (HCA-to-HCA) operation is supported in some implementations.
InfiniBand offers key advantages including:
• Increased bandwidth, with double data rate (DDR) at 20 Gbps available now and quad data rate (QDR) in the future
• Low-latency end-to-end communication
• Hardware-based protocol handling, resulting in faster throughput through efficient message passing and memory-efficient data transfers such as RDMA (illustrated in the sketch after this list)
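The sketch below illustrates the last point by posting a one-sided RDMA write with the libibverbs C API. It is a minimal illustration, not a complete program: it assumes a queue pair qp that has already been connected, along with the peer's virtual address remote_addr and memory key rkey exchanged out of band, and most error handling is omitted.

#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Assumes: qp is a connected reliable-connection queue pair, pd is its
 * protection domain, cq is its send completion queue, and remote_addr/rkey
 * were exchanged with the peer out of band. */
int rdma_write_example(struct ibv_qp *qp, struct ibv_pd *pd,
                       struct ibv_cq *cq, char *buf, size_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the HCA can DMA from it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uint64_t)(uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: no remote CPU involvement */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))         /* hand the transfer to the HCA */
        return -1;

    /* Poll the completion queue; the HCA reports when the write has finished. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;
    return (wc.status == IBV_WC_SUCCESS) ? 0 : -1;
}

Because the HCA moves the data directly between registered memory regions, the transfer bypasses the operating system's network stack on both nodes, which is the source of the low latency and low CPU overhead cited above.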