
CD-adapco
QLogic InfiniPath® Adapters Yield Supercomputing Results from Affordable Linux Clusters
Challenge
CD-adapco's compute-intensive CFD modeling software traditionally required expensive UNIX-based supercomputers, and on affordable Linux clusters the inter-node communication links, not the servers, had become the limiting factor in performance.
Solution
30 QLogic® InfiniPath QLE7140 PCI Express adapters interconnect through a SilverStorm 9080 multiprotocol fabric director switch (SilverStorm Technologies™ was acquired by QLogic Corporation in November 2006) to 30 IBM® System e326 compute nodes in an IBM System Cluster 1350 running the Linux operating system. Each node is powered by two dual-core 2.4 GHz AMD Opteron processors, with 2 GB of memory per core.
Result
Based on lab tests, the QLogic InfiniPath inter-node links significantly cut the execution time of the CFD application, approaching supercomputing performance levels without a supercomputing price tag.
CASE STUDY
High-performance computing is changing the world of high-end auto racing thanks to CD-adapco, one of the world’s leading computational fluid dynamics (CFD) software companies. A significant percentage of the cars in the FIA Formula One Driver’s World Championship are designed with the aid of the company’s STAR-CD® modeling software, including the winners in 2005 and 2006. In addition to meeting the exacting demands of elite race car design, STAR-CD has become the modeling software of choice for industries as diverse as automotive, aerospace and defense, oil and gas production, chemicals, marine design, and electric power generation.
According to Steven Feldman, director of product development and IT at CD-adapco, the company is also gaining traction in other industries, thanks to new economical ways to host its compute-intensive modeling applications. “CFD software, which simulates everything from environmental conditions to material compositions, has traditionally required expensive UNIX®-based supercomputers. But we can now attain comparable performance on inexpensive distributed memory clusters made up of Intel® architecture–based servers running Linux,” he said.
In addition to CFD software, CD-adapco provides consulting services to customers, helping them architect their clusters for optimal cost and performance. “Anything we can do to speed up the underlying infrastructure makes our software more attractive and that, of course, can boost revenue for our company,” Feldman added.
CD-adapco kicks the QLogic tires
As part of their evaluation of new hardware coming onto the market, Feldman and his colleagues were looking for ways to increase the speed of inter-node links. Servers had become so fast that the communications links were limiting cluster performance. Estimates by the team showed that new InfiniPath QLE7140 PCI Express cards from QLogic had the potential to significantly increase the performance of a cluster. By achieving 10X-MR, InfiniPath delivers over 10 times more messages per second than any other cluster interconnect.
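The case study does not include the measurement code behind the message-rate claim, but message rate is conventionally measured with a windowed MPI microbenchmark: one rank streams batches of tiny non-blocking sends to a peer, and the number of completed messages is divided by the elapsed time. The C sketch below shows the idea; it is illustrative only, and the WINDOW and WINDOWS sizes are arbitrary assumptions, not QLogic parameters.

/* Minimal MPI message-rate sketch; run with exactly two ranks, e.g.:
 *   mpicc -O2 msgrate.c -o msgrate && mpirun -np 2 ./msgrate
 * (file name and counts are illustrative assumptions)
 */
#include <mpi.h>
#include <stdio.h>

#define WINDOW  64       /* tiny messages in flight per batch */
#define WINDOWS 10000    /* number of batches to time */

int main(int argc, char **argv)
{
    int rank;
    char buf[WINDOW];            /* one byte per message */
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int w = 0; w < WINDOWS; w++) {
        if (rank == 0) {
            /* Stream a window of 1-byte non-blocking sends. */
            for (int i = 0; i < WINDOW; i++)
                MPI_Isend(&buf[i], 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req[i]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            /* One ack per window keeps the two ranks in step. */
            MPI_Recv(buf, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            for (int i = 0; i < WINDOW; i++)
                MPI_Irecv(&buf[i], 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req[i]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("message rate: %.2f million messages/s\n",
               (double)WINDOW * WINDOWS / elapsed / 1e6);

    MPI_Finalize();
    return 0;
}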
“QLogic InfiniPath adapters allow compute-intensive applications to achieve high performance on affordable server clusters.”
— Steven Feldman, director of product development and IT, CD-adapco
Increasing cluster throughput would allow customers to run more detailed models or simply get more modeling runs completed in less time. The result, said Feldman, could be very significant for CD-adapco customers, because some simulations take weeks to complete. The QLogic interconnects could cut this time down to days. Not only would this allow projects to be completed faster; more simulations could also be performed simultaneously, increasing the capabilities of the STAR-CD application in clustered environments.
To verify their performance expectations and create sizing data for customers to use, Feldman’s team configured a test cluster using the new QLogic InfiniBand adapters connected through a SilverStorm 9080 multiprotocol fabric director. A 30-node test cluster was built using IBM e326 servers running Linux. Each node has two dual-core 64-bit 2.4 GHz AMD Opteron processors, each with 2 GB of memory per core. A 6 TB RAID storage system is connected to all the nodes in the cluster through a separate GigE network.
Looking under the hood, QLogic InfiniPath adapters rev up the test cluster
To measure performance, the team ran an aerodynamics automobile model on increasingly large sets of compute nodes. “The speed of the analysis continued to increase, indicating excellent scalability beyond what has been seen in other clusters,” Feldman said. “As we add nodes to clusters using other interconnects, the inter-node links can become bottlenecks. But with the performance of the QLogic interconnects, our experience shows that cluster performance can scale linearly as we add nodes.”
Feldman attributes the improved cluster scalability to the higher bandwidth of the QLogic interconnects combined with their extremely low ping-pong and random ring latency. “Ping-pong latency, which measures communication between two nodes, was 1.6 microseconds lower than that of other InfiniBand adapters. As for random ring communication, which is measured between many nodes in a cluster, the latency was an extremely low 1.29 microseconds.”
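The latency figures Feldman cites correspond to standard microbenchmarks: ping-pong latency is typically measured by bouncing a small message between two MPI ranks and halving the round-trip time, as in the minimal C sketch below. This is illustrative only, not CD-adapco's test code, and the warm-up and iteration counts are arbitrary assumptions.

/* Minimal MPI ping-pong latency sketch; run with exactly two ranks, e.g.:
 *   mpicc -O2 pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 * (file name and counts are illustrative assumptions)
 */
#include <mpi.h>
#include <stdio.h>

#define WARMUP 100
#define ITERS  10000

int main(int argc, char **argv)
{
    int rank, peer;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;    /* assumes exactly two ranks */

    /* Warm-up plus timed loop; each iteration is one round trip,
     * so one-way latency = elapsed / (2 * ITERS). */
    double t0 = 0.0;
    for (int i = -WARMUP; i < ITERS; i++) {
        if (i == 0) {               /* start timing after warm-up */
            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();
        }
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("one-way latency: %.2f us\n", elapsed / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}

Random ring latency extends the same idea across many ranks at once, with each node exchanging small messages with randomly chosen ring neighbors, as in the HPC Challenge benchmark suite.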
The new cluster zooms off to create new opportunities
According to Feldman, the performance of the QLogic-based cluster approaches supercomputing levels at a very attractive price point. “With QLogic InfiniPath adapters, our applications become accessible to more companies because we have dramatically reduced the cost of the compute platform. That creates new opportunities for our company,” he said. “Whether our customers use our CFD software to design the fastest race cars in the world or simply need the fastest modeling capabilities available for other products, using QLogic adapters is critical to getting precise results with low capital equipment costs.”
As seen in this drawing, QLogic InfiniPath QLE7140 PCI Express adapters interconnect through a SilverStorm 9080 multiprotocol fabric director (now part of the QLogic product family) to 30 IBM® e326 compute nodes in two racks of an IBM System Cluster 1350 running Linux. Each node has two dual-core 64-bit 2.4 GHz AMD Opteron processors, each with 2 GB of memory per core. A 6 TB RAID storage system is connected to the cluster over a dedicated GigE network.
Corporate Headquarters QLogic Corporation 26650 Aliso Viejo Parkway Aliso Viejo, CA 92656 949.389.6000 www.qlogic.com
Europe Headquarters QLogic (UK) LTD. Surrey Technology Centre 40 Occam Road Guildford Surrey GU2 7YG UK +44 (0)1483 295825
©2006 QLogic Corporation. All rights reserved. QLogic, the QLogic logo, the Powered by QLogic logo, and InfiniPath are registered trademarks or trademarks of QLogic Corporation. All other brands and product names are trademarks or registered trademarks of their respective owners. Information supplied by QLogic is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this brochure. QLogic Corporation reserves the right, without notice, to make changes in product design or specifications.
SN0130926-00 Rev A 11/06