The 8Gb Fibre Channel Adapter of Choice
in Microsoft Exchange Environments
QLogic Offers Best Performance and Scalability for Microsoft
Exchange Workloads
Key Findings
The QLogic® FabricCache™ 10000 Series 8Gb Fibre Channel Adapter enhances transaction and latency performance for
Microsoft Exchange Server® by optimizing the random, read-intensive workloads inherent in Exchange and e-mail applications.
The QLogic 10000 Series FabricCache technology is implemented as a PCIe®-based intelligent I/O device that provides integrated Fibre
Channel storage network connectivity, flash memory caching, and embedded processing. The 10000 Series technology makes caching
I/O data from Exchange transparent to the host.
QLogic’s caching technology* in the QLogic 10000 Series Fibre Channel Adapter enhances Microsoft Exchange with the following:
• Increased transactional IOPS by up to 400 percent
• Decreased Exchange I/O read latency by up to 80 percent
• Overall improved performance in the Exchange environment
The QLogic 2500 Series 8Gb Fibre Channel Adapter offers better throughput and IOPS compared to the Emulex® 8Gb
LightPulse™ Adapter, and in turn supports the growing demands of Microsoft Exchange e-mail volumes and increasing workloads.
• Driving Microsoft Exchange-like I/O loads to saturate the adapters, the QLogic 2500 Series 8Gb Fibre Channel Adapter delivered 133 percent better IOPS while sustaining more than 279 MBps of throughput versus the Emulex 8Gb LightPulse Adapter.
• The Emulex LPe12000 failed to keep up as Exchange workloads increased past 16 outstanding I/Os, while the QLogic adapter continued to scale, achieving more than 35 percent better MBps than Emulex with 64 outstanding I/Os.
*QLogic patent-pending caching technology.
Executive Summary
E-mail has grown to be one of the most important communication
and collaboration applications for worldwide businesses. According
to a report from The Radicati Group,1 the Microsoft Exchange Server installed base is expected to reach 470 million mailboxes by 2014, with an average annual growth rate of 12 percent. With
this expanding penetration of Exchange, it is imperative that the
user experience remains unaffected by increasing Exchange server
workloads. Exchange IOPS and latency relate directly to the usability
of Exchange and its associated end-user e-mail systems, as well
as the number of mailboxes that can be effectively supported on a
server. Today, Exchange is mission critical, demanding the highest
levels of performance from servers and their associated SAN storage
infrastructure. The introduction of multicore CPUs coupled with virtualization technologies provides the resources to meet these demanding requirements for Exchange servers, but compounds the
demand for high-performance, low-latency, scalable I/O connectivity
between servers and SAN storage.
It is imperative to deploy a high-performance, scalable server
adapter technology to meet this ever-growing requirement for I/O.
1. The Radicati Group, Inc., Microsoft Exchange Server and Outlook Analysis, 2010–2014.
The QLogic FabricCache 10000 Series 8Gb Fibre Channel Adapter,
featuring server-based caching operation, and the QLogic 2500 Series
8Gb Fibre Channel Adapter, featuring standard Host Bus Adapter
operation, provide high-performance, scalable solutions for Exchange
server environments.
To help IT decision makers determine the best, most informed adapter
choice, QLogic has developed a series of benchmarks showing the
performance and scalability of QLogic FabricCache 10000 Series 8Gb
Fibre Channel Adapters. Benchmarks also show the performance and
scalability advantages of the QLogic 2500 Series 8Gb Fibre Channel
Adapter over the Emulex 8Gb LightPulse Adapter in a variety of real-world scenarios using SAN-attached Exchange environments.
QLogic compared the overall performance of the 8Gb adapters when
configured to emulate a real-world SAN with maximum throughput
workloads. These tests highlight the advantage delivered by the
QLogic 10000 Series Adapters and QLogic 2500 Series Adapters in
Exchange environments.
Introduction
This paper provides performance testing information for two specific
QLogic solutions for Exchange environments:
• FabricCache Fibre Channel Adapters featuring transparent, server-based caching
• Standard Fibre Channel Adapters featuring high performance and low power requirements for SAN attachment
For FabricCache Adapters, QLogic used Microsoft’s Jetstress workload tool
to test end-to-end server, network, and storage performance operating in
an infrastructure that includes a Fibre Channel attached storage array with
standard hard disk drives as the target, and the QLogic FabricCache 10000
Series 8Gb Fibre Channel Adapter as the device under test (DUT). QLogic
chose this configuration to characterize the operational performance of the
FabricCache Adapter in a real-world environment, where an existing legacy
SAN would typically be present. The results provide baseline performance
data for the configured environment without caching enabled, and with
various levels of caching enabled to provide a performance comparison.
Microsoft provides the Jetstress 2010 tool for simulating Microsoft
Exchange Server 2010 database I/O load and for validating hardware
deployments against design targets.
The test objectives were selected to provide the best understanding of application performance under various user loads, using key indicators such as transaction latency and IOPS.
For the testing performed on the standard Fibre Channel Adapters, QLogic
used the industry-standard IOmeter workload tool and a solid-state storage
system as the target to facilitate a maximum performance environment for
the QLogic and Emulex Fibre Channel Adapters as the DUT.
Server-Side Storage Acceleration
Storage I/O is the primary performance bottleneck for data-intensive
applications such as Microsoft Exchange Server. Processor and memory
performance have grown in step with Moore's Law, getting faster and smaller, while storage performance has lagged far behind. Over the
past two decades, delivering performance from data storage has become
the bane of application administrators, and recent challenges are poised
to severely constrain the capabilities of the infrastructure in the face of
increasing virtualization, consolidation, and user loads.
A new class of server-side storage acceleration is the latest innovation in
the market addressing this performance disparity. The idea is simple: fast,
reliable, solid-state flash memory connected to the server brings faster
data access to the Exchange server's CPU. Flash memory is widely available and performs much faster than any rotational disk
under typical, small, highly random, enterprise I/O workloads.
The 10000 Series Adapter from QLogic provides I/O acceleration for storage traffic to and from Exchange servers. The 10000 Series Adapter is a PCIe-based I/O device that provides the integrated storage network (Fibre Channel) connectivity, I/O caching technology, integrated flash memory, and embedded processing required to make management and caching tasks transparent to the host server. The QLogic solution delivers the application performance acceleration of a transparent, server-based cache without the limitations of solutions that require separate, server-based storage management software and operating system (OS) filter drivers. As a shared caching resource, QLogic 10000 Series Adapters extend server-based caching to virtualized and clustered environments, which until now have been precluded by the single-server, "captive" nature of existing server-based caching solutions.
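To illustrate the principle (not QLogic's actual implementation, which runs in adapter hardware and firmware), the following minimal Python sketch models a transparent LRU read cache in front of a slow backing store; repeated reads to a hot working set are served from the cache instead of making the round trip to the array:

```python
from collections import OrderedDict
import random

class TransparentReadCache:
    """Toy LRU read cache standing between an application and a slow
    backing store. Illustrative only: the 10000 Series implements caching
    in adapter hardware/firmware, transparent to the host OS."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store        # dict-like: block number -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # LRU order: oldest entries first
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # refresh recency on a hit
            self.hits += 1
            return self.cache[block]        # fast path: served from cache
        self.misses += 1
        data = self.backing[block]          # slow path: round trip to the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

# Random re-reads over a small working set mostly hit the cache.
store = {blk: b"x" for blk in range(1000)}
cached = TransparentReadCache(store, capacity_blocks=500)
for _ in range(10_000):
    cached.read(random.randrange(1000))
print(f"hit rate: {cached.hits / (cached.hits + cached.misses):.0%}")
```

As the cache capacity approaches the size of the working set, the hit rate approaches 100 percent, which mirrors the 100 percent cache-level case in the Jetstress results that follow.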
A maximized transactional I/O rate and a minimized I/O latency indicate a peak-performing Exchange system. By optimizing these performance values, the platform can
absorb additional users, thereby reducing server, management, and
overall infrastructure costs. The key benefits of the QLogic 10000 Series
Adapter identified in this paper are improved performance and reduced
overall costs.
Test Setup
The test setup was configured with an Intel® Xeon® server connected
to HP® Enterprise Virtual Array (EVA) storage through the QLogic 10000
Series Adapter and the QLogic 5800V/5802V Fibre Channel Switch.
Jetstress runs were executed with cache levels varying in size from zero to
100 percent of the actual Exchange database size. See Appendix B for actual
configuration information.
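As a point of reference, the tested cache levels translate into the following absolute sizes, assuming the 200GB database LUN listed in Appendix B; a trivial sketch:

```python
db_size_gb = 200  # database LUN size, per Appendix B

# Tested cache levels, expressed as a percentage of the database size.
for pct in (0, 25, 30, 50, 75, 100):
    print(f"{pct:3d}% cache level = {db_size_gb * pct / 100:.0f} GB of adapter cache")
```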
Figure 1 shows the basic setup for the Exchange test.
Figure 1. Exchange Test Setup
Exchange Performance Results Using Jetstress
The tests demonstrate that, at various levels of LUN caching in the 10000 Series Adapter, caching delivers as much as a 400 percent increase in IOPS and an 80 percent reduction in I/O latency. These
performance improvements are directly related to caching data closer to
the Exchange server processor, which eliminates Fibre Channel transit time
over the SAN. Table 1 shows these results.
Table 1. Exchange Performance Test Results

Criteria                         No Cache   25%      30%      50%      75%      100%
Exchange I/O Read Latency (ms)   4.724      2.648    2.484    2.021    1.558    0.922
Exchange Transactions (IOPS)     1,056      1,819    2,017    2,277    3,154    5,270
Latency Reduction                0%         (44%)    (47%)    (57%)    (67%)    (80%)
IOPS Increase                    0%         72%      91%      116%     199%     399%
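For clarity, the Latency Reduction and IOPS Increase rows in Table 1 are derived from the raw measurements in the first two rows; the short sketch below reproduces the arithmetic:

```python
baseline_lat, baseline_iops = 4.724, 1056  # "No Cache" measurements
results = {                                # cache level -> (latency ms, IOPS)
    "25%": (2.648, 1819), "30%": (2.484, 2017), "50%": (2.021, 2277),
    "75%": (1.558, 3154), "100%": (0.922, 5270),
}
for level, (lat, iops) in results.items():
    lat_reduction = (1 - lat / baseline_lat) * 100
    iops_increase = (iops / baseline_iops - 1) * 100
    print(f"{level}: latency -{lat_reduction:.0f}%, IOPS +{iops_increase:.0f}%")
```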
Figure 2 shows the total Exchange transactional IOPS as measured by the
Jetstress test. With no cache enabled in the 10000 Series Adapter, Jetstress
measured 1,056 transactional IOPS. In this case, all I/O traffic makes the
round trip from the server to the SAN storage array. At 100 percent cache,
the cache size is equal to the database size in this test. The measured IOPS
of 5,270 has increased by nearly 400 percent.
Figure 2. Exchange Transactional IOPS Measured by Jetstress at Varying Cache Levels (y-axis: IOPS; x-axis: Cache Size as Percentage of LUN Cached)
Figure 3 shows the Exchange I/O read latency as measured by Jetstress.
With no cache enabled in the 10000 Series Adapter, Jetstress measured the latency at
4.724ms. As with transactional IOPS, all I/O traffic makes the round trip
from the server to the SAN storage array. At 100 percent cache, the latency
fell to 0.922ms, an 80 percent reduction in this test.
Figure 3. Exchange I/O Read Latency Measured by Jetstress at Varying Cache Levels (y-axis: Latency in msec; x-axis: Cache Size as Percentage of LUN Cached)
QLogic and Emulex 8Gb Standard Fibre Channel Adapter Comparison: Exchange Server Benchmark Using IOmeter Test
The IOmeter tool was used to benchmark the QLogic 2500 Series 8Gb Fibre Channel Adapter against the Emulex 8Gb LightPulse Adapter in an Exchange environment with minimum subsystem latency.
The IOmeter test setup (Figure 4) consisted of the latest 8Gb adapters from QLogic and Emulex, running current, commercially available drivers, installed in a 2.93GHz Intel Nehalem quad-core (dual-socket) server running the Windows Server® 2008 R2 OS. The Intel Nehalem server was connected to a Texas Memory Systems RamSan®-325 (with 32GB total capacity) through a QLogic 5000 Series Stackable Fibre Channel Switch. One initiator and four target RamSan ports were configured in a zone on the QLogic 5000 Series Stackable Fibre Channel Switch. Eight NTFS-formatted LUNs were created on the RamSan: four for the database and four for log files.
Figure 4. IOmeter Test Setup
When disk latency is reduced (that is, disk I/O is striped across an increased number of spindles), Microsoft Exchange can drive a greater I/O load (supporting an increased number of users). Using the RamSan-325 and the IOmeter load-generation tool allowed both of these conditions to be emulated, as illustrated in Figure 4.
Using a solid-state disk removes the latency introduced by slower spinning disk drives and provides a performance benchmark expected from next-generation storage arrays. The tests were performed using the latest commercially available hardware, software, and drivers. All measurements were made with the default settings of the adapters from both companies.
Test Procedure
Engineers in the QLogic Solutions lab performed tests on the configuration
described in Figure 4 as follows:
1. Installed a QLogic 8Gb single-channel PCI Express® to Fibre Channel Adapter (QLE2560) on the test server using the appropriate miniport driver.
2. Created workers separately for 8K random read/write to simulate database access and 4K random reads for log file operations (four workers for 8K I/Os at 35 percent read and 65 percent write, and one worker for 4K log file reads to maintain IOPS at a 4:1 database-to-log ratio similar to Exchange 2007). Created eight LUNs and mapped them to four target ports.
3. Ran tests for one minute, repeated them several times for integrity, and then averaged the results.
4. Repeated these steps for the Emulex 8Gb single-channel PCI Express to Fibre Channel Adapter (LPe12000).
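The step-2 I/O mix can be approximated in a few lines of code. The following is a rough Python sketch of that access pattern (8K random database I/O at a 35/65 read/write split, plus 4K random log reads at a 4:1 database-to-log ratio); the in-memory buffers and iteration count are illustrative stand-ins for the benchmark's LUNs, not the IOmeter configuration itself:

```python
import random

DB_BLOCK, LOG_BLOCK = 8192, 4096     # 8K database I/Os, 4K log I/Os
db = bytearray(64 * 1024 * 1024)     # stand-in for the database LUNs
log = bytearray(16 * 1024 * 1024)    # stand-in for the log LUNs (not 200GB)

def one_io():
    """Issue one I/O following the step-2 mix: 4 database I/Os per log I/O,
    with database I/Os split 35% read / 65% write and log I/Os as 4K reads."""
    if random.random() < 0.8:                          # 4:1 database-to-log ratio
        off = random.randrange(0, len(db) - DB_BLOCK)  # random 8K-aligned region
        if random.random() < 0.35:
            _ = db[off:off + DB_BLOCK]                 # 8K random read
        else:
            db[off:off + DB_BLOCK] = bytes(DB_BLOCK)   # 8K random write
    else:
        off = random.randrange(0, len(log) - LOG_BLOCK)
        _ = log[off:off + LOG_BLOCK]                   # 4K random log read

for _ in range(100_000):
    one_io()
```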
Test Results
The tests demonstrate that, as the number of users increased and more LUNs were provisioned, the performance gains of the QLogic 8Gb Fibre Channel Adapter multiplied, showing a multifold scalability improvement over the Emulex 8Gb Fibre Channel Adapter.
Fibre Channel Adapter Scalability in Modeling the
Microsoft Exchange I/O Load
The RamSan-325 provides very low latencies for I/O completions, while the IOmeter tool drives large I/O loads through the Fibre Channel Adapter, satisfying both conditions for this test. IOmeter
replicated the Microsoft Exchange I/O workload against the RamSan-325
and found that the IOPS performance of a QLogic Fibre Channel Adapter
with a minimum workload of two outstanding I/Os was 57 percent better
than Emulex’s Fibre Channel Adapter (Figure 5). Similarly, the QLogic
adapter scaled better, achieving 35 percent more throughput than the
Emulex Fibre Channel Adapter (Figure 6).
Figure 5. IOPS Comparison: QLE2560 vs. Emulex LPe12000, Light Workload (Two Outstanding I/Os); y-axis: IOPS (transactions/sec)

Figure 6. Throughput Comparison: QLE2560 vs. Emulex LPe12000, Heavy Workload (64 Outstanding I/Os); y-axis: MBps (throughput)
Fibre Channel Adapter Scalability with Outstanding
I/O Workloads
Six tests were performed to assess Microsoft Exchange workload scalability. Each test increased the I/O workload: test one used a workload of two outstanding I/Os, with increases through test six, which used 64 outstanding I/Os.
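The behavior these tests probe follows from Little's Law: sustained IOPS is approximately the number of outstanding I/Os divided by the average per-I/O latency. The sketch below, using hypothetical latency figures rather than measured values, shows why an adapter that holds latency flat as queue depth grows keeps scaling, while one whose latency inflates past 16 outstanding I/Os plateaus:

```python
def iops(outstanding_ios, avg_latency_s):
    """Little's Law: throughput = concurrency / per-request latency."""
    return outstanding_ios / avg_latency_s

# Hypothetical latencies (seconds); illustrative only, not measured values.
for qd in (2, 4, 8, 16, 32, 64):
    flat = iops(qd, 0.0005)                         # latency held at ~0.5 ms
    degrading = iops(qd, 0.0005 * max(1, qd / 16))  # latency grows past QD 16
    print(f"QD {qd:2d}: flat-latency {flat:8.0f} IOPS, degrading {degrading:8.0f} IOPS")
```

In the degrading case, IOPS stops improving beyond 16 outstanding I/Os because each added I/O is offset by proportionally higher latency, which is the shape of the Emulex results described below.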
The performance benefit of the QLogic 2500 Series 8Gb Fibre Channel
Adapter is illustrated in Figure 7. When the outstanding I/O workloads were
increased, the QLogic 8Gb adapter scaled to meet the increased workloads.
In contrast, the Emulex 8Gb Fibre Channel Adapter could not sustain the
performance for the larger workloads.
Figure 7. Microsoft Exchange Workload Scalability
These tests show that the QLogic 8Gb adapter's IOPS and performance scaled as the workloads increased. By test four, the Emulex adapter had reached its maximum performance. However, in
tests five and six, which provided 32 and 64 outstanding I/O workloads
respectively, the Emulex adapter’s performance dropped significantly. In
test six, at 64 outstanding I/Os, QLogic displayed a 35 percent performance
advantage over the Emulex 8Gb adapter. The QLogic 2500 Series 8Gb
Fibre Channel Adapter continues to provide scalability and performance as
demonstrated in the aforementioned benchmarks. IOmeter was configured
to place the maximum possible load on the QLogic 8Gb and Emulex 8Gb
adapters. As customers purchase and integrate 8Gb adapter technology
into 8Gb SANs and infrastructure, they expect optimum throughput and
IOPS, as demonstrated by the QLogic 2500 Series 8Gb Adapter.
Summary and Conclusion
QLogic continues to be the industry leader in delivering high-performance
I/O solutions to data center customers.
QLogic FabricCache 10000 Series 8Gb Fibre Channel Adapter
The IOPS and latency of the QLogic FabricCache 10000 Series 8Gb Fibre
Channel Adapter are best in class and provide an unprecedented level of performance, superior scalability, and enhanced reliability that
exceeds the requirements for next-generation Exchange environments.
The unique caching solution of the 10000 Series Adapter delivers the increased performance needed to meet the escalating requirements of Microsoft
Exchange Server. The results of the benchmark tests demonstrate the
I/O performance and scalability advantages of the QLogic 10000 Series
Adapter for Exchange server performance, which results in overall reduced
total cost of ownership (TCO).
Because Microsoft Exchange is one of the most prevalent, business-critical
applications in enterprises, administrators go to great lengths to ensure
optimal performance. Administrators consider scalability and performance
as major factors when they work to improve Exchange server operation.
The QLogic 10000 Series Adapter provides superior scalability and
performance for Exchange servers, which makes it the right choice for
enterprise data centers.
Exchange environments are carefully designed to cater to varying and
unpredictable loads. As heavier demands occur in the data center, the SAN
must contain the resources to meet the minimum business requirements
for higher performance. QLogic 10000 Series Adapters support the required
performance scalability—400 percent increased IOPS and 80 percent
reduced latency—which makes a positive impact on an organization’s
bottom line. The QLogic 10000 Series 8Gb Fibre Channel Adapter with
built-in I/O caching is designed to meet the ever-increasing demands being
placed on Microsoft Exchange environments.
QLogic 2500 Series 8Gb Fibre Channel Adapter
The IOPS and throughput of the QLogic 2500 Series 8Gb Fibre Channel
Adapter are best in class and provide an unprecedented level of high
performance, superior scalability, and enhanced reliability that exceeds the
requirements for next-generation data centers. As existing infrastructures
transition to 8Gb and beyond, increasing performance is necessary to meet
the escalating requirements of Microsoft Exchange Server. The results
of the benchmark tests demonstrate the I/O performance and scalability
advantages of the QLogic 2500 Series 8Gb Fibre Channel Adapter over
the Emulex 8Gb Fibre Channel LightPulse Adapter. Two of the key reasons
why the QLogic 2500 Series is a great fit for environments with Microsoft
Exchange are the following:
• Microsoft Exchange is the number-one business-critical application in small, medium, and large enterprises. Administrators go to great lengths to ensure delivery on guaranteed uptimes. Scalability and performance are major factors when choosing a Fibre Channel Adapter. The QLogic 2500 Series 8Gb Fibre Channel Adapter has proven its scalability and performance superiority, making it the right choice for Exchange administrators.
• Exchange environments are carefully designed to cater to varying and unpredictable loads. As heavier demands occur in the data center, the SAN must meet minimum business requirements for higher performance. QLogic 2500 Series Adapters support the scalability in performance required with increasing users and user mailbox sizes, which will make a positive impact on an organization's bottom line.
The QLogic 2500 Series 8Gb Fibre Channel Adapter scales to wire speed
as the quantity and size of I/Os increase. This efficiency is accentuated by
QLogic’s use of proprietary RISC processors in its ASIC, which is custom
built for high throughput, low power, and low latency operation.
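For context, an 8Gb Fibre Channel link carries roughly 800 MBps of payload per direction (8GFC runs at 8.5 Gbaud with 8b/10b encoding). The quick sketch below estimates the IOPS needed to reach wire speed at several I/O sizes; these are theoretical line-rate figures, not measured results:

```python
LINK_MBPS = 800  # approximate 8Gb FC payload bandwidth per direction

# IOPS required to saturate the link at each I/O size.
for io_kb in (4, 8, 64, 256):
    iops_at_wire_speed = LINK_MBPS * 1024 // io_kb
    print(f"{io_kb:3d}KB I/Os: ~{iops_at_wire_speed:,} IOPS to reach wire speed")
```

The smaller the I/O, the more IOPS the adapter must sustain to fill the link, which is why small-block random workloads such as Exchange stress adapter efficiency rather than raw bandwidth.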
IT managers know that choosing the right adapter for their SAN
infrastructures can have a strong impact on the performance, agility, and
scalability of their enterprise storage environments. The results of this
study demonstrate that QLogic 2500 Series 8Gb Fibre Channel Adapters
consistently outperform Emulex LightPulse 12000 Adapters in today's real-world environments and provide significant headroom for future scalability.
QLogic continues to meet the following growing SAN infrastructure
demands that discerning IT managers must support:
• Increasing use of server consolidation through the deployment of server virtualization technologies such as those from AMD®, HP, Dell®, IBM®, Intel, Microsoft®, Oracle®, VMware®, and XenSource™
• An expected 470 million mailboxes by 2014, an annual growth rate of
12 percent
• Business continuity and compliance requirements that drive the
deployment of the following:
– Backup, remote replication, and disaster recovery technologies
– Archiving, indexing, and search technologies
A high-performance and scalable Fibre Channel Adapter, such as the
QLogic 2500 Series 8Gb Fibre Channel Adapter, will handle the load
efficiently without degrading the overall performance of the Microsoft
Exchange Server. Storage trends demand higher utilization of bandwidth
and throughput from Fibre Channel Adapters, and the QLogic 2500 Series
8Gb Fibre Channel Adapters already deliver on this promise.
Related Video: QLogic Adapters in Microsoft Exchange
Server Environment
In the online version of this paper, click the image to watch the YouTube video.
Appendix A

Server Configuration
  Intel Server
    Processor Type and Speed:    Intel Nehalem Quad Processor (dual socket), 8 cores total
    Memory:                      24GB RAM
    OS Type:                     Windows Server 2008 R2

Fibre Channel Adapter Hardware
  Fibre Channel Adapters
    QLogic:                      QLE2560
    Emulex:                      LPe12000

Fibre Channel Adapter Configuration
  QLogic and Emulex Fibre Channel Adapters
    QLogic Driver:               STOR miniport
    QLogic Driver Version:       9.1.8.25
    Emulex Driver:               STOR miniport
    Emulex Driver Version:       7.2.32.002

External Storage Configuration
  Storage
    Solid State Disk:            RamSan-325 solid state storage
    LUNs:                        8 LUNs, 4 target ports

Fibre Channel Switch Hardware
  Switch
    8Gb Switch:                  QLogic 5000 Series Stackable Fibre Channel Switch
Appendix B

FabricCache Adapter Test Configuration

Server
  Host System:                   Dell PowerEdge® R720
  Processor:                     Intel Xeon CPU E5-2640 0 @ 2.50GHz, 24 processors
  RAM:                           32GB
  Host OS:                       Windows 2008 R2
  Fibre Channel Adapter:         QLogic FabricCache 10000 Series 8Gb Fibre Channel Adapter

Storage Array
  Targets:                       HP EVA 6300, 1 target port, 2 target controllers
  Drive Speed:                   10K
  Quantity of Drives on Array:   24
  Size of LUN:                   100GB × 7
  RAID Type:                     RAID 5

Switch
  Fabric Switch:                 QLogic 5800V 8Gbps Fibre Channel Switch

Test Configuration (Dependent on Test Tools)
  I/O Load Generator:            Microsoft Jetstress
  Performance Tuning:            Read_write_through cache enabled on SAN LUN (0–100%)
  Database LUN Size:             200GB
  Cache Sizes (%):               0, 25, 30, 50, 75, 100
  Cache Type:                    Read Write Through
  Other Settings:                Quantity of Mailboxes = 1,000; Mailbox Size = 200MB; IOPS per Mailbox = 1
Disclaimer
Reasonable efforts have been made to ensure the validity and accuracy of these performance tests. QLogic Corporation is not liable for any error
in this published white paper or the results thereof. Variation in results may be a result of change in configuration or in the environment. QLogic
specifically disclaims any warranty, expressed or implied, relating to the test results and their accuracy, analysis, completeness or quality.