
White Paper

Accelerating Microsoft SQL Server Beyond Large Server Memory

The QLogic 10000 Series Adapter Provides Greater Benefits Than DRAM Caching

The QLogic FabricCache™ 10000 Series Fibre Channel Adapter addresses the shortcomings of server-based DRAM caching and accelerates the performance of SQL databases to new levels.

EXECUTIVE SUMMARY

With the volume of data worldwide expected to grow by 50 percent each month, companies face new challenges as they work to get the highest performance from their database resources while increasing efficiency wherever possible. With DRAM prices dropping, a tempting solution is to use large amounts of server DRAM as a cache for Microsoft® SQL Server® database acceleration. While this method can deliver positive results, the approach is not without burdens and limitations. Among them: DRAM cache is “captive” to the individual server it is housed in, so it cannot be shared with clustered server groups, and DRAM cache becomes cost-prohibitive for larger databases when specialized servers or ultra-high-density DRAM modules must be employed. Taken together, these drawbacks mean that DRAM caching is best suited to smaller databases on single-node servers where database size and high-availability requirements are not expected to grow significantly.

The QLogic® FabricCache™ 10000 Series Adapter delivers an optimized solution that addresses enterprise application acceleration challenges head on. It transparently combines enterprise server I/O connectivity with industry-standard, flash-based storage to enable sharable, server-based caching. This shared caching architecture provides dramatic and scalable performance improvements for today’s virtualized and distributed workloads, which span the data center and run complex, business-critical applications such as online transaction processing (OLTP) and online analytical processing (OLAP) on Microsoft SQL Server databases. The QLogic 10000 Series Adapter takes a distinctive approach to server-based caching and addresses the problems of using large amounts of server DRAM to cache SQL databases.

KEY FINDINGS

Acceleration of Microsoft SQL Server databases is driven by the performance capability of the storage solution. While server-based Dynamic Random Access Memory (DRAM) caching can accelerate SQL performance, it has several limitations. The QLogic FabricCache 10000 Series Fibre Channel Adapter addresses the shortcomings of server-based DRAM caching and accelerates the performance of SQL databases to new levels. By integrating Fibre Channel storage network connectivity, sharable flash caching, and full hardware offload, the QLogic FabricCache 10000 Series Adapter makes caching I/O data from SQL entirely transparent to the host, improves scalability, and reduces complexity.

INTRODUCTION

Storage I/O limitations are the primary performance bottleneck for enterprise database applications like Microsoft SQL Server. Recent advancements in multi-core CPU technology have only served to widen the gap between processor and storage performance. SQL database administrators have long searched for ways to accelerate and scale performance as the size of the database and the number of users grow. Caching has evolved into the most effective method of accelerating SQL performance, and there are several ways to implement it. This paper compares two of those methods: server DRAM caching and sharable, server-based flash caching as supported by the QLogic FabricCache 10000 Series Adapter. The information provided includes guidance on when it is appropriate to use DRAM caching and when it is advantageous to use the FabricCache Adapter to accelerate Microsoft SQL Server application performance.


DRAM CACHING

Like most database applications, SQL Server was designed at a time when server main memory (DRAM) was very expensive. DRAM pricing has since dropped, especially over the last few years. With tremendous advances in multi-core server computing and an ever-widening gap between CPU and storage I/O capabilities, a case could be made that now is the time to cache databases with server DRAM. But while the low-latency benefits of DRAM caching can yield desirable database performance improvements, the approach has its limitations. Some of the drawbacks of DRAM caching include the following:

DRAM cache is “captive” to the individual server it is housed in; thus, it is not sharable with clustered servers. While a DRAM cache can be very effective at improving the performance of individual servers, providing storage acceleration across clustered server environments or virtualized infrastructures that use multiple physical servers is outside a DRAM cache’s capability. This limits the performance benefits of DRAM caching to a relatively small set of single-server SQL situations.

DRAM cache becomes cost-prohibitive for larger databases when specialized servers or ultra-high-density DRAM modules must be employed. Tier-1 server vendors typically offer servers with a base configuration of 12 DRAM DIMM sockets and charge a premium for 24-socket servers. DRAM modules also show an increasing price per gigabyte (GB) as bit density increases.
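As a rough illustration of how socket count drives module density (and therefore cost), the sketch below computes the smallest common DIMM size that can satisfy a target cache capacity. The socket counts come from the paragraph above; the capacity target and DIMM sizes are hypothetical examples, not figures from this paper.

```python
# Hypothetical sizing sketch: smallest common DIMM density needed to reach a
# target DRAM cache capacity with a given number of DIMM sockets.
STANDARD_DIMM_GB = [16, 32, 64, 128, 256]  # common module densities

def min_dimm_density(target_cache_gb, dimm_sockets):
    """Return the smallest standard DIMM size (GB) that meets the target, or None."""
    for density in STANDARD_DIMM_GB:
        if density * dimm_sockets >= target_cache_gb:
            return density
    return None

# Example: a 1.5 TB working set on a base 12-socket server vs. a premium 24-socket server.
for sockets in (12, 24):
    print(f"{sockets} sockets -> {min_dimm_density(1536, sockets)} GB DIMMs required")
# 12 sockets -> 128 GB DIMMs required
# 24 sockets -> 64 GB DIMMs required
```

The base 12-socket configuration forces the jump to higher-density modules, which is where the price per gigabyte climbs fastest.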

Best-practice recommendations from Microsoft call for local swap files and crash-recovery dump files equal to 2.5 times the physical DRAM capacity. The swap and dump files become so large that they create an enormous amount of disk fragmentation, and it can take additional free space and many hours of defragmentation to repair the damage done by these massive files, yet maintenance windows are already tight. More DRAM leads to larger swap and dump files, which in turn lead to longer maintenance cycles. The swap and dump files also create additional virus-scanning overhead, incurring more contention and latency.
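To see how quickly this local capacity requirement grows, the short calculation below applies the 2.5x guideline cited above to a few server DRAM sizes; the DRAM configurations are illustrative examples, not figures from this paper.

```python
# Local disk space implied by the 2.5x swap/crash-dump guideline cited above.
SWAP_DUMP_FACTOR = 2.5  # recommended multiple of physical DRAM capacity

for dram_gb in (256, 512, 1024):  # hypothetical server DRAM sizes
    local_disk_gb = dram_gb * SWAP_DUMP_FACTOR
    print(f"{dram_gb} GB DRAM -> {local_disk_gb:.0f} GB of local swap/dump space")
# 256 GB DRAM -> 640 GB of local swap/dump space
# 512 GB DRAM -> 1280 GB of local swap/dump space
# 1024 GB DRAM -> 2560 GB of local swap/dump space
```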

Using DRAM for cache is an all-or-nothing proposition. A DRAM cache is owned and managed by the application, and every transaction competes for cache resources under the processing domain of the server CPU. If the CPU spends significant cycles running caching algorithms, other CPU tasks can suffer. With today’s multi-core processors the impact may be minimal, but with older CPUs it may affect the performance of other server applications or services, or limit the server’s ability to scale virtual machines (VMs).

Most enterprise SQL servers also need access to networked storage to service requests for data not held in the DRAM cache, known as a “cache miss.” To handle a cache miss, an additional Fibre Channel Adapter or similar storage network Host Bus Adapter is required in the server, and it must be managed separately.
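The read-through behavior described here can be sketched in a few lines; the class and function names below are illustrative only and do not correspond to any QLogic or Microsoft API.

```python
# Minimal read-through cache sketch: a hit is served from cache, a miss falls
# through to the networked-storage path and the result is cached for later reads.
class ReadThroughCache:
    def __init__(self, fetch_from_san):
        self._cache = {}                       # block address -> data
        self._fetch_from_san = fetch_from_san  # called only on a cache miss

    def read(self, block_addr):
        if block_addr in self._cache:          # cache hit: no SAN round trip
            return self._cache[block_addr]
        data = self._fetch_from_san(block_addr)  # cache miss: go to networked storage
        self._cache[block_addr] = data         # populate the cache for future hits
        return data

# Usage: the SAN fetch is stubbed out for illustration.
cache = ReadThroughCache(fetch_from_san=lambda addr: f"data@{addr}")
cache.read(0x10)   # miss -> served from the SAN path
cache.read(0x10)   # hit  -> served from the cache
```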

CACHING BENEFITS WITHOUT LIMITATIONS

The QLogic FabricCache 10000 Series Adapter is a new approach to server-based caching, designed to address the drawbacks of DRAM-based and other types of server-based caching. Rather than creating a discrete, captive cache for each server, the QLogic FabricCache 10000 Series Adapter integrates Host Bus Adapter functionality with flash-based cache functionality. The QLogic 10000 Series Adapter features a caching implementation that uses the existing SAN infrastructure to create a shared cache resource distributed over multiple physical servers, as shown in the figure below, “Shared Cache within Application Clusters.” This capability eliminates the limitations of single-server caching and extends the performance benefits of flash-based acceleration to Microsoft SQL Server environments.

[Figure: Shared Cache within Application Clusters — application cluster nodes (1 through 4) connect over an FC SAN; each node owns cached LUNs (Data01 through Data04) that are shared across the cluster and backed by SAN LUNs.]
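The difference between a captive, per-server cache and the shared cache pictured above can be expressed as a lookup order. The sketch below is a conceptual illustration of that idea, not QLogic’s actual cache-coherency protocol, and all names in it are hypothetical.

```python
# Conceptual lookup order for a cluster-shared cache: this node's cache first,
# then peer nodes' cache segments reachable over the SAN, and finally the backing SAN LUN.
def shared_cache_read(block_addr, local_cache, peer_caches, read_from_san_lun):
    if block_addr in local_cache:            # served by this node's cache
        return local_cache[block_addr]
    for peer in peer_caches:                 # served by another cluster node's cache
        if block_addr in peer:
            return peer[block_addr]
    data = read_from_san_lun(block_addr)     # true miss: read the backing SAN LUN
    local_cache[block_addr] = data           # warm the local cache for future reads
    return data

# Usage with two nodes' caches and a stubbed SAN read:
local, peer = {}, {0x20: "data@0x20"}
shared_cache_read(0x20, local, [peer], read_from_san_lun=lambda a: f"data@{a}")  # peer-node hit
```

A captive DRAM cache stops after the first step; any block not held locally becomes a miss that must be serviced from the SAN.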
