

White Paper

Realize Significant Performance Gains in Oracle RAC with QLogic FabricCache

Guide to Best Practices

Key Findings

•QLogic® recommends deploying the QLogic FabricCache™ 10000 Series Adapter in an Oracle® Real Application Clusters (RAC) database with Oracle's Automatic Storage Management (ASM) feature enabled, and provisioning LUNs in a multiple of the number of nodes in the RAC cluster. This configuration distributes the cache load evenly over the QLogic 10000 Series Adapters in the cluster.

•Oracle recommends four LUNs per disk group; for best cache optimization, make the LUN quantity a multiple of the number of nodes.
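The sizing rule above can be sketched in a few lines of Python. This is an illustrative helper, not a QLogic tool: it rounds a disk group's LUN quantity up to the nearest multiple of the node count so the cache load divides evenly over the adapters.

```python
import math

def luns_per_disk_group(base_luns: int, node_count: int) -> int:
    """Round a disk group's LUN quantity up to the nearest multiple
    of the RAC node count so cache load divides evenly across nodes."""
    return math.ceil(base_luns / node_count) * node_count

# Oracle's baseline of four LUNs already fits a four-node cluster:
print(luns_per_disk_group(4, 4))   # 4
# A six-LUN disk group on a four-node cluster rounds up to eight:
print(luns_per_disk_group(6, 4))   # 8
```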

•Each 10000 Series Adapter can cache up to 256 LUNs.

•All 10000 Series Adapters in the cluster must be zoned so that they are visible to each other.

•QLogic FabricCache 10000 Series Adapters enable:

–OLTP processing to deliver almost twice the transactions at 56 percent of the response time.

–OLAP processing to deliver 3.25 times the transactions at 25 percent of the response time.


Executive Summary

QLogic FabricCache 10000 Series Adapters bring shared, server-based caching to the SAN. The 10000 Series Adapters are purpose-built to address the high I/O demands and distributed nature of clustered applications and virtualized environments, including Oracle RAC. Following the best practice guidelines in this paper, 10000 Series Adapters have been shown to improve Online Analytical Processing (OLAP) workloads in Oracle RAC environments by up to 3.25 times the transactions at one quarter of the response time. Online Transaction Processing (OLTP) workloads were accelerated to almost twice the transactions at 56 percent of the response time.

The 10000 Series Adapter integrates a flash-based cache with a Fibre Channel Adapter that uses the existing SAN infrastructure to create a shared cache resource distributed over multiple servers. This cache-coherent implementation extends caching performance benefits to the widest range of enterprise applications, including clustered applications that use shared SAN storage.

10000 Series Adapters lower transaction latency while increasing storage I/O capacity in Oracle RAC environments. The 10000 Series Adapter offloads traffic from the SAN infrastructure, which lowers disk array IOPS, while maintaining SAN data protection and compliance policies. These benefits translate to improved resource utilization, increased return on investment (ROI), extended useful life of the SAN infrastructure, reduced costs, and overall improved customer satisfaction.

Considerations for Business Continuity, High Availability, and Scalability

Oracle RAC databases ensure that the critical business application is resilient because multiple instances (nodes) provide for business continuity and high availability. The loss of a single node does not result in an application outage. The Oracle RAC database can be scaled by adding more nodes to provide the processing capacity the application may need going forward.

As these applications scale, some databases require high I/O with low latency to meet the demand for fast application response. If the SAN cannot deliver the required IOPS for high-value, critical applications, administrators have several options to improve performance, including:

•Select and install a new SAN

•Add a localized, server-based cache

•Add nodes to the Oracle RAC cluster

•Add QLogic FabricCache 10000 Series Adapters

The following sections examine each of the preceding options.

Installing a New SAN

Installing a new SAN is a major undertaking that requires significant time, investment, and planning to minimize operational disruption during the installation. While a new SAN also increases the overall I/O pool and performance for the entire SAN, these increases may be overkill if only a few applications require the added performance.

Adding a Cache

Caching devices, such as a solid-state disk (SSD) cache in each node, require that each node be brought down to install the cache. This cache is local, captive to the node, and not shared. This limited solution provides fewer benefits because, within the Oracle RAC environment, the LUNs being cached must be shared among all of the Oracle nodes. The biggest caching benefit comes from full table scans (direct reads) that are not cached in the Oracle block buffers, which requires that the cache be shared among the servers in the cluster.

Adding RAC Nodes

Adding RAC nodes to the existing cluster requires new physical hardware, changes to the existing SAN zoning and mapping, and additional network connectivity. Doing so can increase processing capacity, but it does not increase I/O from the SAN.

Adding a QLogic FabricCache 10000 Series Adapter

Adding the 10000 Series Adapter increases available I/O by caching read operations. I/O is improved by replacing the existing QLogic Host Bus Adapters in the nodes with QLogic 10000 Series Adapters. The 10000 Series Adapter uses the same QLogic drivers and causes minimal disruption to the hardware infrastructure. The 10000 Series Adapters function as Host Bus Adapters within the SAN fabric, the same as the non-caching Fibre Channel Host Bus Adapters. In an Oracle RAC environment, Host Bus Adapters can be replaced using a process that is non-disruptive to application operation. The replacement process includes the following general steps, performed on each server (node) in the Oracle RAC cluster:

1. Shut down the node.

2. Place the new 10000 Series Adapter in the node, replacing the existing Host Bus Adapter.

3. Add the 10000 Series Adapter to the fabric zone, enabling it to see the storage.

4. Add the 10000 Series Adapter to the fabric zone used for clustering the 10000 Series Adapters.

5. Bring the node back online.

After all the 10000 Series Adapters are in place in the nodes, configure the adapters to enable LUN caching.
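As a sketch, the rolling-replacement procedure above can be modeled in Python. The node names and step labels here are illustrative (the real commands are OS- and switch-specific); the point is that only one node is offline at a time, so the RAC database keeps serving the application throughout:

```python
NODES = ["rac1", "rac2", "rac3", "rac4"]   # hypothetical node names

def replace_adapter(node: str) -> list:
    """The five steps from the text, applied to a single node while
    the remaining nodes continue to serve the application."""
    return [
        f"{node}: shut down",
        f"{node}: replace Host Bus Adapter with 10000 Series Adapter",
        f"{node}: zone adapter to the storage",
        f"{node}: zone adapter into the FabricCache cluster zone",
        f"{node}: bring back online",
    ]

plan = []
for node in NODES:            # one node at a time, so no application outage
    plan.extend(replace_adapter(node))
print(len(plan))              # 20 steps across the four nodes
```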

System Architecture and Requirements

At a high level, the Oracle RAC database comprises four nodes that are connected to the SAN with the QLogic 10000 Series Fibre Channel Adapter. Both the database public network and the private interconnect are gigabit Ethernet (GbE).

The load-generation application simulates a business workload and is LAN-connected to the RAC database.

Hardware requirements include the following:

•Four servers: Intel 64-bit, 24 CPUs, 16GB memory (one QLogic 10000 Series Fibre Channel Adapter in each server in the cluster)

•Storage array

•Fabric switch with support for 8Gb Fibre Channel

•Ethernet switch

Oracle RAC Best Practices

QLogic suggests following Oracle's best practice recommendations for RAC, which include:

•Creating four LUNs for each disk group.

•Using the Oracle ASM feature to balance the I/O load for a disk group over the LUNs in that disk group.

•Using the Oracle Flash Recovery Area (FRA) to hold backups. However, the examples in this document do not create or use an FRA, so the FRA does not require caching.


QLogic FabricCache 10000 Series Adapter Best Practices

In the Oracle RAC database, multiple nodes (servers) are actively reading and changing data on shared LUNs. The database engine manages this with a global lock manager.

The 10000 Series Adapter communicates “in-band” through the Fibre Channel fabric to coordinate cache activity between the 10000 Series Adapters. For the adapters to function effectively, each adapter must be visible to the other adapters in the cluster. Visibility is accomplished by creating a fabric zone that includes all of the 10000 Series Adapters.

Because the 10000 Series Adapter-enabled cluster has multiple nodes sharing the same LUNs, this shared-cache environment can improve application throughput by distributing the cache across all of the nodes. The distribution works well in a RAC environment because the Oracle RAC database groups multiple LUNs into a single storage pool called a disk group (or LUN set). Oracle ASM distributes the I/O across all of the disks (LUNs) in the disk group, and a disk group can contain an unlimited number of LUNs.
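As an illustration only (the adapter firmware's actual cache-placement policy is not documented here), a simple round-robin assignment of LUNs to caching adapters shows why a LUN count that is a multiple of the node count spreads the cache load evenly:

```python
from collections import Counter

def assign_cache_owners(lun_ids, adapters):
    """Map each LUN to a caching adapter round-robin: one simple
    ownership scheme over the shared fabric zone (illustrative only)."""
    return {lun: adapters[i % len(adapters)] for i, lun in enumerate(lun_ids)}

adapters = ["node1-hba", "node2-hba", "node3-hba", "node4-hba"]
owners = assign_cache_owners(range(8), adapters)   # 8 LUNs, 4 nodes
load = Counter(owners.values())
print(set(load.values()))   # {2}: every adapter caches exactly two LUNs
```

With seven LUNs over four adapters the spread would be uneven (three adapters caching two LUNs, one caching a single LUN), which is why the best practice provisions LUNs in multiples of the node count.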

The example used in this document includes four configured disk groups. One of the disk groups (with four LUNs) is specified for cluster management; this disk group is not cached because of the low I/O demand on it. The other three disk groups have cache enabled in the best practice environment.

Figure 1 shows the mapping required by each of four servers for the LUNs to support the database. Note that each node in the cluster is presented with the same set of LUNs.

Figure 1. Mapping Required to Support the Database

Online REDO Logs

The online REDO log files reside in their own LUN set, which allows these write-intensive files to be placed on the highest-performing storage. Caching these LUNs allows archive activity to read from the cache rather than accessing the SAN storage.


SN0451405-00 Rev. B 07/13

