
W H I T E P A P E R
Extending the SAN for SANbox 9000 Core Switches
QLogic Intelligent Storage Routers
Extending the SAN Beyond the Data Center
Executive Summary
A new challenge faces IT professionals, dubbed the corporate “extrastructure.” This includes devices and technologies that are outside the IT department’s direct control but vital to the proper operation of dispersed SANs and data resources. For example, most corporations rely on third-party vendors for the bandwidth and connectivity technology outside of their brick-and-mortar confines. Since implementing a private global WAN would be cost prohibitive, the company needs to bridge data centers without sacrificing the benefits of a single-site operation.
The best way to mitigate the limitations of a decentralized data environment is to create a core-to-core bridging strategy using standard, proven technologies. Core-to-core bridging is the QLogic term for linking the core networks between data centers. Data center switches, such as the SANbox 9200, can access remote SAN islands with the proper deployment of gateway routers. The SANbox 6142 Intelligent Storage Router can bridge core switches across the WAN with fast and efficient Layer 3 routing. The routing can even be easily implemented across core switching technologies from other vendors, for example Brocade and Cisco.
Mission critical applications, such as data migration, replication, and disaster recovery, can now be integrated across multiple sites for better utilization, higher levels of protection, and faster recovery times.
Key Findings
The SANbox product family can implement a cost-effective and efficient data center core-to-core bridging strategy to achieve the following solutions:
• Greater resource utilization in SAN islands
• Higher data protection levels with off-site migration, replication, and disaster recovery
• Better operational efficiencies
• Faster gateway bridging across homogeneous and heterogeneous SAN fabrics
• Easier management of remote data resources
• Core-to-core fabrics can be bridged without merging SANs
Introduction
The corporate infrastructure has evolved over the past few decades to the point where “infrastructure” no longer provides an adequate definition. With global offices, remote locations, and a dispersed sales force, the computing environment now requires an “extrastructure” to provide data when and where it’s needed. The extrastructure is much different from an infrastructure because so much of the WAN connectivity is beyond the control of the IT professional. This paper looks at best practices for implementing a distance-connectivity topology that still meets the requirements for flexibility, speed, and management available within the data center.
The standard data center architecture consists of servers at the edge of the Fibre Channel SAN connected to data storage through distribution and core switches. System administrators can easily design, control, and manage this environment. However, when they need to connect multiple data centers, system administrators must investigate and properly evaluate each one, with its own edge-to-core topology, new challenges, and criteria. This paper investigates the options for creating core-to-core SAN solutions.
An intelligent storage router can provide SAN-over-WAN connectivity to extend the data center core beyond a single location or campus environment, while still maintaining all of the data center requirements mentioned above. This allows for a unified business approach to computing needs and data access. Not all users in all locations gain unlimited access, but the policies and procedures required to run a sound business can be extended beyond the headquarters or main data center to all users throughout the enterprise.
There are three basic segments to managing the extrastructure:
1. Capacity allocation approach – Too often, departments and/or locations demand storage capacity for the wrong reasons. They “what if” themselves into deploying excess capacity: “What if we grow faster than expected? What if the requirements were underestimated? What if we have a surge in peak demand?” A capacity allocation approach focuses on assigning data storage when and where it’s needed.
2. Centralized management approach – An IT department spends a great deal of time implementing policies and procedures that guard the company against the pitfalls of intrusion, collusion, and regulation. However, these safeguards are difficult to manage and enforce across a fractured extrastructure. A centralized management approach focuses on standard management policies across the extrastructure.
3. Distributed network approach – Even though many companies have basic connectivity between remote offices and sites, the capabilities of the global network remain fractured. Each office or data center operates as an independent island within the larger scheme of data management and resource sharing. A distributed network approach focuses on high-speed resource sharing between SAN islands.
SSG-WP07009 2
SN0130966-00 A
Business Needs within the Extrastructure
Data protection – After keeping applications available to run the business, the most important function of the corporate IT department is to secure and protect the data assets of the institution. Growing data and expanding extrastructure push operating scenarios and data systems to the limit. The opportunity cost of lost data could be catastrophic to the company.
Regulatory compliance – Across the globe, changing regulatory requirements from most governments place greater demands on businesses. The data covered by these regulations often resides outside the data center, but still needs to be protected, secured, and accessed globally. US regulations such as SEC 17a-4, Sarbanes-Oxley section 404, and HIPAA drive the need for new storage strategies in every organization. This means that those servers, and the storage that supports them, need the data protection and management capabilities often associated with enterprise SANs and traditionally found only in central IT departments.
Resource limitations – Shrinking IT budgets require that available resources be utilized and leveraged to the maximum. Redundancy and overcapacity from one SAN island to another can’t be tolerated. Bridging and sharing existing data devices can help ease the burden on equipment and management.
Evaluation Factors for Building the Extrastructure
The next step is to assess the requirements of the SAN-over-WAN application. To help define the requirements of your specific application, consider the following:
• Data Targets – The volume of the target data is fundamental for establishing the performance requirements of the WAN connections.
• Availability Requirements – Does the connection need to be “best effort” or 24/7? This is key when determining the type of software, equipment, and WAN connection to implement.
• Write Transfer Objective – How much time is available to transfer the data? This is one of the key factors in determining the size and/or number of telecommunication lines needed.
• Recovery Time Objectives (RTO) – How fast does the data have to be recovered? How often do you expect to perform such a recovery?
• Budget – Examine not only the cost of the networking equipment, extra disk storage, and required software, but also the ongoing WAN line costs, to get a full TCO/ROI analysis.
• Data Access Model – What kind of data access is required: asynchronous or synchronous transfers? If asynchronous is acceptable, using it can lower the cost and performance requirements compared with accommodating peak synchronous demands.
By answering the above questions, you can systematically eliminate or include specific solutions and offerings. For example, by determining that there are varying classes of data being transferred from the SAN over the WAN, it may be discovered that QoS functionality is required within your connecting equipment. Knowing the answers to the above questions, as well as your application availability requirements, places your IT team in the best possible position to narrow down choices between solutions.
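The Write Transfer Objective above reduces to simple arithmetic: the minimum line rate is the data volume divided by the transfer window, padded for protocol overhead. A minimal sizing sketch (the figures and the 80% efficiency factor are illustrative assumptions, not values from this paper):

```python
def required_line_rate_mbps(data_gb: float, window_hours: float,
                            protocol_efficiency: float = 0.8) -> float:
    """Minimum WAN line rate (Mb/s) to move data_gb within window_hours.

    protocol_efficiency is an assumed planning factor that discounts
    usable bandwidth for protocol overhead and handshaking.
    """
    megabits = data_gb * 8 * 1000          # GB -> Mb (decimal units)
    seconds = window_hours * 3600
    return megabits / (seconds * protocol_efficiency)

# Example: replicate 500 GB within a nightly 6-hour window.
print(f"{required_line_rate_mbps(500, 6):.0f} Mb/s")  # ~231 Mb/s
```

Comparing the result against common line rates (T3 at 45 Mb/s, OC-3 at 155 Mb/s, OC-12 at 622 Mb/s) quickly narrows the budget discussion.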
Issues with FC Core Bridging
The core of the FC fabric is built around high-end director class switches, such as the QLogic SANbox 9200. This switch provides solid and proven technology for sustaining storage area networks. However, expanding the enterprise usually leads to multiple SAN cores and a patchwork architecture. The investment in the SANbox 9200 should be leveraged for maximum utilization and sharing of SAN resources. An ideal infrastructure would locate all core devices close enough to bridge with standard, short-wave Fibre Channel products; however, this is usually not possible. Although the issues may be diverse, the solutions are straightforward.
Some of the specific issues include:
• Long-distance bridging (>10km) of homogeneous cores – The IT department was lucky enough to maintain a single-vendor policy for its FC fabric, but SAN islands have developed, causing issues with policy management, utilization, and disaster recovery. Bridging the islands is imperative to maintaining best practices.
• Long-distance bridging (>10km) of heterogeneous cores – The worst-case scenario is usually what happens in real life, and this situation is no different. Multiple SAN islands have evolved within the organization, requiring that the SANbox 9200 bridge to Brocade/McDATA or Cisco fabrics. Not only must the distance issue be solved, but the compatibility issue as well.
Bridging Topologies
To achieve higher levels of functionality and better management, vendors usually define proprietary ports, such as E_Port, F_Port, G_Port, and TE_Port. Within those port types, vendors can have their own unique classifications and requirements.
If you need to bridge SAN fabrics across vendors, you must often sacrifice these proprietary features. This requires an intermediary between one core and the other to allow communication and maintain neutrality without merging the SANs.
Merging SANs forces IT managers to abandon these advanced vendor features, so the best way to bridge SANs without merging them is to use routing based on N_Ports. The N_Port definition allows routers or other SAN devices to log into a SAN as a native device. It doesn’t cause compatibility issues with the core switches because it does not merge the fabrics.
A long-distance bridge can be created between SANs with a router using N_ports. Each core sees an attached native device, but no direct view of the other switches. SAN management is simplified, without merging the fabrics. The router can also use Ethernet ports to bridge across the WAN to the remote SAN island.
The SANbox 6142 provides this type of routing capability. The QLogic SANbox 6142 Intelligent Storage Router leverages N_ports for simplified connection to the SANbox 9200 switch and gigabit Ethernet ports for communication across the LAN or WAN. The basic topology is shown below.
This network configuration allows SANbox 9200 switches in separate locations to share data resources across the WAN, without having to merge the SANs. See the section below on SmartWrite™ technology for a full description of this technology and the benefits of Layer-3 routing.
Using a combination of core and distribution switching, very cost-effective strategies can be deployed across the extrastructure. Distribution switches, such as the SANbox 5600, provide flexible and expandable fabric services in locations with medium-duty data traffic. Therefore, various combinations of the SANbox 9200, 5600, and 6142 can meet the changing needs of the corporation, without an over-investment in SAN hardware. The basic concept for a multi-tiered switching topology is shown below.
In this scenario, smaller data centers use distribution switches to create the SAN fabric. The SANbox 5600 provides 10Gb ISL backbone ports for expansion, without sacrificing switch ports. Meanwhile, the main data center utilizes the power and cost-efficiency of the core chassis to handle the major switching needs of the enterprise. As shown, each location is connected with the SANbox 6142, allowing for data migration, replication, and disaster recovery, as discussed further below.
Furthermore, this type of routing can extend the SANbox 9200 beyond QLogic-based networks. As described earlier, the gateway router uses N_ports to connect into the SAN. Therefore, core switches from other vendors can be bridged with the SANbox core. This can even be expanded to multiple sites, each based on a different vendor, as shown below.
Connecting each SAN island across the WAN with the SANbox 6142 removes core conflicts because router N_Ports appear as just one more native fabric device. Without merging the SANs, this network configuration supports resource sharing and improves operational efficiency. It also supports the software management tools unique to each SAN and enables global monitoring and policy enforcement.
FCIP – Layer 2 Routing
The SANbox 6142 supports FCIP to provide maximum compatibility with replication and WAN applications. However, FCIP has some inherent limitations. The standard FCIP protocol uses many remote procedure calls (RPCs) to move data across the network. When these mechanisms were designed, LAN environments existed within a single building or campus; their designers didn’t plan for efficiency over long distances with multiple switching “hops.” RPCs communicate a lot of commands and handshaking across the LAN/WAN, creating an overhead burden within the protocol known as “chattiness.”
For example, to move data across the WAN, the local server needs to gain access to the remote target. Some operations spend roughly 75% of their RPCs on access, while the actual read and write operations need only 10%. In a LAN environment, the latency of RPCs goes essentially unnoticed. However, across the WAN, each RPC can have a latency of 50ms to 100ms (highly dependent on technology, distance, and network settings). A data request that would take only a few seconds within the LAN can turn into five to twenty minutes across the WAN.
The figure below illustrates a basic example of transferring 64KB of data across the WAN in packets of 32KB and a WAN latency of 50ms. This latency exists regardless of the packet size; it is inherent in the IP protocol. Once again, this is a simplified example. For clarity, this example doesn’t show all of the handshaking. A typical file transfer will have hundreds of command and status transfers.
1. The initiating system issues a write request to the remote system: 50ms delay
2. The remote system responds with a Ready status: 50ms delay
3. The initiating system sends the first block of 32KB data: 50ms delay
4. The remote system issues a transfer ready status: 50ms delay
5. The initiating system sends the second block of 32KB data: 50ms delay
6. The requesting system confirms that the transfer is complete: 50ms delay
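The exchange above lends itself to a simple latency model: one WAN crossing per message, with a transfer-ready status and a data send for each packet. A sketch under the paper's simplified assumptions (real transfers involve far more handshakes, and serialization time is ignored):

```python
def fcip_transfer_time_ms(data_kb: int, packet_kb: int,
                          one_way_latency_ms: float = 50.0) -> float:
    """Latency-only model of the chatty exchange: a write request,
    then a transfer-ready status and a data send for each packet,
    then a final completion -- each crossing the WAN once."""
    packets = -(-data_kb // packet_kb)   # ceiling division
    messages = 1 + 2 * packets + 1       # request + (ready + data) per packet + completion
    return messages * one_way_latency_ms

print(fcip_transfer_time_ms(64, 32))     # 300.0 -> the 64KB example above
print(fcip_transfer_time_ms(1024, 32))   # 3300.0 -> a 1MB block
```

The model makes the scaling problem obvious: latency grows linearly with packet count, regardless of available bandwidth.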
For example, transferring 64KB across the WAN would take over 300ms. Larger data blocks on the order of megabytes would consume an unacceptable amount of time. This is a direct result of FCIP using Layer 2 routing. In addition to the extra handshaking, Layer 2 routing results in merging the SAN islands.
• FCIP SAN Bridging – FCIP is a Layer 2 tunnel that relies on proprietary E_Ports to bridge SANs. Since this type of bridging creates a tunnel between the two SANs, the SAN fabrics must be the same on both ends; it also requires merging the SANs.
• Layer 3 Router-to-Router Bridging – Layer 3 routing provides a way to connect SANs without merging the fabrics, enabling IT managers to gain the benefits of SAN bridging and extension without giving up the value-added functionality of each fabric vendor.
SmartWrite™ – Advanced Layer 3 Routing
SmartWrite™, unlike FCIP, is an intelligent, patent-pending routing technology. It understands how to move data from one point to another while optimizing the transfer, and it is the only intelligent SAN-over-WAN protocol to leverage SCSI Layer 3 routing. This allows SmartWrite not only to optimize transfers over the WAN but also to simplify configuration and management.
SmartWrite:
• Eliminates double addressing of SAN devices (IP and FC)
• Eliminates the need to have unique names on each SAN
• Eliminates 50% of the management overhead of FCIP
• Leverages the WAN resources for resiliency and encryption
• Reduces management overhead
• Optimizes performance over the WAN
A typical SmartWrite transfer proceeds as follows:
1. A SAN application issues a command to send an FC packet to an FC target over SmartWrite Layer 3 SCSI routing.
2. SmartWrite intelligently determines:
   a. The total wire transfers needed, to reduce WAN latency
   b. Whether to compress the data, to improve WAN line speed
   c. How to load balance across multiple lines (if available), to minimize transfer time
3. SmartWrite converts the FC packet to an iSCSI packet and routes the data without merging the SAN fabrics.
4. SmartWrite sends multiple transfers in one process, reducing WAN latency.
5. SmartWrite delivers the FC packets to the target device.
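Using the same simplified 50ms latency model as the FCIP example, the effect of announcing the transfer count up front and streaming packets without per-write acknowledgements can be sketched (an illustration of the batching concept, not QLogic's implementation):

```python
def transfer_time_ms(data_kb: int, packet_kb: int,
                     one_way_latency_ms: float = 50.0,
                     pipelined: bool = False) -> float:
    """Latency-only model. Chatty mode pays a round of handshaking
    per packet; pipelined mode pays the WAN latency once in each
    direction (announcement out, final acknowledgement back), with
    the packet stream in between. Serialization time is ignored."""
    packets = -(-data_kb // packet_kb)   # ceiling division
    if pipelined:
        messages = 2                     # announce + stream, then one ack
    else:
        messages = 1 + 2 * packets + 1   # per-packet ready/data handshakes
    return messages * one_way_latency_ms

print(transfer_time_ms(64, 32))                  # 300.0 -> chatty Layer 2 tunnel
print(transfer_time_ms(64, 32, pipelined=True))  # 100.0 -> batched stream
```

In this model the pipelined time stays flat as the payload grows, while the chatty time grows with every additional packet.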
First, SmartWrite understands the total transfer size going over the WAN. This enables SmartWrite to tell the receiving router how many transfers to expect. For example, a source router (located in New York) can tell the destination (located in London) that it will receive a write and how many transfers to expect before the file is complete. The SmartWrite protocol can then send data in a constant stream from the source to the destination without waiting for an individual acknowledgement of each write. This dramatically reduces transfer latency, which increases line usage for data transfer and reduces WAN line costs.
Second, SmartWrite can compress the data before transferring it, which reduces the volume of transferred data, as well as the number of transfers required to move the data. In addition to reducing the data transfer time, this lowers WAN costs.
Third, SmartWrite can load balance transfers over multiple WAN/LAN lines to improve performance, lower total transfer time, and reduce the number of transfers over the WAN.
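The second and third optimizations compound the first: compressing the payload shrinks the number of WAN transfers, and load balancing spreads them across lines. A sketch of the compression effect (the 2:1 ratio is an illustrative assumption; real ratios depend entirely on the data):

```python
import math

def wan_transfers(data_kb: int, packet_kb: int,
                  compression_ratio: float = 1.0) -> int:
    """Number of packet transfers after compressing the payload.
    compression_ratio is output/input size (0.5 means 2:1)."""
    compressed_kb = max(1, math.ceil(data_kb * compression_ratio))
    return math.ceil(compressed_kb / packet_kb)

# 1MB payload in 32KB packets:
print(wan_transfers(1024, 32))        # 32 transfers uncompressed
print(wan_transfers(1024, 32, 0.5))   # 16 transfers at an assumed 2:1 ratio
```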
All of these factors allow for a cost-efficient, high-speed, and easy-to-manage core-to-core bridging strategy. The figure below shows how SmartWrite’s performance optimization works in this New York to London WAN example.
The total transfer time has been reduced to 100ms in this example. As mentioned previously, real-world protocol handshaking requires hundreds of transfers, each with its own latency, compounding the total transfer time. By combining most of these commands locally before sending them, SmartWrite efficiently bridges remote core switches and connects the associated SAN islands to the rest of the enterprise’s resources.
Core-to-Core Applications
With the creation of a bridged core-to-core topology, several mission-critical functions can now be easily implemented across the extrastructure. As noted earlier, regulatory requirements demand the data protection and management capabilities of enterprise SANs far beyond the central data center. Each of the core-to-core applications outlined below can be used to support sound data protection and regulatory compliance policies, and to add flexibility to operational policies.
Data Migration
There are several reasons for migrating data from the SAN storage to a remote array. In addition to regulatory compliance, the administrator may want to safeguard the data with Continuous Data Protection (CDP) or Snapshots. With either of these strategies, the metadata for tracking changes should be kept on a different storage system, and preferably in a different facility.
Incorporating the SANbox 6142 into the network allows quick and seamless data transfer from the FC storage to the remote SAN. Depending on the FC storage vendor and SAN management software used, a server connected to the SAN may control the CDP or Snapshot process instead of an in-fabric device.
Backup and Remote Replication
The most basic form of replication is data backup. Typically, the backup system, either tape or VTL, is managed within the SAN. Since most of the disk storage resides within the SAN, it makes the most sense to co-locate the library on the same network as the data. SAN backup provides a further advantage: many applications support “serverless backup” within the SAN, where the backup server initiates the backup job but the data flow is managed within the fabric, decreasing backup times and server overhead. However, these benefits can leave SAN islands cut off from the most efficient backup scenario. Bridging the other core switches into the primary SAN (or the location where the backup resides) overcomes this problem.
Furthermore, replication allows the administrator to make a copy of the dataset for various purposes, such as archiving, disaster recovery (discussed below), and application testing. For example, the FC primary storage could hold a live OLTP database that the marketing department wants to mine for customer trends and demographics. Data mining on a live database risks data integrity and reduces system performance; specifically, the mining slows processing on the OLTP application (customer orders) to the point where it’s nearly useless.
In cases like this, an administrator could move a copy of the database to remote storage where the data mining application could run independently. The replication could be scheduled for off-peak hours and depending upon the size of the database, the duplicate database could be available within a few hours to a couple of days. Many scenarios such as this exist where a copy of live data or “aged” data needs to move from the primary SAN storage onto other systems.
Two-site Disaster Recovery
Every organization has different needs and goals for data protection and disaster recovery. When the RPO (Recovery Point Objective) and RTO (Recovery Time Objective) are extremely small, such as a few seconds to less than a minute, a two-site disaster recovery model is the best solution. This scenario creates two data centers that are functionally identical, although they may have some differences in the underlying fabric and storage arrays. Both sites are live, synchronously mirrored, and continuously provide data services to the company; every data write at one site is replicated almost instantly at the other. The capacity, bandwidth, and functionality of both sites are large enough to support the requirements of the entire organization in the event that one site should fail.
If a catastrophic event occurs at one location, the other location takes over all SAN functions. While the mirror is broken, the live site logs all changes and transactions. When the failed site returns to full operation, it re-establishes the mirror and synchronizes the data back to normal operation.
As you can see, this scenario offers a very high degree of protection. However, it also requires a large investment in infrastructure and software. Bridging the mirrored sites with the SANbox 6142 can reduce the cost for connectivity, software, and bandwidth.
Hot-site Disaster Recovery
When the RTO or RPO is not quite as severe as in the example above, the organization should consider the hot-site disaster recovery model. This scenario is similar to the two-site model, but the second, “hot” site operates in standby mode. Although it is online and synchronized, the hot site doesn’t support the normal operating needs of the enterprise. When the primary site fails, the hot site handles all data transactions. Depending on the organization and the mission-criticality of the data, the hot site might have less bandwidth or storage capacity than the primary site. For example, users could still access current data, but archived data may be offline.
If the RPO is for a snapshot minutes or hours ago, the company may consider using asynchronous replication to the hot-site. This reduces the cost and complexity of the total solution, but still delivers an acceptable level of protection. The basic question is, “How much productivity can we afford to lose compared to the protection costs?” For some companies, losing a few minutes of productivity or information could be fatal, while others could tolerate recovering to a point several hours or even days in the past.
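The trade-off above can be framed numerically: with interval-based asynchronous replication, the worst-case data loss is roughly the replication interval plus the time an in-flight transfer takes. A simplified planning sketch (an illustration with assumed numbers, not a vendor formula):

```python
def worst_case_rpo_minutes(replication_interval_min: float,
                           transfer_time_min: float) -> float:
    """Worst-case data-loss window for interval-based asynchronous
    replication: a failure just before a cycle completes loses the
    whole interval plus the unfinished transfer."""
    return replication_interval_min + transfer_time_min

# Snapshots shipped every 30 minutes, each taking 5 minutes to send:
print(worst_case_rpo_minutes(30, 5))   # 35.0 minutes of potential loss
```

If that loss window is unacceptable, either the interval must shrink or the model must move toward synchronous mirroring.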
Regardless of the RPO and synchronization scheme, once again the SANbox 6142 bridges remote core switches over the WAN efficiently and cost-effectively.
The SANbox products have been extensively tested with a large variety of operating systems, storage arrays, and software. The chart below lists some of the major applications for migration, replication, and disaster recovery. Although the SANbox 6142 has been tested with all of these applications, consult each vendor’s compatibility documents for interoperability with the specific combination of servers, operating systems, storage software, data arrays, and network fabric before implementing a core-to-core bridging strategy.

Company              Software Products
EMC                  PowerPath, RepliStor, SRDF-A, Open Replicator, MirrorView, SAN Copy, NetWorker
HDS                  TrueCopy
HP                   Secure Path, Continuous Access (CA)
IBM                  RDAC (failover), SAN Volume Controller (SVC), Enhanced Remote Mirroring
Microsoft            MPIO, Shadow Copy
NetApp               SnapMirror
Red Hat              Device Mapper MPIO
Sun                  MPxIO
SuSE                 Device Mapper MPIO
Veritas (Symantec)   Volume Manager, NetBackup, Backup Exec, Volume Replicator, Replication Exec
Summary and Conclusion
The enterprise environment continues to evolve and change. Regardless of technology breakthroughs and system designs, SANs continue to provide IT administrative challenges. By implementing a cost-effective and robust core-to-core solution, system administrators can reduce the management burdens and provide greater access to network resources.
As we have seen, the SANbox 9200 core switch can leverage the following benefits of the SANbox 6142 Intelligent Storage Router:
• Intelligent Storage Routers are cost-effective compared with director-based solutions.
• Intelligent Storage Routers with advanced Layer 3 routing improve performance without merging SANs.
• Intelligent Storage Routers provide flexible solutions that work with all switches and directors in the data center, rather than with just a single vendor.
• Intelligent Storage Routers are heterogeneous via N_Port technology and support all SAN fabrics (Brocade, McDATA, Cisco, and QLogic).
• Intelligent Storage Routers create the only true, open, core-to-core connectivity to expand SANs across the extrastructure.
The QLogic SANbox product family meets the current challenges of bridging FC cores to remote data centers, storage arrays, and servers. While there is never a “one-size-fits-all” solution, especially within computer networking, reasonable design practices in conjunction with deploying the SANbox 6142 can minimize the burden of managing the corporate extrastructure. Providing fast and error-free data to users is the end-game of corporate computing, and that goal can be achieved by bridging SANs from core to core with the QLogic intelligent storage router.
For more information on these solutions please contact us at:
http://www.qlogic.com/products/san_MultiProtocol.asp
Trademarks
Brocade is a registered trademark of Brocade Corporation.
Cisco is a registered trademark of Cisco Systems.
PowerPath, RepliStor, Open Replicator, MirrorView, SAN Copy, and NetWorker are trademarks of EMC Corporation.
TrueCopy is a trademark of Hitachi Data Systems.
Secure Path and Continuous Access are trademarks of Hewlett-Packard.
SAN Volume Controller and Enhanced Remote Mirroring are trademarks of IBM.
Shadow Copy is a trademark of Microsoft Corporation.
SnapMirror is a trademark of Network Appliance.
NetBackup, Backup Exec, Volume Replicator, and Replication Exec are trademarks of Symantec.
SANbox is a registered trademark of QLogic Corporation.
SmartWrite is a trademark of QLogic Corporation.
Disclaimer
Reasonable efforts have been made to ensure the validity and accuracy of these comparative performance tests. QLogic Corporation is not liable for any error in this published white paper or the results thereof. Variation in results may be a result of change in configuration or in the environment. QLogic specifically disclaims any warranty, expressed or implied, relating to the test results and their accuracy, analysis, completeness or quality. All brand and product names are trademarks or registered trademarks of their respective companies.