
Chapter 4: FCIP over IP/MPLS Core

Overview

This chapter discusses the transport of Fibre Channel over IP (FCIP) over IP/Multiprotocol Label Switching (MPLS) networks and addresses the network requirements from a service provider (SP) perspective. This chapter also describes service architectures and storage service offerings using FCIP as a primary storage transport mechanism.
Storage extension solutions offer connectivity between disparate storage “islands,” promoting transport solutions specifically geared toward carrying storage area network (SAN) protocols over WAN and MAN networks. This emerging demand presents a new opportunity for carriers: SPs can now deliver profitable SAN extension services over their existing optical (Synchronous Optical Network [SONET]/Synchronous Digital Hierarchy [SDH] and Dense Wavelength Division Multiplexing [DWDM]) or IP infrastructure. DWDM networks are ideal for high-bandwidth, highly resilient networks and are typically deployed within metro areas. Transporting storage traffic over the existing SONET/SDH infrastructure allows SPs to maximize the use of their existing SONET/SDH ring deployments. Some applications do not require the stringent service levels that optical networks offer; these applications can easily be transported over IP networks using FCIP. The obvious advantage of transporting storage over IP is the ubiquitous nature of IP.
Disk replication is the primary type of application that runs over an extended SAN network for business continuance or disaster recovery. The two main types of disk replication are array-based (for example, EMC SRDF, Hitachi TrueCopy, IBM PPRC XD, or HP DRM) and host-based (for example, Veritas Volume Replicator). Both types can run in synchronous or asynchronous mode. In synchronous mode, the acknowledgement of a host-disk write is not sent until the copy of the data to the remote array is complete. In asynchronous mode, host-disk writes are acknowledged before the copy of the data to the remote array is complete.
Applications that use synchronous replication are highly sensitive to response delays and might not work over slow or high-latency links, so it is important to consider the network requirements carefully when deploying FCIP in a synchronous implementation. Asynchronous deployments of FCIP are recommended in networks with latency or congestion issues. With FCIP, a Fibre Channel SAN can be extended anywhere an IP network exists and the required bandwidth is available. FCIP can be extended over metro, campus, or intercontinental distances using MPLS networks, and may be an ideal choice for intercontinental and coast-to-coast SAN extension.
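To see the effect concretely, consider the round-trip cost that synchronous mode adds to every write. The following minimal sketch assumes roughly 5 microseconds of one-way propagation per kilometer of fiber (about 1 ms of RTT per 100 km) and ignores device, protocol, and queuing delays; the function names and the 0.5 ms local write time are illustrative only.

```python
# Rough model of replication write latency versus distance.
# Assumes ~5 microseconds/km one-way propagation in fiber
# (about 1 ms of round-trip time per 100 km) and ignores
# device, protocol, and queuing delays.

ONE_WAY_US_PER_KM = 5.0  # approximate propagation delay in fiber

def sync_write_latency_ms(distance_km: float, local_write_ms: float = 0.5) -> float:
    """Synchronous mode: the host write is not acknowledged until the
    remote copy completes, so every write pays the full round trip."""
    rtt_ms = 2 * distance_km * ONE_WAY_US_PER_KM / 1000.0
    return local_write_ms + rtt_ms

def async_write_latency_ms(distance_km: float, local_write_ms: float = 0.5) -> float:
    """Asynchronous mode: the host write is acknowledged locally;
    the remote copy proceeds in the background."""
    return local_write_ms

for km in (10, 100, 1000, 4000):
    print(f"{km:>5} km: sync {sync_write_latency_ms(km):6.1f} ms, "
          f"async {async_write_latency_ms(km):4.1f} ms")
```

At metro distances the synchronous penalty is small; at intercontinental distances it dominates the write response time, which is why asynchronous mode is recommended there.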


Typical Customer Requirements
Small-to-medium businesses (SMBs) represent about 90 percent of all companies in the United States. These companies typically employ a few hundred employees and are highly focused on their core services or products. They usually lack the IT expertise and staff to develop, deploy, and maintain LAN, WAN, and SAN infrastructures. A typical SMB may have one or two offices in each of several metro areas, with a head (corporate) office in one of them. The corporate office is considered the home office, where the majority of business activity occurs; the other locations are satellite offices whose activity is directed mostly to the home office.
These SMBs use IP connectivity between satellite and home offices. Connectivity varies among Frame Relay, T1, and fractional Ethernet, depending on the demand and size of the SMB. Currently, these networks carry data and voice traffic. Similar connectivity has been considered for storage, but is typically not installed because of cost constraints. Most of the data at the home office may be consolidated into a local SAN, and the data at the satellite offices can be consolidated into small SAN islands. This introduces the problem of storage connectivity between SAN islands for disaster recovery and business continuance. There are several options for interconnecting the SANs, but the IP network is the ideal choice because of its availability at the client site and its comparatively low cost.
Figure 4-1 shows a typical customer SAN extension through an SP network.
Figure 4-1 SAN Extension Through SP Network

[Figure: A corporate HQ Fibre Channel fabric with a Cisco MDS 9216 multilayer edge switch connects through FCIP across the SP network (a SprintLink network is shown) to remote-site Fibre Channel fabrics served by Cisco 7206 VXR routers with FCIP port adapters. Bandwidth options: DS1 to OC-3 TDM facilities, FastE and GigE facilities.]
In most cases, SMB customers have connectivity below DS3 speed; in some cases it may reach OC-3 speeds. Compressing the Fibre Channel data before transport can therefore become a requirement. In any network, security is key to protecting valuable data from misuse by intruders, so FCIP traffic must be secured before it is transported across the SP network.
The requirements are as follows:
• FCIP transport over an optimized IP/MPLS network
• Some type of compression mechanism (software or hardware)
• A security mechanism (IPSec encryption and VPN networks)
• End-to-end management of FCIP traffic

Compression

The primary objective of compression is to reduce the amount of overall traffic on a particular WAN link. For example, consider storage data that would otherwise fill a 45 Mb/sec DS3 WAN connection. By enabling compression on the storage data (assuming an average compression ratio of 2 to 1), the effective utilization of the WAN link by storage traffic drops to 22.5 Mb/sec, leaving the remainder of the link available to other IP traffic. A second objective for compression may be to carry more data over a WAN link than it is normally capable of carrying; an example is to compress a 90-Mbps Fibre Channel data stream and carry it over a 45-Mbps WAN link (again assuming an average compression ratio of 2 to 1).
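The arithmetic behind both objectives is straightforward. The following sketch (illustrative Python; the function name and figures are taken from the DS3 example above) shows the two cases:

```python
# Effect of compression on WAN link utilization (illustrative numbers).

def wan_utilization_mbps(offered_mbps: float, compression_ratio: float) -> float:
    """Bandwidth actually consumed on the WAN after compression."""
    return offered_mbps / compression_ratio

DS3_MBPS = 45.0

# Objective 1: free up capacity on the link.
# 45 Mb/s of storage data at 2:1 compression consumes 22.5 Mb/s,
# leaving the rest of the DS3 for other IP traffic.
print(wan_utilization_mbps(45.0, 2.0))                 # 22.5

# Objective 2: carry more data than the raw link rate.
# A 90 Mb/s Fibre Channel stream at 2:1 compression fits in a DS3.
print(wan_utilization_mbps(90.0, 2.0) <= DS3_MBPS)     # True
```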
There are several types of compression algorithms. The most common type used in data networks is lossless data compression, such as LZS. This type of compression converts the original data into a compressed format that can later be restored to the original data. The service adapter modules (7600-SA-VAM, SA-VAM2) and the storage services module (MDS IPS-8) use the IP Payload Compression Protocol (IPPCP) with LZS (RFC 2395) for compressing data.
The LZS compression algorithm works by searching for redundant data strings in the input data stream and then replacing those strings with data tokens that are shorter than the original data. A table of these string matches is built, pointing to previous data in the input stream. The net result is that future data is compressed based on previous data. The more redundant the data in the input stream, the better the compression ratio; conversely, the more random the data, the worse the compression ratio.
The compression history used by LZS is based on a sliding window of the last 2000 bytes of the input stream. The transmitted data contains both literal data and compressed tokens. Literal data is input data that cannot be compressed and is transmitted uncompressed. Compressed tokens are pointer offsets and lengths that reference the compression history table. The remote side rebuilds the data from its compression history based on the pointer and length fields.
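To make the mechanism concrete, the following toy sketch shows sliding-window matching in the spirit of LZ77. It is not the actual LZS algorithm (LZS defines its own token encoding, minimum match length, and history management); the window size simply mirrors the figure quoted above.

```python
# Toy sliding-window compressor in the spirit of LZ77.
# Illustration of the matching idea only -- not the actual LZS
# algorithm, which defines its own token encoding and history handling.

WINDOW = 2000   # history window, mirroring the size quoted above
MIN_MATCH = 3   # shorter matches are cheaper to send as literals

def compress(data: bytes):
    """Return tokens: ('lit', byte) for literal data, or
    ('copy', offset, length) pointing back into the history window."""
    tokens, i = [], 0
    while i < len(data):
        best_off = best_len = 0
        for j in range(max(0, i - WINDOW), i):      # candidate match starts
            length = 0
            # Compare forward, staying within already-seen history
            # (this toy version does not allow overlapping matches).
            while (i + length < len(data) and j + length < i
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= MIN_MATCH:
            tokens.append(("copy", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def decompress(tokens) -> bytes:
    """Rebuild the data from literals and history pointers."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):
                out.append(out[-off])
    return bytes(out)

data = b"storage data storage data storage data"
tokens = compress(data)
assert decompress(tokens) == data
print(len(data), "bytes ->", len(tokens), "tokens")   # redundancy pays off
```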
Note Full descriptions of IPPCP and LZS are available in RFC 2395 and in ANSI X3.241-1994.

Compression Support in Cisco MDS

The Cisco MDS product line supports both software- and hardware-based compression. The SAN-OS version and the hardware used determine which compression methods apply.
Software-based compression is available on the IP Storage Services (IPS) Module for the Cisco MDS 9216/MDS 9216i fabric switches and the Cisco MDS 9500 Series storage directors, beginning with SAN-OS release 1.3(2a). Compression can be enabled on each of the eight Gigabit Ethernet ports of the IPS-8; the number of Gigabit Ethernet ports in use on the IPS does not affect compression performance when this feature is enabled.
Hardware-based compression is available with SAN-OS version 2.0 and newer hardware (the MDS 9216i and the MPS-14/2 module). Compression is applied per FCIP interface (tunnel), with a variety of modes available. Beginning with SAN-OS 2.0, three compression modes are configurable, with additional support for the MPS-14/2 module.
Compression Modes and Rate
In SAN-OS 1.3, the following two compression modes can be enabled per FCIP interface on the IPS-4 and IPS-8:
• High throughput ratio—Compression is applied to outgoing FCIP packets on this interface, with higher throughput favored at the cost of a slightly lower compression ratio.
• High compression ratio—Compression is applied to outgoing FCIP packets on this interface, with a higher compression ratio favored at the cost of slightly lower throughput.
In SAN-OS 2.0, three compression modes are available per FCIP interface on the IPS-4, IPS-8, and MPS-14/2 (see the sketch after this list):
• Mode 1—Equivalent to the high throughput mode of SAN-OS 1.3. Use Mode 1 for WAN paths up to 100 Mbps on the IPS-4 and IPS-8, and for WAN paths up to 1 Gbps on the MPS-14/2.
• Mode 2—Higher compression ratio than Mode 1, but applicable only to slow WAN links, up to 25 Mbps.
• Mode 3—Higher compression ratio than Mode 1 and slightly higher than Mode 2. Applicable to very slow WAN links, up to 10 Mbps.
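These guidelines can be condensed into a small helper. The sketch below simply encodes the thresholds listed above; the function name and return values are illustrative, not a Cisco API.

```python
# Pick a SAN-OS 2.0 FCIP compression mode from WAN link speed,
# following the guidance in the text above (sketch only).

def pick_compression_mode(wan_mbps: float, is_mps_14_2: bool) -> str:
    if wan_mbps <= 10:
        return "mode3"   # highest ratio, very slow links
    if wan_mbps <= 25:
        return "mode2"   # higher ratio, slow links
    limit = 1000 if is_mps_14_2 else 100   # Mode 1 ceiling per module
    if wan_mbps <= limit:
        return "mode1"   # highest throughput
    return "compression not recommended at this rate"

print(pick_compression_mode(45, is_mps_14_2=False))   # mode1
print(pick_compression_mode(8, is_mps_14_2=True))     # mode3
```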
The following software-based compression options are available for FCIP on the Cisco MDS 9000 IP Storage Services Module:
• SAN-OS 1.3—Two algorithms: high throughput and high compression
• SAN-OS 2.0—Three algorithms: Modes 1 through 3
The Cisco MDS 9000 Multiprotocol Services (MPS-14/2) module provides hardware- and software-based compression, plus hardware-based encryption, for FCIP:
• SAN-OS 2.0—Three algorithms: Modes 1 through 3
The choice between these solutions should be based on the following factors:
• Available link speed or bandwidth
• Choice of FCIP solution (IPS-8/4 FCIP, MPS-14/2, or PA-FC-1G port adapter)
• New or existing SAN fabric installations
Note For more information, see the following: LZS (RFC 1974), IPPCP with LZS (RFC 2395), Deflate (RFC 1951), and IPPCP with Deflate (RFC 2394).
Figure 4-2 shows a comparison of the Cisco compression solutions.
Figure 4-2 Cisco Compression Solutions

[Figure: Bar chart comparing compression throughput in MB/sec (scale 0 to 200) for the SA-VAM (2001), SA-VAM2 (2003), IPS (2003), and MPS (2004).]

The following performance data applies to Figure 4-2:
• VAM—9.9 to 12 MB/sec, 10.9 MB/sec average
• VAM2—19.7 to 25.4 MB/sec, 19 MB/sec average
• IPS—18.6 to 38.5 MB/sec, 24.6 MB/sec average
• MPS—136 to 192 MB/sec, 186.5 MB/sec average

Security

The security of the entire Fibre Channel fabric is only as good as the security of the entire IP network through which an FCIP tunnel is created. The following scenarios are possible:
• An unauthorized Fibre Channel device gaining access to resources through normal Fibre Channel processes
• Unauthorized agents monitoring and manipulating Fibre Channel traffic that flows over the physical media used by the IP network
Security protocols and procedures used for other IP networks can be used with FCIP to safeguard against known threats and vulnerabilities. FCIP links can be secured by the following methods:
• Using the IPSec security protocol suite with encryption for cryptographic data integrity and authentication
• Using an SP-provided VPN service to transport FCIP traffic for additional security
• Using an MPLS extranet for application-specific security

Cisco Encryption Solutions

As with selecting a compression solution for FCIP SAN extension, the user needs to determine the requirements for the encryption solution. These requirements may include the speed of the link that needs encryption, the type of encryption required, and the security requirements of the network. Cisco offers three hardware-based encryption solutions in the data center environment: the SA-VAM and SA-VAM2 service adapters for the Cisco 7200 VXR and 7400 series routers, and the IPSec VPN Services Module (VPNSM) for the Catalyst 6500 switch and the Cisco 7600 router.
Each of these solutions offers the same configuration steps, although the SA-VAM2 and IPSec VPNSM have additional encryption options. The SA-VAM and SA-VAM2 are used only in WAN deployments, whereas the IPSec VPNSM can support 1.6 Gb/sec throughput, making it useful in WAN, LAN, and MAN environments.
The SA-VAM is supported on the 7100, 7200 VXR, and 7401 ASR routers with a minimum Cisco IOS version of 12.1(9)E or 12.1(9)YE. For use in the 7200 VXR routers, the SA-VAM has a cost of 300 bandwidth points. The SA-VAM has a maximum throughput of 140 Mbps, making it suitable for WAN links up to DS3 or E3 line rates.
The SA-VAM2 is supported on the 7200 VXR routers with a minimum Cisco IOS version of 12.3(1). The SA-VAM2 has a cost of 600 bandwidth points and a maximum throughput of 260 Mbps, making it suitable for WAN links up to OC-3 line rates.
The IPSec VPNSM is supported on the Catalyst 6500 switch and the Cisco 7600 router with a minimum Native IOS level of 12.2(9)YO. For increased interoperability with other service modules and additional VPN features, it is recommended that a minimum of 12.2(14)SY be used when deploying this service module.
The choice between these solutions should be based primarily on the following two factors:
• Available link speed or bandwidth
• Security encryption policies and encryption methods required
The Cisco MDS 9000 with the MPS-14/2 module and the Cisco MDS 9216i support encryption with no performance impact. The MPS module and the Cisco MDS 9216i support line-rate Gigabit Ethernet throughput with AES encryption.
The following encryption methods are supported per module:
• SA-VAM—DES, 3DES
• SA-VAM2—DES, 3DES, AES128, AES192, AES256
• VPNSM—DES, 3DES
• MDS MPS—DES, 3DES, AES192
Note An encrypted data stream is not compressible because encryption results in a bit stream that appears random. If encryption and compression are required together, it is important to compress the data before encrypting it.
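The note above is easy to demonstrate. In the following sketch the compression is real (zlib), but the cipher is a toy XOR keystream standing in for IPSec/AES purely to produce random-looking output; it is for illustration only.

```python
# Demonstration that ciphertext does not compress: compress-then-encrypt
# preserves the size win; encrypt-then-compress loses it. The "cipher"
# here is a toy XOR keystream built from SHA-256 (illustration only --
# real deployments would use IPSec/AES as described above).
import hashlib
import zlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

data = b"redundant storage block " * 200   # highly compressible input
key = b"example-key"

compress_then_encrypt = toy_encrypt(zlib.compress(data), key)
encrypt_then_compress = zlib.compress(toy_encrypt(data, key))

print(len(data))                    # 4800 bytes of input
print(len(compress_then_encrypt))   # small: compression saw the redundancy
print(len(encrypt_then_compress))   # ~= input size: random-looking input
```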

Write Acceleration

Write Acceleration is a configurable feature introduced in SAN-OS 1.3 that enhances FCIP SAN extension with the IP Storage Services Module. Write Acceleration is a SCSI protocol spoofing mechanism that improves application performance by reducing the overall service time for SCSI write input/output (I/O) operations and replicated write I/Os over distance. Most SCSI Fibre Channel Protocol (FCP) write I/O exchanges consist of two or more round trips between the host initiator and the target array or tape. Write Acceleration reduces the number of FCIP WAN round trips per SCSI FCP write I/O to one.
Write Acceleration is helpful in the following FCIP SAN extension scenarios:
• Distance and latency between data centers inhibits synchronous replication performance and impacts overall application performance.
• Upper layer protocol chattiness inhibits replication throughput, and the underlying FCIP and IP transport is not optimally utilized.
• Distance and latency severely reduces tape write performance during remote tape backup, because tapes typically allow only a single outstanding I/O. Write Acceleration can effectively double the supported distance or double the transfer rate in this scenario.
• Shared data clusters are stretched between data centers and one host must write to a remote storage array.
The performance improvement from Write Acceleration typically approaches 2 to 1, but depends upon the specific situation.
Write Acceleration increases replication or write I/O throughput and reduces I/O response time in most situations, particularly as the FCIP Round Trip Time (RTT) increases. Each FCIP link can be filled with a number of concurrent or outstanding I/Os. These I/Os can originate from a single replication source or a number of replication sources. The FCIP link is filled when the number of outstanding I/Os reaches a certain ceiling. The ceiling is mostly determined by the RTT, write size, and available FCIP bandwidth. If the maximum number of outstanding I/Os aggregated across all replication sessions (unidirectional) is less than this ceiling, then the FCIP link is underutilized and thus benefits from Write Acceleration.
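The ceiling described above is essentially a bandwidth-delay calculation. The following sketch estimates it under the assumption that an unaccelerated SCSI write costs two WAN round trips and an accelerated write costs one (per the description of Write Acceleration above); all figures are illustrative.

```python
# Sketch: estimate how many concurrent write I/Os are needed to fill an
# FCIP link (the "ceiling" described above), and the effect of Write
# Acceleration reducing the round trips per SCSI write from two to one.

def io_ceiling(bw_mbps: float, rtt_ms: float, write_kb: float,
               round_trips: int = 2) -> float:
    """Concurrent I/Os needed so the link never goes idle.
    Each I/O delivers write_kb but occupies round_trips * RTT."""
    io_time_s = round_trips * rtt_ms / 1000.0
    bytes_per_io = write_kb * 1024
    link_bytes_per_s = bw_mbps * 1_000_000 / 8
    return link_bytes_per_s * io_time_s / bytes_per_io

# DS3 link, 20 ms RTT, 32 KB writes:
print(io_ceiling(45, 20, 32, round_trips=2))  # without Write Acceleration (~6.9)
print(io_ceiling(45, 20, 32, round_trips=1))  # with Write Acceleration (~3.4)
```

Halving the round trips halves the number of outstanding I/Os required to fill the link, which is why replication sessions with few outstanding I/Os benefit most.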
Using FCIP Tape Acceleration
FCIP Tape Acceleration is a new feature introduced in SAN-OS 2.0 to improve remote tape backup performance by minimizing the effect of network latency or distance on remote tape applications. With FCIP Tape Acceleration, the local Cisco MDS 9000 IPS or MPS module proxies as a tape library. The remote MDS 9000, where the tape library is located, proxies as a backup server.
Similar to Write Acceleration, the MDS 9000 recognizes and proxies elements of the upper-level SCSI protocol to minimize the number of end-to-end round trips required to transfer a unit of data and to optimally use the available network bandwidth. FCIP Tape Acceleration achieves this by proxying both the SCSI Transfer Ready and Status responses (in contrast, Write Acceleration proxies the Transfer Ready only). Write Filemarks and other non-write operations are not proxied and are passed directly to the remote tape library. The Write Filemarks operation corresponds to a checkpoint within the tape backup application; the checkpoint interval is typically a tunable parameter but may default to 100 or 200 records, depending on the tape backup product.

FCIP Tape Acceleration maintains data integrity in the event of a variety of error conditions. Link errors and resets are handled through Fibre Channel tape Extended Link Services (ELS) recovery mechanisms. Should the remote tape unit signal an error for an I/O whose status has already been returned as “good,” a Deferred Error is signaled to the tape backup application. The backup application either corrects the error and replays the command, or rolls back to the previous filemark and replays all I/Os from that point.
You can enable FCIP Tape Acceleration on any FCIP interface on the Cisco IPS-4, IPS-8, and MPS-14/2 modules, or on the Gigabit Ethernet interfaces of the Cisco MDS 9216i.

FCIP

FCIP encapsulates Fibre Channel frames and transports them within TCP packets. The FCIP tunnel acts as an Inter-Switch Link (ISL) between two fabric switches, so endpoint devices detect each other just as they would across a standard ISL between two local switches. FCIP endpoints are associated with virtual E_Ports; these ports communicate with each other and exchange fabric information such as reconfigure fabric (RCF), Fabric Shortest Path First (FSPF), and build fabric (BF) frames. FCIP relies on TCP/IP to provide congestion control and orderly delivery of packets. Figure 4-3 shows the FCIP encapsulation process.
Figure 4-3 FCIP Encapsulation

[Figure: Encapsulation stack showing SCSI data carried in Fibre Channel frames, which FCIP encapsulates and transports within TCP/IP packets: IP | TCP | FCIP | FC | SCSI | Data.]

TCP Operations

TCP as implemented on traditional servers or hosts tends to overreact to packet drops. The throttling back that occurs in traditional TCP implementations is not acceptable for storage traffic. The TCP stack implemented for FCIP in the Cisco MDS 9000 is optimized for carrying storage traffic by reducing the probability of drops and increasing resilience to drops when they occur.
Fibre Channel traffic can be highly bursty, and traditional TCP can amplify that burstiness. With traditional TCP, the network must absorb these bursts through buffering in switches and routers. Packet drops occur when there is insufficient buffering at these intermediate points. To reduce the probability of drops, the FCIP TCP implementation reduces the burstiness of the TCP traffic that leaves the Gigabit Ethernet interface.
In the FCIP TCP stack, burstiness is limited through the use of variable rate, per-flow shaping, and by controlling the TCP congestion window size. After idle or partially idle periods, the FCIP interface does not send large packet bursts at Gigabit interface speeds. If not controlled, large Gigabit Ethernet bursts can overflow downstream routers or switches and speed mismatches can occur. For example, a Gigabit Ethernet feeding into a DS3 (45 Mbps) link through a router may overflow the router buffers unless the traffic is controlled or shaped in a way that the router can handle the transmission.
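The shaping behavior described above can be illustrated with a generic token bucket, the classic mechanism for limiting post-idle bursts. This models the idea only; it is not the Cisco MDS 9000 implementation.

```python
# Generic token-bucket shaper sketch, illustrating how per-flow shaping
# limits bursts after idle periods. Models the idea only; not the
# Cisco MDS 9000 implementation.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # refill rate in bytes/sec
        self.capacity = burst_bytes       # cap limits post-idle bursts
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def send(self, nbytes: int) -> None:
        """Block until nbytes may be transmitted at the shaped rate."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Shape to 45 Mb/s (DS3) with at most ~64 KB of burst: even after an
# idle period, no more than 64 KB leaves back-to-back at line rate, so
# a downstream DS3 router is not overrun by Gigabit Ethernet bursts.
shaper = TokenBucket(45_000_000, 64 * 1024)
for _ in range(10):
    shaper.send(32 * 1024)   # 32 KB chunks
```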

TCP Parameters

TCP parameters may require adjustments when implementing SAN extension that uses FCIP. This section provides general information and recommendations for key TCP parameters that require adjustments. The following parameters are considered:
• TCP window size
• TCP maximum bandwidth
• TCP minimum available bandwidth
• Round Trip Time (RTT)
TCP Window Size
TCP uses a sliding window to control the flow of data from end to end. The TCP maximum window size (MWS) is the maximum amount of data the sender allows to be outstanding without acknowledgment at one time. The minimum MWS is 14 KB; the maximum is 32 MB.
The sender can use a larger window size to allow more outstanding data and to make sure that the pipe remains full. However, sending too much data at once can overrun intermediate routers, switches, and end devices. The TCP congestion control manages changes to the window size.
You cannot configure the TCP window size directly. The value is calculated automatically as (maximum bandwidth × RTT × 0.9375) + 4 KB. In SAN-OS 1.3 and later, the RTT can dynamically adjust up to four times the value configured in the FCIP profile, according to network conditions, and the TCP sender changes the maximum window size accordingly.
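The calculation is easy to reproduce. The following sketch applies the formula above, with the 14 KB and 32 MB limits from the preceding section; the switch performs this computation internally, so the code is purely illustrative.

```python
# TCP maximum window size as described above:
# MWS = (max bandwidth x RTT x 0.9375) + 4 KB, clamped to 14 KB..32 MB.

def mws_bytes(max_bw_mbps: float, rtt_ms: float) -> float:
    bdp = (max_bw_mbps * 1_000_000 / 8) * (rtt_ms / 1000.0)  # bytes in flight
    window = bdp * 0.9375 + 4 * 1024
    return min(max(window, 14 * 1024), 32 * 1024 * 1024)

# DS3 link with 20 ms RTT:
print(mws_bytes(45.0, 20.0))   # ~109,565 bytes (about 107 KB)
```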
TCP Maximum Bandwidth
The TCP maximum bandwidth is the maximum amount of bandwidth an FCIP link consumes from the point of view of the TCP sender. The maximum bandwidth settings for an FCIP link can be asymmetric. Set the TCP maximum bandwidth to the maximum amount of bandwidth you want the FCIP link to consume. Set it no higher than the bandwidth of the slowest link in the FCIP link path. For example, if the FCIP link is mapped over a dedicated DS3 WAN link, set the maximum bandwidth to 45 Mbps.
The TCP maximum bandwidth value is used as the bandwidth value in the bandwidth-delay product calculation of the TCP MWS.
Observe the following guidelines when selecting a value for TCP maximum bandwidth:
• Set the TCP maximum bandwidth value no higher than the maximum path bandwidth available to the FCIP link.
• If deploying FCIP over a shared link with critical traffic, lower the maximum bandwidth to a level that allows the other traffic to coexist with minimal retransmissions. Quality of service (QoS) should be considered in these situations.
• When using Cisco MDS 9000 software compression, set the maximum bandwidth value as though compression is disabled. The Cisco MDS 9000 uses a dynamic moving average feedback mechanism to adjust the TCP window size according to the compression rate.
TCP Minimum Available Bandwidth
This value should represent the minimum amount of bandwidth in the FCIP path that you expect to be always available. It determines the aggressiveness of FCIP: a higher value is more aggressive, a lower value less aggressive. A value that is too high can cause congestion and packet drops for any traffic traversing the shared network links.
Bandwidth allocation strongly favors the FCIP traffic when mixed with conventional TCP traffic, which recovers from drops more slowly. To cause FCIP to behave more fairly, use a lower value for the min-available-bw parameter. FCIP starts at a lower rate and increments the send rate every RTT, just like classic TCP slow-start.
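The ramping behavior can be sketched as follows. The doubling per RTT is an assumption used here for illustration; the text above states only that FCIP starts at a lower rate and increments the send rate every RTT, slow-start style.

```python
# Sketch of the ramp described above: FCIP starts sending at
# min-available-bw and increases every RTT until it reaches
# max-bandwidth. Doubling per RTT is an illustrative assumption.

def ramp_mbps(min_bw: float, max_bw: float, rtts: int):
    rate, rates = min_bw, []
    for _ in range(rtts):
        rates.append(rate)
        rate = min(rate * 2, max_bw)   # assumed doubling per RTT
    return rates

# min-available-bw 10 Mb/s, max-bandwidth 45 Mb/s:
print(ramp_mbps(10, 45, 5))   # [10, 20, 40, 45, 45]
```

A lower min-available-bw value therefore starts the ramp lower, giving competing TCP traffic more room to recover after drops.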