Information furnished in this manual is believed to be accurate and reliable. However, QLogic Corporation assumes no
responsibility for its use, nor for any infringements of patents or other rights of third parties which may result from its
use. QLogic Corporation reserves the right to change product specifications at any time without notice. Applications
described in this document for any of these products are for illustrative purposes only. QLogic Corporation makes no
representation or warranty that such applications are suitable for the specified use without further testing or
modification. QLogic Corporation assumes no responsibility for any errors that may appear in this document.
This document describes how to install and certify a pilot unified fabric
configuration. This configuration demonstrates lossless Ethernet and data center
bridging (DCB), which includes priority flow control (PFC), enhanced transmission
selection (ETS), and data center bridging exchange protocol (DCBX) for a Fibre
Channel and 10Gb Ethernet unified fabric.
Intended Audience
This guide is for system engineers and planners who want to provide converged
networking products, solutions, and services to their customers. It is also intended
for network planners and administrators who are implementing a converged
network for their company. This guide describes how to install and validate a pilot
converged network in preparation for production deployment.
This guide assumes basic knowledge of Enhanced Ethernet and the associated
standards. If you are not familiar with Fibre Channel over Ethernet (FCoE) and
Enhanced Ethernet, review the documents listed in “Related Materials” on
page vii.
Related Materials
In addition to this guide, the FCoE Design Guide, developed jointly by Cisco and
QLogic (Cisco document number C11-569320-01), is a valuable resource and is
referenced throughout this guide.
The following links provide more detailed information, and connect to the IEEE
documents that define the Enhanced Ethernet functions:
Documentation Conventions

This guide uses the following documentation conventions:

NOTE: provides additional information.

CAUTION! indicates the presence of a hazard that has the potential of causing damage to data or equipment.

WARNING!! indicates the presence of a hazard that has the potential of causing personal injury.

Text in blue font indicates a hyperlink (jump) to a figure, table, or section in this guide, and links to Web sites are shown in underlined blue. For example:

Table 9-2 lists problems related to the user interface and remote agent.

See "Installation Checklist" on page 3-6.

For more information, visit www.qlogic.com.

Text in bold font indicates user interface elements such as menu items, buttons, check boxes, or column headings. For example:

Click the Start button, point to Programs, point to Accessories, and then click Command Prompt.

Under Notification Options, select the Warning Alarms check box.

Text in Courier font indicates a file name, directory path, or command line text. For example:

To return to the root directory from anywhere in the file structure, type cd /root and press ENTER.

Enter the following command: sh ./install.bin

Key names and key strokes are indicated with UPPERCASE:

Press CTRL+P.

Press the UP ARROW key.
Text in italics indicates terms, emphasis, variables, or document titles. For
example:
For a complete listing of license agreements, refer to the QLogic
Software End User License Agreement.
What are shortcut keys?
To enter the date, type mm/dd/yyyy (where mm is the month, dd is the
day, and yyyy is the year).
Topic titles between quotation marks identify related topics either within this
manual or in the online help, which is also referred to as the help system
throughout this document.
1 Overview
Martin (2010) defines a converged network as follows: "A unified data center
fabric is a networking fabric that combines traditional LAN and storage area
network (SAN) traffic on the same physical network with the aim of reducing
architecture complexity and enhancing data flow and access. To make this work,
the traditional Ethernet network must be upgraded to become lossless and
provide additional data center networking features and functions. In turn, the
storage protocol must be altered to run on Ethernet." Lossless means that no
Fibre Channel packets are dropped.
Deploying FCoE over a unified fabric reduces data center costs by converging
data and storage networking. Standard TCP/IP and Fibre Channel traffic share the
same high-speed 10Gbps Ethernet wire, resulting in cost savings through
reduced adapter, switch, cabling, power, cooling, and management requirements.
FCoE has rapidly gained market acceptance because it delivers excellent
performance, reduces data center total cost of ownership (TCO), and protects
current data center investments. A unified fabric with FCoE preserves existing
investments in Fibre Channel and Ethernet while providing Enhanced Ethernet for
unified data networking.
To provide some assurance that FCoE can be deployed in a data center without
disrupting operations, Cisco®, JDSU™ (formerly Finisar®), and QLogic have
collaborated to produce this guide to simplify the installation and certification of an
FCoE pilot. The guide provides system engineers, architects, and end users with
a step-by-step method to implement and validate a unified fabric and measure
performance of a pilot operation. This guide does not provide methods to measure
performance under load or to contrast performance between various protocols,
media types, or file systems. This guide is intended to assist in implementing an
FCoE and unified fabric pilot using current storage and protocols. It is also
intended to assist system engineers in implementing an FCoE and unified fabric
pilot for their customers.
2 Planning
Selecting a Test Architecture
When planning the pilot of a unified network, it is important to choose both Fibre
Channel and traditional Ethernet-based traffic flows. Combining a test SAN
infrastructure and a test LAN infrastructure is often the easiest and most available
option for a pilot project. Alternatively, a critical business application test system
can closely simulate a production environment. The architecture you choose must
demonstrate that a unified network improves efficiency and performance in your
environment. The reference architecture described in this guide was assembled
from equipment that was available in the QLogic NETtrack Developer Center. You
will need to substitute your own equipment, and modify the installation and
validation process accordingly.
A critical factor for successfully implementing a unified data center fabric is the
stability of network and storage management practices. Cooperation between the
network and storage management teams is important as they configure the
converged data center fabric.
Where and How to Deploy
The unified fabric has two components:
10Gb Ethernet switches that support DCB and FCoE—These switches
support the connection of traditional Ethernet and Fibre Channel
infrastructures. These switches are known as top of rack (TOR) switches,
implementing DCB and encapsulating Fibre Channel frames into Ethernet
frames for transport over 10Gb Ethernet media.
10Gb Converged Network Adapters that support both Ethernet LAN and
Fibre Channel SAN over 10Gb Ethernet media—These adapters replace the
NIC and Fibre Channel host bus adapter, and connect to a DCB-enabled
10Gb Ethernet switch.
Currently, a Converged Network Adapter must always be connected to a switch
that has DCB. There are two types of switches that have DCB: a DCB switch and
an FCoE switch. The DCB switch has enhanced Ethernet support, but does not
have Fibre Channel forwarder (FCF) capabilities, and does not support the
conversion of Fibre Channel frames to FCoE frames. A DCB switch supports
converged Ethernet-based protocols, but does not support Fibre Channel
protocols. The DCB switch requires an external device to manage Fibre Channel
and FCoE functions. An FCoE switch supports both DCB and Fibre Channel.
There are three ways to connect Fibre Channel storage to a unified fabric:
The adapter connects to the FCoE switch with Ethernet infrastructure, and
the FCoE switch connects to storage through a Fibre Channel switch
(Martin, 2010). This is the most common implementation in today's data
centers because the Fibre Channel switch and SAN storage are typically
already in place.
The DCB switch requires an external device to provide FCF function to the
attached Fibre Channel storage. This approach is not as common because
most data centers do not have an FCF device, and will acquire an FCoE
switch to connect to their Fibre Channel Infrastructure.
This implementation is not common because most data centers use Fibre
Channel SAN storage. As more storage vendors deliver FCoE storage, more
pilot projects will support direct Ethernet connection from the FCoE switch to
FCoE-capable storage controllers (Martin, 2010).
In all cases, Ethernet LAN and iSCSI storage connect directly to Ethernet ports on
the DCB or FCoE switch.
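To make the FCF role of an FCoE switch concrete, the following NX-OS-style fragment is a minimal sketch of how such a switch maps an FCoE VLAN to a VSAN and binds a virtual Fibre Channel (vFC) interface to the Ethernet port facing a Converged Network Adapter. The VLAN, VSAN, and interface numbers are placeholders, and exact syntax varies by switch model and firmware release; this is an illustration, not a configuration procedure.

```
feature fcoe                          ! enable the Fibre Channel forwarder (FCF) function
vlan 1002
  fcoe vsan 1002                      ! carry VSAN 1002 traffic inside FCoE VLAN 1002
interface Ethernet1/1                 ! 10GbE port facing the Converged Network Adapter
  switchport mode trunk
  switchport trunk allowed vlan 1,1002
interface vfc11                       ! virtual Fibre Channel interface for the adapter
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 1002 interface vfc11           ! place the vFC interface in the VSAN
```

A DCB-only switch, by contrast, has no equivalent of the vFC interface; the Fibre Channel login and forwarding functions must live in an external FCF device.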
The reference architecture, described in Section 3, uses direct-attached native
FCoE storage and Fibre Channel switch-connected SAN storage. Section 4
describes the implementation of the reference architecture and Section 5
describes the validation.
3 Architecture
Approach
The test configuration described in this section was installed and validated at the
QLogic NETtrack Developer Center (NDC) located in Shakopee, MN. The NDC
provides QLogic customers and alliance partners with the test tracks and
engineering staff to test interoperability and optimize performance with the latest
server, networking, and storage technology.
Process Summary
To establish a repeatable process, the team created a converged network in the
NDC and installed a validation environment based on JDSU hardware and
software. Screen shots and trace data were captured to show the results from the
seven-step validation process, and to demonstrate the various DCB functions.
Reference Architecture Description
Architecture Overview
Figure 3-1 illustrates the unified Ethernet infrastructure enhanced with DCB.
FCoE and iSCSI storage traffic was tested and validated with LAN traffic, which
shared the unified 10GbE bandwidth driven by Converged Network Adapters.
JDSU testing tools were installed in the fabric, either in-line or at the edge, with
hardware and software to test and certify system-level behavior. These tools also
provided expert-system support that simplified troubleshooting and reduced
installation time.
Figure 3-1. Reference Architecture Diagram
Equipment Details
Table 3-1 lists the reference architecture equipment. Table 3-2 lists the JDSU
Xgig equipment and testing software.
Table 3-1. Converged Network Equipment Inventory

Quantity  Product                                                       Model Number
3         Dell® PowerEdge® servers (a)                                  1950
4         QLogic QLE8142 adapters (server connectivity)                 QLE8142
1         Cisco UCS general-purpose rack-mount server (Windows® 2008)   UCS C210 M1
1         Cisco UCS high-density rack-mount server (Windows 2008)       UCS C200 M1
1         NetApp® FAS3040 storage system                                FAS3040
          (10Gb iSCSI storage and 10Gb native FCoE storage)
1         HDS® AMS500 Storage Array (4Gb Fibre Channel storage)         AMS500
1         QLogic 5800V 8Gb Fibre Channel switch                         SB5802V
          (Fibre Channel connection)
1         Cisco Nexus™ 5020 FCoE switch (unified fabric interconnect)   Nexus 5020

(a) Two Windows 2008, one VMware® ESX™ with one Windows 2008 and Red Hat®
Enterprise Linux guest with Microsoft® iSCSI initiator. All servers use the QLogic
QLE8142 adapter.

Table 3-2. JDSU Xgig Equipment and Testing Software

Product                                                        Model/Part Number
Xgig four-slot chassis                                         Xgig-C042
10GbE/10G Fibre Channel multi-functional blade                 Xgig-B2100C
Four-port, 1, 2, 4, 8Gbps Fibre Channel Analyzer blade         Xgig-B480FA
Two-port, Xgig 10GbE Fibre Channel Analyzer function key       Xgig-2FG10G1-SW
Two-port, 10GE Analyzer and Jammer function key                Xgig-S20JFE
Two-port, 10GbE FCoE Analyzer and Load Tester function key     Xgig-S20LEA
Four-port, 1, 2, 4, 8Gbps Fibre Channel Analyzer function key  Xgig-S48AFA
4 Installation
This section describes how to set up an FCoE environment. It assumes a general
understanding of SAN administration concepts.
Determine Configuration

QLogic FCoE adapters are supported on multiple hardware platforms and
operating systems. Generally, the following specifications apply, but you should
always check the QLogic Web site for current information. This configuration uses
a subset of the following equipment:

Operating systems—Windows Server® 2003, 2008, 2008 R2 (targeted); Red Hat
EL AP 5.x; Novell® SLES 10.x, 11; VMware ESX/ESXi 3.5 and 4.0; Solaris® 10,
OpenSolaris™. This list can be found under the specifications tab for QLogic
adapters on the QLogic Web site.

Storage—The following storage systems are in most data centers:
    Fibre Channel
    iSCSI
    FCoE (NetApp)

Switches—The following switches are typical in this configuration:
    Fibre Channel
    FCoE
    Ethernet

Cabling:
    Fiber optic cable (OM2/OM3) between servers, switches, and storage
    Cat5e and Cat6 Ethernet for device management and 1GbE iSCSI storage
Equipment Installation and Configuration
This section focuses on the converged network installation and configuration. You
do not have to change your current storage and network management practices.
Install the Converged Network Adapter Hardware
Begin by identifying a pilot server that meets Converged Network Adapter
hardware requirements (PCI slot type, length, available slot) and install the
adapters.
To install the adapter hardware:
1. Use a ground strap to avoid damaging the card or server.
2. Power off the computer and disconnect the power cable.
3. Remove the computer cover, and find an empty PCIe x8 bus slot (Gen1) or
   PCIe x4 bus slot (Gen2).
4. Pull out the slot cover (if any) by removing the screw or releasing the lever.
5. Install the low-profile bracket, if required.
6. Grasp the adapter by the top edge, and then insert it firmly into the
   appropriate slot.
7. Refasten the adapter's retaining bracket using the existing screw or lever.
8. Close the computer cover.
9. Plug the appropriate Ethernet cable (either copper or optical) into the
   adapter.
Optical models ship with optical transceivers installed. QLogic 8100
Series adapters used for this project operate only with optical
transceivers sold by QLogic.
The list of approved copper cables is available at the following link:
2. In the table at the bottom of the page (Figure 4-2), select Converged
   Network Adapters, the adapter model, your operating system, and then
   click Go.

Figure 4-2. Driver Download Page Driver Link

3. On the download page under Management Tools (Figure 4-3), select
   SANsurfer FC HBA Manager and download it to your system.

Figure 4-3. Driver Download Page Driver and Documentation

4. Follow the included instructions for installing the downloaded software.
Cabling
To connect the Fibre Channel and Ethernet cables:
1. Connect the Fibre Channel cables from the servers to the Cisco FCoE
   Nexus switch.
2. Connect the Fibre Channel cables from the storage to the Cisco FCoE
   Nexus switch.
3. Connect any necessary Ethernet cables for device management and iSCSI
   storage.
Fibre Channel Switches
If you are connecting Fibre Channel devices, such as storage, through a Fibre
Channel switch, then you must connect the Fibre Channel switch to a Fibre
Channel port on the FCoE switch.
In addition, set up a zoning configuration so that the servers can discover the disk
LUNs you are mapping. Refer to the Fibre Channel switch documentation for
zoning instructions.
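As a purely hypothetical illustration of single-initiator zoning, the fragment below uses Cisco NX-OS/MDS-style syntax; the zone names and port world wide names (pWWNs) are placeholders, and the QLogic 5800V uses its own CLI and GUI, so follow your switch documentation for the actual commands.

```
zone name pilot_srv1_hds vsan 1002
  member pwwn 21:00:00:c0:dd:00:00:01   ! adapter (initiator) port, placeholder
  member pwwn 50:06:0e:80:00:00:00:01   ! storage (target) port, placeholder
zoneset name pilot_zs vsan 1002
  member pilot_srv1_hds
zoneset activate name pilot_zs vsan 1002
```

Whatever the vendor syntax, the goal is the same: each server initiator is zoned only with the storage target ports whose LUNs it should discover.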
FCoE Switches
QLogic and Cisco have jointly developed the QLogic and Cisco FCoE Design
Guide for implementing a unified data center using Cisco Nexus 5000 Series
switches and QLogic second-generation Converged Network Adapters. Refer to
the design guide for detailed instructions on how to implement an FCoE network
and configure the Cisco Nexus switch and QLogic adapters (Cisco and QLogic,
2010). The design guide also describes how to configure N_Port ID Virtualization
(NPIV) to resolve fabric expansion concerns related to domain IDs. The QLogic and Cisco FCoE Design Guide does not describe the configuration of the PFC
and ETS DCB parameters, which will be required for the tests described in this
document.
For information about configuring DCB on a Cisco Nexus 5000 Series switch, see
“Configuring DCB on a Nexus Switch” on page 4-6; some variables may need to
be adjusted for your configuration.
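As one example of what such DCB tuning looks like, the following sketch assigns ETS bandwidth shares on a Nexus 5000 series switch with a queuing policy. The policy name and percentages here are illustrative only; class-fcoe is the switch's predefined FCoE traffic class (which receives lossless PFC treatment), and exact syntax varies by NX-OS release.

```
policy-map type queuing pilot-ets-out
  class type queuing class-fcoe
    bandwidth percent 50          ! guarantee half the 10GbE link to FCoE
  class type queuing class-default
    bandwidth percent 50          ! remaining bandwidth for LAN and iSCSI traffic
system qos
  service-policy type queuing output pilot-ets-out
```

Under congestion, ETS enforces these shares per traffic class while PFC pauses only the lossless FCoE class, which is the behavior the validation tests in this document are designed to observe.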