The Cisco Nexus® B22 Blade Fabric Extender for IBM® extends the Cisco Nexus switch fabric to the server edge.
Logically, it behaves like a remote line card to a parent Cisco Nexus 5000 or 6000 Series Switch. The fabric
extender and the parent Cisco Nexus 5000 or 6000 Series Switch together form a distributed modular system.
The Cisco Nexus B22 for IBM forwards all traffic to the parent Cisco Nexus 5000 or 6000 Series Switch over eight
10 Gigabit Ethernet uplinks. Low-cost uplink connections of up to 10 meters can be made with copper Twinax
cable, and longer connections of up to 100 meters can use the Cisco® 10-Gbps Fabric Extender Transceiver (FET-10G). Standard 10-Gbps optics such as short reach (SR), long reach (LR), and extended reach (ER) are also
supported. Downlinks to each server are 10 Gigabit Ethernet and work with all Ethernet and converged network
adapter (CNA) mezzanine cards, allowing customers a choice of Ethernet, Fibre Channel over Ethernet (FCoE),
or Small Computer System Interface over IP (iSCSI) connections. Because the Cisco Nexus B22 for IBM is a
transparent extension of a Cisco Nexus switch, traffic can be switched according to policies established by the
Cisco Nexus switch using a single point of management.
The Cisco Nexus B22 for IBM provides the following benefits:
Highly scalable, consistent server access: This distributed modular system creates a scalable server access
environment with no reliance on Spanning Tree Protocol and with consistent features and architecture between
blade and rack servers.
Simplified operations: The availability of a single point of management and policy enforcement using upstream
Cisco Nexus 5000 Series Switches eases the commissioning and decommissioning of blades through zero-touch
installation and automatic configuration of fabric extenders.
Increased business benefits: Consolidation, reduced cabling, investment protection through feature inheritance
from the parent switch, and the capability to add functions without the need for a major equipment upgrade of
server-attached infrastructure all contribute to reduced operating expenses (OpEx) and capital expenditures
(CapEx).
The Cisco Nexus B22 for IBM integrates into the I/O module slot of a third-party blade chassis, drawing both
power and cooling from the blade chassis itself.
Network Diagram
Figure 1 presents a sample network topology that can be built using the Cisco Nexus B22 for IBM, 2000 Series
Fabric Extenders, and 5000 or 6000 Series Switches. In this topology, the Cisco Nexus 5000 or 6000 Series
Switch serves as the parent switch, performing all packet switching and policy enforcement for the entire
distributed modular system. The Cisco Nexus switch also serves as the only point of management for both
configuration and monitoring within the domain, making it simple to manage blade server and rack server
connections together.
The Cisco Nexus switches, along with the Cisco Nexus 2000 Series and B22 for IBM, create a distributed
modular system that unifies the data center architecture. Within this distributed modular system, both IBM Flex
System® computing nodes and rack servers are managed identically. This approach allows the use of the same
business and technical processes and procedures.
The left-most blade chassis in Figure 1 contains dual Cisco Nexus B22 for IBM fabric extenders. Each Cisco
Nexus B22 for IBM is singly attached to a parent Cisco Nexus 5500 platform switch, a connection mode referred
to as straight-through mode. The fabric links can be either statically pinned or put into a Port Channel. This
connection mode helps ensure that all data packets from a particular Cisco Nexus B22 for IBM enter the same
parent Cisco Nexus switch. This approach may be necessary when certain types of traffic must be restricted to
either the left or right Cisco Nexus 5500 platform switch: for instance, to maintain SAN A and SAN B separation.
Also, in this example the connections to individual computing nodes are in active-standby mode, which helps
ensure traffic flow consistency but does not make full use of the server network interface card (NIC) bandwidth.
The second IBM Flex System chassis from the left in Figure 1 improves on the first with the creation of an
Ethernet virtual Port Channel (vPC) from the computing node to the Cisco Nexus parent switch. This vPC places
the Ethernet portion of the NICs in an active-active configuration, giving increased bandwidth to each host. The
FCoE portion of the CNA is also configured as active-active but maintains SAN A and SAN B separation because
each virtual Fibre Channel (vFC) interface is bound to a particular link at the server. This configuration also
achieves high availability through redundancy, and it can withstand a failure of a Cisco Nexus 5500 platform
switch, a Cisco Nexus B22 for IBM, or any connecting cable. This topology is widely used in FCoE deployments.
The third blade chassis from the left in Figure 1 contains Cisco Nexus B22 for IBM fabric extenders that connect
to both Cisco Nexus 5500 platform switches through vPC for redundancy. In this configuration, active-active load
balancing using vPC from the blade server to the Cisco Nexus 5500 platform switch cannot be enabled. However,
the servers can still be dual-homed with active-standby or active-active transmit-load-balancing (TLB) teaming.
This topology is only for Ethernet traffic because SAN A and SAN B separation between the fabric extender and
the parent switch is necessary.
The fourth blade chassis from the left in Figure 1 contains Cisco Nexus B22 for IBM fabric extenders that connect
to both Cisco Nexus 5500 platform switches with enhanced vPC (EvPC) technology. This configuration allows
active-active load balancing from the fabric extenders and the computing nodes.
The last two configurations show how rack-mount servers can connect to the same Cisco Nexus parent switch
using rack-mount Cisco Nexus 2000 Series Fabric Extenders. The topology for blade servers and rack-mount
servers can be identical if desired.
Hardware Installation
Installation of the Cisco Nexus B22 for IBM in the rear of the blade server chassis is similar to the installation of
other connection blades. The layout of the blade server chassis, as well as the server types and mezzanine cards
used, determines the slots that should be populated with the Cisco Nexus B22 for IBM for 10 Gigabit Ethernet
connectivity. Tables 1 through 3 summarize the typical options for servers using dual-port and quad-port 10 Gigabit Ethernet
devices.
Table 1 Mapping of Third-Party Half-Wide Server Dual-Port Mezzanine Card to I/O Module

Card                                                      Connection Blades
LAN on motherboard (LoM) plus mezzanine card in slot 1    I/O module bays 1 and 2
Mezzanine card in slot 2                                  I/O module bays 3 and 4
Table 2 Mapping of Third-Party Half-Wide Server Quad-Port Mezzanine Card to I/O Module

Card                       Connection Blades
Mezzanine 1 ports 1 to 4   I/O module bays 1 and 2
Mezzanine 2 ports 1 to 4   I/O module bays 3 and 4
Table 3 Mapping of Third-Party Full-Wide Server Quad-Port Mezzanine Card to I/O Module

Card                       Connection Blades
Mezzanine 1 ports 1 to 4   I/O module bays 1 and 2
Mezzanine 2 ports 1 to 4   I/O module bays 3 and 4
Mezzanine 3 ports 1 to 4   I/O module bays 1 and 2
Mezzanine 4 ports 1 to 4   I/O module bays 3 and 4
After the Cisco Nexus B22 for IBM fabric extenders are installed, the chassis management module (CMM) should
be updated to at least the minimum version shown in Table 4.

Table 4 Minimum Blade Chassis Firmware

Blade Chassis                          Server Manager Firmware
IBM PureFlex™ System Model 8721HC1     DSA: 9.41, IMM2: 2.6, UEFI: 1.31, and CMM: 2PET12E

No configuration is required from the CMM. Only the minimum CMM firmware is required to properly
detect and enable the Cisco Nexus B22 for IBM in the blade chassis (Figure 2).
Figure 2: Cisco Nexus B22 for IBM Fabric Extenders as Seen in the CMM
Fabric Extender Management Model
The Cisco Nexus fabric extenders are managed by a parent switch through the fabric interfaces using a zero-touch configuration model. The switch discovers the fabric extender by using a detection protocol.
After discovery, if the fabric extender has been correctly associated with the parent switch, the following
operations are performed:
1. The switch checks the software image compatibility and upgrades the fabric extender if necessary.
2. The switch and fabric extender establish in-band IP connectivity with each other. The switch assigns an IP
address in the range of loopback addresses (127.15.1.0/24) to the fabric extender to avoid conflicts with IP
addresses that might be in use on the network.
3. The switch pushes the configuration data to the fabric extender. The fabric extender does not store any
configuration locally.
4. The fabric extender updates the switch with its operating status. All fabric extender information is displayed
using the switch commands for monitoring and troubleshooting.
This management model allows fabric extender modules to be added without adding management points or
complexity. Software image and configuration management is also handled automatically, without the need for
user intervention.
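On the parent switch, this model requires only that the fabric extender feature be enabled before the fabric links are configured; discovery, software image management, and configuration push then happen automatically. A minimal sketch is shown below, using the switch host names and show commands that appear later in this guide; command output is omitted.
N5548-Bottom(config)# feature fex
N5548-Bottom(config)# exit
N5548-Bottom# show fex
N5548-Bottom# show fex detail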
The Cisco Nexus B22 for IBM creates a distributed, modular chassis with the Cisco Nexus parent switch after a
fabric connection has been made over standard 10-Gbps cabling. This connection can be accomplished using
any of the following types of interconnects:
Cisco passive direct-attach cables (1m, 3m, or 5m)
Cisco active direct-attach cables (7m or 10m)
Cisco standard Enhanced Small Form-Factor Pluggable (SFP+) optics (SR, LR, and ER)
Cisco Fabric Extender Transceivers (FET modules)
After the fabric links have been physically established, the logical configuration of the links must be established.
The fabric links to the Cisco Nexus B22 for IBM can use either of two connection methods:
Static pinning is the default method of connection between the fabric extender and the Cisco Nexus parent switch.
In this mode of operation, a deterministic relationship exists between the host interfaces and the upstream parent;
up to eight fabric interfaces can be connected. The 16 server-side host ports are divided equally among these
fabric interfaces. If fewer fabric ports are allocated, more server ports are assigned to a single fabric link. The
advantage of this configuration is that the traffic path and the amount of allocated bandwidth are always known for
a particular set of servers.
Because static pinning groups host-side ports onto individual fabric links, you should understand how ports are
grouped. The size of the port groups is determined by the number of host ports divided by the max link
parameter value. For example, if the max link parameter is set to 2, eight host ports would be assigned to each
link. The interfaces will be grouped in ascending order starting from interface 1. Thus, interfaces 1 to 8 will be
pinned to one fabric link, and interfaces 9 to 16 will be pinned to a different interface (Table 5).
Table 5 Interface Assignment with Two Fabric Links
The relationship of host-side ports to parent switch fabric ports is static. If a fabric interface fails, all its associated
host interfaces are brought down and will remain down until the fabric interface is restored. Figure 3 shows static
port mappings.
Figure 3: Static Port Mapping Based on Max Link Parameter
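For reference, a minimal static pinning configuration on the parent switch resembles the following sketch. The FEX ID 191 matches the interface numbering used elsewhere in this guide; the two fabric interfaces (Ethernet 1/1 and 1/2) and the description string are illustrative and should be adjusted to match the actual cabling. Note that changing the pinning value on an operational fabric extender disrupts traffic on its host interfaces.
N5548-Bottom(config)# fex 191
N5548-Bottom(config-fex)# pinning max-links 2
N5548-Bottom(config-fex)# description B22-Chassis1-BayA
N5548-Bottom(config-fex)# exit
N5548-Bottom(config)# interface ethernet 1/1-2
N5548-Bottom(config-if-range)# switchport mode fex-fabric
N5548-Bottom(config-if-range)# fex associate 191
N5548-Bottom(config-if-range)# no shutdown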
Port Channel Fabric Interface Connection
The Port Channel fabric interface provides an alternative way of connecting the parent switch and the Cisco
Nexus B22 for IBM fabric extender. In this mode of operation, the physical fabric links are bundled into a single
logical channel. This approach prevents a single fabric interconnect link loss from disrupting traffic to any one
server. The total bandwidth of the logical channel is shared by all the servers, and traffic is spread across the
members through the use of a hash algorithm.
For a Layer 2 frame, the switch uses the source and destination MAC addresses.
For a Layer 3 frame, the switch uses the source and destination MAC addresses and the source and
destination IP addresses.
Since both redundancy and increased bandwidth are possible, configuration of the fabric links on a Port Channel
is the most popular connection option.
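A minimal sketch of a fabric Port Channel between the parent switch and the fabric extender is shown below. As before, FEX ID 191 follows the examples in this guide, while the Port Channel number and the member interfaces (Ethernet 1/1 through 1/4) are illustrative.
N5548-Bottom(config)# interface port-channel 191
N5548-Bottom(config-if)# switchport mode fex-fabric
N5548-Bottom(config-if)# fex associate 191
N5548-Bottom(config-if)# exit
N5548-Bottom(config)# interface ethernet 1/1-4
N5548-Bottom(config-if-range)# switchport mode fex-fabric
N5548-Bottom(config-if-range)# fex associate 191
N5548-Bottom(config-if-range)# channel-group 191
N5548-Bottom(config-if-range)# no shutdown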
A pair of fabric extenders is now configured in straight-through mode, also known as a single-attached
configuration, and each is communicating with its respective Cisco Nexus switch. The links between the two
Cisco Nexus switches and the Cisco Nexus B22 fabric extenders use Port Channels for connectivity.
Virtual Port Channel Connection
vPCs allow links that are physically connected to two different Cisco Nexus switches to form a Port Channel to a
downstream device. The downstream device can be a switch, a server, or any other networking device that
supports IEEE 802.3ad Port Channels. vPC technology enables networks to be designed with multiple links for
redundancy while also allowing those links to connect to different endpoints for added resiliency (Figure 5).
More information about vPC technology can be found on Cisco.com.
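A minimal sketch of the vPC domain configuration on one of the two Cisco Nexus switches is shown below; the domain ID, peer-keepalive addresses, peer-link Port Channel number, and member interfaces are all illustrative and must match your environment. The peer switch is configured identically, with the keepalive source and destination addresses reversed.
N5548-Bottom(config)# feature vpc
N5548-Bottom(config)# feature lacp
N5548-Bottom(config)# vpc domain 10
N5548-Bottom(config-vpc-domain)# peer-keepalive destination 10.1.1.2 source 10.1.1.1
N5548-Bottom(config-vpc-domain)# exit
N5548-Bottom(config)# interface ethernet 1/31-32
N5548-Bottom(config-if-range)# channel-group 1 mode active
N5548-Bottom(config-if-range)# exit
N5548-Bottom(config)# interface port-channel 1
N5548-Bottom(config-if)# switchport mode trunk
N5548-Bottom(config-if)# vpc peer-link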
Now the two switches have been configured to support vPC links to other devices. These connections can be
used for upstream links to the data center core. These vPC links can be used for connections to hosts in the data
center, allowing additional bandwidth and redundant links.
Server Network Teaming
Server NIC teaming provides an additional layer of redundancy to servers by making multiple links available. In
the blade server environment, server network teaming typically is limited to
active-standby configurations and cannot provide active-active links, because active-active links require an
EtherChannel or Link Aggregation Control Protocol (LACP) connection to a single switch. However, because the
Cisco Nexus B22 for IBM fabric extender is an extension of the parent switch, EtherChannel or LACP connections
can be created between the blade server and the virtual chassis. Dual Cisco Nexus switches can be used with
vPC for additional switch redundancy while providing active-active links to servers, thus enabling aggregate
40-Gbps bandwidth with dual links (Figure 6).
To verify that the vPC is formed, go to one of the Cisco Nexus switches to check the status of the server Port
Channel interface. The pair of Cisco Nexus switches is in a vPC configuration, so each has a single port in the
Port Channel. A check of the status of the Port Channel on each parent switch shows that channel group 201 is in
the “P - Up in port-channel” state on each switch. A check from the OneCommand utility will show that the status
is “Active” for each link that is up in the Port Channel.
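For reference, the host-facing configuration that produces this result resembles the sketch below and is applied on both parent switches (with the host interface adjusted to that switch's fabric extender). Port Channel 201 and host interface Ethernet 191/1/1 follow the examples in this guide; the allowed VLAN list is illustrative.
N5548-Bottom(config)# interface ethernet 191/1/1
N5548-Bottom(config-if)# channel-group 201 mode active
N5548-Bottom(config-if)# exit
N5548-Bottom(config)# interface port-channel 201
N5548-Bottom(config-if)# switchport mode trunk
N5548-Bottom(config-if)# switchport trunk allowed vlan 1,100
N5548-Bottom(config-if)# vpc 201
N5548-Bottom(config-if)# no shutdown
N5548-Bottom# show port-channel summary
N5548-Bottom# show vpc brief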
Fibre Channel over Ethernet
FCoE combines LAN and storage traffic on a single link, eliminating the need for dedicated adapters, cables, and
devices for each type of network, resulting in savings that can extend the life of the data center. The Cisco Nexus
B22 for IBM is the building block that enables FCoE traffic to travel outside the blade chassis.
Best practices for unified fabric are listed in the Cisco NX-OS operations guide for the Cisco Nexus 5000 Series
(see the For More Information section). The high-level steps for configuring FCoE are as follows:
1. Enable FCoE on the CNA.
2. Verify and, if necessary, install the FCoE drivers in the server OS.
3. Enable FCoE on the parent switches.
4. Configure quality of service (QoS) to support FCoE on the Cisco Nexus parent switch.
5. Enable the FCoE feature on the Cisco Nexus switch.
6. Create the SAN A and SAN B VLANs.
7. Create vFC interfaces.
1. Enable FCoE on the CNA.
The CNA personality should be set to FCoE according to the CNA documentation.
2. Verify and, if necessary, install the FCoE drivers in the server OS.
Verify that the latest FCoE drivers and firmware are loaded for the operating system. The latest versions can be
obtained from the third-party support website. The FCoE drivers are separate from the Ethernet NIC drivers.
Generally, the latest versions of the CNA drivers and the CNA firmware should be used.
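Steps 3 through 7 are then performed on each Cisco Nexus parent switch. A minimal sketch for one fabric (SAN A) is shown below; the FCoE VLAN 1001, VSAN 101, and vFC interface number 201 are illustrative, the default FCoE QoS policies are assumed to be acceptable, and the host interface follows the examples in this guide. SAN B uses a different FCoE VLAN and VSAN on the other parent switch.
N5548-Bottom(config)# feature fcoe
N5548-Bottom(config)# system qos
N5548-Bottom(config-sys-qos)# service-policy type qos input fcoe-default-in-policy
N5548-Bottom(config-sys-qos)# service-policy type network-qos fcoe-default-nq-policy
N5548-Bottom(config-sys-qos)# exit
N5548-Bottom(config)# vsan database
N5548-Bottom(config-vsan-db)# vsan 101
N5548-Bottom(config-vsan-db)# exit
N5548-Bottom(config)# vlan 1001
N5548-Bottom(config-vlan)# fcoe vsan 101
N5548-Bottom(config-vlan)# exit
N5548-Bottom(config)# interface ethernet 191/1/1
N5548-Bottom(config-if)# switchport mode trunk
N5548-Bottom(config-if)# switchport trunk allowed vlan 1,1001
N5548-Bottom(config-if)# exit
N5548-Bottom(config)# interface vfc 201
N5548-Bottom(config-if)# bind interface ethernet 191/1/1
N5548-Bottom(config-if)# no shutdown
N5548-Bottom(config-if)# exit
N5548-Bottom(config)# vsan database
N5548-Bottom(config-vsan-db)# vsan 101 interface vfc 201
N5548-Bottom# show interface vfc 201
N5548-Bottom# show flogi database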
You can run these commands on the second Cisco Nexus switch to verify the fabric.
Figure 9 shows a server that has successfully connected to the SAN.
Figure 9: Server with FCoE Connected to Volumes on a Fibre Channel Array
iSCSI Configuration
iSCSI provides an alternative to FCoE for block-level storage. Through the use of the iSCSI type-length-value
(TLV) settings, iSCSI TLV-capable NICs/CNAs, and Cisco Nexus 5000/6000 Series Switches, configuration can
be simplified. The iSCSI TLV settings tell the host which QoS parameters to use, similar to the process for Data
Center Bridging Exchange (DCBX) Protocol and FCoE; DCBX negotiates the configuration between the switch
and the adapter through a variety of TLV and sub-TLV settings. The TLV settings can be used for traditional TCP
and drop-behavior iSCSI networks as well as for complete end-to-end lossless iSCSI fabrics. If you enable
Enhanced Transmission Selection (ETS) and Priority Flow Control (PFC), storage traffic will be separated from
other IP traffic, allowing more accurate and error-free configurations to be transmitted from the switch to the
adapter.
Follow these steps to configure iSCSI TLV settings on each Cisco Nexus switch:
1. Define a class map for each class of traffic to be used in QoS policies.
2. Use QoS policies to classify the interesting traffic. QoS policies are used to classify the traffic of a specific
system class identified by a unique QoS group value.
3. Configure a no-drop class. If you do not specify this command, the default policy is drop.
1. Define a class map of QoS policies on the first switch to identify the iSCSI traffic (here, iSCSI traffic is
matched to class-of-service [CoS] 5):
N5548-Bottom(config)# class-map type qos match-all iSCSI-C1
N5548-Bottom(config-cmap-qos)# match protocol iscsi
N5548-Bottom(config-cmap-qos)# match cos 5
2. Configure the type of QoS policies used to classify the traffic of a specific system class (here, the
QoS-group value 2 is used):
N5548-Bottom(config)# policy-map type qos iSCSI-C1
N5548-Bottom(config-pmap-qos)# class iSCSI-C1
N5548-Bottom(config-pmap-c-qos)# set qos-group 2
N5548-Bottom(config-pmap-c-qos)# exit
N5548-Bottom(config-pmap-qos)# class class-default
3. Configure the no-drop policy maps:
N5548-Bottom(config)# class-map type network-qos iSCSI-C1
N5548-Bottom(config-cmap-nq)# match qos-group 2
N5548-Bottom(config-cmap-nq)# exit
N5548-Bottom(config)# policy-map type network-qos iSCSI-C1
N5548-Bottom(config-pmap-nq)# class type network-qos iSCSI-C1
N5548-Bottom(config-pmap-nq-c)# pause no-drop
N5548-Bottom(config-pmap-nq-c)# class type network-qos class-default
N5548-Bottom(config-pmap-nq-c)# mtu 9216
4. Apply the system service policies:
N5548-Bottom(config)# system qos
N5548-Bottom(config-sys-qos)# service-policy type qos input iSCSI-C1
N5548-Bottom(config-sys-qos)# service-policy type network-qos iSCSI-C1
5. Identify the iSCSI traffic on the other Cisco Nexus switch using the same process as for the first
switch by defining a class map for each class of traffic to be used in the QoS policies:
N5548-Top(config)# class-map type qos match-all iSCSI-C1
The storage array should then be visible as shown in Figure 11.
Figure 11: IBM Flex System x440 + 10Gb Fabric Blade Running VMware ESXi 5.1.0, 1065491
Virtual Network Adapter Partitioning
Various IBM adapters can present a single Ethernet link to the server operating system as if it were multiple
physical adapters. This capability allows bare-metal servers and hypervisors to offer multiple NICs and adapters
while physically having a pair of high-bandwidth links. This feature provides the flexibility to limit the bandwidth
allocated to each virtual adapter without the need for a server administrator to know the network QoS
configuration parameters.
To configure the virtual adapter function, follow this procedure:
1. Install the license.
2. Configure the virtual network adapters.
3. Configure the switch interface for the correct VLANs.
2. Configure the virtual network adapters.
a. During the boot cycle, press F1 to open the UEFI menu.
b. In the UEFI menu, choose System Settings > Network and select the adapter port.
c. Select the Emulex NIC.
d. Select the Advanced Mode option: NIC, iSCSI, or FCoE.
3. Configure unique VLANs as necessary for each Ethernet vNIC.
a. This feature works by applying VLAN tags to the traffic egressing the adapter and entering the
network. Thus, for correct operation, the VLANs allowed on the switch port connected to the adapter must
match the VLANs assigned to the vNICs. Note that a VLAN ID cannot be assigned for the FCoE vNIC.
b. Make sure that the VLANs are configured and allowed on the internal and external switch ports as
needed.
c. Configure the network port attached to the server. Use the following configuration as a reference.
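As a point of reference, the server-facing port configuration typically resembles the following sketch; the interface and VLAN IDs are illustrative and must match the VLANs assigned to the Ethernet vNICs.
N5548-Bottom(config)# interface ethernet 191/1/1
N5548-Bottom(config-if)# switchport mode trunk
N5548-Bottom(config-if)# switchport trunk allowed vlan 1,100,200,300
N5548-Bottom(config-if)# spanning-tree port type edge trunk
N5548-Bottom(config-if)# no shutdown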
This command displays the details of the fabric extender module, including the connection blade bay number,
rack name, and enclosure information for the blade server chassis.
Conclusion
The advent of Cisco Nexus 2000 Series Fabric Extenders has enabled customers to benefit from both top-of-rack
(ToR) and end-of-row (EoR) designs. This technology achieves these benefits while reducing the costs
associated with cabling and cooling in EoR models and without introducing any additional management points, in
contrast to traditional ToR designs. This unique architecture has been tremendously successful in the first
generation of Cisco Nexus fabric extenders and rack-mount servers.
The Cisco Nexus B22 for IBM Blade Fabric Extender brings these innovations to third-party blade server chassis
and enables unified fabric with FCoE for blade server deployments. This solution extends the Cisco networking
innovations already available to rack-mount servers through the Cisco Nexus 2000 Series Fabric Extenders into
third-party blade chassis.
For More Information
Cisco NX-OS operations guide for Cisco Nexus 5000 Series Switches: