The purpose of this Virtual Connect Cookbook is to provide users of Virtual Connect with a better
understanding of the concepts and steps required when integrating HP BladeSystem and Virtual
Connect Flex-10 or FlexFabric components into an existing network.
The scenarios in this Cookbook vary from simplistic to more complex while covering a range of
typical building blocks to use when designing Virtual Connect Flex-10 or FlexFabric solutions.
Although these scenarios are shown individually, some scenarios could be combined to create a
more complex and versatile Virtual Connect environment, such as the combined use of Shared
Uplink Sets (SUS) and vNet Tunnels, or Active/Active networks for North/South traffic flows, such
as iSCSI or VDI, while the primary network traffic is configured in a separate Shared Uplink
Set with Active/Standby uplinks.
Existing users of Virtual Connect will quickly realize that, as of VC firmware release 3.30, the
selection between “Mapped” and “Tunneled” modes is no longer a concern. The capabilities
provided by those modes are now available in the default installation of VC firmware 3.30 and
later. These capabilities and changes are discussed in further detail later in this paper.
In addition to the features added in release 3.30, 4.01 is a major release containing several new
features, including QoS and Min/Max downlink speed settings among others. This Cookbook will
highlight and discuss some of these added features.
The scenarios as written are meant to be self-contained configurations and do not build on earlier
scenarios; as a result, you may find some repetition or duplication of configuration across scenarios.
This paper is not meant to be a complete or detailed guide to Virtual Connect Flex-10 or FlexFabric,
but is intended to provide the reader with some valid examples of how Virtual Connect Flex-10 or
FlexFabric could be deployed within their environments. Many additional configurations or
scenarios could also be implemented. Please refer to the following section for additional reference
material on Virtual Connect, Flex-10 and FlexFabric.
Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation,
send a message to docsfeedback@hp.com. Include the document title and manufacturing part
number. All submissions become the property of HP.
Introduction to Virtual Connect Flex-10 and
FlexFabric
Virtual Connect is an industry standards-based implementation of server-edge virtualization. It
puts an abstraction layer between the servers and the external networks so the LAN and SAN see a
pool of servers rather than individual servers. Once the LAN and SAN connections are physically
made to the pool of servers, the server administrator uses Virtual Connect management tools
(Virtual Connect Manager (VCM) or Virtual Connect Enterprise Manager (VCEM)) to create a profile
for each server.
Virtual Connect FlexFabric is an extension to Virtual Connect Flex-10 which leverages Fibre Channel
over Ethernet (FCoE) protocols. By leveraging FCoE for connectivity to existing Fibre Channel SAN
networks, we can reduce the number of switch modules and HBAs required within the server blade
and enclosure. This in turn further reduces cost, complexity, power and administrative overhead.
This paper will discuss the differences between Flex-10 and FlexFabric and provide information and
suggestions to assist the reader in determining the best option for their implementation of
BladeSystem and Virtual Connect. For additional information on Virtual Connect, Flex-10 and/or
FlexFabric, please review the documents below.
New Features:
Version 3.70 of Virtual Connect contains support for the following enhancements:
The user guide contains information about the following changes in VC 3.70:
Discontinued support for old hardware:
o HP 1/10Gb Virtual Connect Ethernet Module
o HP 1/10Gb-F Virtual Connect Ethernet Module
Support for new hardware:
o HP Virtual Connect Flex-10/10D Module
o HP ProLiant BL660c Gen8 Server series
o HP ProLiant WS460c Gen8 Workstation series
o HP Integrity BL860c i4 Server Blades
o HP Integrity BL870c i4 Server Blades
o HP Integrity BL890c i4 Server Blades
o HP 7m C-series Active Copper SFP+ cables (QK701A)
o HP 10m C-series Active Copper SFP+ cables (QK702A)
o Cisco 7m copper active Twinax cables (SFP-H10GB-ACU7M)
o Cisco 10m copper active Twinax cables (SFP-H10GB-ACU10M)
Virtual Connect Direct-Attach Fibre Channel for HP 3PAR Storage Systems
Manageability enhancements:
o VCM GUI access to telemetry information
o Advanced telemetry and statistics for Link Aggregation Groups and FlexNICs
o GUI access to the FC Port Statistics for HP FlexFabric 10Gb/24-port Modules
o Improvements to the Statistics Throughput display and data collection
o Display of factory default MACs and WWNs in server profiles
o Added an FC/FCoE “Connect To” field to help identify how server ports are
connected to the uplink ports
o LLDP enhancements to more easily identify VC Ethernet modules on the network
o Improvements to the display of the MAC Address table to show the network name
and VLAN ID where the MAC address was learned, as well as display of the LAG
membership table
o Support for 2048 bit SSL certificates and configurable SSL-CSR
o Activity logging improvements for TACACS+ accounting
o Option to disable local account access when LDAP, RADIUS, or TACACS+
authentication is enabled
o Increased the default VCM local user account minimum required password length
o SNMP access security to prevent access from unauthorized management stations
SmartLink failover improvements
IGMP “NoFlood” option when IGMP snooping is enabled
Browser support:
o Internet Explorer 8 and 9
o Firefox 10 and 11
Firmware upgrade rollback from a previous firmware upgrade without domain deletion
Please refer to the VC 3.70 User Guide for additional VCEM feature enhancements
Please refer to the VC 3.70 Release notes and User Guides for further information
Virtual Connect Firmware 4.01 includes the following new features:
Version 4.01 of Virtual Connect contains support for the following enhancements:
Manageability enhancements:
Extended support for FCoE protocol on Flex-10/10D and FlexFabric modules, which
includes FIP snooping support but is limited to dual-hop configurations. FlexFabric
module dual-hop FCoE support is restricted to uplink ports X1-X4
IMPORTANT: For more information about the installation and limitations for Virtual
Connect dual-hop FCoE support, see the HP Virtual Connect Dual-Hop FCoE Cookbook,
which can be found on the Installing tab of the HP BladeSystem Technical Resources
website (http://www.hp.com/go/bladesystem/documentation)
Prioritization of critical application traffic with QoS
Minimum and maximum bandwidth optimization for efficient allocation of bandwidth
in virtualized environments with Flex-10 and FlexFabric adapters. Flex-10 and
FlexFabric adapter firmware and drivers must be updated to SPP version 2013.02.00,
or the latest hotfix thereafter, to take advantage of this enhancement
Note: This feature excludes support for the following adapters:
o HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
o HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
o HP NC550m 10Gb 2-port PCIe x8 Flex-10 Ethernet Adapter
VC SNMP MIB enhancements for improved troubleshooting and failure analysis. Virtual
Connect SNMP Domain MIB (vc-domain-mib.mib) traps now contain detailed information
with the root cause of each event. Update SNMP management stations with the HP MIB
Kit version 9.30 prior to installing Virtual Connect version 4.01 to take advantage of
this enhancement. Download the update from the HP website
(http://h18006.www1.hp.com/products/servers/management/hpsim/mibkit.html).
Enhanced support for LLDP MIB, Bridge MIB, Interface MIB, and Link aggregation MIB
The domain status alerts screen includes cause and root cause for each alert
Customization of VC user roles and privileges
The VCM GUI now allows searching for Network Access Groups, modules, interconnect
bays, and device bay items from the left navigation tree
Configurable long or short LACP timer
VCM CLI TAB key auto-completion
The Network, SUS, and hardware pages now display the remote system name instead
of the MAC address.
Security enhancements:
o IGMP Snooping enhancements with multicast group host membership filtering
o Ability to set session timeout for idle VCM CLI or VCM GUI management sessions
o Protection of VC Ethernet modules from buffer exhaustion due to flooding of
Pause packets from servers
VCEM compatibility:
If you are running VCEM 6.3.1 or later to manage a VC 4.01 domain, the 4.01 domain
can be in a VCDG in 3.30 firmware mode or later. To enable new features in VC 4.01,
you must upgrade to VCEM 7.2 or later. VCEM 7.2 does not support VC versions prior to
3.30
Configurable role operations must be delegated to one of the following roles if they
are to be performed while the domain is in Maintenance Mode: Network, Storage or
Domain. Administrators logging into VCM with a Server role account while the domain
is in Maintenance mode will be denied access to perform delegated operations such as
exporting support files, updating firmware, configuring port monitoring or saving or
restoring domain configuration
In VC 4.01, the telemetry port throughput is Enabled by default. You must do the
following to add a fresh VC 4.01 installation to your existing VCDG:
3.30-3.70 VCDG with statistics throughput disabled—Clear the Enable
Throughput Statistics checkbox on the Ethernet Settings (Advanced Settings)
screen, or run the following VCM CLI command:
set statistics-throughput Enabled=false
3.30-3.70 VCDG with statistics throughput enabled—Add the domain as is. No
change is required
In VC 4.01, the VLAN Capacity is set to Expanded by default. You must do the following
to add a fresh VC 4.01 installation to your existing VCDG:
3.30-3.70 with Legacy VLAN VCDG—You cannot add the domain. Select a different
VCDG
3.30-3.70 with Enhanced VLAN VCDG—Add the domain as is. No change is
required
Please refer to the VC 4.01 Release notes for further information
Virtual Connect can be used to support both Ethernet and Fibre Channel connections. The Virtual
Connect 1Gb Ethernet Cookbook provides basic Virtual Connect configurations in a 1Gb
environment. Earlier releases of the Virtual Connect Ethernet Cookbook cover both 1Gb and 10Gb
solutions; however, the most recent release of the Virtual Connect 1Gb Cookbook covers only 1Gb
Ethernet solutions up to Virtual Connect firmware release 3.6x.
Virtual Connect 4.01 now provides the ability to pass FCoE (Dual Hop) to an external FCoE-capable
network switch. For Dual Hop FCoE connectivity, please refer to the Dual-Hop FCoE with HP Virtual
Connect Modules Cookbook, which covers both the Virtual Connect and network switch
configurations needed to support this connectivity.
Virtual Connect can be used to support both Ethernet and Fibre Channel connections; however, this
guide is focused completely on the Ethernet configuration.
For Fibre Channel connectivity, please refer to the Virtual Connect Fibre Channel Cookbook
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01702940/c01702940.pdf
(www.hp.com/go/blades)
Virtual Connect iSCSI Cookbook
Virtual Connect can also be used to support iSCSI accelerated connections, including iSCSI boot.
For iSCSI connectivity, please refer to the Virtual Connect iSCSI Cookbook, which is focused on the
Ethernet and iSCSI configuration.
Virtual Connect Ethernet Modules
Virtual Connect Flex-10 Module Uplink Port Mappings
It is important to note how the external uplink ports on the Flex-10 module are configured. The
graphic below outlines the type and speed each port can be configured as.
Ports X1 – X8; Can be configured as 1Gb or 10Gb Ethernet
Ports X7 – X8; Are also shared as internal cross connects and should not be used for
external connections; at the very least, one horizontal stacking link is required.
Uplink Ports X1-X8 support 0.5–7m length DAC as stacking or uplink
The CX-4 port is shared with port X1; only one of these connections can be used at a time.
Figure 1 – Virtual Connect Flex-10 Module port configuration, speeds and types
Note: The Virtual Connect Flex-10 module shown above was introduced in late 2008 and is
replaced by the Flex-10/10D module, shown next, which was released in August of 2012. The Flex-10
module above will reach end of sales life in late 2013.
Figure 2 - FlexNIC Connections – It is important to note that Physical Function two (pf2) can be
configured as Ethernet or iSCSI (iSCSI is supported with Flex-10 and G7 and Gen 8 blades using the
Emulex based BE2 and BE3 chipsets). Physical Functions 1, 3 and 4 would be assigned as Ethernet
only connections
Virtual Connect Flex-10/10D Module Uplink Port Mappings
It is important to note how the external uplink ports on the Flex-10/10D module are configured. The
graphic below outlines the type and speed each port can be configured as.
Ports X1 – X10; Can be configured as 1Gb or 10Gb Ethernet or FCoE (ALL external ports can
be used, no sharing of these ports with internal stacking, as with previous modules)
Ports X11-X14; Internal cross connections for horizontal stacking and are NOT shared with
any external connections
Uplink Ports X1-X10 support 0.5–15m length DAC as stacking or uplink. If greater lengths
are required, fibre optic cables would be required
Figure 3 – Virtual Connect Flex-10/10D Module port configuration, speeds and types
Figure 4 - FlexNIC Connections – It is important to note that Physical Function two (pf2) can be
configured as Ethernet, iSCSI, or FCoE (iSCSI and Dual Hop FCoE are supported with Flex-10/10D and G7
blades using the Emulex based BE2 and BE3 chipsets). Physical Functions 1, 3 and 4 would be
assigned as Ethernet only connections. Dual Hop FCoE connections are supported on all external
uplink ports
Virtual Connect FlexFabric Module Uplink Port Mappings
It is important to note how the external uplink ports on the FlexFabric module are configured. The
graphic below outlines the type and speed each port can be configured as.
Ports X1 – X4; Can be configured as 10Gb Ethernet or Fibre Channel. Supported FC speeds are
2Gb, 4Gb or 8Gb using 4Gb or 8Gb FC SFP modules; please refer to the FlexFabric Quick
Specs for a list of supported SFP modules
Ports X5 – X8: Can be configured as 1Gb or 10Gb Ethernet
Ports X7 – X8; Are also shared as internal stacking links and should not be used for
external connections; at the very least, one horizontal stacking link is required if modules
are in adjacent bays. Note: Within FlexFabric, stacking applies only to Ethernet traffic.
Uplink ports X1-X4 support 0.5–5m length DAC as stacking or uplink
Uplink Ports X5-X8 support 0.5–7m length DAC as stacking or uplink
Note: 5m DAC cables are supported on all ports with FlexFabric; in addition, 7–15m DAC cables are also
supported on ports X5 through X8. Flex-10 supports 15m DAC cables on ALL ports.
Figure 5 – Virtual Connect FlexFabric Module port configuration, speeds and types
Figure 6 - FlexNIC Connections – It is important to note that Physical Function two (pf2) can be
configured as Ethernet, iSCSI or FCoE (iSCSI and FCoE are supported with VC FlexFabric and G7
blades using the Emulex based BE2 and BE3 chipsets). Physical Functions 1, 3 and 4 would be
assigned as Ethernet only connections. Dual Hop FCoE connections are supported on external ports
X1 through X4
Virtual Connect 8Gb 20-Port Fibre Channel Module Uplink Port Mappings
It is important to note how the external uplink ports on the VC-FC module are configured. The
graphic below outlines the type and speed each port can be configured as.
Ports 1 - 4; Can operate at Fibre Channel speeds of 2Gb, 4Gb or 8Gb using 4Gb or 8Gb FC
SFP modules
The VC 8Gb 20 Port module ships with NO SFP modules
Refer to the VC 8Gb 20 Port module Quick Spec for a list of supported SFP modules
Figure 7 - Virtual Connect 8Gb 20 Port Module port configuration and speed types
Virtual Connect 8Gb 24-Port Fibre Channel Module Uplink Port Mappings
It is important to note how the external uplink ports on the VC-FC module are configured. The
graphic below outlines the type and speed each port can be configured as.
Ports 1 - 8; Can operate at Fibre Channel speeds of 2Gb, 4Gb or 8Gb using 4Gb or 8Gb FC
SFP modules
The VC 8Gb 24 Port module ships with TWO 8Gb FC SFP modules installed
Refer to the VC 8Gb 24 Port module Quick Spec for a list of supported SFP modules
Figure 8 - Virtual Connect 8Gb 24 Port Module port configuration and speed types
Connecting to Brocade Fibre Channel Fabric at 8Gb
NOTE: When VC 8Gb 20-port FC or VC FlexFabric 10Gb/24-port module Fibre Channel uplink ports
are configured to operate at 8Gb speed and connecting to HP B-series (Brocade) Fibre Channel SAN
switches, the minimum supported version of the Brocade Fabric OS (FOS) is v6.3.1 or v6.4.x. In
addition, a fill word on those switch ports must be configured with option “Mode 3” to prevent
connectivity issues at 8Gb speed.
On HP B-series (Brocade) FC switches, use the command
portCfgFillWord (portCfgFillWord <Port#> <Mode>) to configure this setting:

Mode      Link Init/Fill Word
Mode 0    IDLE/IDLE
Mode 1    ARBF/ARBF
Mode 2    IDLE/ARBF
Mode 3    If ARBF/ARBF fails, use IDLE/ARBF
Although this setting only affects devices logged in at 8G, changing the mode is disruptive
regardless of the speed the port is operating at. The setting is retained and applied any time an 8G
device logs in. Upgrades to FOS v6.3.1 or v6.4 from prior releases supporting only modes 0 and 1
will not change the existing setting, but a switch or port reset to factory defaults with FOS v6.3.1 or
v6.4 will be configured to Mode 0 by default. The default setting on new units may vary by vendor.
Please use portcfgshow CLI to view the current portcfgfillword status for that port.
Modes 2 and 3 are compliant with FC-FS-3 specifications (standards specify the IDLE/ARBF behavior
of Mode 2 which is used by Mode 3 if ARBF/ARBF fails after 3 attempts). For most environments,
Brocade recommends using Mode 3, as it provides more flexibility and compatibility with a wide
range of devices. In the event that the default setting or Mode 3 does not work with a particular
device, contact your switch vendor for further assistance. When connecting to Brocade SAN
Switches at 8Gb, “portCfgFillWord” must be set to Mode 3 – If ARBF/ARBF fails use IDLE/ARBF. In
order to use Mode 3, FOS v6.3.1 or v6.4.x or better is required.
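As an illustration only, the following Brocade FOS commands set and then verify the fill word on a
single switch port; the port number is an example, so confirm the exact syntax against your FOS
release documentation.

# Set fill word Mode 3 on switch port 23, then verify the current port configuration
portcfgfillword 23 3
portcfgshow 23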
Tunneled VLAN and Mapped VLANs
Readers that are familiar with earlier releases of Virtual Connect firmware features will realize that
Virtual Connect 3.30 firmware removed the need to configure Virtual Connect in Mapped vs.
Tunneled mode. As of Virtual Connect 3.30 firmware release, Virtual Connect now provides the
ability to simultaneously take advantage of the features and capabilities that were provided in
either mapped or tunneled modes; there is no need to choose the domain’s mode of operation. The
key feature gained here is the ability to now use Mapped VLANs (multiple networks) and Tunneled
networks within the same profile.
Virtual Connect VLAN Support – Shared Uplink Set
Shared Uplink Sets provide administrators with the ability to distribute VLANs into discrete and
defined Ethernet Networks (vNets). These vNets can then be mapped logically to a Server Profile
Network Connection allowing only the required VLANs to be associated with the specific server NIC
port. This also allows the flexibility to have various network connections for different physical
Operating System instances (for example, a VMware ESX host and a physical Windows host).
Legacy VLAN Capacity
Legacy VLAN capacity mode allows up to 320 VLANs per Ethernet module, 128 VLANs per Shared
Uplink Set, and up to 28 VLANs per FlexNIC port. Care must be taken not to exceed the
limit per physical server port.
The following Shared Uplink Set rules apply to legacy capacity mode:
320 VLANs per Virtual Connect Ethernet Module
128 VLANs per Shared Uplink Set (single uplink port)
28 unique server mapped VLANs per server profile network connection
The above configuration rules apply only to a Shared Uplink Set. If support for a larger number of
VLANs is required, a VLAN Tunnel can be configured to support a large number of VLANs. Please see
the Virtual Connect Release Notes for further details.
Expanded VLAN Capacity – Added in Virtual Connect 3.30 Release
This mode allows up to 1000 VLANs per domain when implementing a Shared Uplink Set (SUS). The
number of VLANs per shared uplink set is restricted to 1000. In addition, up to 162 VLANs are
allowed per physical server port, with no restriction on how those VLANs are distributed among the
server connections mapped to the same physical server port. Care must be taken not to exceed the
limit per physical server port. For example, if you configure 150 VLAN mappings for a server
connection (FlexNIC:a) of a FlexFabric physical server port, then you can only map 12 VLANs to the
remaining three server connections (FlexNIC:b, FlexNIC:c, and FlexNIC:d) of the same physical server
port. If you exceed the 162 VLAN limit, the physical server port is disabled and the four server
connections are marked as Failed. Also, keep in mind that the FCoE SAN or iSCSI connection is also
counted as a network mapping. In the event that greater numbers of VLANs are needed a vNet
Tunnel can be used simultaneously with VLAN mapping.
The following Shared Uplink Set rules apply:
1000 VLANs per Virtual Connect Ethernet domain,
162 VLANs per Ethernet server port
The above configuration rules apply only to a Shared Uplink Set. If support for a greater
number of VLANs is required, a VLAN Tunnel can be configured to support a large number
of VLANs. Please see the Virtual Connect Release Notes for further details.
When creating the Virtual Connect Domain, the default configuration in 3.30 is Legacy VLAN
Capacity Mode (in Virtual Connect 4.01, the default mode is now Expanded VLAN Capacity); in
either mode, Multiple Networks and Tunnel mode can be used simultaneously. After Expanded VLAN
Capacity mode is configured, reverting to Legacy VLAN capacity mode requires deleting
and recreating the Virtual Connect Domain.
Note: Expanded VLAN Capacity mode is not supported on the following 1Gb-based Virtual Connect
Ethernet modules:
HP 1/10Gb VC Ethernet Module
HP 1/10Gb-F VC Ethernet Module
If these modules are inserted into an enclosure that is in Expanded VLAN Capacity mode, they are
marked as incompatible. If these modules are installed in an enclosure, converting to Expanded
VLAN Capacity mode will not be permitted.
Figure 9 - Configuring Expanded VLAN Capacity support
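For reference, a minimal VCM CLI sketch of the same change is shown below; the VlanCapacity
parameter of the set enet-vlan command is an assumption (the same command is used later in this
paper for link speed settings), so verify it against the VC CLI User Guide for your firmware release.

# Set the domain VLAN capacity mode to Expanded (parameter name assumed)
set enet-vlan VlanCapacity=Expanded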
Bulk VLAN Creation
In addition to providing support for a greater number of VLANs, Virtual Connect now provides the
ability to create several VLANs, within a Shared Uplink Set (SUS), in a single operation. Using the
Bulk VLAN creation feature in the GUI or the add network-range command in the CLI, many VLANs
can be added to a SUS. In addition, copying an existing SUS is also now possible. When creating an
Active/Active SUS configuration, you can create the first SUS, and then copy it.
Figure 10 - Example of adding multiple VLANs to a SUS through the GUI
Here is an example of creating a shared Uplink Set using the CLI command “add network-range” to
create the more than 400 VLANs shown above.
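A sketch of such a script is shown below. The uplink set name, uplink port, VLAN range, and the
SmartLink parameter are illustrative assumptions only; verify the exact options in the Virtual
Connect CLI User Guide for your firmware release.

# Create the Shared Uplink Set, assign an uplink port, then add a large range of
# VLANs in a single statement (names, port ID and range are examples)
add uplinkset VLAN-Trunk-1
add uplinkport enc0:1:X5 Uplinkset=VLAN-Trunk-1 speed=auto
add network-range -quiet UplinkSet=VLAN-Trunk-1 NamePrefix=VLAN_ VLANIds=101-500 SmartLink=Enabled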
Note: Earlier releases of Virtual Connect firmware supported only 320 VLANs; in addition, creating
each VLAN with SmartLink enabled required two lines of script. In the example above, over 300
VLANs are created in a single statement.
Copying a Shared Uplink Set
Virtual Connect provides the ability to copy a Shared Uplink Set. This can be very handy when
defining an Active/Active Shared Uplink Set design. You simply create the first SUS, and then copy
it.
For example, after creating Shared Uplink Set VLAN-Trunk-1 you can copy it to VLAN-Trunk-2. You
will then need to add uplinks to the new SUS and ensure all networks have SmartLink enabled. This
can be accomplished as follows;
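The following VCM CLI sketch copies the Shared Uplink Set and assigns an uplink port to the copy;
the fromVlanStr/toVlanStr renaming parameters and the port ID are examples and should be verified
against the VC CLI User Guide.

# Copy VLAN-Trunk-1 to VLAN-Trunk-2, renaming the member networks from *-1 to *-2
copy uplinkset VLAN-Trunk-1 VLAN-Trunk-2 fromVlanStr=-1 toVlanStr=-2
# Add an uplink port from the module in bay 2 to the new Shared Uplink Set
add uplinkport enc0:2:X5 Uplinkset=VLAN-Trunk-2 speed=auto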
There are two types of vNets. The first is a simple vNet that will pass only untagged frames. The
second is a vNet tunnel which will pass tagged frames for one or many VLANs.
vNet
The vNet is a simple network connection between one or many server NICs to one or many uplink
ports.
A vNet could be used to connect a single VLAN, without tagging, to one or many server NICs. If this
network is configured as a VLAN on the upstream switch, by configuring the switch port as an access
or untagged port, then, by extension, any server connected to this vNet would reside in that VLAN,
but would not need to be configured to interpret the VLAN tags.
Benefits of a vNet
A vNet can be utilized in one of two ways: as a simple vNet, used to pass untagged frames, or as a
tunneled vNet. A tunneled vNet can be used to pass many VLANs without modifying the VLAN tags,
functioning as a transparent VLAN Pass-Thru module.
vNet Tunnel
A tunneled vNet will pass VLAN tagged frames, without the need to interpret or forward those
frames based on the VLAN tag. Within a tunneled vNet the VLAN tag is completely ignored by
Virtual Connect and the frame is forwarded to the appropriate connection (server NIC[s] or uplinks)
depending on frame direction flow. In this case, the end server would need to be configured to
interpret the VLAN tags. This could be a server with a local operating system, in which the network
stack would need to be configured to understand which VLAN the server was in, or a virtualization
host with a vSwitch supporting multiple VLANs.
The tunneled vNet can support up to 4096 VLANs.
Benefits of a vNet Tunnel
A vNet Tunnel can present one or many VLANs to a server NIC. When additional VLANs are added to
the upstream switch port, they are made available to the server with no changes required within
Virtual Connect. All presented VLANs are passed through the tunnel, unchanged.
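As a simple sketch, the commands below create an untagged vNet and a tunneled vNet from the VCM
CLI; the VLanTunnel parameter name and port ID are assumptions, so check the VC CLI User Guide for
the exact syntax.

# Create a simple vNet for untagged traffic and assign an uplink port
add network vNet-PROD
add uplinkport enc0:1:X5 Network=vNet-PROD speed=auto
# Create a tunneled vNet that passes all VLAN tags unchanged (parameter assumed)
add network Tunnel-1 VLanTunnel=Enabled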
Shared Uplink Set (SUS)
The SUS provides the ability to support VLAN tagging and forward frames based on the VLAN tags
of those frames. The SUS connects one or many server NICs to one or many uplink ports. A SUS
would be configured for the specific VLANs it will support. If support for additional VLANs is
required, those VLANs need to be configured within the SUS.
When connecting a server NIC to a network within a SUS, there are two choices provided. The key
difference between these two options is the state in which the frame is passed to the server NIC.
When configuring a server NIC for network connection;
1. Selecting a single network – which would be mapped to a specific VLAN.
If a single network is selected, the frames will be presented to the server NIC WITHOUT a
VLAN tag. In this case the host operating system does not need to understand which VLAN it
resides in. When the server transmits frames back to Virtual Connect, those frames will not
be tagged; however, Virtual Connect will add the VLAN tag and forward the frame onto the
correct VLAN.
2. Selecting multiple networks – which would provide connectivity to several VLANs.
The Map VLAN Tags feature provides the ability to use a Shared Uplink Set to present
multiple networks to a single NIC. If you select Multiple Networks when assigning a
Network to a server NIC, you will have the ability to configure multiple Networks (VLANS) on
that server NIC. At this point Virtual Connect tags ALL the packets presented to the NIC —
unless the Native check box is selected for one of the networks, in which case packets from
this network (VLAN) will be untagged, and any untagged packets leaving the server will be
placed on this Network (VLAN).
With Mapped VLAN Tags, you can create a Shared Uplink Set that contains ALL the VLANs
you want to present to your servers, present only ONE network (the one associated
with the VLAN you want the server NIC in) to the Windows, Linux or ESX Console NIC,
and then select Multiple Networks for the NIC connected to the ESX vSwitch, selecting ALL the
networks to be presented to the ESX host vSwitch. The vSwitch will then break out
the VLANs into port groups and present them to the guests. Using Mapped VLAN Tags
minimizes the number of uplinks required. A CLI sketch of both options follows this list.
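The following VCM CLI sketch illustrates both options; the profile name, network names, VLAN IDs
and the server-port-map parameter names are examples only and should be verified against the VC
CLI User Guide.

# Option 1: a single network, presented to NIC 1 untagged
add enet-connection MyProfile pxe=Enabled Network=VLAN101-1
# Option 2: multiple networks, presented to NIC 2 with VLAN tags
add enet-connection MyProfile pxe=Disabled
add server-port-map MyProfile:2 VLAN102-1 Uplinkset=VLAN-Trunk-1 VLanId=102
add server-port-map MyProfile:2 VLAN103-1 Uplinkset=VLAN-Trunk-1 VLanId=103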
Benefits of a SUS
A Shared Uplink Set can be configured to support both tagged and untagged network traffic to a
server NIC, which simplifies the overall configuration and minimizes the number of uplink cables
required to support the network connections.
MAC Cache Failover
When a Virtual Connect Ethernet uplink that was previously in standby mode becomes active, it can
take several minutes for external Ethernet switches to recognize that the c-Class server blades can
now be reached on this newly-active connection. Enabling Fast MAC Cache Failover causes Virtual
Connect to transmit Ethernet packets on newly-active links, which enables the external Ethernet
switches to identify the new connection more quickly (and update their MAC caches appropriately).
This transmission sequence repeats a few times at the MAC refresh interval (5 seconds
recommended) and completes in about 1 minute.
When implementing Virtual Connect in an Active/Standby configuration, where some of the links
connected to a Virtual Connect network (whether a SUS or vNet) are in standby, MAC Cache Failover
would be employed to notify the switch as a link transitions from Standby to Active within Virtual
Connect.
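A brief VCM CLI sketch of this setting is shown below; the mac-cache parameter names are believed
to match the CLI but should be confirmed in the VC CLI User Guide.

# Enable Fast MAC Cache Failover with the recommended 5 second refresh interval
set mac-cache Enabled=True Refresh=5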
Note: Be sure to set switches to allow MAC addresses to move from one port to another without
waiting for an expiration period or causing a lock out.
Role Management
New to Virtual Connect 4.01 is the ability to provide more granular control over each of the
operational user roles provided. In prior releases, each role had a set level of access.
Figure 11 – Role Operations provides the ability to set the level of access a specific operational role
is provided
Virtual Connect Direct-Attach SAN fabrics (FlatSAN with 3PAR)
Virtual Connect Direct Attached SAN fabrics provide the ability to directly connect HP FlexFabric to
an HP 3PAR storage array and completely eliminate the need for a traditional SAN fabric and the
administrative overhead associated with maintaining the fabric. FlatSAN is supported on FlexFabric
modules through Ports X1-X4; simply connect the FlexFabric modules to available ports on the
3PAR array and configure the Virtual Connect fabrics for “DirectAttach”.
Figure 12 - When configuring FlatSAN, choose the Fabric Type of “DirectAttach”
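As an illustrative sketch only, a DirectAttach fabric might be defined from the VCM CLI as shown
below; the fabric name, bay, ports and the Type parameter are assumptions to be checked against
the VC CLI User Guide for your release.

# Create a direct-attach (FlatSAN) fabric on ports X1-X2 of the module in bay 1
add fabric FlatSAN-A Bay=1 Ports=1,2 Type=DirectAttach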
Note: See Scenario 6 in the Virtual Connect Fibre Channel Cookbook for details on the implementation
of FlatSAN.
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01702940/c01702940.pdf
Virtual Connect QoS
QoS is used to provide different priorities for designated networking traffic flows and guarantee a
certain level of performance through resource reservation. QoS is important for reasons such as:
Providing Service Level Agreements for network traffic and to optimize network utilization
Different traffic types such as management, backup, and voice have different
requirements for throughput, jitter, delays and packet loss
IP-TV, VoIP and the expansion of the Internet are creating additional traffic and latency
requirements
In some cases, capacity cannot be increased. Even when possible, increasing capacity may
still encounter issues if traffic needs to be re-routed due to a failure
Traffic must be categorized and then classified. Once classified, traffic is given priorities and
scheduled for transmission. For end to end QoS, all hops along the way must be configured with
similar QoS policies of classification and traffic management. Virtual Connect manages and
guarantees its own QoS settings as one of the hops within the networking infrastructure.
Network Access Groups (NAG)
Before Virtual Connect 3.30, any server profile could be assigned any set of networks. If policy
dictated that some networks should not be accessed by a system that accessed other networks (for
example, the Intranet and the Extranet or DMZ networks), there was no way to enforce that policy
automatically.
With Virtual Connect 3.30 and later, network access groups are defined by the network
administrator and associated with a set of networks that can be shared by a single server. Each
server profile is associated with one network access group. A network cannot be assigned to the
server profile unless the profile is a member of the network access group associated with that
network. A network access group can contain multiple networks. A network can reside in more than
one network access group, such as a management or VMotion VLAN.
Up to 128 network access groups are supported in the domain. Ethernet networks and server
profiles that are not assigned to a specific network access group are added to the domain Default
network access group automatically. The Default network access group is predefined by VCM and
cannot be removed or renamed.
If you are updating to Virtual Connect 3.30, all current networks are added to the Default network
access group and all server profiles are set to use the Default network access group. Network
communication within the network access group behaves similarly to earlier versions of Virtual
Connect firmware, because all profiles can reach all networks.
If you create a new network access group, NetGroup1, and copy or move existing networks from the
Default network access group to NetGroup1, then a profile that uses NetGroup1 cannot use
networks included in the Default network access group. Similarly, if you create a new network and
assign it to NetGroup1 but not to the Default network access group, then a profile that uses the
Default network access group cannot use the new network. Therefore, an administrator cannot
inadvertently, or intentionally, place a server on networks that reside in different Network Access
Groups.
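A hedged VCM CLI sketch of this workflow might look like the following; the command and parameter
names (network-access-group, NAGs) are assumptions, so consult the VC CLI User Guide for the
exact syntax.

# Create a network access group and associate an existing network with it
add network-access-group NetGroup1
set network Net-Extranet NAGs=Default,NetGroup1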
Virtual Connect LACP Timers
Virtual Connect provides two options for configuring uplink redundancy (Auto and Failover). When
the connection mode is set to "Auto", Virtual Connect uses Link Aggregation Control Protocol to
aggregate uplink ports from a Network or Shared Uplink Set into Link Aggregation Groups. As part
of the LACP negotiation to form a LAG, the remote switch sends a request for the frequency of the
control packets (LACPDU). This frequency can be "short" or "long." Short is every 1 second with a 3
second timeout. Long is every 30 seconds with a 90 second timeout.
Prior to Virtual Connect 4.01 this setting defaulted to short. Starting with Virtual Connect 4.01 this
setting can be set to short or long. The domain-wide setting can be changed on the Ethernet
Settings (Advanced Settings) screen. Additionally, each Network or Shared Uplink Set also has a
LACP timer setting. There are three possible values: Domain-Default, Short, or Long. The domain
default option sets the LACP timer to the domain-wide default value that is specified on the
Advanced Ethernet Settings screen.
This setting specifies the domain-wide default LACP timer. VCM uses this value to set the duration
of the LACP timeout and to request the rate at which LACP control packets are to be received on
LACP-supported interfaces. Changes to the domain-wide setting are immediately applied to all
existing networks and shared uplink sets.
Using the "long" setting can help prevent loss of LAGs while performing in-service upgrades on
upstream switch firmware.
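As a sketch, the domain-wide default might be changed from the VCM CLI as shown below; the
lacp-timer command name is an assumption, so verify it in the VC 4.01 CLI User Guide.

# Set the domain-wide default LACP timer to Long (30 second LACPDUs, 90 second timeout)
set lacp-timer Default=Long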
Multiple Networks Link Speed Settings (Min/Max Bandwidth Control)
A new feature to Virtual Connect 4.01 provides the ability to configure a minimum and maximum
preferred NIC link speed for server downlinks. This setting can be configured as a global default for
NICs configured with multiple networks, but can also be fine-tuned at the individual NIC level. The
default global Preferred Speed is set to 10Gb. The new “Maximum Link Connection Speed” setting
can be configured to enable a NIC to transmit at a speed greater than its configured speed. The
default Maximum speed is set to 10Gb. If these settings remain at their defaults, each NIC, although
configured for a set speed (minimum guaranteed speed), will be able to transmit at a rate as high as
10Gb. This feature is also known as “Min/Max”.
Configuring Multiple Networks Link Speed Settings (Min/Max)
Configure the global default setting for Preferred Link Speed to 2Gb and the Maximum Speed to
8Gb. This global setting applies to connections configured for Multiple Networks only.
On the Virtual Connect Manager screen, Left pane, click Ethernet Settings, Advanced
Settings
Select Set a Custom value for Preferred Link Connection Speed
o Set for 2Gb
Select Set a Custom value for Maximum Link Connection Speed
o Set for 8Gb
Select Apply
Figure 13 - Set Custom Link Speeds
The following command can be copied and pasted into an SSH based CLI session with Virtual
Connect;
# Set Preferred and Maximum Connection Speeds
set enet-vlan PrefSpeedType=Custom PrefSpeed=2000
set enet-vlan MaxSpeedType=Custom MaxSpeed=8000
Configuring Throughput Statistics
Telemetry support for network devices caters to seamless operations and interoperability by
providing visibility into what is happening on the network at any given time. It offers extensive and
useful detection capabilities which can be coupled with upstream systems for analysis and trending
of observed activity.
The Throughput Statistics configuration determines how often the Throughput Statistics are
collected and the supported time frame for sample collection before overwriting existing samples.
When the time frame for sample collection is reached, the oldest sample is removed to allocate
room for the new sample. Configuration changes can be made without having to enable
Throughput Statistics. Applying configuration changes when Throughput statistics is enabled
clears all existing samples.
Some conditions can clear existing Throughput Statistics:
Disabling the collection of Throughput Statistics clears all existing samples.
Changing the sampling rate clears all existing samples.
Power cycling a Virtual connect Ethernet module clears all Throughput Statistics samples
for that module.
Collected samples are available for analysis on the Throughput Statistics screen (on page 226 of
the Virtual Connect 4.01 User Guide), accessible by selecting Throughput Statistics from the Tools
pull-down menu.
The following table describes the available actions for changing Throughput Statistics settings.

Task: Enable/disable
Action: Select (enable) or clear (disable) the Enable Throughput Statistics checkbox.

Task: Change sampling rate
Action: Select a sampling rate from the Configuration list. Supported sampling rates include:
Sample rate of 1 minute, collecting up to 5 hours of samples.
Sample rate of 2 minutes, collecting up to 10 hours of samples.
Sample rate of 3 minutes, collecting up to 15 hours of samples.
Sample rate of 4 minutes, collecting up to 20 hours of samples.
Sample rate of 5 minutes, collecting up to 25 hours of samples.
Sample rate of 1 hour, collecting up to 12.5 days of samples.
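Collection can also be toggled from the VCM CLI, as in the sketch below; the Enabled parameter
appears earlier in this paper, while the sampling rate is selected from the Configuration list in the
GUI.

# Enable Throughput Statistics collection; note that disabling collection or
# changing the sampling rate clears all existing samples
set statistics-throughput Enabled=true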
Connecting VC Flex-10/10D or VC FlexFabric to
the CORE
The baseline Virtual Connect technology adds a virtualization layer between the edge of the server
and the edge of the existing LAN and SAN. As a result, the external networks connect to a shared
resource pool of MAC addresses and WWNs rather than to MACs/WWNs of individual servers.
LAN-Safe
From the external networking view, Virtual Connect FlexFabric, Flex-10, or Ethernet uplinks appear
to be multiple NICs on a large server. Virtual Connect ports at the enclosure edge look like server
connections. This is analogous to a VMware environment that provides multiple MAC addresses to
the network through a single NIC port on a server.
Virtual Connect works seamlessly with your external network:
Does not participate in Spanning Tree Protocol (STP) on the network uplinks to the data
center. This avoids potential STP configuration errors that can negatively affect switches in
the network and the servers connected to those switches
Uses an internal loop prevention algorithm to automatically detect and prevent loops
inside a Virtual Connect domain. Virtual Connect ensures that there is only one active
uplink for any single network at one time
Allows aggregation of uplinks to data center networks (using LACP and fail-over)
Supports VLAN tagging on egress or pass-thru of VLAN tags in tunneled mode
Supports Link Layer Discovery Protocol (LLDP) and Jumbo Frames
Virtual Connect was designed to connect to the network as an endpoint device; as such, it is capable
of connecting to any network switch, at any layer, including directly to the core switch, providing
the ability to flatten the network as required.
Choosing VC Flex-10/10D or VC FlexFabric
When choosing between Flex-10/10D and FlexFabric, the first question to ask is whether a direct
connection to a Fibre Channel SAN fabric will be required, today or in the future. The key difference
between Flex-10 and FlexFabric is that FlexFabric modules leverage the built in Converged Network
Adapter (CNA) provided in the G7 and Gen 8 BladeSystem servers to provide FCoE (Fibre Channel)
connectivity. FCoE connectivity is provided through the integrated Converged Network Adapter
(CNA) and the FlexFabric modules; the FlexFabric modules connect directly to the existing Fibre
Channel switch fabrics, and no additional components, such as a traditional HBA, are required.
With the release of Virtual connect firmware 4.01, the Flex-10/10D and FlexFabric modules can also
be utilized to provide dual hop FCoE connectivity to a switch that supports FCoE connections, in
which case the FCoE traffic would traverse the Ethernet uplinks and connect to the SAN through the
ToR or Core switch.
Virtual Connect 3.70 provided a new capability when connecting to HP’s 3PAR storage arrays using
Fibre Channel, allowing the 3PAR array to be directly connected to the FlexFabric modules. This
feature is called “FlatSAN” and provides the ability to completely eliminate the need for a Fibre
Channel SAN fabric, further reducing the cost of implementation and management of a blade server
environment.
If direct connection to a Fibre Channel SAN fabric is not required, then all the capabilities of the CNA
in the G7 and Gen 8 blades and Virtual Connect can be obtained through the use of the Flex-10/10D
modules; the only feature not available would be direct connection to a Fibre Channel SAN fabric.
Fibre Channel connectivity could later be added through the use of traditional Virtual Connect Fibre
Channel modules and FC HBAs. iSCSI support is provided through either FlexFabric or Flex-10
modules.
If Fibre Channel is not used, then the second Physical Function (pf) on each port would be used for
Ethernet. If Flex-10 modules are used with Virtual connect Fibre Channel modules, ensure an HBA
is installed in the appropriate MEZZ slot in the blade and simply configure a “FC HBA” within the
server profile and map it to the appropriate FC SAN Fabrics. In this case, FCoE SAN Fabrics and FCoE
CNAs would not be utilized. An example of this configuration is provided in Scenario 9.
The scenarios provided in this document could be implemented on Flex-10, Flex-10/10D
(with VC-FC Modules for FC connections) or FlexFabric modules, with the exception of the dual hop
FCoE scenarios, which would not be supported on Flex-10 modules.
FlexFabric also provides the ability to support “Direct Attached” SAN fabrics to an HP 3PAR SAN,
which provides the ability to eliminate the SAN fabric.
Note: Dual hop FCoE connectivity is provided through Flex-10/10D and FlexFabric modules only.
The original Flex-10 module does not support dual hop FCoE.
Choosing an Adapter for VC Flex-10/10D or VC
FlexFabric
The following adapters are supported with Virtual Connect Flex-10, Flex-10/10D and FlexFabric;
Gen 8 Blades – FlexFabric FCoE/iSCSI support
HP FlexFabric 10Gb 2-port 554FLB Adapter
HP FlexFabric 10Gb 2-port 554M Adapter
Gen 8 Blades – Flex-10 Ethernet only
HP Flex-10 10Gb 2-port 530FLB Adapter
HP Flex-10 10Gb 2-port 530M Adapter
HP Flex-10 10Gb 2-port 552M Adapter
Gen 7 and older Blades – FlexFabric FCoE/iSCSI support
HP NC553i 10Gb FlexFabric adapter
HP NC553m 10Gb 2-port FlexFabric Adapter
Gen 7 and older Blades – Flex-10 Ethernet Only
HP NC552m 10Gb Dual Port Flex-10 Ethernet Adapter
HP NC532m 10Gb Dual Port Flex-10 Ethernet Adapter
HP NC542m 10Gb Dual Port Flex-10 Ethernet Adapter
HP NC550m 10Gb Dual Port Flex-10 Ethernet Adapter
The Min/Max bandwidth optimization feature released in Virtual Connect 4.01 excludes support for
the following adapters:
HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
HP NC550m 10Gb 2-port PCIe x8 Flex-10 Ethernet Adapter
The following adapters are NOT supported with Flex-10, Flex-10/10D or FlexFabric:
HP Ethernet 10Gb 2-port 560FLB FIO Adapter
HP Ethernet 10Gb 2-port 560M Adapter
Note: All 1Gb Blade LAN adapters will function with any of the Virtual Connect 10Gb Ethernet
modules; however, they will operate at 1Gb.
Determining Network Traffic Patterns and
Virtual Connect network design
(Active/Standby vs. Active/Active)
When choosing which Virtual Connect network design to use (Active/Active (A/A) vs. Active/Standby
(A/S) uplinks), consider the type of network traffic this enclosure will need to support. For example,
will there be much server to server traffic within the enclosure, or will the traffic flow mainly
in/out bound of the enclosure?
Network traffic patterns, North/South (N/S) vs. East/West (E/W), should be considered when
designing a Virtual Connect solution as network connectivity can be implemented in a way to
maximize the connected bandwidth and/or minimize the need for server to server traffic to leave
the enclosure when communicating on the same VLAN with other servers within the same
enclosure.
For example, if the solution being implemented will have a high level of in/out or North/South
traffic flow, an A/A network design would likely be the better solution as it would enable all
connected uplinks. However, if a greater level of network traffic is between systems within the
same enclosure/VLAN, such as a multi-tiered application, then a better design may be A/S, as this
would minimize or eliminate any server to server communications from leaving the enclosure.
Determining whether network connectivity is A/A vs. A/S is not a domain configuration issue or
concern. Networks are independent of one another and both A/A and A/S networks could be
implemented in the same Virtual Connect domains. As an example, an iSCSI connection could be
configured as A/A to support a high rate of N/S traffic between targets and initiators. Whereas the
LAN connectivity for the users and applications could be more E/W where an A/S network design
could be implemented.
In an active/standby network design, all servers would have both NICs connected to the same
Virtual Connect network. All communications between servers within the Virtual Connect Domain
would occur through this network, no matter which network adapter is active. In the example
below, if Windows Host 1 is active on NIC 1 and Windows Host 2 is active on NIC 2, the
communications between servers will cross the internal stacking links. For external
communications, all servers in the enclosure will use the Active uplink (currently) connected to Bay
1, no matter which NIC they are active on.
Figure 14 - This is an example of an Active/Standby network configuration. One uplink is active,
while the other is in standby, and available in the event of a network or module failure
In an A/A network design, all servers would have their NICs connected to opposite Virtual Connect
networks. Communications between servers within the Virtual Connect Domain would depend on
which NIC each server was active on. In the following example, if Windows Host 1 is active on NIC 1
and Windows Host 2 is active on NIC 2, the communications between servers will NOT cross the
internal stacking links and would need to leave the enclosure and re-enter via the opposite module;
however, if a higher rate of external communications is required, vs. peer to peer, then an
active/active configuration may be preferred as both uplinks would be actively forwarding traffic.
Also, if both servers were active on the same NIC, then communications between servers would
remain within the module/enclosure.
Figure 15 - This is an example of an Active/Active network configuration. Both uplinks are actively
forwarding traffic.
Figure 16 - Both A/A (iSCSI_x) and A/S (vNet_PROD) networks are used in this example.
Note: Alternatively, if Fibre Channel will not be required, the iSCSI networks could be connected as
iSCSI hardware accelerated and would be connected to the FlexHBA.
VMware ESXi 5.0/5.1
VMware ESX 5.0 is fully supported with BladeSystem and Virtual Connect. However, it is important
to ensure that the proper Network Adapter and HBA drivers and firmware are properly installed. As
of this writing, the following drivers and firmware should be used.
CNA driver and Firmware recommendations:

Component                                               Version
Emulex NC55x CNA Firmware                               4.2.401.2155
VMware ESXi 5.0/5.1 Driver CD for Emulex be2net         4.2.327.0
VMware ESXi50 Driver for Emulex iSCSI Driver            4.2.324.12
VMware ESX/ESXi Driver CD for Emulex FCoE/FC adapters   8.2.4.141.55
Note: As noted in the “February 2013 VMware FW and Software Recipe”
(http://vibsdepot.hp.com/hpq/recipes/February2013VMwareRecipe5.0.pdf).
Note: For the most up to date recipe document please visit “vibsdepot” at http://vibsdepot.hp.com
Figure 17 - Note the Emulex BIOS version as 4.2.401.2155
Figure 18 - Note the be2net driver and firmware level as displayed in vCenter, under the Hardware
Status tab
Single Domain/Enclosure Scenarios
Overview
This Cookbook will provide several configuration scenarios of Virtual Connect Flex-10/10D and
FlexFabric, using an HP BladeSystem c7000 enclosure. Virtual Connect also supports Multi-Enclosure
stacking, for up to 4 enclosures, which provides a single point of management and can
further reduce cable connectivity requirements. For Virtual Connect stacked configurations, see the
Virtual Connect Multi-Enclosure Stacking Reference Guide. Each scenario will provide an overview
of the configuration, show how to complete that configuration and include both GUI and CLI
(scripted) methods. Where possible, examples for Windows and/or VMware vSphere will also be
provided.
Requirements
This Cookbook will utilize a single HP BladeSystem c7000 enclosure with TWO Virtual Connect
FlexFabric or Flex-10/10D modules installed in I/O Bays 1 and 2 and a BL460c Gen 8 half-height
BladeSystem server in server Bay 1. Some of the scenarios will provide Ethernet only connections,
in which case Flex-10/10D modules may be used. In the scenarios where Fibre Channel connectivity
is required, FlexFabric modules will be used, with the exception of Scenario 9 which uses
Flex-10/10D and Virtual Connect Fibre Channel modules.
The server’s integrated converged network adapters (CNA) will connect to Bays 1 and 2, with two
10Gb FlexFabric adapter ports. Each FlexFabric Adapter port supports Ethernet and iSCSI or Fibre
Channel over Ethernet (FCoE) when connected to FlexFabric modules. Port 1 will connect to the
FlexFabric module in Bay 1 and Port 2 will connect to the FlexFabric module in Bay 2.
The Flex-10/10D modules are connected to a pair of 10Gb Ethernet switches for standard LAN
connectivity.
The FlexFabric modules and VC-FC modules are linked to a pair of 8Gb Brocade fibre channel
switches for SAN connectivity.
In each scenario, it’s assumed that a Virtual Connect Domain has been created either through the
GUI or a CLI script and no Virtual Connect Networks, uplink sets or Server Profiles have been
created. Virtual Connect scripting examples are provided within each scenario as well as additional
examples in Appendix C.
Figure 19- c7000 enclosure front view with Half Height Gen 8 BladeSystem servers installed
Figure 20 - c7000 enclosure rear view with Virtual Connect FlexFabric Modules installed in
Interconnect bays 1& 2
Figure 21 - c7000 enclosure rear view with Virtual Connect Flex-10/10D modules in Bays 1 & 2 and
Virtual Connect 20 Port 8Gb Fibre Channel Modules installed in Interconnect bays 3 & 4. If Fibre
Channel connectivity is not required, the Fibre Channel modules would not be required
Scenario 1 – Simple vNet with
Active/Standby Uplinks – Ethernet and
FCoE – Windows 2008 R2
Overview
This simple configuration uses the Virtual Connect vNet along with FCoE for SAN connectivity.
When VLAN mapping is not required, the vNet is the simplest way to connect Virtual Connect to a
network and server. In this scenario, the upstream network switch connects a network to a single
port on each FlexFabric module. In addition, Fibre Channel uplinks will also be connected to the
FlexFabric modules to connect to the existing Fibre Channel infrastructure.
No special upstream switch configuration is required as the switch is in the factory default
configuration, typically configured as an Access or untagged port on either the default VLAN or a
specific VLAN. In this scenario, Virtual Connect does not receive VLAN tags.
When configuring Virtual Connect, there are several ways to implement network fail-over or
redundancy. One option would be to connect TWO uplinks to a single vNet; those two uplinks would
connect from different Virtual Connect modules within the enclosure and could then connect to the
same or two different upstream switches, depending on your redundancy needs. An alternative
would be to configure TWO separate vNets, each with a single or multiple uplinks configured. Each
option has its advantages and disadvantages. For example; an Active/Standby configuration places
the redundancy at the VC level, where Active/Active places it at the OS NIC teaming or bonding level.
We will review the first option in this scenario.
In addition, several vNets can be configured to support the required networks to the servers within
the BladeSystem enclosure. These networks could be used to separate the various network traffic
types, such as iSCSI, backup and VMotion from production network traffic.
This scenario will also leverage the Fibre Channel over Ethernet (FCoE) capabilities of the FlexFabric
modules. Each Fibre channel fabric will have one uplink connected to each of the FlexFabric
modules.
Requirements
This scenario will support both Ethernet and fibre channel connectivity. In order to implement this
scenario, an HP BladeSystem c7000 enclosure with one or more server blades and TWO Virtual
Connect FlexFabric modules, installed in I/O Bays 1& 2 are required. In addition, we will require ONE
or TWO external Network switches. As Virtual Connect does not appear to the network as a switch
and is transparent to the network, any standard managed switch will work with Virtual Connect.
The Fibre Channel uplinks will connect to the existing FC SAN fabrics. The SAN switch ports will
need to be configured to support NPIV logins. One uplink from each FlexFabric module will be
connected to the existing SAN fabrics.
Figure 22 - Physical View; Shows one Ethernet uplink from Port X5 on Modules 1 and 2 to Port 1 on
each network switch. The SAN fabrics are also connected redundantly, with TWO uplinks per fabric,
from ports X1 and X2 on Module 1 to Fabric A and ports X1 and X2 on Module 2 to Fabric B.
Figure 23 - Logical View; Shows a single Ethernet uplink from Port X5 on Module 1 to the first
network switch and a single uplink from Port X5 on Module 2 to the second network switch. Both
Ethernet uplinks are connected to the same vNet, vNet-PROD. In addition, SAN Fabric FCoE_A
connects to the existing SAN Fabric A through port X1 on Module 1 (Bay 1) and FCoE_B connects to
the existing SAN Fabric B through port X1 on Module 2 (Bay 2)
Installation and configuration
Switch configuration
As the Virtual Connect module acts as an edge switch, Virtual Connect can connect to the network at
either the distribution level or directly to the core switch.
The appendices provide a summary of the CLI commands required to configure various switches for
connection to Virtual Connect. The configuration information provided in the appendices for this
scenario assumes the following:
The switch ports are configured as ACCESS or untagged ports, either presenting the
Default VLAN or a specific VLAN and will be forwarding untagged frames
As an alternative, if the switch ports were configured as TRUNK ports and forwarding
multiple VLANs, Virtual Connect would forward those tagged frames to the host NICs
configured for this network; however, the Virtual Connect network would need to be
configured for VLAN Tunneling, and the connected host would then need to be configured to
interpret those VLAN tags.
The network switch port should be configured for Spanning Tree Edge, as Virtual Connect appears to
the switch as an access device and not another switch. Configuring the port as Spanning Tree
Edge allows the switch to place the port into a forwarding state much sooner than it otherwise would,
so a newly connected port can come online and begin forwarding traffic more quickly.
The SAN switch ports connecting to the FlexFabric module must be configured to accept NPIV
logins.
Configuring the VC module
Physically connect Port 1 of network switch 1 to Port X5 of the VC module in Bay 1
Physically connect Port 1 of network switch 2 to Port X5 of the VC module in Bay 2
Note: if you have only one network switch, connect VC port X5 (Bay 2) to an alternate port on the
same switch. This will NOT create a network loop and Spanning Tree is not required.
Physically connect Port X1 on the FlexFabric in module Bay 1 to a switch port in SAN Fabric A
Physically connect Port X1 on the FlexFabric in module Bay 2 to a switch port in SAN Fabric B
VC CLI commands
In addition to the GUI many of the configuration settings within VC can also be accomplished via a
CLI command set. In order to connect to VC via a CLI, open an SSH connection to the IP address of
the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide
also provides many useful examples. Throughout this scenario the CLI commands to configure VC
for each setting are provided.
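For example, a CLI session can be opened with any SSH client; the address and account below are placeholders for your active VCM address and Virtual Connect user name:
# Example only - replace the address with that of your active VCM
ssh Administrator@192.168.1.100
# Once logged in, the built-in help lists the available commands and their syntax
help
show domain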
Configuring Expanded VLAN Capacity via GUI
Virtual Connect release 3.30 provided an expanded VLAN capacity mode when using Shared Uplink
Sets; this mode can be enabled through the Ethernet Settings tab or the VC CLI. The default
configuration for a new Domain install is “Expanded VLAN Capacity” mode. Legacy mode is no
longer available and the Domain cannot be downgraded.
To verify the VLAN Capacity mode
On the Virtual Connect Manager screen, Left pane, click Ethernet Settings, Advanced
Settings
Select Expanded VLAN capacity
Verify Expanded VLAN Capacity is configured and Legacy VLAN Capacity is greyed out.
Note: Legacy VLAN mode will only be presented if 1Gb Virtual Connect Modules are present, in
which case the domain would be limited to Firmware version 3.6x.
Configuring Expanded VLAN Capacity via CLI
The following command can be copied and pasted into an SSH based CLI session with Virtual
Connect;
# Set Expanded VLAN Capacity
set enet-vlan -quiet VlanCapacity=Expanded
Figure 24 - Enabling Expanded VLAN Capacity
Note: If a 1Gb VC Ethernet module is present in the Domain, Expanded VLAN capacity will be greyed
out, as this mode is only supported with 10Gb based VC modules. Also, once Expanded VLAN capacity is
selected, moving back to Legacy VLAN capacity mode will require a domain deletion and rebuild.
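The current VLAN capacity mode can also be checked from the CLI; the following show command is a sketch, and the exact output layout may vary between firmware releases:
# Display the current Ethernet VLAN settings, including the VLAN capacity mode
show enet-vlan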
Configuring Fast MAC Cache Failover
When an uplink on a VC Ethernet Module that was previously in standby mode becomes active, it
can take several minutes for external Ethernet switches to recognize that the c-Class server blades
must now be reached on this newly active connection.
Enabling Fast MAC Cache Failover forces Virtual Connect to transmit Ethernet packets on newly
active links, which enables the external Ethernet switches to identify the new connection more
quickly (and update their MAC caches appropriately). This transmission sequence repeats a few
times at the MAC refresh interval (five seconds is the recommended interval) and completes in
about one minute.
Configuring the VC Module for Fast Mac Cache Fail-over via GUI (Ethernet settings)
Set Fast MAC Cache Fail-over to 5 Seconds
On the Virtual Connect Manager screen, Left pane, click Ethernet Settings, Advanced
Settings
Click the “Other” tab
Select Fast MAC Cache Fail-over with a refresh of 5
Select Apply
Configuring the VC Module for Fast Mac Cache Fail-over via CLI (Ethernet settings)
The following command can be copied and pasted into an SSH based CLI session with Virtual
Connect;
# Set Advanced Ethernet Settings to Enable Fast MAC cache fail-over
set mac-cache Enabled=True Refresh=5
Figure 25 - Set Fast MAC Cache Fail-over (under Ethernet Settings, Advanced Settings)
Defining a new vNet via GUI
Create a vNet and name it “vNet-PROD”
Login to Virtual Connect, if a Domain has not been created, create it now, but cancel out of
the configuration wizards after the domain has been created.
On the Virtual Connect Manager screen, click Define, Ethernet Network to create a vNet
Enter the Network Name of “vNet-PROD”
o Note: Do NOT select the options (i.e., SmartLink, Private Networks or Enable VLAN
Tunnel)
Select Add Port, then add the following ports;
o Enclosure 1 (enc0), Bay 1, Port X5
o Enclosure 1 (enc0), Bay 2, Port X5
Leave Connection Mode as Auto
Optionally, Select Advanced Network Settings and set the Preferred speed to 4Gb and the
Maximum speed to 6Gb.
Select Apply
Note: By connecting TWO Uplinks from this vNet we have provided a redundant path to the
network. As each uplink originates from a different VC module, one uplink will be Active and the
second will be in Standby. This configuration provides the ability to lose an uplink cable, a network
switch or, depending on how the NICs are configured at the server (teamed or un-teamed), even a VC
module. An Active/Standby configuration also keeps server-to-server (East/West) traffic within the
enclosure, because all server NICs share the same vNet.
Note: SmartLink – In this configuration SmartLink should NOT be enabled. SmartLink is used to turn
off downlink ports within Virtual Connect, if ALL available uplinks to a vNet are down. We will use
SmartLink in a later scenario.
Figure 26 - Define Ethernet Network (vNet-PROD). Note the Port Status and Connected To
information. If the connected switch has LLDP enabled, the Connected To information should be
displayed as below
Figure 27 - Configuring the Advanced network setting for Min/Max Network Speed. We will see how
this configuration is utilized when we create the server profile
Defining a new vNet via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
# Create the vNet "vNet-PROD" and configure uplinks as discussed above
add Network vNet-PROD
add uplinkport enc0:1:X5 Network=vNet-PROD speed=auto
add uplinkport enc0:2:X5 Network=vNet-PROD speed=auto
set network vNet-PROD SmartLink=Disabled
Note: Optionally, if you wish to utilize the new Min/Max NIC speed setting provided within Virtual
Connect, you can set this network to a “Preferred” speed and a “Maximum” speed. This provides
the ability to quickly create server profiles, using the NIC speed setting of “Preferred”, and allows
Virtual Connect to configure the NIC speeds for both the minimum speed as well as the maximum
speed. Use the setting below to configure the Min/Max NIC speeds for this network. It is also
important to note that this does NOT affect the network uplink speed, which will remain at 10Gb
(or 1Gb if connected to a 1Gb switch port).
set network vNet-PROD SmartLink=Disabled PrefSpeedType=Custom PrefSpeed=4000
MaxSpeedType=Custom MaxSpeed=6000
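Optionally, the resulting vNet and the state of its uplink ports can be reviewed from the same CLI session; this is a sketch, and the columns displayed depend on the firmware release:
# Verify the vNet configuration and the status of its assigned uplink ports
show network vNet-PROD
show uplinkport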
Defining a new (FCoE) SAN Fabric via GUI
Create a Fabric and name it “FCoE_A”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the first Fabric
Enter the Network Name of “FCoE_A”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 1, Port X1
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Manual Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Create a second Fabric and name it “FCoE_B”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the second Fabric
Enter the Network Name of “FCoE_B”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 2, Port X1
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Manual Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Defining SAN Fabrics via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
#Create the SAN Fabrics FCoE_A and FCoE_B and configure uplinks as discussed above
add fabric FCoE_A Type=FabricAttach Bay=1 Ports=1 Speed=Auto LinkDist=Manual
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
add fabric FCoE_B Type=FabricAttach Bay=2 Ports=1 Speed=Auto LinkDist=Manual
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
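The fabric configuration can be confirmed from the CLI as well; the show commands below are a sketch, and the output format may differ between firmware releases:
# Verify both FCoE SAN fabrics and their uplink port assignments
show fabric FCoE_A
show fabric FCoE_B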
Figure 28 - SAN Configuration and Advanced Settings
Figure 29 - FCoE SAN fabrics configured with a single 8Gb uplink per fabric. Note the bay and port
numbers on the right
Defining a Server Profile with NIC and FCoE Connections, via GUI
Each server NIC will connect to a specific network.
On the Virtual Connect Manager screen, click Define, Server Profile to create a Server Profile
Create a server profile called “App-1”
In the Network Port 1 drop down box, select “vNet-PROD”
In the Network Port 2 drop down box, select “vNet-PROD”
Expand the FCoE Connections box, for Bay 1, select FCoE_A for Bay 2, select FCoE_B
Do not configure FC SAN or iSCSI Connection
In the Assign the Profile to a Server Bay, select Bay 1 and apply
Prior to applying the profile, ensure that the server in Bay 1 is currently OFF
Note: you should now have a server profile assigned to Bay 1, with 2 Server NIC connections. NICs
1 & 2 should be connected to network vNet-PROD, and the FCoE connections to SAN fabrics FCoE_A and FCoE_B.
Defining a Server Profile with NIC and FCoE Connections, via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
# Create and Assign Server Profile App-1 to server bay 1
add profile App-1 -nodefaultfcconn -nodefaultfcoeconn
set enet-connection App-1 1 pxe=Enabled Network=vNet-PROD
set enet-connection App-1 2 pxe=Disabled Network=vNet-PROD
add fcoe-connection App-1 Fabric=FCoE_A SpeedType=4Gb
add fcoe-connection App-1 Fabric=FCoE_B SpeedType=4Gb
assign profile App-1 enc0:1
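Once the profile has been assigned, it can be reviewed and the server powered back on from the same CLI session; the commands below are a sketch, and the MAC and WWN values reported for the profile can later be matched against the adapters listed in Windows:
# Review the profile assigned to Bay 1, then power the server on
show profile App-1
poweron server 1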
Figure 30 - Define Server Profile (App-1)
Note: Observe the speed settings for both NIC and SAN connections and the new “Max” parameter,
as well as the MAC and WWN addresses. Also, note that the FCoE connections are assigned to the
two SAN fabrics created earlier and use ports LOM:1-b and LOM:2-b.
Figure 31 - Server Profile View Bay 1
Review
In this scenario we have created a simple vNet utilizing uplinks originating from each FlexFabric
Module; by doing so we provide redundant connections out of the Virtual Connect domain, where
one uplink will be active and the alternate uplink will be in standby. We also created two FCoE SAN
fabrics, utilizing a single uplink each.
We created a server profile with two NICs connected to the same vNet, which provides the ability to
sustain a link or module failure and not lose connection to the network. This configuration also
guarantees that ALL server to server communications within the enclosure will remain inside the
enclosure. Alternatively, we could have created two vNets and assigned uplinks from one module
to each vNet, providing an Active/Active uplink scenario.
Additionally, FCoE port 1 is connected to SAN fabric FCoE_A and FCoE port 2 is connected to
SAN Fabric FCoE_B, providing a multi-pathed connection to the SAN.
Additional uplinks could be added to either the SAN fabrics or the Ethernet networks, which could
increase performance and/or availability.
Results – Windows 2008 R2 Networking Examples
We have successfully configured FlexFabric with a simple vNet and redundant SAN fabrics. We have
created a server profile to connect to the vNet with TWO NICs and the SAN fabrics using the FCoE
connections created within the profile.
Although both Ethernet and Fibre Channel connectivity are provided by the CNA adapter used in the
G7 and Gen 8 servers, each capability (LAN and SAN) is provided by a different component of the
adapter, so they appear in the server as individual network and SAN adapters.
Figure 32 - Example of Emulex's OneCommand Manager Utility (formerly known as HBA Anywhere).
Note that there are 3 Ethernet personalities and one FCoE personality per port, as configured in the
server profile.
The following graphics show a Windows 2008 R2 server with TWO FlexNICs configured at 6Gb. You
will also notice that Windows believes there are 6 NICs within this server. However, only TWO NICs
are currently configured within the FlexFabric profile; the extra NICs are offline and could be
disabled. If we did not require SAN connectivity on this server, the FCoE connections could be
deleted and the server would then have 8 NIC ports available to the OS.
Note: the BL465c G7 and BL685c G7 utilize an NC551i chipset (BE2), whereas the BL460c G7,
BL620c G7 and BL680c G7 utilize an NC553i chipset (BE3) and the Gen 8 blades typically have a
NC554 adapter which also utilizes the BE3 chipset. Both the BE2 and BE3 chipsets share common
drivers and firmware.
Note: The NICs that are not configured within VC will appear with a red x as not connected. You can
go into Network Connections for the Windows 2008 server and disable any NICs that are not
currently in use. Windows assigns the NICs as NIC 1-6, whereas three of the NICs will reside on
LOM:1 and three on LOM:2. You may need to refer to the FlexFabric server profile for the NIC MAC
addresses to verify which NIC is which.
Figure 34 - Windows 2008 R2 Extra Network Connections – Disabled
Figure 35 - Windows 2008 R2 Network Connection Status
Note: In Windows 2008 and later the actual NIC speed is displayed as configured in server Profile.
Also, note that the speed displayed is the maximum speed setting, not the minimum setting.
Figure 36 - Windows 2008 R2, Device Manager, SIX NICs are shown, however, we have only
configured two of the NICs and two FCoE HBAs.
The following graphics provide an example of a Windows 2008 R2 server with TWO NICs connected
to the network. Initially each NIC has its own TCP/IP address; alternatively, both NICs could be
teamed to provide NIC fail-over redundancy. If an active uplink or network switch were to fail,
Virtual Connect would fail-over to the standby uplink. In the event of a Virtual Connect FlexFabric
module failure, the server’s NIC teaming software would see one of the NICs go offline and, assuming it
was the active NIC, NIC teaming would fail-over to the standby NIC.
Figure 37 - Both NICs for Profile App-1 are connected to the network through vNet-PROD
NIC Teaming
If higher availability is desired, NIC teaming in Virtual Connect works the same way as in standard
network configurations. Simply, open the NIC teaming Utility and configure the available NICs for
teaming. In this example, we have only TWO NICs available, so selecting NICs for teaming will be
quite simple. However, if multiple NICs are available, ensure that the correct pair of NICs is teamed.
You will note the BAY#-Port# indication within each NIC. Another way to confirm you have the
correct NIC is to verify through the MAC address of the NIC. You would typically TEAM a NIC from
Bay 1 to Bay 2 for example.
The following graphics provide an example of a Windows 2008 R2 server with TWO NICs teamed
and connected to the network. In the event of an uplink or switch failure, VC will fail-over to the
standby uplink; if a VC FlexFabric module were to fail, the NIC teaming software would fail-over to
the standby NIC.
Figure 38 - Team both NICs, using the HP Network Configuration Utility
Figure 39 - Both NICs for Profile App-1 are teamed and connected to the network through vNet-PROD
Various modes can be configured for NIC Teaming, such as NFT, TLB etc. Once the Team is created,
you can select the team and edit its properties. Typically, the default settings can be used.
Figure 40 - View – Network Connections – NIC Team #1 – Windows
Figure 41 - Both NICs are teamed and connect to the network with a common IP Address
Results – Windows 2008 R2 SAN Connectivity
Figure 42 - By accessing the HBA BIOS during server boot, you can see that Port 1 of the FlexHBA is
connected to an EVA SAN LUN
Figure 43 - Windows 2008 R2 Disk Administrator. Note that D: is the SAN-attached volume
Summary
We presented a Virtual Connect network scenario by creating a simple vNet and configuring the new
Min/Max network speed setting for the vNet. A dual path SAN fabric for storage connectivity was
also configured.
When VC profile App-1 is applied to the server in Bay 1 and the server is powered up, it has one NIC
through each FlexFabric module connected to “vNet-PROD”, which connects to the
network infrastructure through the 10Gb uplinks.
NICs with their own IP address or as a pair of TEAMED NICs. Either NIC could be active. As a result,
this server could access the network through either NIC or either uplink cable, depending on which
is active at the time. Each NIC is configured for a guaranteed minimum bandwidth of 4Gb, with a
maximum of 6Gb of network bandwidth and each FCoE port is configured for 4Gb of SAN bandwidth
with the ability to burst to a maximum of 8Gb.
Additional NICs could be added within FlexFabric by simply powering the server off and adding up
to a total of 6 NICs; the NIC speed can then be adjusted accordingly to suit the needs of each NIC. If
more or less SAN bandwidth is required, the speed of the SAN connection can also be adjusted.
As additional servers are added to the enclosure, simply create additional profiles, or copy existing
profiles, configure the NICs for LAN and SAN fabrics as required and apply them to the appropriate
server bays and power the server on.
Scenario 2 – Shared Uplink Sets with
Active/Active Uplinks and 802.3ad (LACP) – Ethernet and FCoE – Windows 2008 R2
Overview
This scenario will implement the Shared Uplink Set (SUS) to provide support for multiple VLANs.
Virtual Connect 3.30 increased the number of VLANs supported on a Shared Uplink Set and provides
some enhanced GUI and CLI features that reduce the effort required to create a large number of
VLANs. The upstream network switches connect to each FlexFabric module through two separate
Shared Uplink Sets, providing an Active/Active configuration; LACP will be used to aggregate those
links.
As multiple VLANs will be supported in this configuration, the upstream switch ports connecting to
the FlexFabric modules will be configured to properly present those VLANs. In this scenario, the
upstream switch ports will be configured for VLAN trunking/VLAN tagging.
When configuring Virtual Connect, there are several ways to implement network fail-over or
redundancy. One option is to connect TWO uplinks to a single Virtual Connect network;
those two uplinks would connect from different Virtual Connect modules within the enclosure and
could then connect to the same upstream switch or two different upstream switches, depending on
your redundancy needs. An alternative would be to configure TWO separate Virtual Connect
networks, each with a single, or multiple, uplinks configured. Each option has its advantages and
disadvantages. For example, an Active/Standby configuration places the redundancy at the VC
level, whereas Active/Active places it at the OS NIC teaming or bonding level. We will review the
second option in this scenario.
In addition, several Virtual Connect Networks can be configured to support the required networks to
the servers within the BladeSystem enclosure. These networks could be used to separate the
various network traffic types, such as iSCSI, backup and VMotion from production network traffic.
This scenario will also leverage the Fibre Channel over Ethernet (FCoE) capabilities of the FlexFabric
modules. Each fibre channel fabric will have two uplinks connected to each of the FlexFabric
modules.
Requirements
This scenario will support both Ethernet and Fibre Channel connectivity. In order to implement this
scenario, an HP BladeSystem c7000 enclosure with one or more server blades and TWO Virtual
Connect FlexFabric modules, installed in I/O Bays 1 & 2, are required. In addition, we will require ONE
or TWO external network switches. As Virtual Connect does not appear to the network as a switch
and is transparent to the network, any standard managed switch will work with Virtual Connect.
The Fibre Channel uplinks will connect to the existing FC SAN fabrics. The SAN switch ports will
need to be configured to support NPIV logins. Two uplinks from each FlexFabric module will be
connected to the existing SAN fabrics.
Figure 44 - Physical View; Shows two Ethernet uplinks from Ports X5 and X6 on Module 1 and 2 to
Ports 1 and 2 on each network switch. The SAN fabrics are also connected redundantly, with TWO
uplinks per fabric, from ports X1 and X2 on module 1 to Fabric A and ports X1 and X2 to Fabric B.
Figure 45 - Logical View; the server blade profile is configured with TWO FlexNICs and 2 FlexHBAs.
NICs 1 and 2 are connected to VLAN-101-x which are part of the Shared Uplink Sets, VLAN-Trunk-1
and VLAN-Trunk-2 respectively. The VLAN-Trunks are connected, at 10Gb, to a network switch,
through Ports X5 and X6 on each FlexFabric Module in Bays 1 and 2. The FCoE SAN connections are
connected through ports X1 and X2 on each FlexFabric module. In addition, SAN Fabric FCoE_A
connects to the existing SAN Fabric A through port X1 on Module 1 (Bay 1) and FCoE_B connects to
the existing SAN Fabric B through port X1 on Module 2 (Bay 2)
Installation and configuration
Switch configuration
As the Virtual Connect module acts as an edge switch, Virtual Connect can connect to the network at
either the distribution level or directly to the core switch.
The appendices provide a summary of the CLI commands required to configure various switches for
connection to Virtual Connect. The configuration information provided in the appendices for this
scenario assumes the following:
The switch ports are configured as VLAN TRUNK ports (tagging) to support several VLANs.
All frames will be forwarded to Virtual Connect with VLAN tags. Optionally, one VLAN could
be configured as (Default) untagged, if so, then a corresponding vNet within the Shared
Uplink Set would be configured and set as “Default”
Note: when adding additional uplinks to the SUS, if the additional uplinks are connecting from the
same FlexFabric module to the same switch, in order to ensure all the uplinks are active, the switch
ports will need to be configured for LACP within the same Link Aggregation Group.
The network switch port should be configured for Spanning Tree Edge, as Virtual Connect appears to
the switch as an access device and not another switch. Configuring the port as Spanning Tree
Edge allows the switch to place the port into a forwarding state much sooner than it otherwise would,
so a newly connected port can come online and begin forwarding traffic more quickly.
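As an illustration only, and assuming a Cisco IOS based upstream switch, the two ports connecting to one FlexFabric module might be configured along the following lines; the interface names, channel-group number and VLAN range are placeholders, and the appendices cover the exact commands for the various supported switches:
interface range TenGigabitEthernet1/0/1 - 2
 description Uplinks to VC FlexFabric Bay 1 ports X5-X6
 switchport mode trunk
 switchport trunk allowed vlan 101-105,2100-2400
 channel-group 1 mode active
 spanning-tree portfast trunk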
The SAN switch ports connecting to the FlexFabric module must be configured to accept NPIV
logins.
Configuring the VC module
Physically connect Port 1 of network switch 1 to Port X5 of the VC module in Bay 1
Physically connect Port 2 of network switch 1 to Port X6 of the VC module in Bay 1
Physically connect Port 1 of network switch 2 to Port X5 of the VC module in Bay 2
Physically connect Port 2 of network switch 2 to Port X6 of the VC module in Bay 2
Note: if you have only one network switch, connect VC ports X5 and X6 (Bay 2) to alternate ports
on the same switch. This will NOT create a network loop and Spanning Tree is not required.
Physically connect Ports X1/X2 on the FlexFabric in module Bay 1 to switch ports in SAN Fabric A
Physically connect Ports X1/X2 on the FlexFabric in module Bay 2 to switch ports in SAN Fabric B
VC CLI commands
Many of the configuration settings within VC can also be accomplished via a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once
logged in, VC provides a CLI with help menus. Throughout this scenario the CLI commands to configure
VC for each setting will also be provided.
Configuring Expanded VLAN Capacity via GUI
Virtual Connect release 3.30 provided an expanded VLAN capacity mode when using Shared Uplink
Sets; this mode can be enabled through the Ethernet Settings tab or the VC CLI. The default
configuration for a new Domain install is “Expanded VLAN Capacity” mode. Legacy mode is no
longer available and the Domain cannot be downgraded.
To verify the VLAN Capacity mode
On the Virtual Connect Manager screen, Left pane, click Ethernet Settings, Advanced
Settings
Select Expanded VLAN capacity
Verify Expanded VLAN Capacity is configured and Legacy VLAN Capacity is greyed out.
Note: Legacy VLAN mode will only be presented if 1Gb Virtual Connect Modules are present, in
which case the domain would be limited to Firmware version 3.6x.
Configuring Expanded VLAN Capacity via CLI
The following command can be copied and pasted into an SSH based CLI session with Virtual
Connect;
# Set Expanded VLAN Capacity
set enet-vlan -quiet VlanCapacity=Expanded
Figure 46 - Enabling Expanded VLAN Capacity
Note: If a 1Gb VC Ethernet module is present in the Domain, Expanded VLAN capacity will be greyed
out, as this mode is only supported with 10Gb based VC modules. Also, once Expanded VLAN capacity is
selected, moving back to Legacy VLAN capacity mode will require a domain deletion and rebuild.
Defining a new Shared Uplink Set (VLAN-Trunk-1)
Connect Ports X5 and X6 of FlexFabric module in Bay 1 to Ports 1 and 2 on switch 1
Create a SUS named VLAN-Trunk-1 and connect it to FlexFabric Ports X5 and X6 on Module 1
On the Virtual Connect Home page, select Define, Shared Uplink Set
Insert Uplink Set Name as VLAN-Trunk-1
Select Add Port, then add the following ports;
o Enclosure 1, Bay 1, Port X5
o Enclosure 1, Bay 1, Port X6
Figure 47 - Shared Uplink Set (VLAN-Trunk-1) Uplinks Assigned
Click Add Networks and select the Multiple Networks radio button and add the following
VLANs;
o Enter Name as VLAN-
o Enter Suffix as -1
o Enter VLAN IDs as follows (and shown in the following graphic);
o 101-105,2100-2400
Enable SmartLink on ALL networks
Click Advanced
o Configure Preferred speed to 4Gb
o Configure Maximum speed to 8Gb
Click Apply
Note: You can optionally specify a network “color” or “Label” when creating a Shared Uplink Set and
its networks. In the example above we have not set either color or label. Also notice the new
configurable LACP Timer setting.
Figure 48 - Creating VLANs in a Shared Uplink Set
Note: If the VC domain is not in Expanded VLAN capacity mode, you will receive an error when
attempting to create more than 128 VLANs in a SUS. If that occurs, go to Advanced Ethernet
Settings and select Expanded VLAN capacity mode and apply.
After clicking apply, a list of VLANs will be presented as shown below. If one VLAN in the trunk is
untagged/native, see note below.
Click Apply at the bottom of the page to create the Shared Uplink Set
Figure 49 - Associated VLANs for Shared Uplink Set VLAN-Trunk-1
Note: Optionally, if one of the VLANs in the trunk to this shared uplink set were configured as Native
or Untagged, you would be required to “edit” that VLAN in the screen above, and configure Native as
TRUE. This would need to be set for BOTH VLAN-Trunk-1 and VLAN-Trunk-2.
Defining a new Shared Uplink Set (VLAN-Trunk-2) (Copying a Shared Uplink Set)
The second Shared Uplink Set could be created in the same manner as VLAN-Trunk-1 however; VC
now provides the ability to COPY a VC Network or Shared Uplink Set.
Connect Ports X5 and X6 of FlexFabric module in bay 2 to Ports 1 and 2 on switch 2
In the VC GUI screen, select Shared Uplink Sets in the left pane, in the right pane VLAN-
Trunk-1 will be displayed, left click VLAN-Trunk-1, it will appear as blue, right click and
select COPY
Edit the Settings as shown in the following graphic; the new SUS name will be VLAN-Trunk-
2 and ALL the associated VLANs will have a suffix of -2
In step 3, ADD uplinks X5 and X6 from Bay 2
Click OK
The SUS and ALL VLANs will be created
Figure 50 - Copying a SUS and ALL VLANs
Defining a new Shared Uplink Set via CLI
The following script can be used to create the Shared Uplink Sets (VLAN-Trunk-1 and VLAN-Trunk-2) and their associated VLANs
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
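The network-range commands below assume that each Shared Uplink Set already exists and has its uplink ports assigned. A minimal sketch of creating them first is shown here; the uplinkset and uplinkport syntax is an assumption based on the Virtual Connect CLI guide and should be verified against your firmware release:
# Create Shared Uplink Set VLAN-Trunk-1 and assign uplinks from Bay 1 (assumed syntax)
add uplinkset VLAN-Trunk-1
add uplinkport enc0:1:X5 UplinkSet=VLAN-Trunk-1 speed=auto
add uplinkport enc0:1:X6 UplinkSet=VLAN-Trunk-1 speed=auto
# Create Networks VLAN-101-1 through VLAN-105-1 and VLAN-2100-1 through VLAN-2400-1 for Shared Uplink Set VLAN-Trunk-1
add network-range -quiet UplinkSet=VLAN-Trunk-1 NamePrefix=VLAN- NameSuffix=-1
VLANIds=101-105,2100-2400 State=enabled PrefSpeedType=Custom PrefSpeed=4000
MaxSpeedType=Custom MaxSpeed=8000 SmartLink=enabled
# Create Shared Uplink Set VLAN-Trunk-2 and assign uplinks from Bay 2 (assumed syntax)
add uplinkset VLAN-Trunk-2
add uplinkport enc0:2:X5 UplinkSet=VLAN-Trunk-2 speed=auto
add uplinkport enc0:2:X6 UplinkSet=VLAN-Trunk-2 speed=auto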
# Create Networks VLAN-101-2 through VLAN-105-2 and VLAN-2100-2 through VLAN-2400-2 for Shared Uplink Set VLAN-Trunk-2
add network-range -quiet UplinkSet=VLAN-Trunk-2 NamePrefix=VLAN- NameSuffix=-2
VLANIds=101-105,2100-2400 State=enabled PrefSpeedType=Custom PrefSpeed=4000
MaxSpeedType=Custom MaxSpeed=8000 SmartLink=enabled
Note: In this scenario we have created two independent Shared Uplink Sets (SUS), each originating
from the opposite FlexFabric Module; by doing so we provide the ability to create separate and
redundant connections out of the Virtual Connect domain. When we create the server profiles, you
will see how the NICs will connect to VLANs accessed through the opposite VC module, which
provides the ability to create an Active/Active uplink scenario. Alternatively, we could have
created a single SUS and assigned both sets of these uplink ports to the same SUS; however, this
would have provided an Active/Standby uplink scenario, as shown in Scenario 5.
Note: If you do not configure the Min/Max network speed settings for these networks, you will
notice in each profile that ALL configured NICs will have a configured speed of 10Gb. If you wish
to limit the speed of specific networks, you can edit each network and configure a maximum
speed.
Defining a new (FCoE) SAN Fabric via GUI
Create a Fabric and name it “FCoE_A”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the first Fabric
Enter the Network Name of “FCoE_A”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 1, Port X1
o Enclosure 1, Bay 1, Port X2
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Automatic Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Create a second Fabric and name it “FCoE_B”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the second Fabric
Enter the Network Name of “FCoE_B”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 2, Port X1
o Enclosure 1, Bay 2, Port X2
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Automatic Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Defining SAN Fabrics via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
#Create the SAN Fabrics FCoE_A and FCoE_B and configure uplinks as discussed above
add fabric FCoE_A Type=FabricAttach Bay=1 Ports=1,2 Speed=Auto LinkDist=Auto
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
add fabric FCoE_B Type=FabricAttach Bay=2 Ports=1,2 Speed=Auto LinkDist=Auto
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
Figure 51 - SAN Configuration and Advanced Settings
Figure 52 - FCoE SAN fabrics configured with two 8Gb uplinks per fabric. Note the bay and port
numbers on the right
Defining a Server Profile
We will create a server profile with two server NICs.
Although we have created Shared Uplink Sets with several VLANs, each server NIC, for this
scenario, will connect to VLAN 101; all other networks/VLANs will remain unused.
On the main menu, select Define, then Server Profile
Create a server profile called “App-1”
In the Network Port 1 drop down box, select VLAN-101-1
In the Network Port 2 drop down box, select VLAN-101-2
Expand the FCoE Connections box, for Bay 1, select FCoE_A for Bay 2, select FCoE_B
Do not configure FC SAN or iSCSI Connection
In the Assign Profile to Server Bay box, locate the Select Location drop down and select
Bay 1, then apply
Prior to applying the profile, ensure that the server in Bay 1 is currently OFF
Note: you should now have a server profile assigned to Bay 1, with 2 Server NIC connections. NICs
1 & 2 should be connected to networks VLAN-101-1 and VLAN-101-2 and FCoE SAN fabrics FCoE_A
and FCoE_B.
Defining a Server Profile via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
# Create Server Profile App-1
add profile App-1 -nodefaultfcconn -nodefaultfcoeconn
set enet-connection App-1 1 pxe=Enabled Network=VLAN-101-1
set enet-connection App-1 2 pxe=Disabled Network=VLAN-101-2
add fcoe-connection App-1 Fabric=FCoE_A SpeedType=4Gb
add fcoe-connection App-1 Fabric=FCoE_B SpeedType=4Gb
poweroff server 1
assign profile App-1 enc0:1
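Optionally, the assigned profile can be reviewed and the server powered back on from the same CLI session; a sketch:
# Review the profile assigned to Bay 1, then power the server back on
show profile App-1
poweron server 1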
Note the speed of the NIC and SAN connections, as well as the MAC and WWN addresses. Also note that the FCoE
connections are assigned to the two SAN fabrics created earlier and use ports LOM:1-b and LOM:2-b.
Figure 53 - Define a Server Profile (App-1)
Figure 54 - Server Profile View Bay 1
Review
In this scenario we have created Two Shared Uplink Sets (SUS), providing support for many VLANs.
Uplinks originating from each FlexFabric Module connect to each SUS, by doing so we provide
redundant connections out of the Virtual Connect domain. As multiple uplinks are used for each
SUS, we have also leveraged LACP to improve uplink performance. In this scenario, all uplinks will
be active. We also created two FCoE SAN Fabrics.
We created a server profile with two NICs connected to the same external VLAN (101) through VC
networks VLAN-101-1 and VLAN-101-2, which provides the ability to sustain a link or module
failure and not lose connection to the network. VLAN-101-1 and VLAN-101-2 are configured to
support VLAN 101; frames will be presented to the NIC(s) without VLAN tags (untagged), and these two
NICs are connected to the same VLAN but take a different path out of the enclosure.
Additionally, FCoE port 1 is connected to SAN fabric FCoE_A and FCoE port 2 is connected to
SAN Fabric FCoE_B, providing a multi-pathed connection to the SAN.
Additional uplinks could be added to either the SAN fabrics or the Ethernet networks, which could
increase performance and/or availability.
Results – Windows 2008 R2 Networking Examples
We have successfully configured FlexFabric with a shared uplink set and redundant SAN fabrics. We
have created a server profile to connect the TWO NICs to VLAN 101 and the SAN fabrics using the
FCoE connections created within the profile.
Although both Ethernet and Fibre Channel connectivity are provided by the CNA used in the G7 and
Gen 8 servers, each capability (LAN and SAN) is provided by a different component of the adapter,
so they appear in the server as individual network and SAN adapters.
Figure 55 - Example of Emulex's OneCommand Manager Utility (formerly known as HBA
Anywhere). Note that there are 3 Ethernet personalities and one FCoE personality per port, as
configured in the server profile.
The following graphics show a Windows 2008 R2 server with TWO FlexNICs configured at 6Gb. You
will also notice that Windows believes there are 6 NICs within this server. However, only TWO NICs
are currently configured within the FlexFabric profile; the extra NICs are offline and could be
disabled. If we did not require SAN connectivity on this server, the FCoE connections could be
deleted and the server would then have 8 NIC ports available to the OS.
Note: the BL465c G7 and BL685c G7 utilize an NC551i chipset (BE2), whereas the BL460c G7,
BL620c G7 and BL680c G7 utilize an NC553i chipset (BE3) and the Gen 8 blades typically have a
NC554 adapter which also utilizes the BE3 chipset. Both the BE2 and BE3 chipsets share common
drivers and firmware.
Note: The NICs that are not configured within VC will appear with a red x as not connected. You can
go into Network Connections for the Windows 2008 server and Disable any NICs that are not
currently in use. Windows assigns the NICs as NIC 1-6, whereas three of the NICs will reside on
LOM:1 and three on LOM:2. You may need to refer to the FlexFabric server profile for the NIC MAC
addresses to verify which NIC is which.
Figure 57 - Windows 2008 R2 Extra Network Connections – Disabled
Figure 58 - Windows 2008 R2 Network Connection Status
Note: In Windows 2008 and later the actual NIC speed is displayed as configured in server Profile.
Also, note that the speed displayed is the maximum speed setting, not the minimum setting.
Figure 59 - Windows 2008 R2, Device Manager, SIX NICs are shown, however, we have only
configured two of the NICs and two FCoE HBAs.
The following graphics provide an example of a Windows 2008 R2 server with TWO NICs connected
to the network. Initially each NIC has its own TCP/IP address; alternatively, both NICs could be
teamed to provide NIC fail-over redundancy. If an active uplink or network switch were to fail,
Virtual Connect would fail-over to the standby uplink. In the event of a Virtual Connect FlexFabric
module failure, the server’s NIC teaming software would see one of the NICs go offline and, assuming it
was the active NIC, NIC teaming would fail-over to the standby NIC.
Figure 60 – Each NIC is connected to VLAN 101
NIC Teaming
If higher availability is desired, NIC teaming in Virtual Connect works the same way as in standard
network configurations. Simply, open the NIC teaming Utility and configure the available NICs for
teaming. In this example, we have only TWO NICs available, so selecting NICs for teaming will be
quite simple. However, if multiple NICs are available, ensure that the correct pair of NICs is teamed.
You will note the BAY#-Port# indication within each NIC. Another way to confirm you have the
correct NIC is to verify through the MAC address of the NIC. You would typically TEAM a NIC from
Bay 1 to Bay 2 for example.
The following graphics provide an example of a Windows 2008 R2 server with TWO NICs teamed
and connected to the network. In the event of an uplink or switch failure, VC will fail-over to the
standby uplinks; if a VC FlexFabric module were to fail, the NIC teaming software would fail-over to
the standby NIC.
Figure 61 – Team both NICs, using the HP Network Configuration Utility
Figure 62 - Both NICs for Profile App-1 are teamed and connected to the network through vNet-PROD
Various modes can be configured for NIC Teaming, such as NFT, TLB etc. Once the Team is created,
you can select the team and edit its properties. Typically, the default settings can be used.
Figure 63 - View – Network Connections – NIC Team #1 – Windows
Figure 64 - Both NICs are teamed and connect to the network with a common IP Address
Results – Windows 2008 R2 SAN Connectivity
Figure 65 - By accessing the HBA BIOS during server boot, you can see that Port 1 of the FlexHBA is
connected to an EVA SAN LUN
Figure 66 - Windows 2008 R2 Disk Administrator. Note that D: is the SAN-attached volume
Summary
We presented a Virtual Connect network scenario by creating two Shared Uplink Sets (SUS); each
SUS is connected with TWO active uplinks and both SUSs can actively pass traffic. We included a dual
path SAN fabric for storage connectivity.
When VC profile App-1 is applied to the server in Bay 1 and the server is powered up, it has one NIC
connected through FlexFabric module 1 (connected to VLAN-101-1) and the second NIC connected
through FlexFabric module 2 (connected to VLAN-101-2). These NICs
could now be configured as individual NICs with their own IP address or as a pair of TEAMED NICs.
Either NIC could be active. As a result, this server could access the network through either NIC or
either uplink, depending on which NIC is active at the time. Each NIC is configured for a guaranteed
minimum bandwidth of 4Gb, with a maximum of 8Gb of network bandwidth, and each FCoE port is
configured for 4Gb of SAN bandwidth with the ability to burst to a maximum of 8Gb.
Additional NICs could be added within FlexFabric by simply powering the server off and adding up
to a total of 6 NICs; the NIC speed can then be adjusted accordingly to suit the needs of each NIC. If
more or less SAN bandwidth is required, the speed of the SAN connection can also be adjusted.
As additional servers are added to the enclosure, simply create additional profiles, or copy existing
profiles, configure the NICs for LAN and SAN fabrics as required and apply them to the appropriate
server bays and power the server on.
Scenario 3 – Shared Uplink Set with
Active/Active Uplinks and 802.3ad (LACP) – Ethernet and FCoE Boot from SAN –
Windows 2008 R2
Overview
This scenario will implement the Shared Uplink Set (SUS) to provide support for multiple VLANs.
The upstream network switches connect a shared uplink set to two ports on each FlexFabric
module; LACP will be used to aggregate those links. This scenario is identical to Scenario 2,
however, scenario 3 also provides the steps to configure a Windows 2008 R2 server to Boot from
SAN using the FCoE connections provided by FlexFabric. When using Virtual Connect/FlexFabric in a
Boot from SAN implementation, no custom or special HBA configuration is required. The HBA
configuration is controlled by Virtual Connect and maintained as part of the server profile. Once the
server profile has been configured and applied to the server bays, the controller will be configured
on the next and subsequent boot. When we later configure the server profile, we will also configure
the HBA boot parameters.
As multiple VLANs will be supported in this configuration, the upstream switch ports connecting to
the FlexFabric modules will be configured to properly present those VLANs. In this scenario, the
upstream switch ports will be configured for VLAN trunking/VLAN tagging.
When configuring Virtual Connect, there are several ways to implement network fail-over or
redundancy. One option is to connect TWO uplinks to a single Virtual Connect network;
those two uplinks would connect from different Virtual Connect modules within the enclosure and
could then connect to the same upstream switch or two different upstream switches, depending on
your redundancy needs. An alternative would be to configure TWO separate Virtual Connect
networks, each with a single, or multiple, uplinks configured. Each option has its advantages and
disadvantages. For example, an Active/Standby configuration places the redundancy at the VC
level, whereas Active/Active places it at the OS NIC teaming or bonding level. We will review the
second option in this scenario.
In addition, several Virtual Connect Networks can be configured to support the required networks to
the servers within the BladeSystem enclosure. These networks could be used to separate the
various network traffic types, such as iSCSI, backup and VMotion from production network traffic.
This scenario will also leverage the Fibre Channel over Ethernet (FCoE) capabilities of the FlexFabric
modules. Each fibre channel fabric will have two uplinks connected to each of the FlexFabric
modules.
Requirements
This scenario will support both Ethernet and fibre channel connectivity. In order to implement this
scenario, an HP BladeSystem c7000 enclosure with one or more server blades and TWO Virtual
Connect FlexFabric modules, installed in I/O Bays 1 & 2, are required. In addition, we will require ONE
or TWO external Network switches. As Virtual Connect does not appear to the network as a switch
and is transparent to the network, any standard managed switch will work with Virtual Connect.
The fibre channel uplinks will connect to the existing FC SAN fabrics. The SAN switch ports will need
to be configured to support NPIV logins. Two uplinks from each FlexFabric module will be
connected to the existing SAN fabrics.
Figure 67 - Physical View; Shows two Ethernet uplinks from Ports X5 and X6 on Module 1 and 2 to
Ports 1 and 2 on each network switch. The SAN fabrics are also connected redundantly, with TWO
uplinks per fabric, from ports X1 and X2 on module 1 to Fabric A and ports X1 and X2 to Fabric B.
Figure 68 -Logical View; the server blade profile is configured with TWO FlexNICs and 2 FlexHBAs.
NICs 1 and 2 are connected to VLAN-101-x which are part of the Shared Uplink Sets, VLAN-Trunk-1
and VLAN-Trunk-2 respectively. The VLAN-Trunks are connected, at 10Gb, to a network switch,
through Ports X5 and X6 on each FlexFabric Module in Bays 1 and 2. The FCoE SAN connections are
connected through ports X1 and X2 on each FlexFabric module. In addition, SAN Fabric FCoE_A
connects to the existing SAN Fabric A through port X1 on Module 1 (Bay 1) and FCoE_B connects to
the existing SAN Fabric B through port X1 on Module 2 (Bay 2).
Installation and configuration
Switch configuration
As the Virtual Connect module acts as an edge switch, Virtual Connect can connect to the network at
either the distribution level or directly to the core switch.
The appendices provide a summary of the CLI commands required to configure various switches for
connection to Virtual Connect. The configuration information provided in the appendices for this
scenario assumes the following:
The switch ports are configured as VLAN TRUNK ports (tagging) to support several VLANs.
All frames will be forwarded to Virtual Connect with VLAN tags. Optionally, one VLAN could
be configured as (Default) untagged, if so, then a corresponding vNet within the Shared
Uplink Set would be configured and set as “Default”
Note: when adding additional uplinks to the SUS, if the additional uplinks are connecting from the
same FlexFabric module to the same switch, in order to ensure all the uplinks are active, the switch
ports will need to be configured for LACP within the same Link Aggregation Group.
The network switch port should be configured for Spanning Tree Edge, as Virtual Connect appears to
the switch as an access device and not another switch. Configuring the port as Spanning Tree
Edge allows the switch to place the port into a forwarding state much sooner than it otherwise would,
so a newly connected port can come online and begin forwarding traffic more quickly.
The SAN switch ports connecting to the FlexFabric module must be configured to accept NPIV
logins.
Configuring the VC module
Physically connect Port 1 of network switch 1 to Port X5 of the VC module in Bay 1
Physically connect Port 2 of network switch 1 to Port X6 of the VC module in Bay 1
Physically connect Port 1 of network switch 2 to Port X5 of the VC module in Bay 2
Physically connect Port 2 of network switch 2 to Port X6 of the VC module in Bay 2
Note: if you have only one network switch, connect VC ports X5 and X6 (Bay 2) to alternate ports
on the same switch. This will NOT create a network loop and Spanning Tree is not required.
Physically connect Ports X1/X2 on the FlexFabric in module Bay 1 to switch ports in SAN Fabric A
Physically connect Ports X1/X2 on the FlexFabric in module Bay 2 to switch ports in SAN Fabric B
VC CLI commands
Many of the configuration settings within VC can also be accomplished via a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once
logged in, VC provides a CLI with help menus. Throughout this scenario the CLI commands to configure
VC for each setting will also be provided.
Configuring Expanded VLAN Capacity via GUI
Virtual Connect release 3.30 provided an expanded VLAN capacity mode when using Shared Uplink
Sets; this mode can be enabled through the Ethernet Settings tab or the VC CLI. The default
configuration for a new Domain install is “Expanded VLAN Capacity” mode. Legacy mode is no
longer available and the Domain cannot be downgraded.
To verify the VLAN Capacity mode
On the Virtual Connect Manager screen, Left pane, click Ethernet Settings, Advanced
Settings
Select Expanded VLAN capacity
Verify Expanded VLAN Capacity is configured and Legacy VLAN Capacity is greyed out.
Note: Legacy VLAN mode will only be presented if 1Gb Virtual Connect Modules are present, in
which case the domain would be limited to Firmware version 3.6x.
Configuring Expanded VLAN Capacity via CLI
The following command can be copied and pasted into an SSH based CLI session with Virtual
Connect;
# Set Expanded VLAN Capacity
set enet-vlan -quiet VlanCapacity=Expanded
Figure 69 - Enabling Expanded VLAN Capacity
Note: if a 1Gb VC Ethernet module is present in the Domain, Expanded VLAN capacity will be greyed
out, as this mode is only supported with 10Gb based VC modules. Also, once Expanded VLAN capacity is
selected, moving back to Legacy VLAN capacity mode will require a domain deletion and rebuild.
Defining a new Shared Uplink Set (VLAN-Trunk-1)
Connect Ports X5 and X6 of FlexFabric module in Bay 1 to Ports 1 and 2 on switch 1
Create a SUS named VLAN-Trunk-1 and connect it to FlexFabric Ports X5 and X6 on Module 1
On the Virtual Connect Home page, select Define, Shared Uplink Set
Insert Uplink Set Name as VLAN-Trunk-1
Select Add Port, then add the following ports;
o Enclosure 1, Bay 1, Port X5
o Enclosure 1, Bay 1, Port X6
Figure 70 - Shared Uplink Set (VLAN-Trunk-1) Uplinks Assigned
Click Add Networks and select the Multiple Networks radio button and add the following
VLANs;
o Enter Name as VLAN-
o Enter Suffix as -1
o Enter VLAN IDs as follows (and shown in the following graphic);
o 101-105,2100-2400
Enable SmartLink on ALL networks
Click Advanced
o Configure Preferred speed to 6Gb
o Configure Maximum speed to 10Gb
Click Apply
Note: You can optionally specify a network “color” or “Label” when creating a Shared Uplink Set and
its networks. In the example above we have not set either color or label. Also notice the new
configurable LACP Timer setting.
Figure 71 - Creating VLANs in a Shared Uplink Set
Note: If the VC domain is not in Expanded VLAN capacity mode, you will receive an error when
attempting to create more than 128 VLANs in a SUS. If that occurs, go to Advanced Ethernet
Settings and select Expanded VLAN capacity mode and apply.
After clicking apply, a list of VLANs will be presented as shown below. If one VLAN in the trunk is
untagged/native, see note below.
Click Apply at the bottom of the page to create the Shared Uplink Set
Figure 72 - Associated VLANs for Shared Uplink Set VLAN-Trunk-1
Note: Optionally, if one of the VLANs in the trunk to this shared uplink set were configured as Native
or Untagged, you would be required to “edit” that VLAN in the screen above, and configure Native as
TRUE. This would need to be set for BOTH VLAN-Trunk-1 and VLAN-Trunk-2.
Defining a new Shared Uplink Set (VLAN-Trunk-2) (Copying a Shared Uplink Set)
The second Shared Uplink Set could be created in the same manner as VLAN-Trunk-1 however; VC
now provides the ability to COPY a VC Network or Shared Uplink Set.
Connect Ports X5 and X6 of FlexFabric module in Bay 2 to Ports 1 and 2 on switch 2
In the VC GUI screen, select Shared Uplink Sets in the left pane, in the right pane VLAN-
Trunk-1 will be displayed, left click VLAN-Trunk-1, it will appear as blue, right click and
select COPY
Edit the Settings as shown below, the new SUS name will be VLAN-Trunk-2 and ALL the
associated VLANs will have a suffix of -2
In step 3, ADD uplinks X5 and X6 from Bay 2
Click OK
The SUS and ALL VLANs will be created
Figure 73 - Copying a SUS and ALL VLANs
Defining a new Shared Uplink Set via CLI
The following script can be used to create the first Shared Uplink Set (VLAN-Trunk-1)
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
# Create Shared Uplink Set VLAN-Trunk-1 and configure uplinks
# Create Networks VLAN-101-2 through VLAN-105-2 and VLAN-2100-2 through VLAN-2400-2 for Shared Uplink Set VLAN-Trunk-2
add network-range -quiet UplinkSet=VLAN-Trunk-2 NamePrefix=VLAN- NameSuffix=-2
VLANIds=101-105,2100-2400 State=enabled PrefSpeedType=Custom PrefSpeed=6000
MaxSpeedType=Custom MaxSpeed=10000 SmartLink=enabled
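The uplink set itself, its uplink ports and the VLAN-Trunk-1 networks are created with similar commands. The following is a minimal sketch; the command names exist in the VC CLI, but the exact parameter spellings and the enclosure/port identifiers (enc0, Bay 1, ports X5/X6) are assumptions based on this scenario's cabling and should be verified against the VC CLI reference for your firmware release:
# Create Shared Uplink Set VLAN-Trunk-1 and assign uplink ports X5 and X6 on the module in Bay 1
add uplinkset VLAN-Trunk-1
add uplinkport enc0:1:X5 UplinkSet=VLAN-Trunk-1 Speed=Auto
add uplinkport enc0:1:X6 UplinkSet=VLAN-Trunk-1 Speed=Auto
# Create Networks VLAN-101-1 through VLAN-105-1 and VLAN-2100-1 through VLAN-2400-1 for Shared Uplink Set VLAN-Trunk-1
add network-range -quiet UplinkSet=VLAN-Trunk-1 NamePrefix=VLAN- NameSuffix=-1 VLANIds=101-105,2100-2400 State=enabled PrefSpeedType=Custom PrefSpeed=6000 MaxSpeedType=Custom MaxSpeed=10000 SmartLink=enabled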
Please refer to Appendix D; “Scripting the Native VLAN” for scripting examples.
Note: In this scenario we have created two independent Shared Uplink Sets (SUS), each originating
from a different FlexFabric Module; by doing so we provide the ability to create separate and
redundant connections out of the Virtual Connect domain. When we create the server profiles, you
will see how the NICs connect to VLANs accessed through the opposite VC modules, which
provides the ability to create an Active/Active uplink scenario. Alternatively, we could have
created a single SUS and assigned both sets of these uplink ports to the same SUS; however, this
would have provided an Active/Standby uplink scenario, as shown in Scenario 5.
Defining a new (FCoE) SAN Fabric via GUI
Create a Fabric and name it “FCoE_A”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the first Fabric
Enter the Network Name of “FCoE_A”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 1, Port X1
o Enclosure 1, Bay 1, Port X2
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Automatic Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Create a second Fabric and name it “FCoE_B”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the second Fabric
Enter the Network Name of “FCoE_B”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 2, Port X1
o Enclosure 1, Bay 2, Port X2
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Automatic Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Defining SAN Fabrics via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
#Create the SAN Fabrics FCoE_A and FCoE_B and configure uplinks as discussed above
add fabric FCoE_A Type=FabricAttach Bay=1 Ports=1,2 Speed=Auto LinkDist=Auto
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
add fabric FCoE_B Type=FabricAttach Bay=2 Ports=1,2 Speed=Auto LinkDist=Auto
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
Figure 74 - SAN Configuration and Advanced Settings
Figure 75 - FCoE SAN fabrics configured with two 8Gb uplinks per fabric. Note the bay and port
numbers on the right
Defining a Server Profile
We will create a server profile with two server NICs.
Each server NIC will connect to a specific network.
On the main menu, select Define, then Server Profile
Create a server profile called “App-1”
In the Network Port 1 drop down box, select VLAN-101-1
In the Network Port 2 drop down box, select VLAN-101-2
Expand the FCoE Connections box, for Bay 1, select FCoE_A for Bay 2, select FCoE_B
Select the “Fibre Channel Boot Parameters” box under the FCoE configuration box
o Select PORT 1 and click on the drop down under “SAN Boot” and select “Primary”
o Click on the Target Port Name field and enter the SAN controller ID
o Click on the LUN field and enter the boot LUN number, which is typically 1
o Click on PORT 2 and click on the drop down under “SAN Boot” and select
“Secondary”
o Click on the Target Port Name field and enter the SAN controller ID
o Click on the LUN field and enter the boot LUN number, which is typically 1
o Click Apply
Do not configure iSCSI HBA or FC HBA Connection
In the Assign Profile to Server Bay box, locate the Select Location drop down and select
Bay 1, then apply
Prior to applying the profile, ensure that the server in Bay 1 is currently OFF
Note: You should now have a server profile assigned to Bay 1, with 2 server NIC connections. NICs
1 and 2 should be connected to networks VLAN-101-1 and VLAN-101-2, and the FCoE connections to
SAN fabrics FCoE_A and FCoE_B.
Defining a Server Profile via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
# Create Server Profile App-1
add profile App-1 -nodefaultfcconn -nodefaultfcoeconn
set enet-connection App-1 1 pxe=Enabled Network=VLAN-101-1
set enet-connection App-1 2 pxe=Disabled Network=VLAN-101-2
add fcoe-connection App-1
set fcoe-connection App-1:1 Fabric=FCoE_A SpeedType=4Gb BootPriority=Primary
BootPort=50:08:05:F3:00:00:58:11 BootLun=1
add fcoe-connection App-1
set fcoe-connection App-1:2 Fabric=FCoE_B SpeedType=4Gb BootPriority=Secondary
BootPort=50:08:05:F3:00:00:58:12 BootLun=1
poweroff server 1
assign profile App-1 enc0:1
Note: you will need to locate the WWN and Boot LUN numbers for the controller you are
booting to and substitute the addresses above.
Note the speed of the NIC and SAN connections, as well as the MAC and WWN assignments. Also, note that the
FCoE connections are assigned to the two SAN fabrics created earlier and use ports LOM:1-b and
LOM:2-b.
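To review what was actually assigned (NIC MACs, FCoE WWNs and the boot settings), the profile can be displayed from the same CLI session; a brief sketch:
# Display the profile, including MAC/WWN assignments and the FCoE boot parameters
show profile App-1
show fcoe-connection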
Figure 76 - Define a Server Profile (App-1)
Figure 77 - Boot from SAN Connection Settings
Note: When choosing the Primary and Secondary ports, ensure that each port can access the ID provided on
the fabric, the SAN Administrator should be able to provide this address, or it can also be discovered through
the HBA/CNA BIOS utilities. The LUN number will vary depending on SAN Array vendor/model and the order in
which the LUNs were assigned to the host within the SAN configuration.
Figure 78 - Server Profile View Bay 1
Review
In this scenario we have created Two Shared Uplink Sets (SUS), providing support for many VLANs.
Uplinks originating from each FlexFabric Module connect to each SUS, by doing so we provide
redundant connections out of the Virtual Connect domain. As multiple uplinks are used for each
SUS, we have also leveraged LACP to improve uplink performance. In this scenario, all uplinks will
be active. We also created two FCoE SAN Fabrics.
We created a server profile with two NICs connected to the same external VLAN (101) through VC
networks VLAN-101-1 and VLAN-101-2, which provides the ability to sustain a link or module
failure and not lose connection to the network. VLAN-101-1 and VLAN-101-2 are configured to
support VLAN 101; frames will be presented to the NIC(s) without VLAN tags (untagged). These two
NICs are connected to the same VLAN, but take a different path out of the enclosure.
Additionally, FCoE port 1 is connected to SAN fabric FCoE_A and FCoE SAN port 2 is connected to
SAN Fabric FCoE_B, providing a multi-pathed connection to the SAN. The server profile was also
configured for Boot to SAN over the FCoE connections. With Virtual Connect, there is no need to
configure the SAN HBA directly when booting to SAN; all required configuration is maintained in the
server profile. During installation of Windows 2008 R2, ensure that the Microsoft MPIO feature is
enabled.
The FCoE SAN fabric connects to each SAN fabric over two uplinks per module.
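For reference, MPIO can be enabled from an elevated command prompt or PowerShell session once Windows 2008 R2 is installed; the following is a sketch, assuming the default Microsoft feature names:
rem Enable the Multipath I/O feature using DISM
dism /online /enable-feature /featurename:MultipathIo
rem Or, using the Server Manager PowerShell module
powershell -command "Import-Module ServerManager; Add-WindowsFeature Multipath-IO"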
Results – Windows 2008 R2 Networking Examples
We have successfully configured FlexFabric with two shared uplink sets and redundant SAN fabrics. We
have created a server profile to connect the TWO NICs to VLAN 101 and the SAN fabrics using the
FCoE connections created within the profile. We also configured the profile to Boot Windows 2008
R2 from a SAN LUN.
Although both Ethernet and Fibre Channel connectivity are provided by the CNA used in the G7 and
Gen 8 servers, each capability (LAN and SAN) is provided by a different component of the adapter,
so they appear in the server as individual network and SAN adapters.
Figure 79 - Example of Emulex's OneCommand Manager Utility (formerly known as HBA Anywhere).
Note that there are 3 Ethernet personalities and one FCoE personality per port, as configured in the
server profile.
The following graphics show a Windows 2008 R2 server with TWO FlexNICs configured at 6Gb. You
will also notice that Windows believes there are 6 NICs within this server. However, only TWO NICs
are currently configured within the FlexFabric profile; the extra NICs are offline and could be
disabled. If we did not require SAN connectivity on this server, the FCoE connections could be
deleted and the server would then have 8 NIC ports available to the OS.
Note: The BL465c G7 and BL685c G7 utilize an NC551i chipset (BE2), whereas the BL460c G7,
BL620c G7 and BL680c G7 utilize an NC553i chipset (BE3), and the Gen 8 blades typically have an
NC554 adapter, which also utilizes the BE3 chipset. Both the BE2 and BE3 chipsets share common
drivers and firmware.
Note: The NICs that are not configured within VC will appear with a red x as not connected. You can
go into Network Connections for the Windows 2008 server and Disable any NICs that are not
currently in use. Windows assigns the NICs as NIC 1-6, whereas three of the NICs will reside on
LOM:1 and three on LOM:2. You may need to refer to the FlexFabric server profile for the NIC MAC
addresses to verify which NIC is which.
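One way to match the Windows connection names to the profile is to list the adapter MAC addresses from a command prompt and compare them with the MACs shown in the FlexFabric server profile; for example:
rem List each connection name with its MAC address
getmac /v /fo list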
Figure 81 - Windows 2008 R2 Extra Network Connections – Disabled
Figure 82 - Windows 2008 R2 Network Connection Status
Note: In Windows 2008 and later the actual NIC speed is displayed as configured in the server profile.
Also, note that the speed displayed is the maximum speed setting, not the minimum setting.
Figure 83 - Windows 2008 R2, Device Manager, SIX NICs are shown, however, we have only
configured two of the NICs and two FCoE HBAs.
The following graphics provide an example of a Windows 2008 R2 server with TWO NICs connected
to the network. Initially each NIC has its own TCP/IP address; alternatively, both NICs could be
teamed to provide NIC fail-over redundancy. If an active uplink or network switch were to fail,
Virtual Connect would fail-over to the standby uplink. In the event of a Virtual Connect FlexFabric
module failure, the server’s NIC teaming software would see one of the NICs go offline; assuming it
was the active NIC, NIC teaming would fail-over to the standby NIC.
Figure 84 - Each NIC is connected to VLAN 101
NIC Teaming
If higher availability is desired, NIC teaming in Virtual Connect works the same way as in standard
network configurations. Simply open the NIC teaming utility and configure the available NICs for
teaming. In this example, we have only TWO NICs available, so selecting NICs for teaming will be
quite simple. However, if multiple NICs are available, ensure that the correct pair of NICs is teamed.
You will note the BAY#-Port# indication within each NIC. Another way to confirm you have the
correct NIC is to verify the MAC address of the NIC. You would typically TEAM a NIC from
Bay 1 to a NIC from Bay 2, for example.
The following graphics provide an example of a Windows 2008 R2 server with TWO NICs teamed
and connected to the network. In the event of an Uplink or switch failure, the SUS will lose
connection to the network; SmartLink will alert the NIC teaming software to this event, by turning
the server NIC port off, causing the NIC teaming software to fail-over to the alternate NIC.
Figure 85 – Team both NICs, using the HP Network Configuration Utility
Figure 86 - Both NICs for Profile App-1 are teamed and connected to the network through VLAN-Trunk-x; either path could be active
Various modes can be configured for NIC Teaming, such as NFT, TLB etc. Once the Team is created,
you can select the team and edit its properties. Typically, the default settings can be used.
Figure 87 - View – Network Connections – NIC Team #1 – Windows
Figure 88 - Both NICs are teamed and connect to the network with a common IP Address
Results – Windows 2008 R2 SAN Connectivity
Figure 89 - By accessing the HBA BIOS during server boot, you can see that Port 1 of the FlexHBA is
connected to an EVA SAN LUN
Figure 90 - Windows 2008 R2 Disk Administrator. Note that C: is the SAN-attached volume
Summary
We presented a Virtual Connect network scenario by creating two Shared Uplink Sets (SUS); each
SUS is connected with TWO active uplinks and both SUSs can actively pass traffic. We included a dual
path FCoE SAN fabric for storage connectivity and boot to SAN.
When VC profile App-1 is applied to the server in bay 1 and the server is powered up, it has one NIC
connected through FlexFabric module 1 (connected to VLAN-101-1), and the second NIC is connected
through FlexFabric module 2 (connected to VLAN-101-2). Each NIC is configured at 6Gb. These NICs
could now be configured as individual NICs with their own IP address or as a pair of TEAMED NICs.
Either NIC could be active. As a result, this server could access the network through either NIC or
either uplink, depending on which NIC is active at the time. Each NIC is configured for a guaranteed
minimum bandwidth of 6Gb, with a maximum of 10Gb of network bandwidth, and each FCoE port is
configured for 4Gb of SAN bandwidth with the ability to burst to a maximum of 8Gb.
We also configured the server profile to Boot to SAN. This configuration is part of the profile; if the
profile is moved to a different server bay, the Boot to SAN information will follow with the profile.
The profile can also be copied and assigned to additional server bays; each new profile will retain the
Boot to SAN configuration, but will acquire new WWN addresses.
Additional NICs could be added within FlexFabric by simply powering the server off and adding up
to a total of 6 NICs; the NIC speed can then be adjusted to suit the needs of each NIC. If
more or less SAN bandwidth is required, the speed of the SAN connection can also be adjusted.
As additional servers are added to the enclosure, simply create additional profiles, or copy existing
profiles, configure the NICs for LAN and SAN fabrics as required and apply them to the appropriate
server bays and power the server on.
Scenario 4 – Shared Uplink Set with Active/Active Uplinks and 802.3ad (LACP) – Ethernet, FCoE SAN - Windows 2008 R2 Hyper-V
Overview
This scenario will implement the Shared Uplink Set (SUS) to provide support for multiple VLANs.
The upstream network switches connect a shared uplink set to two ports on each FlexFabric
module; LACP will be used to aggregate those links.
As multiple VLANs will be supported in this configuration, the upstream switch ports connecting to
the FlexFabric modules will be configured to properly present those VLANs. In this scenario, the
upstream switch ports will be configured for VLAN trunking/VLAN tagging.
When configuring Virtual Connect, we can provide several ways to implement network fail-over or
redundancy. One option would be to connect TWO uplinks to a single Virtual Connect network;
those two uplinks would connect from different Virtual Connect modules within the enclosure and
could then connect to the same upstream switch or two different upstream switches, depending on
your redundancy needs. An alternative would be to configure TWO separate Virtual Connect
networks, each with a single, or multiple, uplinks configured. Each option has its advantages and
disadvantages. For example; an Active/Standby configuration places the redundancy at the VC
level, where Active/Active places it at the OS NIC teaming or bonding level. We will review the
second option in this scenario.
In addition, several Virtual Connect Networks can be configured to support the required networks to
the servers within the BladeSystem enclosure. These networks could be used to separate the
various network traffic types, such as iSCSI, backup and VMotion from production network traffic.
This scenario will also leverage the Fibre Channel over Ethernet (FCoE) capabilities of the FlexFabric
modules. Each fibre channel fabric will have two uplinks connected to each of the FlexFabric
modules.
Requirements
This scenario will support both Ethernet and fibre channel connectivity. In order to implement this
scenario, an HP BladeSystem c7000 enclosure with one or more server blades and TWO Virtual
Connect FlexFabric modules, installed in I/O Bays 1 & 2, is required. In addition, we will require ONE
or TWO external network switches. As Virtual Connect does not appear to the network as a switch
and is transparent to the network, any standard managed switch will work with Virtual Connect.
The fibre channel uplinks will connect to the existing FC SAN fabrics. The SAN switch ports will need
to be configured to support NPIV logins. Two uplinks from each FlexFabric module will be
connected to the existing SAN fabrics.
Figure 91 - Physical View; Shows two Ethernet uplinks from Ports X5 and X6 on Module 1 and 2 to
Ports 1 and 2 on each network switch. The SAN fabrics are also connected redundantly, with TWO
uplinks per fabric, from ports X1 and X2 on module 1 to Fabric A and ports X1 and X2 to Fabric B.
Figure 92 - Logical View; the server blade profile is configured with Four FlexNICs and 2 FlexHBAs.
NICs 1 and 2 are connected to VLAN-101-x, NICs 3 and 4 are connected to multiple networks
VLAN-102-x through VLAN-105-x and VLAN-2100-x through VLAN-2150-x, which are part of the Shared Uplink
Sets, VLAN-Trunk-1 and VLAN-Trunk-2 respectively. The VLAN-Trunks are connected, at 10Gb, to a
network switch, through Ports X5 and X6 on each FlexFabric Module in Bays 1 and 2. In addition,
SAN Fabric FCoE_A connects to the existing SAN Fabric A through port X1 on Module 1 (Bay 1) and
FCoE_B connects to the existing SAN Fabric B through port X1 on Module 2 (Bay 2)
Installation and configuration
Switch configuration
Appendices A and B provide a summary of the commands required to configure the switch in either
a Cisco or HP Networking environment (with both ProCurve and Comware examples). The configuration
information provided in the appendices assumes the following:
The switch ports are configured as VLAN TRUNK ports (tagging) to support several VLANs.
All frames will be forwarded to Virtual Connect with VLAN tags. Optionally, one VLAN could
be configured as (Default) untagged; if so, a corresponding vNet within the Shared
Uplink Set would be configured and set as “Default”.
Note: when adding additional uplinks to the SUS, if the additional uplinks are connecting from the
same FlexFabric module to the same switch, in order to ensure all the uplinks are active, the switch
ports will need to be configured for LACP within the same Link Aggregation Group.
The network switch ports should be configured for Spanning Tree Edge, as Virtual Connect appears to
the switch as an access device and not another switch. Configuring the port as Spanning Tree
Edge allows the switch to place the port into a forwarding state much more quickly, so a newly
connected port can come online and begin forwarding sooner.
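For reference, a minimal Cisco IOS style sketch of one of these uplink-facing switch ports is shown below; the interface name, port-channel number and VLAN list are assumptions for this example, and Appendix A contains the complete configurations:
interface TenGigabitEthernet1/0/1
 description VC FlexFabric Bay 1 Port X5
 switchport mode trunk
 switchport trunk allowed vlan 101-105,2100-2400
 spanning-tree portfast trunk
 channel-group 1 mode active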
The SAN connection will be made with redundant connections to each Fabric. SAN switch ports
connecting to the FlexFabric module must be configured to accept NPIV logins.
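If the SAN switches are Brocade-based, NPIV can be checked and enabled per port from the Fabric OS CLI; a sketch, assuming port 10 is one of the ports connected to the FlexFabric module (the exact syntax varies by Fabric OS release):
portcfgshow 10
portcfgnpivport 10, 1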
Configuring the VC module
Physically connect Port 1 of network switch 1 to Port X5 of the VC module in Bay 1
Physically connect Port 2 of network switch 1 to Port X6 of the VC module in Bay 1
Physically connect Port 1 of network switch 2 to Port X5 of the VC module in Bay 2
Physically connect Port 2 of network switch 2 to Port X6 of the VC module in Bay 2
Note: if you have only one network switch, connect VC ports X5 and X6 (Bay 2) to an alternate port
on the same switch. This will NOT create a network loop and Spanning Tree is not required.
Physically connect Ports X1/X2 on the FlexFabric in module Bay 1 to switch ports in SAN Fabric A
Physically connect Ports X1/X2 on the FlexFabric in module Bay 2 to switch ports in SAN Fabric B
VC CLI commands
Many of the configuration settings within VC can also be accomplished via a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once
logged in, VC provides a CLI with help menus. Throughout this scenario the CLI commands to configure
VC for each setting will also be provided.
Configuring Expanded VLAN Capacity via GUI
Virtual Connect release 3.30 provided an expanded VLAN capacity mode when using Shared Uplink
Sets; this mode can be enabled through the Ethernet Settings tab or the VC CLI. The default
configuration for a new Domain install is “Expanded VLAN Capacity” mode; Legacy mode is no
longer available and the Domain cannot be downgraded.
To verify the VLAN Capacity mode
On the Virtual Connect Manager screen, Left pane, click Ethernet Settings, Advanced
Settings
Select Expanded VLAN capacity
Verify Expanded VLAN Capacity is configured and Legacy VLAN Capacity is greyed out.
Note: Legacy VLAN mode will only be presented if 1Gb Virtual Connect Modules are present, in
which case the domain would be limited to Firmware version 3.6x.
Configuring Expanded VLAN Capacity via CLI
The following command can be copied and pasted into an SSH based CLI session with Virtual
Connect;
# Set Expanded VLAN Capacity
set enet-vlan -quiet VlanCapacity=Expanded
Figure 93 - Enabling Expanded VLAN Capacity
Note: if a 1Gb VC Ethernet module is present in the Domain, Expanded VLAN capacity will be greyed
out, this is only supported with 10Gb based VC modules. Also, once Expanded VLAN capacity is
selected, moving back to Legacy VLAN capacity mode will require a domain deletion and rebuild.
Defining a new Shared Uplink Set (VLAN-Trunk-1)
Connect Ports X5 and X6 of FlexFabric module in Bay 1 to Ports 1 and 2 on switch 1
Create a SUS named VLAN-Trunk-1 and connect it to FlexFabric Ports X5 and X6 on Module 1
On the Virtual Connect Home page, select Define, Shared Uplink Set
Insert Uplink Set Name as VLAN-Trunk-1
Select Add Port, then add the following port;
o Enclosure 1, Bay 1, Port X5
o Enclosure 1, Bay 1, Port X6
Figure 94 - Shared Uplink Set (VLAN-Trunk-1) Uplinks Assigned
Click Add Networks and select the Multiple Networks radio button and add the following
VLANs;
o Enter Name as VLAN-
o Enter Suffix as -1
o Enter VLAN IDs as follows (and shown in the following graphic);
101-105,2100-2400
Enable SmartLink on ALL networks
Click Advanced
o Configure Preferred speed to 4Gb
o Configure Maximum speed to 8Gb
Click Apply
Note: You can optionally specify a network “color” or “Label” when creating a Shared Uplink Set and
its networks. In the example above we have set neither a color nor a label.
Figure 95 - Creating VLANs in a Shared Uplink Set
Note: If the VC domain is not in Expanded VLAN capacity mode, you will receive an error when
attempting to create more than 128 VLANs in a SUS. If that occurs, go to Advanced Ethernet
Settings, select Expanded VLAN capacity mode and apply.
After clicking apply, a list of VLANs will be presented as shown below. If one VLAN in the trunk is
untagged/native, see note below.
Click Apply at the bottom of the page to create the Shared Uplink Set
Figure 96 - Associated VLANs for Shared Uplink Set VLAN-Trunk-1
Note: Optionally, if one of the VLANs in the trunk to this shared uplink set were configured as Native
or Untagged, you would be required to “edit” that VLAN in the screen above, and configure Native as
TRUE. This would need to be set for BOTH VLAN-Trunk-1 and VLAN-Trunk-2.
Defining a new Shared Uplink Set (VLAN-Trunk-2)(Copying a Shared Uplink Set)
The second Shared Uplink Set could be created in the same manner as VLAN-Trunk-1 however; VC
now provides the ability to COPY a VC Network or Shared Uplink Set.
Connect Ports X5 and X6 of FlexFabric module in Bay 2 to Ports 1 and 2 on switch 2
In the VC GUI screen, select Shared Uplink Sets in the left pane, in the right pane VLAN-
Trunk-1 will be displayed, left click VLAN-Trunk-1, it will appear as blue, right click and
select COPY
Edit the Settings as shown below, the new SUS name will be VLAN-Trunk-2 and ALL the
associated VLANs will have a suffix of -2
In step 3, ADD uplinks X5 and X6 from Bay 2
Click OK
The SUS and ALL VLANs will be created
Figure 97 - Copying a SUS and ALL VLANs
Defining a new Shared Uplink Set via CLI
The following script can be used to create the first Shared Uplink Set (VLAN-Trunk-1)
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
# Create Networks VLAN-101-2 through VLAN-105-2 and VLAN-2100-2 through VLAN-2400-2 for Shared Uplink Set VLAN-Trunk-2
add network-range -quiet UplinkSet=VLAN-Trunk-2 NamePrefix=VLAN- NameSuffix=-2
VLANIds=101-105,2100-2400 NAGs=Default PrefSpeedType=Custom PrefSpeed=4000
MaxSpeedType=Custom MaxSpeed=8000 SmartLink=Enabled
Please refer to Appendix D; “Scripting the Native VLAN” for scripting examples.
Note: In this scenario we have created two independent Shared Uplink Sets (SUS), each originating
from a different FlexFabric Module; by doing so we provide the ability to create separate and
redundant connections out of the Virtual Connect domain. When we create the server profiles, you
will see how the NICs connect to VLANs accessed through the opposite VC modules, which
provides the ability to create an Active/Active uplink scenario. Alternatively, we could have
created a single SUS and assigned both sets of these uplink ports to the same SUS; however, this
would have provided an Active/Standby uplink scenario, as shown in Scenario 5.
Defining a new (FCoE) SAN Fabric via GUI
Create a Fabric and name it “FCoE_A”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the first Fabric
Enter the Network Name of “FCoE_A”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 1, Port X1
o Enclosure 1, Bay 1, Port X2
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Automatic Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Create a second Fabric and name it “FCoE_B”
On the Virtual Connect Manager screen, click Define, SAN Fabric to create the second Fabric
Enter the Network Name of “FCoE_B”
Select Add Port, then add the following ports;
o Enclosure 1, Bay 2, Port X1
o Enclosure 1, Bay 2, Port X2
Ensure Fabric Type is set to “FabricAttach”
Select Show Advanced Settings
o Select Automatic Login Re-Distribution (FlexFabric Only)
o Select Set Preferred FCoE Connect Speed
Configure for 4Gb
o Select Set Maximum FCoE Connect Speed
Configure for 8Gb
Select Apply
Defining SAN Fabrics via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
#Create the SAN Fabrics FCoE_A and FCoE_B and configure uplinks as discussed above
add fabric FCoE_A Type=FabricAttach Bay=1 Ports=1,2 Speed=Auto LinkDist=Auto
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
add fabric FCoE_B Type=FabricAttach Bay=2 Ports=1,2 Speed=Auto LinkDist=Auto
PrefSpeedType=Custom PrefSpeed=4000 MaxSpeedType=Custom MaxSpeed=8000
Figure 98 - SAN Configuration and Advanced Settings
Figure 99 - FCoE SAN fabrics configured with two 8Gb uplinks per fabric. Note the bay and port
numbers on the right
Defining a Server Profile
We will create a server profile with two server NICs.
Each server NIC will connect to a specific network.
On the main menu, select Define, then Server Profile
Create a server profile called “App-1”
In the Network Port 1 drop down box, select VLAN-101-1
Set the port speed to Custom at 1Gb
In the Network Port 2 drop down box, select VLAN-101-2
Set the port speed to Custom at 1Gb
Left click on either of Port 1 or Port 2 in the Ethernet Connections box, and select ADD
network (add two additional network connections)
In the Network Port 3 drop down box, select Multiple Networks
Configure for networks VLAN-102-1 through VLAN-105-1 and VLAN-2100-1 through
VLAN-2150-1
Leave the network speed as Auto
In the Network Port 4 drop down box, select Multiple Networks
Configure for networks VLAN-102-2 through VLAN-105-2 and VLAN-2100-2 through
VLAN-2150-2
Leave the network speed as Auto
Expand the FCoE Connections box, for Bay 1, select FCoE_A for Bay 2, select FCoE_B
Do not configure FC SAN or iSCSI Connection
In the Assign Profile to Server Bay box, locate the Select Location drop down and select
Bay 1, then apply
Prior to applying the profile, ensure that the server in Bay 1 is currently OFF
Note: You should now have a server profile assigned to Bay 1, with 4 server NIC connections. NICs
1&2 should be connected to networks VLAN-101-x, and NICs 3&4 should be connected to networks
VLAN-102-x through VLAN-105-x and VLAN-2100-x through VLAN-2150-x. The FCoE SAN fabrics are
connected to Port 1 (FCoE_A) and Port 2 (FCoE_B).
Defining a Server Profile via CLI
The following command(s) can be copied and pasted into an SSH based CLI session with Virtual
Connect
assign profile App-1 enc0:1
Note: The “add server-port-map-range” command is new to VC firmware release 3.30 and can be
used to map many VLANs to a server NIC, in a single command. Prior releases would have required
one command to create the NIC and one additional command per VLAN mapping added. This
command will make profile scripting much easier, less complicated and quicker.
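As an illustration, a hedged sketch of a script that would build this four-NIC profile follows; it mirrors the Scenario 3 syntax and the add server-port-map-range note above. The speed and VLAN-mapping parameter spellings are assumptions and should be verified against the VC CLI reference for your firmware release; the profile is then assigned to Bay 1 with the assign command shown above.
# Create Server Profile App-1 with two 1Gb NICs on VLAN-101-x and two Auto-speed NICs mapped to multiple VLANs
add profile App-1 -nodefaultfcconn -nodefaultfcoeconn
set enet-connection App-1 1 pxe=Enabled Network=VLAN-101-1 SpeedType=Custom Speed=1000
set enet-connection App-1 2 pxe=Disabled Network=VLAN-101-2 SpeedType=Custom Speed=1000
add enet-connection App-1 pxe=Disabled
add server-port-map-range App-1:3 UplinkSet=VLAN-Trunk-1 VLanIds=102-105,2100-2150
add enet-connection App-1 pxe=Disabled
add server-port-map-range App-1:4 UplinkSet=VLAN-Trunk-2 VLanIds=102-105,2100-2150
add fcoe-connection App-1 Fabric=FCoE_A
add fcoe-connection App-1 Fabric=FCoE_B
poweroff server 1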
Note the speed of the NIC and SAN connections, as well as the MAC and WWN assignments. Also, note that the
FCoE connections are assigned to the two SAN fabrics created earlier and use ports LOM:1-b and
LOM:2-b.
Figure 100 - Define a Server Profile (App-1) Hyper-V Host
Figure 101 - Configure NICs 3 and 4 for multiple Networks and select the appropriate VLANs
Note: The “Server VLAN ID” and “Untagged” boxes can be edited. One network per port could be marked
as “Untagged”, in which case the server would not be configured for tagging on that VLAN. It is also
possible to change the VLAN ID that is presented to the server (VLAN translation), in which case the
communications between Virtual Connect and the network would use the VLAN ID shown in grey; if the
Server VLAN ID box to the right were changed, VC would communicate with the server on the new
VLAN ID, providing a VLAN translation function. VLAN translation could be a very useful feature in
the event that VLAN renumbering is required within the datacenter. The network VLAN numbers
and Shared Uplink Set configurations could be changed to reflect the new VLAN IDs used; however,
the old VLAN IDs could still be presented to the server, providing the ability to delay or eliminate the
need to change the VLAN ID used within the server/vSwitch.
Figure 102 - Server Profile View Bay 1
Figure 103 - By clicking on the “Multiple Networks” statement for each LOM, the following page is
displayed, which lists the VLAN connections for this port.
Review
In this scenario we have created Two Shared Uplink Sets (SUS), providing support for many VLANs.
Uplinks originating from each FlexFabric Module connect to each SUS; by doing so we provide
redundant connections out of the Virtual Connect domain. As multiple uplinks are used for each
SUS, we have also leveraged LACP to improve uplink performance. In this scenario, all uplinks will
be active. We also created two FCoE SAN Fabrics.
We created a server profile with FOUR NICs. Two are connected to the same VLAN (101): Port 1
connects to VLAN-101-1 and Port 2 connects to VLAN-101-2, which provides the ability to sustain a
link or module failure and not lose connection to the network. VLAN-101-1 and VLAN-101-2 are
configured to support VLAN 101; frames will be presented to the NIC(s) without VLAN tags
(untagged). These two NICs are connected to the same VLAN, but take a different path out of the
enclosure.
Network Ports 3 and 4 were added; these NICs will be connected to “Multiple Networks” and each
NIC will then be configured for networks VLAN-102-x through VLAN-105-x and networks
VLAN-2100-x through VLAN-2150-x. As these networks are tagged, frames will be presented to the
server with VLAN tags. NICs 3 and 4 will be teamed and connected to a Hyper-V virtual switch.
VLAN tagged frames for these networks will be forwarded to the virtual switch and then passed on
to the appropriate virtual machine; VLAN tags will be removed as the frames are passed to the
virtual machine. NICs 3 and 4 had their speed set to Auto; as NICs 1 and 2 were set to 1Gb, NICs 3
and 4 received 5Gb of bandwidth. As the networks had a maximum speed configured of 8Gb, the
maximum speed for all NICs is 8Gb.
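The 5Gb figure is consistent with simple port-level arithmetic, assuming the 1Gb NIC allocation and the 4Gb preferred FCoE allocation are both reserved from the same 10Gb physical port: 10Gb - 1Gb (NIC 1 or 2) - 4Gb (FCoE) = 5Gb remaining for the Auto-speed NIC on that port.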
Additionally, FCoE port 1 is connected to SAN fabric FCoE_A and FCoE SAN port 2 is connected to
SAN Fabric FCoE_B, providing a multi-pathed connection to the SAN.
The FCoE SAN fabric connects to each SAN fabric over a pair of uplinks per module. SAN logins are
distributed across the multiple paths.
Results – Windows 2008 R2 Networking Examples
We have successfully configured FlexFabric with two shared uplink sets and redundant SAN fabrics.
We have created a server profile with FOUR NICs, two connected to VLAN 101 and TWO connected to
multiple tagged VLANs. We also configured SAN fabrics using the FCoE connections created within
the profile.
Although both Ethernet and Fibre Channel connectivity are provided by the CNA used in the G7 and
Gen 8 servers, each capability (LAN and SAN) is provided by a different component of the adapter,
so they appear in the server as individual network and SAN adapters.
Figure 104 - Example of Emulex's OneCommand Manager Utility (formerly known as HBA
Anywhere). Note that there are 3 Ethernet personalities and one FCoE personality per port, as
configured in the server profile.
The following graphics show a Windows 2008 R2 server with FOUR FlexNICs configured, two at 1Gb
and two at 5Gb. You will also notice that Windows believes there are 6 NICs within this server.
However, only four NICs are currently configured within FlexFabric; the extra NICs are offline and
could be disabled. If we did not require SAN connectivity on this server, the FCoE connections could
be deleted and the server would then have 8 NIC ports available to the OS.
Note: The BL465c G7 and BL685c G7 utilize an NC551i chipset (BE2), whereas the BL460c G7,
BL620c G7 and BL680c G7 utilize an NC553i chipset (BE3), and the Gen 8 blades typically have an
NC554 adapter, which also utilizes the BE3 chipset. Both the BE2 and BE3 chipsets share common
drivers and firmware.
Note: The NICs that are not configured within VC will appear with a red X as not connected. You can
go into Network Connections for the Windows 2008 R2 server and disable any NICs that are not
currently in use. Windows assigns the NICs as NIC 1-6, where three of the NICs will reside on
LOM:1 (a, c & d) and three on LOM:2 (a, c & d). You may need to refer to the FlexFabric server profile
for the NIC MAC addresses to verify which NIC is which.
Figure 107 - Windows 2008 R2 Network Connection Status
Note: In Windows 2003 the NIC speeds may not be shown accurately when speeds are configured in
100Mb increments above 1Gb; i.e., if a NIC is configured for 2.5Gb it will be displayed in Windows
2003 as a 2Gb NIC. Windows 2008 does not have this limitation. In addition, as Virtual Connect 4.01
now provides the Min/Max network speed setting, even though we set the NICs to 1Gb and 5Gb and set
the maximum to 8Gb, the NIC displays a speed of 8Gb.
Figure 108 - Windows 2008 R2, Device Manager, SIX NICs are shown, however, we have only
configured four of the NICs and two FCoE HBAs.
The following graphics provide an example of a Windows 2008 R2 server with four NICs connected
to the network. Initially each NIC has its own TCP/IP address; alternatively, NICs could be teamed to
provide NIC fail-over redundancy. In this scenario we will create two teams, one for the
management network (VLAN 101) and one for the virtual guest networks (VLANs 102 through 105
and VLANs 2100 through 2150). If an active uplink or network switch were to fail, Virtual Connect
would fail-over to the standby uplink. In the event of a Virtual Connect FlexFabric module failure,
the server’s NIC teaming software would see one of the NICs go offline; assuming it was the active
NIC, NIC teaming would fail-over to the standby NIC.
Figure 109 – Two NICs for Profile App-1 are connected to the network through VLAN-101 and two
NICs are connected to support all other VLANs. Those VLANs are tagged and no DHCP server is
present on that network.