intellectual property laws. Dell™ and the Dell logo are trademarks of Dell Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
2014 - 07
Rev. A00
1
About this Guide
This guide describes the supported protocols and software features, and provides configuration
instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version
9.4(0.0).
The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to
install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support
website at http://www.dell.com/support/manuals
Though this guide contains information about protocols, it is not intended to be a complete reference.
This guide is a reference for configuring protocols on Dell Networking systems. For complete information
about protocols, refer to other documentation, including IETF requests for comment (RFCs). The
instructions in this guide cite relevant RFCs, and Standards Compliance contains a complete list of the
supported RFCs and management information base files (MIBs).
NOTE: You can perform some of the configuration tasks described in this document by using either
the Dell command line or the chassis management controller (CMC) graphical interface. Tasks
supported by the CMC interface are shown with the CMC icon.
Audience
This document is intended for system administrators who are responsible for configuring and maintaining
networks; it assumes knowledge of Layer 2 and Layer 3 networking technologies.
Conventions
This guide uses the following conventions to describe command syntax.
Keyword     Keywords are in Courier (a monospaced font) and must be entered in the CLI as listed.
parameter   Parameters are in italics and require a number or word to be entered in the CLI.
{X}         Keywords and parameters within braces must be entered in the CLI.
[X]         Keywords and parameters within brackets are optional.
x|y         Keywords and parameters separated by a bar require you to choose one option.
x||y        Keywords and parameters separated by a double bar allow you to choose any or all of the options.
Related Documents
For more information about the Dell PowerEdge M I/O Aggregator MXL 10/40GbE Switch IO Module,
refer to the following documents:
•Dell Networking OS Command Line Reference Guide for the M I/O Aggregator
•Dell Networking OS Getting Started Guide for the M I/O Aggregator
•Release Notes for the M I/O Aggregator
2
Before You Start
To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell
PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator
(also known as the Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator
boots up with default settings and auto-configures with software features enabled. This chapter describes
the default settings and software features that are automatically configured at startup. To reconfigure the
Aggregator for customized network operation, use the tasks described in the other chapters.
IOA Operational Modes
IOA supports four operational modes. Select the operational mode that meets your deployment needs.
To enable a new operational mode, reload the switch.
Standalone mode
stack-unit unit iom-mode standalone
This is the default mode for IOA. It is a fully automated zero-touch mode that allows you to configure
VLAN memberships. (Supported in CMC)
Stacking mode
stack-unit unit iom-mode stacking
Select this mode to stack up to six IOA stack units as a single logical switch. The stack units can be in the
same or on different chassis. This is a low-touch mode where all configuration except VLAN membership
is automated. To enable VLAN, you must configure it. In this operational mode, base module links are
dedicated to stacking.
VLT mode
stack-unit unit iom-mode vlt
Select this mode to multi-home server interfaces to different IOA modules. This is a low-touch mode
where all configuration except VLAN membership is automated. To enable VLAN, you must configure it.
In this mode, port 9 links are dedicated to VLT interconnect.
Programmable MUX mode
stack-unit unit iom-mode programmable-mux
Select this mode to configure PMUX mode CLI commands.
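For example, a sketch of switching an Aggregator to VLT mode (the stack-unit number 0 is an
illustrative assumption); the new operational mode takes effect after the reload:

Dell(conf)#stack-unit 0 iom-mode vlt
Dell(conf)#end
Dell#reload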
Default Settings
The I/O Aggregator provides zero-touch configuration with the following default configuration settings:
•default user name (root)
•password (calvin)
•VLAN (vlan1) and IP address for in-band management (DHCP)
•IP address for out-of-band (OOB) management (DHCP)
•read-only SNMP community name (public)
•broadcast storm control (enabled in Standalone and VLT modes and disabled in Stacking mode)
•IGMP multicast flooding (enabled)
•VLAN configuration (in Standalone mode, all ports belong to all VLANs)
You can change any of these default settings using the CLI. Refer to the appropriate chapter for details.
NOTE: You can also change many of the default settings using the chassis management controller
(CMC) interface. For information about how to access the CMC to configure the Aggregator, refer
to the Dell Chassis Management Controller (CMC) User’s Guide on the Dell Support website at
http://support.dell.com/
Other Auto-Configured Settings
After the Aggregator powers on, it auto-configures and is operational with software features enabled,
including:
•Ports: Ports are administratively up and auto-configured to operate as hybrid ports to transmit tagged
and untagged VLAN traffic.
Ports 1 to 32 are internal server-facing ports, which can operate in 10GbE mode. Ports 33 to 56 are
external ports auto-configured to operate by default as follows:
– The base-module ports operate in standalone 4x10GbE mode. You can configure these ports to
operate in 40GbE stacking mode. When configured for stacking, you cannot use 40GbE base-module ports for uplinks.
– Ports on the 2-Port 40-GbE QSFP+ module operate only in 4x10GbE mode. You cannot use
them for stacking.
– Ports on the 4-Port 10-GbE SFP+ and 4-Port 10GBASE-T modules operate only in 10GbE mode.
For more information about how ports are numbered, refer to Port Numbering.
•Link aggregation: All uplink ports are configured in a single LAG (LAG 128).
•VLANs: All ports are configured as members of all (4094) VLANs. All VLANs are up and can send or
receive layer 2 traffic. For more information, refer to VLAN Membership.
•Data center bridging capability exchange protocol (DCBx): Server-facing ports auto-configure in
auto-downstream port roles; uplink ports auto-configure in auto-upstream port roles.
•Fibre Channel over Ethernet (FCoE) connectivity and FCoE initiation protocol (FIP) snooping: The
uplink port channel (LAG 128) is enabled to operate in Fibre channel forwarder (FCF) port mode.
•Link layer discovery protocol (LLDP): Enabled on all ports to advertise management TLV and system
name with neighboring devices.
•Internet small computer system interface (iSCSI) optimization.
•Internet group management protocol (IGMP) snooping.
•Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default.
•Link tracking: Uplink-state group 1 is automatically configured. In uplink state-group 1, server-facing
ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as
an upstream interface. Server-facing links are auto-configured to be brought up only if the uplink
port-channel is up.
•In stacking mode, base module ports are automatically configured as stack ports.
•In VLT mode, port 9 links are automatically configured as VLT interconnect ports.
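After startup, you can confirm these auto-configured settings with show commands; the following is a
sketch (command availability can vary by operational mode and software release):

Dell#show interfaces status
Dell#show vlan
Dell#show uplink-state-group 1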
Data Center Bridging Support
To eliminate packet loss and provision links with required bandwidth, Data Center Bridging (DCB)
enhancements for data center networks are supported.
The aggregator provides zero-touch configuration for DCB. The aggregator auto-configures DCBX port
roles as follows:
•Server-facing ports are configured as auto-downstream interfaces.
•Uplink ports are configured as auto-upstream interfaces.
In operation, DCBx auto-configures uplink ports to match the DCB configuration in the ToR switches to
which they connect.
The Aggregator supports DCB only in standalone mode.
FCoE Connectivity and FIP Snooping
Many data centers use Fibre Channel (FC) in storage area networks (SANs). Fibre Channel over Ethernet
(FCoE) encapsulates Fibre Channel frames over Ethernet networks.
On an Aggregator, the internal ports support FCoE connectivity and connect to the converged network
adapter (CNA) in servers. FCoE allows Fibre Channel to use 10-Gigabit Ethernet networks while
preserving the Fibre Channel protocol.
The Aggregator also provides zero-touch configuration for FCoE connectivity. The Aggregator auto-configures to match the FCoE settings used in the switches to which it connects through its uplink ports.
FIP snooping is automatically configured on an Aggregator. The auto-configured port channel (LAG 128)
operates in FCF port mode.
iSCSI Operation
Support for iSCSI traffic is turned on by default when the aggregator powers up. No configuration is
required.
When an aggregator powers up, it monitors known TCP ports for iSCSI storage devices on all interfaces.
When a session is detected, an entry is created and monitored as long as the session is active.
An aggregator also detects iSCSI storage devices on all interfaces and auto-configures to optimize
performance. Performance optimization operations are applied automatically, such as jumbo frame size
support on all interfaces, disabling of storm control, and enabling of spanning-tree port fast on
interfaces connected to an iSCSI EqualLogic (EQL) storage device.
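To review the auto-configured iSCSI settings and any detected sessions, you can use show commands
such as the following (a sketch; the exact output depends on the sessions detected):

Dell#show iscsi
Dell#show iscsi sessions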
Link Aggregation
All uplink ports are configured in a single LAG (LAG 128). Server-facing ports are auto-configured as part
of link aggregation groups if the corresponding server is configured for LACP-based network interface
controller (NIC) teaming. Static LAGs are not supported.
NOTE: The recommended LACP timeout is Long-Timeout mode.
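To check the state of the uplink LAG (LAG 128) and any auto-configured server-facing LAGs, you can
display port-channel information; for example (a sketch):

Dell#show interfaces port-channel brief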
Link Tracking
By default, all server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG
goes down, the aggregator loses its connectivity and is no longer operational; all server-facing ports are
brought down after the specified defer-timer interval, which is 10 seconds by default. If you have
configured VLAN, you can reduce the defer time by changing the defer-timer value or remove it by using
the no defer-timer command from UPLINK-STATE-GROUP mode.
NOTE: If installed servers do not have connectivity to a switch, check the Link Status LED of uplink
ports on the aggregator. If all LEDs are on, to ensure the LACP is correctly configured, check the
LACP configuration on the ToR switch that is connected to the aggregator.
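For example, a sketch of removing the default 10-second defer interval from the auto-configured
uplink-state group (the prompt string is illustrative):

Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#no defer-timer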
Configuring VLANs
By default, in Standalone mode, all aggregator ports belong to all 4094 VLANs and are members of
untagged VLAN 1. To configure only the required VLANs on a port, use the CLI or CMC interface.
You can configure VLANs only on server ports. The uplink LAG automatically inherits the VLANs, based on
the server ports' VLAN configuration.
When you configure VLANs on server-facing interfaces (ports from 1 to 8), you can assign VLANs to a
port or a range of ports by entering the vlan tagged or vlan untagged commands in Interface
Configuration mode; for example:
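A sketch follows (the port range 0/1 to 0/4 and the VLAN IDs 10, 20, and 30 are illustrative
assumptions):

Dell(conf)#interface range tengigabitethernet 0/1 - 4
Dell(conf-if-range-te-0/1-4)#vlan tagged 10,20
Dell(conf-if-range-te-0/1-4)#vlan untagged 30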
The tagged VLAN membership of the uplink LAG is automatically configured based on the VLAN
configuration of all server-facing ports (ports from 1 to 32).
The untagged VLAN used for the uplink LAG is always the default VLAN.
Server-Facing LAGs
The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG.
The untagged VLAN of a server-facing LAG is configured based on the untagged VLAN to which the
lowest numbered server-facing port in the LAG belongs.
NOTE: Dell Networking recommends configuring the same VLAN membership on all LAG member
ports.
Where to Go From Here
You can customize the Aggregator for use in your data center network as necessary. To perform
additional switch configuration, do one of the following:
•For remote out-of-band management, enter the OOB management interface IP address into a Telnet
or SSH client and log in to the switch using the user ID and password to access the CLI.
•For local management using the CLI, use the attached console connection.
•For remote in-band management from a network management station, enter the IP address of the
default VLAN and log in to the switch to access the CLI.
To verify that an Aggregator is running the latest Dell Networking OS version, enter the show version
command. To download a Dell Networking OS version, go to http://support.dell.com
For detailed information about how to reconfigure specific software settings, refer to the appropriate
chapter.
3
Configuration Fundamentals
The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you
can use to configure interfaces and protocols.
The CLI is structured in modes for security and management purposes. Different sets of commands are
available in each mode, and you can limit user access to modes using privilege levels.
In Dell Networking OS, after you enable a command, it is entered into the running configuration file. You
can view the current configuration for the whole system or for a particular CLI mode. To save the current
configuration, copy the running configuration to another location. For more information, refer to Save
the Running-Configuration.
NOTE: You can use the chassis management controller (CMC) out-of-band management interface
to access and manage an Aggregator using the Dell Networking OS command-line interface. For
more information about how to access the CMC to configure an Aggregator, refer to the Dell
Chassis Management Controller (CMC) User’s Guide on the Dell Support website at http://support.dell.com/support/edocs/systems/pem/en/index.htm.
Accessing the Command Line
Access the command line through a serial console port or a Telnet session (Logging into the System
using Telnet). When the system successfully boots, enter the command line in EXEC mode.
Logging into the System using Telnet
telnet 172.31.1.53
Trying 172.31.1.53...
Connected to 172.31.1.53.
Escape character is '^]'.
Login: username
Password:
Dell>
CLI Modes
Different sets of commands are available in each mode.
A command found in one mode cannot be executed from another mode, except for EXEC mode
commands preceded by the do command (refer to the do Command section).
The Dell Networking OS CLI is divided into three major mode levels:
•EXEC mode is the default mode and has a privilege level of 1, which is the most restricted level. Only a
limited selection of commands is available, notably the show commands, which allow you to view
system information.
•EXEC Privilege mode has commands to view configurations, clear counters, manage configuration
files, run diagnostics, and enable or disable debug operations. The privilege level is 15, which is
unrestricted. You can configure a password for this mode.
•CONFIGURATION mode allows you to configure security features and time settings, set logging and
SNMP functions, configure static ARP and MAC addresses, and set line cards on the system.
Beneath CONFIGURATION mode are submodes that apply to interfaces, protocols, and features. The
following example shows the submode command structure. Two sub-CONFIGURATION modes are
important when configuring the chassis for the first time:
•INTERFACE submode is the mode in which you configure Layer 2 protocols and IP services specific to
an interface. An interface can be physical (10 Gigabit Ethernet) or logical (Null, port channel, or virtual
local area network [VLAN]).
•LINE submode is the mode in which you configure the console and virtual terminal lines.
NOTE: At any time, entering a question mark (?) displays the available command options. For
example, when you are in CONFIGURATION mode, entering the question mark first lists all available
commands, including the possible submodes.
The CLI modes are:
EXEC
  EXEC Privilege
  CONFIGURATION
    INTERFACE
      10 GIGABIT ETHERNET
      INTERFACE RANGE
      MANAGEMENT ETHERNET
    LINE
      CONSOLE
      VIRTUAL TERMINAL
    MONITOR SESSION
Navigating CLI Modes
The Dell prompt changes to indicate the CLI mode.
The following table lists the CLI mode, its prompt, and information about how to access and exit the CLI
mode. Move linearly through the command modes, except for the end command which takes you
directly to EXEC Privilege mode and the exit command which moves you up one command mode level.
NOTE: Sub-CONFIGURATION modes all have the letters “conf” in the prompt with more modifiers
to identify the mode and slot/port information.
Table 1. Dell Command Modes

CLI Command Mode   Prompt        Access Command
EXEC               Dell>         Access the router through the console or Telnet.
EXEC Privilege     Dell#         From EXEC mode, enter the enable command. From any
                                 other mode, use the end command.
CONFIGURATION      Dell(conf)#   From EXEC Privilege mode, enter the configure
                                 command. From every mode except EXEC and EXEC
                                 Privilege, enter the exit command.

NOTE: Access all of the following modes from CONFIGURATION mode:
10 Gigabit Ethernet Interface
Interface Range
Management Ethernet Interface
MONITOR SESSION
IP COMMUNITY-LIST
CONSOLE
VIRTUAL TERMINAL
The following example shows how to change the command mode from CONFIGURATION mode to
INTERFACE configuration mode.
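For instance (the port 0/1 is an illustrative choice; the prompt string is a sketch):

Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#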
You can enter an EXEC mode command from any CONFIGURATION mode (CONFIGURATION,
INTERFACE, and so on) without having to return to EXEC mode by preceding the EXEC mode command
with the do command.
The following example shows the output of the do command.
Dell(conf)#do show system brief
Stack MAC : 00:01:e8:00:ab:03
-- Stack Info --
Slot  UnitType    Status       ReqTyp          CurTyp          Version    Ports
-------------------------------------------------------------------------------
 0    Member      not present
 1    Management  online       I/O-Aggregator  I/O-Aggregator  8-3-17-38  56
 2    Member      not present
 3    Member      not present
 4    Member      not present
 5    Member      not present
Dell(conf)#
Undoing Commands
When you enter a command, the command line is added to the running configuration file (running-config).
To disable a command and remove it from the running-config, enter the no command, then the original
command. For example, to delete an IP address configured on an interface, use the no ip address ip-address command.
NOTE: Use the help or ? command as described in Obtaining Help.
Example of Viewing Disabled Commands
Dell(conf)# interface managementethernet 0/0
Dell(conf-if-ma-0/0)# ip address 192.168.5.6/16
Dell(conf-if-ma-0/0)#
Dell(conf-if-ma-0/0)#
Dell(conf-if-ma-0/0)#show config
!
interface ManagementEthernet 0/0
ip address 192.168.5.6/16
no shutdown
Dell(conf-if-ma-0/0)#
Dell(conf-if-ma-0/0)# no ip address
Dell(conf-if-ma-0/0)#
Dell(conf-if-ma-0/0)# show config
!
interface ManagementEthernet 0/0
no ip address
no shutdown
Dell(conf-if-ma-0/0)#
Obtaining Help
Obtain a list of keywords and a brief functional description of those keywords at any CLI mode using
the ? or help command:
•To list the keywords available in the current mode, enter ? at the prompt or after a keyword.
•Entering ? after a prompt lists all of the available keywords. The output of this command is the same as
that of the help command.
Dell#?
start Start Shell
capture Capture Packet
cd Change current directory
clear Reset functions
clock Manage the system clock
configure Configuring from terminal
copy Copy from one file to another
--More--
•Enter ? after a partial keyword lists all of the keywords that begin with the specified letters.
Dell(conf)#cl?
clock
Dell(conf)#cl
•Enter [space]? after a keyword lists all of the keywords that can follow the specified keyword.
Dell(conf)#clock ?
summer-time Configure summer (daylight savings) time
timezone Configure time zone
Dell(conf)#clock
Entering and Editing Commands
Notes for entering commands.
•The CLI is not case-sensitive.
•You can enter partial CLI keywords.
– Enter the minimum number of letters to uniquely identify a command. For example, you cannot
enter cl as a partial keyword because both the clock and class-map commands begin with the
letters “cl.” You can enter clo, however, as a partial keyword because only one command begins
with those three letters.
•The TAB key auto-completes keywords in commands. Enter the minimum number of letters to
uniquely identify a command.
•The UP and DOWN arrow keys display previously entered commands (refer to Command History).
•The BACKSPACE and DELETE keys erase the previous letter.
•Key combinations are available to move quickly across the command line. The following table
describes these short-cut key combinations.
Short-Cut Key Combination   Action
CNTL-A    Moves the cursor to the beginning of the command line.
CNTL-B    Moves the cursor back one character.
CNTL-D    Deletes character at cursor.
CNTL-E    Moves the cursor to the end of the line.
CNTL-F    Moves the cursor forward one character.
CNTL-I    Completes a keyword.
CNTL-K    Deletes all characters from the cursor to the end of the command line.
CNTL-L    Re-enters the previous command.
CNTL-N    Returns to more recent commands in the history buffer after recalling commands with CNTL-P or the UP arrow key.
CNTL-P    Recalls commands, beginning with the last command.
CNTL-R    Re-enters the previous command.
CNTL-U    Deletes the line.
CNTL-W    Deletes the previous word.
CNTL-X    Deletes the line.
CNTL-Z    Ends continuous scrolling of command outputs.
Esc B     Moves the cursor back one word.
Esc F     Moves the cursor forward one word.
Esc D     Deletes all characters from the cursor to the end of the word.
Command History
Dell Networking OS maintains a history of previously-entered commands for each mode. For example:
•When you are in EXEC mode, the UP and DOWN arrow keys display the previously-entered EXEC
mode commands.
•When you are in CONFIGURATION mode, the UP or DOWN arrows keys recall the previously-entered
CONFIGURATION mode commands.
Filtering show Command Outputs
Filter the output of a show command to display specific information by adding | [except | find |
grep | no-more | save] specified_text after the command.
The variable specified_text is the text for which you are filtering and it IS case sensitive unless you
use the ignore-case sub-option.
Starting with Dell Networking OS version 7.8.1.0, the grep command accepts an ignore-case sub-option that forces the search to be case-insensitive. For example, the commands:
•show run | grep Ethernet returns a search result with instances containing a capitalized
“Ethernet,” such as interface TenGigabitEthernet 0/1.
•show run | grep ethernet does not return that search result because it only searches for
instances containing a non-capitalized “ethernet.”
•show run | grep Ethernet ignore-case returns instances containing both “Ethernet” and
“ethernet.”
The grep command displays only the lines containing the specified text.
NOTE: Dell accepts a space or no space before and after the pipe. To filter a phrase with spaces,
underscores, or ranges, enclose the phrase with double quotation marks.
The except keyword displays text that does not match the specified text. The following example shows
this keyword used with the show stack-unit all stack-ports all pfc details command.
Example of the except Keyword
Dell(conf)#do show stack-unit all stack-ports all pfc details | except 0
Admin mode is On
Admin is enabled
Local is enabled
Link Delay 65535 pause quantum
Dell(conf)#
The find keyword displays the output of the show command beginning from the first occurrence of the
specified text. The following example shows this keyword used with the show stack-unit all
stack-ports all pfc details command.
Example of the find Keyword
Dell(conf)#do show stack-unit all stack-ports all pfc details | find 0
stack unit 0 stack-port all
Admin mode is On
Admin is enabled
Local is enabled
Link Delay 65535 pause quantum
0 Pause Tx pkts, 0 Pause Rx pkts
Dell(conf)#
The no-more command displays the output all at once rather than one screen at a time. This is similar to
the terminal length command except that the no-more option affects the output of the specified
command only.
The save command copies the output to a file for future reference.
NOTE: You can filter a single command output multiple times. The save option must be the last
option entered. For example:
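A sketch follows (the filters and the file name are illustrative assumptions):

Dell#show running-config | grep interface | except vlan | save flash://filtered-output.txt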
Multiple Users in Configuration Mode
Dell Networking OS notifies all users when there are multiple users logged in to CONFIGURATION mode.
A warning message indicates the username, type of connection (console or VTY), and, in the case of a VTY
connection, the IP address of the terminal on which the connection was established. For example:
•On the system that telnets into the switch, this message appears:
% Warning: The following users are currently configuring the system:
User "<username>" on line console0
•On the system that is connected over the console, this message appears:
% Warning: User "<username>" on line vty0 "10.11.130.2" is in configuration
mode
If either of these messages appears, Dell Networking recommends coordinating with the users listed in
the message so that you do not unintentionally overwrite each other’s configuration changes.
4
Data Center Bridging (DCB)
On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You
can display information on DCB operation by using show commands.
NOTE: DCB features are not supported on an Aggregator in stacking mode.
Ethernet Enhancements in Data Center Bridging
DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single, robust,
converged network to support multiple traffic types, including local area network (LAN), server, and
storage traffic. Through network consolidation, DCB results in reduced operational cost, simplified
management, and easy scalability by avoiding the need to deploy separate application-specific networks.
For example, instead of deploying an Ethernet network for LAN traffic, additional storage area networks
(SANs) to ensure lossless fibre-channel traffic, and a separate InfiniBand network for high-performance
inter-processor computing within server clusters, only one DCB-enabled network is required in a data
center. The Dell Networking switches that support a unified fabric and consolidate multiple network
infrastructures use a single input/output (I/O) device called a converged network adapter (CNA).
A CNA is a computer input/output device that combines the functionality of a host bus adapter (HBA)
with a network interface controller (NIC). Multiple adapters on different devices for several traffic types
are no longer required.
Data center bridging satisfies the needs of the following types of data center traffic in a unified fabric:
•LAN traffic consists of a large number of flows that are generally insensitive to latency requirements,
while certain applications, such as streaming video, are more sensitive to latency. Ethernet functions
as a best-effort network that may drop packets in case of network congestion. IP networks rely on
transport protocols (for example, TCP) for reliable data transmission with the associated cost of
greater processing overhead and performance impact.
•Storage traffic based on Fibre Channel media uses the SCSI protocol for data transfer. This traffic
typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss.
To successfully transport storage traffic, data center Ethernet must provide no-drop service with
lossless links.
•Servers use InterProcess Communication (IPC) traffic within high-performance computing clusters to
share information. Server traffic is extremely sensitive to latency requirements.
To ensure lossless delivery and latency-sensitive scheduling of storage and service traffic and I/O
convergence of LAN, storage, and server traffic over a unified fabric, IEEE data center bridging adds the
following extensions to a classical Ethernet network:
•802.1Qbb - Priority-based Flow Control (PFC)
•802.1Qaz - Enhanced Transmission Selection (ETS)
•802.1Qau - Congestion Notification
•Data Center Bridging Exchange (DCBx) protocol
NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in
data center bridging.
Priority-Based Flow Control
In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type in
multiprotocol links so that it does not affect other traffic types and no frames are lost due to congestion.
When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.1p
priority traffic to the transmitting device. In this way, PFC ensures that large amounts of queued LAN
traffic do not cause storage traffic to be dropped, and that storage traffic does not result in high latency
for high-performance computing (HPC) traffic between servers.
PFC enhances the existing 802.3x pause and 802.1p priority capabilities to enable flow control based on
802.1p priorities (classes of service). Instead of stopping all traffic on a link (as performed by the
traditional Ethernet pause mechanism), PFC pauses traffic on a link according to the 802.1p priority set on
a traffic type. You can create lossless flows for storage and server traffic while allowing for loss in case of
LAN traffic congestion on the same physical interface.
The following illustration shows how PFC handles traffic congestion by pausing the transmission of
incoming traffic with dot1p priority 3.
Figure 1. Priority-Based Flow Control
In the system, PFC is implemented as follows:
•PFC is supported on specified 802.1p priority traffic (dot1p 0 to 7) and is configured per interface.
However, only two lossless queues are supported on an interface: one for Fibre Channel over
Ethernet (FCoE) converged traffic and one for Internet Small Computer System Interface (iSCSI)
storage traffic. Configure the same lossless queues on all ports.
•A dynamic threshold handles intermittent traffic bursts and varies based on the number of PFC
priorities contending for buffers, while a static threshold places an upper limit on the transmit time of
a queue after receiving a message to pause a specified priority. PFC traffic is paused only after
surpassing both static and dynamic thresholds for the priority specified for the port.
•By default, PFC is enabled when you enable DCB. When you enable DCB globally, you cannot
simultaneously enable TX and RX flow control on the interface, and link-level flow control is
disabled.
• Buffer space is allocated and de-allocated only when you configure a PFC priority on the port.
• PFC delay constraints place an upper limit on the transmit time of a queue after receiving a message
to pause a specified priority.
28
Data Center Bridging (DCB)
• By default, PFC is enabled on an interface with no dot1p priorities configured. The PFC priorities are
configured when the switch negotiates with a remote peer using DCBx. During DCBx negotiation with a
remote peer:
– DCBx communicates with the remote peer using link layer discovery protocol (LLDP) type-length-value
(TLV) fields to determine current policies, such as PFC support and enhanced transmission
selection (ETS) bandwidth allocation.
– If the negotiation succeeds and the port is in DCBx Willing mode to receive a peer configuration,
the PFC parameters from the peer are used to configure PFC priorities on the port. If you enable the
link-level flow control mechanism on the interface, DCBx negotiation with a peer is not
performed.
– If the negotiation fails and PFC is enabled on the port, any user-configured PFC input policies are
applied. If no PFC dcb-map has been previously applied to the interface, the default PFC settings are
used (no priorities configured). If you do not enable PFC on an interface, you can enable the 802.3x
link-level pause function instead. When you disable DCBx and PFC, the link-level pause is disabled by
default.
• PFC supports buffering to receive data that continues to arrive on an interface while the remote
system reacts to the PFC operation.
• PFC uses the DCB MIB IEEE 802.1az-d2.5 and the PFC MIB IEEE 802.1bb-d2.2.
If DCBx negotiation is not successful (for example, due to a version or TLV mismatch), DCBx is disabled
and you cannot enable PFC or ETS.
Configuring Priority-Based Flow Control
PFC provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic
received on an interface and is enabled by default when you enable DCB.
As an enhancement to the existing Ethernet pause mechanism, PFC stops traffic transmission for
specified priorities (Class of Service (CoS) values) without impacting other priority classes. Different traffic
types are assigned to different priority classes.
When traffic congestion occurs, PFC sends a pause frame to a peer device with the CoS priority values of
the traffic that is to be stopped. The Data Center Bridging Exchange protocol (DCBx) provides the link-level
exchange of PFC parameters between peer devices. PFC allows network administrators to create
zero-loss links for Storage Area Network (SAN) traffic that requires no-drop service, while retaining
packet-drop congestion management for Local Area Network (LAN) traffic.
To ensure complete no-drop service, apply the same DCB input policy with the same pause time and
dot1p priorities on all PFC-enabled peer interfaces.
To configure PFC and apply a PFC input policy to an interface, follow these steps.
1. Create a DCB input policy to apply pause or flow control for specified priorities using a configured
delay time.
CONFIGURATION mode
dcb-input policy-name
The maximum is 32 alphanumeric characters.
2. Configure the link delay used to pause specified priority traffic.
DCB INPUT POLICY mode
pfc link-delay value
One quantum is equal to a 512-bit transmission.
The range (in quanta) is from 712 to 65535.
The default link delay is 45556 quanta.
3. Configure the CoS traffic to be stopped for the specified delay.
DCB INPUT POLICY mode
pfc priority priority-range
Enter the 802.1p values of the frames to be paused.
The range is from 0 to 7.
The default is none.
The maximum number of lossless queues supported on the switch is 2.
Separate priority values with a comma. Specify a priority range with a dash, for example: pfc priority
1,3,5-7.
4. Enable the PFC configuration on the port so that the priorities are included in DCBx negotiation with
peer PFC devices.
DCB INPUT POLICY mode
pfc mode on
The default is PFC mode on.
5. (Optional) Enter a text description of the input policy.
DCB INPUT POLICY mode
description text
The maximum is 32 characters.
6. Exit DCB input policy configuration mode.
DCB INPUT POLICY mode
exit
7. Enter interface configuration mode.
CONFIGURATION mode
interface type slot/port
8. Apply the input policy with the PFC configuration to an ingress interface.
INTERFACE mode
dcb-policy input policy-name
9. Repeat Steps 1 to 8 on all PFC-enabled peer interfaces to ensure lossless traffic service.
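The steps above can be combined into a single configuration session. The following is a sketch only: the policy name (pfc-san), the priorities (3 for FCoE and 4 for iSCSI), the description text, and the interface (TenGigabitEthernet 0/1) are placeholders chosen for illustration, and the command prompts shown are indicative and may differ by release.

```
Dell(conf)# dcb-input pfc-san
Dell(conf-dcb-in)# pfc link-delay 45556
Dell(conf-dcb-in)# pfc priority 3,4
Dell(conf-dcb-in)# pfc mode on
Dell(conf-dcb-in)# description FCoE_and_iSCSI_lossless
Dell(conf-dcb-in)# exit
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# dcb-policy input pfc-san
```

Because one quantum equals a 512-bit transmission, the default link delay of 45556 quanta corresponds to about 23.3 Mbits of pause time, or roughly 2.3 ms on a 10-Gbps link.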
Dell Networking OS Behavior: As soon as you apply a DCB policy with PFC enabled on an interface,
DCBx starts exchanging information with PFC-enabled peers. The IEEE 802.1Qbb, CEE, and CIN versions
of the PFC Type, Length, Value (TLV) are supported. DCBx also validates PFC configurations that are received
in TLVs from peer devices.
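After applying the policy, you can confirm the negotiated PFC state on the interface. The command below is an assumption based on the platform's show-command conventions; the exact name and output format vary by Dell Networking OS release, and the interface is a placeholder.

```
Dell# show interfaces tengigabitethernet 0/1 pfc summary
```

The output typically reports the PFC admin and operational mode, the dot1p priorities negotiated with the peer, and counters for PFC pause frames sent and received.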