Dell Intel PRO Family of Adapters User Manual

Intel® Network Adapters User Guide


Information for the Intel® Boot Agent, Intel® Ethernet iSCSI Boot, or Intel® FCoE/DCB can be found in their respective user guides.
Information in this document is subject to change without notice. Copyright © 2008-2014, Intel Corporation. All rights reserved.
* Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Intel Corporation disclaims any proprietary interest in trademarks and trade names other than its own.

Restrictions and Disclaimers

The information contained in this document, including all instructions, cautions, and regulatory approvals and certifications, is provided by the supplier and has not been independently verified or tested by Dell. Dell cannot be responsible for damage caused as a result of either following or failing to follow these instructions.
All statements or claims regarding the properties, capabilities, speeds or qualifications of the part referenced in this document are made by the supplier and not by Dell. Dell specifically disclaims knowledge of the accuracy, completeness or substantiation for any such statements. All questions or comments relating to such statements or claims should be directed to the supplier.

Export Regulations

Customer acknowledges that these Products, which may include technology and software, are subject to the customs and export control laws and regulations of the United States (U.S.) and may also be subject to the customs and export laws and regulations of the country in which the Products are manufactured and/or received. Customer agrees to abide by those laws and regulations. Further, under U.S. law, the Products may not be sold, leased or otherwise transferred to restricted end users or to restricted countries. In addition, the Products may not be sold, leased or otherwise transferred to, or utilized by an end-user engaged in activities related to weapons of mass destruction, including without limitation, activities related to the design, development, production or use of nuclear weapons, materials, or facilities, missiles or the support of missile projects, and chemical or biological weapons.
Last revised: 29 April 2014

Overview

Welcome to the User's Guide for Intel® Ethernet Adapters and devices. This guide covers hardware and software installation, setup procedures, and troubleshooting tips for the Intel® Gigabit Server Adapters and Intel® 10 Gigabit Server Adapters. In addition to supporting 32-bit operating systems, this software release also supports Intel® 64 Architecture (Intel® 64).
Supported 10 Gigabit Network Adapters
l Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
l Intel® Ethernet Server Adapter X520-2
l Intel® Ethernet 10G 2P X540-t Adapter
l Intel® Ethernet 10G 2P X520 Adapter
l Intel® Ethernet 10G 4P X540/I350 rNDC
l Intel® Ethernet 10G 4P X520/I350 rNDC
l Intel® Ethernet 10G 2P X520-k bNDC
Supported Gigabit Network Adapters and Devices
l Intel® PRO/1000 PT Server Adapter
l Intel® PRO/1000 PT Dual Port Server Adapter
l Intel® PRO/1000 PF Server Adapter
l Intel® Gigabit ET Dual Port Server Adapter
l Intel® Gigabit ET Quad Port Server Adapter
l Intel® Gigabit ET Quad Port Mezzanine Card
l Intel® Gigabit 2P I350-t Adapter
l Intel® Gigabit 4P I350-t Adapter
l Intel® Gigabit 4P I350-t rNDC
l Intel® Gigabit 4P X540/I350 rNDC
l Intel® Gigabit 4P X520/I350 rNDC
l Intel® Ethernet Connection I354 1.0 GbE Backplane
l Intel® Gigabit 4P I350-t Mezz
l Intel® Gigabit 2P I350-t LOM
l Intel® Gigabit 2P I350 LOM
l Intel® Gigabit 4P I350 bNDC

Installing the Network Adapter

If you are installing a network adapter, follow this procedure from step 1. If you are upgrading the driver software, start with step 6.
1. Review system requirements.
2. Follow the procedure in Insert the PCI Express Adapter in the Server.
3. Carefully connect the network copper cable(s) or fiber cable(s).
4. After the network adapter is in the server, install the network drivers.
5. For Windows*, install the Intel® PROSet software.
6. Test the adapter. See Testing the Adapter.
System Requirements
Hardware Compatibility
Before installing the adapter, check your system for the following minimum configuration requirements:
l IA-32-based (32-bit x86 compatible)
l One open PCI Express* slot (v1.0a or newer) operating at 1x, 4x, 8x, or 16x.
l The latest BIOS for your system
l Supported operating system environments: see Installing Network Drivers
Supported Operating Systems
32-bit Operating Systems
Basic software and drivers are supported on the following operating systems:
l Microsoft Windows Server 2008
64-bit Operating Systems
Software and drivers are supported on the following 64-bit operating systems:
l Microsoft Windows Server 2012
l Microsoft Windows Server 2008
l RHEL 6.5
l SLES 11 SP3
Cabling Requirements
Intel Gigabit Adapters
l 1000BASE-SX on 850 nanometer optical fiber:
  l Utilizing 50 micron multimode, length is 550 meters max.
  l Utilizing 62.5 micron multimode, length is 275 meters max.
l 1000BASE-T or 100BASE-TX on Category 5 or Category 5e wiring, twisted 4-pair copper:
  l Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
  l Length is 100 meters max.
  l Category 3 wiring supports only 10 Mbps.
Intel 10 Gigabit Adapters
l 10GBASE-SR/LC on 850 nanometer optical fiber:
  l Utilizing 50 micron multimode, length is 300 meters max.
  l Utilizing 62.5 micron multimode, length is 33 meters max.
l 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
  l Length is 55 meters max for Category 6.
  l Length is 100 meters max for Category 6a.
  l Length is 100 meters max for Category 7.
l 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  l Length is 10 meters max.
OS Updates
Some features require specific versions of an operating system. You can find more information in the sections that describe those features. You can download the necessary software patches from support sites, as listed here:
l Microsoft Windows* Service Packs: support.microsoft.com
l Red Hat Linux*: www.redhat.com
l SUSE Linux: http://www.novell.com/linux/suse/
Ethernet MAC Addresses
Single-Port Adapters
The Ethernet address should be printed on the identification sticker on the front of the card.
Multi-Port Adapters
Dual port adapters have two Ethernet addresses. The address for the first port (port A or 1) is printed on a label on the component side of the adapter. Add one to this address to obtain the value for the second port (port B or 2).
In other words:
l Port A = X (where X is the last numeric of the address printed on the label)
l Port B = X + 1
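For example (using a hypothetical address): if the label shows 00-1B-21-3A-4C-5E for port A, then port B uses 00-1B-21-3A-4C-5F.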

Intel® Network Adapters Quick Installation Guide

Install the Intel PCI Express Adapter

1. Turn off the computer and unplug the power cord.
2. Remove the computer cover and the adapter slot cover from the slot that matches your adapter.
3. Insert the adapter edge connector into the PCI Express slot and secure the bracket to the chassis.
4. Replace the computer cover, then plug in the power cord.
NOTE: For information on identifying PCI Express slots that support your adapters, see your Dell system guide.

Attach the Network Cable

1. Attach the network connector.
2. Attach the other end of the cable to the compatible link partner.
3. Start your computer and follow the driver installation instructions for your operating system.

Install the Drivers

Windows* Operating Systems
You must have administrative rights to the operating system to install the drivers.
To install drivers using setup.exe:
1. Install the adapter in the computer and turn on the computer.
2. Insert the installation CD in the CD-ROM drive. If the autorun program on the CD appears, ignore it.
3. Go to the root directory of the CD and double-click on setup.exe.
4. Follow the onscreen instructions.
Linux*
There are three methods for installing the Linux drivers:
l Install from Source Code
l Install from KMOD
l Install from KMP RPM
NOTE: This release includes Linux Base Drivers for the Intel® Network Adapters. These drivers are named e1000e, igb, and ixgbe. The ixgbe driver must be installed to support 10 Gigabit 82598- and 82599-based network connections. The igb driver must be installed to support any 82575- and 82576-based network connections. All other network connections require the e1000e driver. Please refer to the user guide for more specific information.
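A minimal sketch of the install-from-source method, assuming the ixgbe driver (the version number below is a placeholder; the README included in each driver package documents the exact steps for your release):
tar zxf ixgbe-<x.x.x>.tar.gz      # unpack the driver source package
cd ixgbe-<x.x.x>/src
make install                      # build and install the kernel module
modprobe ixgbe                    # load the driver
The e1000e and igb drivers follow the same pattern with their respective package names.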
Other Operating Systems
To install other drivers, visit the Customer Support web site: http://www.support.dell.com.

Installing the Adapter

Insert the PCI Express Adapter in the Server

NOTE: If you are replacing an existing adapter with a new adapter, you must re-install the driver.
1. Turn off the server and unplug the power cord, then remove the server's cover.
CAUTION: Turn off and unplug the server before removing the server's cover. Failure to do so could endanger you and may damage the adapter or server.
2. Remove the cover bracket from a PCI Express slot (v1.0a or later). PCI Express slots and adapters vary in the number of connectors present, depending on the data lanes being supported.
NOTE: The following adapters will only fit into x4 or larger PCI Express slots.
l Intel® PRO/1000 PT Dual Port Server Adapter
l Intel® PRO/1000 PF Server Adapter
l Intel® Gigabit ET Dual Port Server Adapter
l Intel® Gigabit ET Quad Port Server Adapter
The following adapters will only fit into x8 or larger PCI Express slots.
l Intel® Ethernet Server Adapter X520-2
l Intel® Ethernet Server Adapter X520-T2
l Intel® Ethernet 10G 2P X540-t Adapter
l Intel® Ethernet 10G 2P X520 Adapter
Some systems have physical x8 PCI Express slots that actually support lower speeds. Please check your system manual to identify the slot.
3. Insert the adapter in an available, compatible PCI Express slot. Push the adapter into the slot until the adapter is firmly seated. You can install a smaller PCI Express adapter in a larger PCI Express slot.
CAUTION: Some PCI Express adapters may have a short connector, making them more fragile than PCI adapters. Excessive force could break the connector. Use caution when pressing the board in the slot.
4. Repeat steps 2 through 3 for each adapter you want to install.
5. Replace the server cover and plug in the power cord.
6. Turn the power on.

Connecting Network Cables

Connect the appropriate network cable, as described in the following sections.
Connect the UTP Network Cable
Insert the twisted pair, RJ-45 network cable as shown below.
(Figures: cable insertion for single-port, dual-port, and quad-port adapters.)
Type of cabling to use:
l 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
  l Length is 55 meters max for Category 6.
  l Length is 100 meters max for Category 6a.
  l Length is 100 meters max for Category 7.
  NOTE: For the Intel® 10 Gigabit AT Server Adapter, to ensure compliance with CISPR 24 and the EU’s EN55024, this product should be used only with Category 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.
l For 1000BASE-T or 100BASE-TX, use Category 5 or Category 5e wiring, twisted 4-pair copper:
  l Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
  l Length is 100 meters max.
  l Category 3 wiring supports only 10 Mbps.
  CAUTION: If using less than 4-pair cabling, you must manually configure the speed and duplex setting of the adapter and the link partner. In addition, with 2- and 3-pair cabling the adapter can only achieve speeds of up to 100 Mbps.
l For 100BASE-TX, use Category 5 wiring.
l For 10BASE-T, use Category 3 or 5 wiring.
l If you want to use this adapter in a residential environment (at any speed), use Category 5 wiring. If the cable runs between rooms or through walls and/or ceilings, it should be plenum-rated for fire safety.
In all cases:
l The adapter must be connected to a compatible link partner, preferably set to auto-negotiate speed and duplex for Intel gigabit adapters.
l Intel Gigabit and 10 Gigabit Server Adapters using copper connections automatically accommodate either MDI or MDI-X connections. The auto-MDI-X feature of Intel gigabit copper adapters allows you to directly connect two adapters without using a cross-over cable.
Connect the Fiber Optic Network Cable
CAUTION: The fiber optic ports contain a Class 1 laser device. When the ports are disconnected, always cover them with the provided plug. If an abnormal fault occurs, skin or eye damage may result if in close proximity to the exposed ports.
Remove and save the fiber optic connector cover. Insert a fiber optic cable into the ports on the network adapter bracket as shown below.
Most connectors and ports are keyed for proper orientation. If the cable you are using is not keyed, check to be sure the connector is oriented properly (transmit port connected to receive port on the link partner, and vice versa).
The adapter must be connected to a compatible link partner, such as an IEEE 802.3z-compliant gigabit switch, which is operating at the same laser wavelength as the adapter.
Conversion cables to other connector types (such as SC-to-LC) may be used if the cabling matches the optical specifications of the adapter, including length limitations.
The Intel® 10 Gigabit XF SR and Intel® PRO/1000 PF Server Adapters use an LC connection. Insert the fiber optic cable as shown below.
The Intel® 10 Gigabit XF SR Server Adapter uses an 850 nanometer laser wavelength (10GBASE-SR/LC). The Intel® PRO/1000 PF Server Adapter uses an 850 nanometer laser wavelength (1000BASE-SX).
Connection requirements
l 10GBASE-SR/LC on 850 nanometer optical fiber:
  l Utilizing 50 micron multimode, length is 300 meters max.
  l Utilizing 62.5 micron multimode, length is 33 meters max.
l 1000BASE-SX/LC on 850 nanometer optical fiber:
  l Utilizing 50 micron multimode, length is 550 meters max.
  l Utilizing 62.5 micron multimode, length is 275 meters max.
SFP+ Devices with Pluggable Optics
82599-based adapters
The Intel® Ethernet Server Adapter X520-2 only supports Intel optics and/or the direct attach cables listed below. When 82599-based SFP+ devices are connected back to back, they should be set to the same Speed setting using Intel PROSet for Windows or ethtool. Results may vary if you mix speed settings.
NOTE: 82599-based adapters support all passive and active limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Supplier Type Part Numbers
Intel Dual Rate 1G/10G SFP+ SR (bailed) AFBR-703SDZ-IN2 / FTLX8571D3BCV-IT
82598-based adapters
The following is a list of SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.
NOTES:
l Intel® Network Adapters that support removable optical modules only support their original module type. If you plug in a different type of module, the driver will not load.
l 82598-based adapters support all passive direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables are not supported.
l Hot swapping/hot plugging optical modules is not supported.
l Only single speed, 10 gigabit modules are supported.
Supplier Type Part Numbers
Finisar SFP+ SR bailed, 10g single rate FTLX8571D3BCL
Avago SFP+ SR bailed, 10g single rate AFBR-700SDZ
Finisar SFP+ LR bailed, 10g single rate FTLX1471D3BCL
Dell 1m - Force 10 Assy, Cbl, SFP+, CU, 10GE, DAC C4D08, V250M, NMMT9
Dell 3m - Force 10 Assy, Cbl, SFP+, CU, 10GE, DAC 53HVN, F1VT9
Dell 5m - Force 10 Assy, Cbl, SFP+, CU, 10GE, DAC 5CN56, 358vv, W25W9
Cisco 1m - Twin-ax cable SFP-H10GB-CU1M
Cisco 3m - Twin-ax cable SFP-H10GB-CU3M
Cisco 5m - Twin-ax cable SFP-H10GB-CU5M
Cisco 7m - Twin-ax cable SFP-H10GB-CU7M
Molex 1m - Twin-ax cable 74752-1101, 74752-9093
Molex 3m - Twin-ax cable 74752-2301, 74752-9094
Molex 5m - Twin-ax cable 74752-3501, 74752-9096
Molex 7m - Twin-ax cable 74752-9098
Molex 10m - Twin-ax cable 74752-9004
Tyco 1m - Twin-ax cable 2032237-2
Tyco 3m - Twin-ax cable 2032237-4
Tyco 5m - Twin-ax cable 2032237-6
Tyco 10m - Twin-ax cable 1-2032237-1
THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF ANY THIRD PARTY’S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE ABOVE SPECIFICATIONS. THERE MAY BE OTHER MANUFACTURERS OR SUPPLIERS, PRODUCING OR SUPPLYING OPTIC MODULES AND CABLES WITH SIMILAR OR MATCHING DESCRIPTIONS. CUSTOMERS MUST USE THEIR OWN DISCRETION AND DILIGENCE TO PURCHASE OPTIC MODULES AND CABLES FROM ANY THIRD PARTY OF THEIR CHOICE. CUSTOMERS ARE SOLELY RESPONSIBLE FOR ASSESSING THE SUITABILITY OF THE PRODUCT AND/OR DEVICES AND FOR THE SELECTION OF THE VENDOR FOR PURCHASING ANY PRODUCT. THE OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.
Connect the Twinaxial Cable
Insert the twinaxial network cable as shown below.
Type of cabling:
l 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  l Length is 10 meters max.

Insert the Mezzanine Card in the Blade Server

1. Turn off the blade server and pull it out of the chassis, then remove its cover.
CAUTION: Failure to turn off the blade server could endanger you and may damage the card or server.
2. Lift the locking lever and insert the card in an available, compatible mezzanine card socket. Push the card into the socket until it is firmly seated.
NOTE: A switch or pass-through module must be present on the same fabric as the card in the chassis to provide a physical connection. For example, if the mezzanine card is inserted in fabric B, a switch must also be present in fabric B of the chassis.
3. Repeat step 2 for each card you want to install.
4. Lower the locking lever until it clicks into place over the card or cards.
5. Replace the blade server cover and put the blade back into the server chassis.
6. Turn the power on.

Setup

Installing Network Drivers

Before you begin
To successfully install drivers or software, you must have administrative privileges on the computer.
To install drivers
You can download the files from Customer Support. For directions on how to install drivers for a specific operating system, click one of the links below.
l Windows Server 2012
l Windows Server 2008
l Linux
NOTES:
l If you are installing a driver in a computer with existing Intel adapters, be sure to update all the adapters and ports with the same driver and Intel® PROSet software. This ensures that all adapters will function correctly.
l If you are using an Intel 10GbE Server Adapter and an Intel Gigabit adapter in the same machine, the driver for the Gigabit adapter must be updated with the version found in the respective download package.
l If you have Fibre Channel over Ethernet (FCoE) boot enabled on any devices in the system, you will not be able to upgrade your drivers. You must disable FCoE boot before upgrading your Ethernet drivers.
Installing Multiple Adapters
Windows Server users: Follow the procedure in Installing Windows Drivers. After the first adapter is detected, you may
be prompted to insert the installation media supplied with your system. After the first adapter driver finishes installing, the next new adapter is detected and Windows automatically installs the driver. (You must manually update the drivers for any existing adapters.) For more information, see Updating the Drivers.
Linux users: For more information, see Linux* Driver for the Intel® Gigabit Family of Adapters.
Updating drivers for multiple adapters or ports: If you are updating or installing a driver in a server with existing Intel
adapters, be sure to update all the adapters and ports with the same new software. This will ensure that all adapters will function correctly.
Installing Intel PROSet
Intel PROSet for Windows Device Manager is an advanced configuration utility that incorporates additional configuration and diagnostic features into the device manager. For information on installation and usage, see Using Intel® PROSet for Windows Device Manager.
NOTE: You must install Intel® PROSet for Windows Device Manager if you want to use adapter teams or VLANs.
Push Installation for Windows
An unattended install or "Push" of Windows enables you to automate Windows installation when several computers on a network require a fresh install of a Windows operating system.
To automate the process, a bootable disk logs each computer requiring installation or update onto a central server that contains the install executable. After the remote computer logs on, the central server then pushes the operating system to the computer.
Supported operating systems
l Windows Server 2012
l Windows Server 2008
Setting Speed and Duplex
Overview
The Link Speed and Duplex setting lets you choose how the adapter sends and receives data packets over the network.
In the default mode, an Intel network adapter using copper connections will attempt to auto-negotiate with its link partner to determine the best setting. If the adapter cannot establish link with the link partner using auto-negotiation, you may need to manually configure the adapter and link partner to the identical setting to establish link and pass packets. This should only be needed when attempting to link with an older switch that does not support auto-negotiation or one that has been forced to a specific speed or duplex mode.
Auto-negotiation is disabled by selecting a discrete speed and duplex mode in the adapter properties sheet.
NOTES:
l Configuring speed and duplex can only be done on Intel gigabit copper-based adapters.
l Fiber-based adapters operate only in full duplex at their native speed.
l The Intel Gigabit ET Quad Port Mezzanine Card only operates at 1 Gbps full duplex.
l The following adapters operate at either 10 Gbps or 1 Gbps full duplex:
  l Intel® 10 Gigabit AT Server Adapter
  l Intel® Ethernet Server Adapter X520-T2
  l Intel® Ethernet Server Adapter X520-2
  l Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
  l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
  l Intel® Ethernet 10G 2P X520 Adapter
  l Intel® Ethernet 10G 4P X520/I350 rNDC
  l Intel® Ethernet 10G 2P X520-k bNDC
l The following adapters operate at either 10 Gbps, 1 Gbps, or 100 Mbps full duplex:
  l Intel® Ethernet 10G 2P X540-t Adapter
  l Intel® Ethernet 10G 4P X540/I350 rNDC
  NOTE: X540 ports will support 100 Mbps only when both link partners are set to auto-negotiate.
l The Intel® Ethernet Connection I354 1.0 GbE Backplane only operates at 1 Gbps full duplex.
Per IEEE specification, 10 gigabit and gigabit speeds are available only in full duplex.
The settings available when auto-negotiation is disabled are:
l 10 Gbps full duplex (requires a full duplex link partner set to full duplex). The adapter can send and receive packets at the same time.
l 1 Gbps full duplex (requires a full duplex link partner set to full duplex). The adapter can send and receive packets at the same time. You must set this mode manually (see below).
l 10 Mbps or 100 Mbps full duplex (requires a link partner set to full duplex). The adapter can send and receive packets at the same time. You must set this mode manually (see below).
l 10 Mbps or 100 Mbps half duplex (requires a link partner set to half duplex). The adapter performs one operation at a time; it either sends or receives. You must set this mode manually (see below).
Your link partner must match the setting you choose.
NOTES:
l Although some adapter property sheets (driver property settings) list 10 Mbps and 100 Mbps in full or half duplex as options, using those settings is not recommended.
l Only experienced network administrators should force speed and duplex manually.
l You cannot change the speed or duplex of Intel adapters that use fiber cabling.
Intel 10 Gigabit adapters that support 1 gigabit speed allow you to configure the speed setting. If this option is not present, your adapter only runs at its native speed.
Manually Configuring Duplex and Speed Settings
If your switch supports the NWay* standard, and both the adapter and switch are set to auto-negotiate, full duplex configuration is automatic, and no action is required on your part. Not all switches support auto-negotiation. Check with your network administrator or switch documentation to verify whether your switch supports this feature.
Configuration is specific to the driver you are loading for your network operating system. To set a specific Link Speed and Duplex mode, refer to the section below that corresponds to your operating system.
CAUTION: The settings at the switch must always match the adapter settings. Adapter performance may suffer, or your adapter might not operate correctly if you configure the adapter differently from your switch.
Windows
The default setting is for auto-negotiation to be enabled. Only change this setting to match your link partner's speed and duplex setting if you are having trouble connecting.
1. In Windows Device Manager, double-click the adapter you want to configure.
2. On the Link Speed tab, select a speed and duplex option from the Speed and Duplex drop-down menu.
3. Click OK.
More specific instructions are available in the Intel PROSet help.
Linux
See Linux* Driver for the Intel® Gigabit Family of Adapters for information on configuring Speed and Duplex on Linux systems.
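As an illustration, on Linux the same forced setting can be applied with ethtool (eth0 is a placeholder interface name); as in Windows, the link partner must be configured to match:
ethtool -s eth0 speed 100 duplex full autoneg off    # force 100 Mbps full duplex
ethtool eth0                                         # verify the resulting link settings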

Windows* Server Push Install

Introduction
A "Push", or unattended installation provides a means for network administrators to easily install a Microsoft Windows* operating system on similarly equipped systems. The network administrator can create a bootable media that will auto­matically log into a central server and install the operating system from an image of the Windows installation directory stored on that server. This document provides instructions for a basic unattended installation that includes the install­ation of drivers for Intel® Networking Devices.
As part of the unattended installation, you can create teams and VLANs. If you wish to create one or more team­s/VLANs as part of the unattended installation, you must also follow the instructions in the "Instructions for Creating
Teams and VLANs (Optional)" section of this document.
The elements necessary for the Windows Server unattended installation are:
l A Windows Server system with a shared image of the Windows Installation CD.
l If you want to create teams/VLANs as part of the unattended installation, you need to create a configuration file with the team/VLAN information in it. To create this file, you need a sample system that has the same type of adapters that will be in the systems receiving the push installation. On the sample system, use Intel® PROSet for Windows Device Manager to set up the adapters in the teaming/VLAN configuration you want. This system could also be the Windows Server mentioned above. For clarity, this system is referred to in this page as the configured system.
l An unattended installation configuration file that provides Windows setup with information it needs to complete the installation. The name of this file is UNATTEND.XML.
NOTE: Intel® 10GbE Network Adapters do not support unattended driver installation.
Setting up an Install Directory on the File Server
The server must be set up with a distribution folder that holds the required Windows files. Clients must also be able to read this folder when connecting via TCP/IP or IPX.
For illustration purposes, the examples in this document use the network share D:\WINPUSH. To create this share:
1. Create a directory on the server (EX: D:\WINPUSH).
2. Use the My Computer applet in Windows to locate the D:\WINPUSH folder.
3. Right-click the folder and select Sharing.
4. Select Share this folder, then give it a share name (EX: WINPUSH). This share name will be used to connect to this directory from the remote target systems. By default, the permissions for this share are for Everyone to have Read access.
5. Adjust permissions as necessary and click OK.
To prepare the distribution folder:
1. Copy the entire contents from the Windows Server DVD to D:\WINPUSH. Use Windows Explorer or XCOPY to maintain the same directory structure as on the Windows Server DVD. When the copy is complete, the Windows Server installation files should be in the D:\WINPUSH directory.
2. Use the Windows System Image Manager to edit/generate the Unattend.xml file and save it to the D:\WINPUSH directory. See sample below for example of Unattend.xml.
3. Create the driver install directory structure and copy the driver files to it.
NOTE: The PUSHCOPY.BAT file provided with the drivers in the PUSH directory copies the appropriate files for the installation. PUSHCOPY also copies the components needed to perform the automated installations contained in the [GuiRunOnce] section of the sample UNATTEND.XML file. These include an unattended installation of the Intel® PROSet for Windows Device Manager.
Example: From a Windows command prompt where e: is the drive letter of your CD-ROM drive:
e:
cd \PUSH (You must be in the PUSH\ directory to run PUSHCOPY.)
pushcopy D:\WINPUSH WS8
The above command creates the $OEM$ directory structure and copies all the necessary files to install the driver and Intel® PROSet for Windows Device Manager. However, Intel® PROSet is not installed unless the FirstLogonCommands is added as seen in the example below.
[Microsoft-Windows-Shell-Setup\FirstLogonCommands\SynchronousCommand]
CommandLine= %systemdrive%\WMIScr\Install.bat
Description= Begins silent unattended install of Intel PROSet for Windows Device Manager
Order= 1
Instructions for Creating Teams and VLANs (Optional)
NOTE: If you used Pushcopy from the section "To prepare the distribution folder:" the directory structure will
already be created and you can skip step 4.
1. Prepare the distribution folder on the file server as detailed in the preceding section.
2. Copy SavResDX.vbs from the Intel CD to the configured system. The file is located in the \WMI directory on the Intel CD.
3. Open a command prompt on the configured system and navigate to the directory containing SavResDX.vbs.
4. Run the following command: cscript SavResDX.vbs save. A configuration file called WmiConf.txt is created in the same directory.
5. Copy the SavResDX.vbs and WmiConf.txt files to the $OEM$\$1\WMIScr directory on the file server.
6. Locate the batch file, Install.bat, in $OEM$\$1\WMIScr. Edit the batch file by removing the comment that precedes the second START command. The file should look like the following when finished.
Start /wait %systemdrive%\drivers\net\INTEL\APPS\ProsetDX\Win32\PROSetDX.msi /qn /li %temp%\PROSetDX.log
REM Uncomment the next line if VLANs or Teams are to be installed.
Start /wait /b cscript %systemdrive%\wmiscr\SavResDX.vbs restore %systemdrive%\wmiscr\wmiconf.txt > %systemdrive%\wmiscr\output.txt
exit
NOTE: If you are adding a team or VLAN, run ImageX.exe afterwards to create the Intel.wim containing the team or VLAN.
Deployment Methods
Boot using your WinPE 2.0 media and connect to the server containing your Windows Server 2008 installation share.
Run the command from the \\Server\WINPUSH prompt:
setup /unattend:<full path to answer file>
NOTE: In the above procedure, setup runs the installation in unattended mode and also detects the plug and play network adapters. All driver files are copied from the shared directory to the target system directories and installation of the OS and Network Adapters continues without user intervention.
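For illustration only, assuming the D:\WINPUSH share created earlier and an answer file named unattend.xml stored in that share (the drive letter is an example), the deployment commands from WinPE might look like:
net use Z: \\Server\WINPUSH
Z:
setup /unattend:Z:\unattend.xml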
If you installed teams/VLANs as part of the unattended installation, view the results of the script execution in the output.txt file. This file is in the same directory as the SavResDX.vbs file.
Microsoft Documentation for Unattended Installations of Windows Server 2008 and Windows Server 2012
For a complete description of the parameters supported in Unattend.XML visit support.microsoft.com to view the Windows Automated Installation Kit (WAIK) documentation.
Sample Unattend.XML file for Windows Server 2008
Add the following Components to the specified Configuration Pass:
Pass: 1 WindowsPE
Component:
Microsoft-Windows-Setup\DiskConfiguration\Disk\CreatePartitions\CreatePartition
Microsoft-Windows-Setup\DiskConfiguration\Disk\ModifyPartitions\ModifyPartition
Microsoft-Windows-Setup\ImageInstall\OSImage\InstallTo
Microsoft-Windows-Setup\ImageInstall\DataImage\InstallTo
Microsoft-Windows-Setup\ImageInstall\DataImage\InstallFrom
Microsoft-Windows-Setup\UserData
Microsoft-Windows-International-Core-WinPE
Pass: 4 Specialize
Component:
Microsoft-Windows-Deployment\RunSynchronous\RunSynchronousCommand
Pass: 7 oobeSystem
Component:
Microsoft-Windows-Shell-Setup\OOBE
Microsoft-Windows-Shell-Setup\AutoLogon
Microsoft-Windows-Shell-Setup\FirstLogonCommands
Use the following values to populate the UNATTEND.XML:
[Microsoft-Windows-Setup\DiskConfiguration]
WillShowUI = OnError
[Microsoft-Windows-Setup\DiskConfiguration\Disk]
DiskID = 0
WillWipeDisk = true (if false, only the Modify section is needed; adjust it to your system)
[Microsoft-Windows-Setup\DiskConfiguration\Disk\CreatePartitions\CreatePartition]
Extend = false
Order = 1
Size = 20000 (NOTE: This example creates a 20-GB partition.)
Type = Primary
[Microsoft-Windows-Setup\DiskConfiguration\Disk\ModifyPartitions\ModifyPartition]
Active = true
Extend = false
Format = NTFS
Label = OS_Install
Letter = C
Order = 1
PartitionID = 1
[Microsoft-Windows-Setup\ImageInstall\OSImage]
WillShowUI = OnError
[Microsoft-Windows-Setup\ImageInstall\OSImage\InstallTo]
DiskID = 0
PartitionID = 1
[Microsoft-Windows-Setup\ImageInstall\DataImage\InstallFrom]
Path = \\Server\PushWS8\intel.wim
[Microsoft-Windows-Setup\ImageInstall\DataImage\InstallTo]
DiskID = 0
PartitionID = 1
[Microsoft-Windows-Setup\UserData]
AcceptEula = true
FullName = LADV
Organization = Intel Corporation
[Microsoft-Windows-Setup\UserData\ProductKey]
Key = <enter appropriate key>
WillShowUI = OnError
[Microsoft-Windows-Shell-Setup\OOBE]
HideEULAPage = true
ProtectYourPC = 3
SkipMachineOOBE = true
SkipUserOOBE = true
[Microsoft-Windows-International-Core-WinPE]
InputLocale = en-us
SystemLocale = en-es
UILanguage = en-es
UserLocale = en-us
[Microsoft-Windows-International-Core-WinPE\SetupUILanguage]
UILanguage = en-us
[Microsoft-Windows-Deployment\RunSynchronous\RunSynchronousCommand]
Description= Enable built-in administrator account
Order= 1
Path= net user administrator /active:Yes
[Microsoft-Windows-Shell-Setup\AutoLogon]
Enabled = true
LogonCount = 5
Username = Administrator
[Microsoft-Windows-Shell-Setup\AutoLogon\Password]
<strongpassword>
[Microsoft-Windows-Shell-Setup\FirstLogonCommands]
Description= Begins silent unattended install of Intel PROSet for Windows Device Manager
Order= 1
CommandLine= %systemdrive%\WMIScr\Install.bat

Command Line Installation for Base Drivers and Intel® PROSet

Driver Installation
The driver install utility setup.exe allows unattended installation of drivers from a command line.
This utility can be used to install the base driver, intermediate driver, and all management applications for supported devices.
NOTE: You must run setup.exe in a DOS Window in Windows Server 2008. Setup.exe will not run on a computer booted to DOS.
Setup.exe Command Line Options
By setting the parameters in the command line, you can enable and disable management applications. If parameters are not specified, only existing components are updated.
Setup.exe supports the following command line parameters:
Parameter Definition
BD Base Driver
"0", do not install the base driver.
"1", install the base driver (default).
ANS Advanced Network Services
"0", do not install ANS (default). If ANS is already installed, it will be uninstalled.
"1", install ANS. The ANS property requires DMIX=1.
NOTE: If the ANS parameter is set to ANS=1, both Intel PROSet and ANS will be installed.
DMIX PROSet for Windows Device Manager
"0", do not install Intel PROSet feature (default). If the Intel PROSet feature is already installed, it will be uninstalled.
"1", install Intel PROSet feature. The DMIX property requires BD=1.
NOTE: If DMIX=0, ANS will not be installed. If DMIX=0 and Intel PROSet, ANS, SMASHv2 and FCoE are already installed, Intel PROSet, ANS, SMASHv2 and FCoE will be uninstalled.
SNMP Intel SNMP Agent
"0", do not install SNMP (default). If SNMP is already installed, it will be uninstalled.
"1", install SNMP. The SNMP property requires BD=1.
NOTE: Although the default value for the SNMP parameter is 1 (install), the SNMP agent will only be installed if:
l The Intel SNMP Agent is already installed. In this case, the SNMP agent will be updated.
l The Windows SNMP service is installed. In this case, the SNMP window will pop up and you may cancel the installation if you do not want it installed.
FCOE Fibre Channel over Ethernet
"0", do not install FCoE (default). If FCoE is already installed, it will be uninstalled.
"1", install FCoE. The FCoE property requires DMIX=1.
NOTE: Even if FCOE=1 is passed, FCoE will not be installed if the operating system and installed adapters do not support FCoE.
ISCSI iSCSI
"0", do not install iSCSI (default). If iSCSI is already installed, it will be uninstalled.
"1", install iSCSI. The iSCSI property requires DMIX=1.
LOG [log file name]
LOG allows you to enter a file name for the installer log file. The default name is C:\UmbInst.log.
XML [XML file name]
XML allows you to enter a file name for the XML output file.
-a Extract the components required for installing the base driver to C:\Program Files\Intel\Drivers. The directory where these files will be extracted to can be modified unless silent mode (/qn) is specified. If this parameter is specified, the installer will exit after the base driver is extracted. Any other parameters will be ignored.
-f Force a downgrade of the components being installed.
NOTE: If the installed version is newer than the current version, this parameter needs to be set.
-v Display the current install package version.
/q[r|n] /q --- silent install options:
r Reduced GUI Install (only displays critical warning messages)
n Silent install
/l[i|w|e|a] /l --- log file option for DMIX and SNMP installation. Following are log switches:
i log status messages.
w log non-fatal warnings.
e log error messages.
a log the start of all actions.
-u Uninstall the drivers.
NOTE: You must include a space between parameters.
Command line install examples
This section describes some examples used in command line installs.
Assume that setup.exe is in the root directory of the CD, D:\. You can modify the paths for different operating systems and CD layouts and apply the command line examples.
1. How to install the base driver on Windows Server 2008:
D:\Setup DMIX=0 ANS=0 SNMP=0
2. How to install the base driver on Windows Server 2008 using the LOG option:
D:\Setup LOG=C:\installBD.log DMIX=0 ANS=0 SNMP=0
3. How to install Intel PROSet and ANS silently on Windows Server 2008:
D:\Setup DMIX=1 ANS=1 /qn
4. How to install Intel PROSet without ANS silently on Windows Server 2008:
D:\Setup DMIX=1 ANS=0 /qn
5. How to install components but deselect ANS for Windows Server 2008:
D:\Setup DMIX=1 ANS=0 /qn /liew C:\install.log
The /liew log option provides a log file for the DMIX installation.
NOTE: To install teaming and VLAN support on a system that has adapter base drivers and Intel PROSet for Windows Device Manager installed, type the command line D:\Setup ANS=1.
Windows Server 2008 Server Core
In Windows Server 2008 Server Core, the base driver can be installed using the Plug and Play Utility, PnPUtil.exe. For more information on this utility, see http://technet2.microsoft.com/windowsserver2008/en/library/c265eb4d-f579-42ca-b82a-02130d33db531033.mspx?mfr=true.
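For example, a driver could be staged and installed with PnPUtil as shown below (the path and .inf file name are placeholders for the driver files extracted from the download package):
pnputil -i -a C:\drivers\<driver>.inf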

Using the Adapter

Testing the Adapter

Intel's diagnostic software lets you test the adapter to see if there are problems with the adapter hardware, the cabling, or the network connection.
Tests for Windows
Intel PROSet allows you to run three types of diagnostic tests.
l Connection Test: Verifies network connectivity by pinging the DHCP server, WINS server, and gateway.
l Cable Tests: Provide information about cable properties.
  NOTE: The Cable Test is not supported on:
  l Intel® Ethernet Server Adapter X520-2
  l Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
  l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
  l Intel® Ethernet 10G 2P X520 Adapter
  l Intel® Ethernet 10G 2P X520-k bNDC
  l Intel® Ethernet 10G 4P X520/I350 rNDC
  l Intel® Ethernet 10G 2P X540-t Adapter
  l Intel® Ethernet 10G 4P X540/I350 rNDC
  l Intel® Gigabit 4P I350-t Mezz
  l Intel® Gigabit 4P X520/I350 rNDC
  l Intel® Ethernet Connection I354 1.0 GbE Backplane
  l Intel® Gigabit 4P I350 bNDC
  l Intel® Gigabit 2P I350 LOM
l Hardware Tests: Determines if the adapter is functioning properly.
NOTE: Hardware tests will fail if the adapter is configured for iSCSI Boot.
To access these tests, select the adapter in Windows Device Manager, click the Link tab, and click Diagnostics. A Diagnostics window displays tabs for each type of test. Click the appropriate tab and run the test.
The availability of these tests is hardware and operating system dependent.
DOS Diagnostics
Use the DIAGS test utility to test adapters under DOS.
Linux Diagnostics
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical information. ethtool version 1.6 or later is required for this functionality.
The latest release of ethtool can be found at: http://sourceforge.net/projects/gkernel.
NOTE: ethtool 1.6 only supports a limited set of ethtool options. Support for a more complete ethtool feature set can be enabled by upgrading ethtool to the latest version.
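As a brief example of the ethtool interface (eth0 is a placeholder interface name):
ethtool -i eth0            # show driver name and version
ethtool -S eth0            # display adapter statistics
ethtool -t eth0 offline    # run the driver's self-test diagnostics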
Responder Testing
The Intel adapter can send test messages to another Ethernet adapter on the same network. This testing is available in DOS via the diags.exe utility found in the \DOSUtilities\UserDiag\ directory on the installation CD or downloaded from
Customer Support.

Adapter Teaming

ANS Teaming, a feature of the Intel Advanced Network Services (ANS) component, lets you take advantage of multiple adapters in a system by grouping them together. ANS can use features like fault tolerance and load balancing to increase throughput and reliability.
Teaming functionality is provided through the intermediate driver, ANS. Teaming uses the intermediate driver to group physical adapters into a team that acts as a single virtual adapter. ANS serves as a wrapper around one or more base drivers, providing an interface between the base driver and the network protocol stack. By doing so, the intermediate driver gains control over which packets are sent to which physical interface as well as control over other properties essential to teaming.
There are several teaming modes you can configure ANS adapter teams to use.
Setting Up Adapter Teaming
Before you can set up adapter teaming in Windows*, you must install Intel® PROSet software. For more information on setting up teaming, see the information for your operating system.
Operating Systems Supported
The following links provide information on setting up teaming with your operating system:
l Windows
NOTE: To configure teams in Linux, use Channel Bonding, available in supported Linux kernels. For more information see the channel bonding documentation within the kernel source, located at Docu­mentation/networking/bonding.txt.
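A minimal channel bonding sketch, assuming two Intel ports named eth0 and eth1 and a placeholder IP address (see bonding.txt for the full set of modes and options):
modprobe bonding mode=active-backup miimon=100     # load the bonding driver
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1                          # add both ports to the bond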
Supported Adapters
Teaming options are supported on Intel server adapters. Selected adapters from other manufacturers are also supported. If you are using a Windows-based computer, adapters that appear in Intel PROSet may be included in a team.
NOTE: In order to use adapter teaming, you must have at least one Intel gigabit or 10 gigabit server adapter in your system. Furthermore, all adapters must be linked to the same switch or hub.
Conditions that may prevent you from teaming a device
During team creation or modification, the list of available team types or list of available devices may not include all team types or devices. This may be caused by any of several conditions, including:
l The operating system does not support the desired team type.
l The device does not support the desired team type or does not support teaming at all.
l The devices you want to team together use different driver versions.
l You are trying to team an Intel PRO/100 device with an Intel 10GbE device.
l You can add Intel® Active Management Technology (Intel® AMT) enabled devices to Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), and Adaptive Load Balancing (ALB) teams. All other team types are not supported. The Intel AMT enabled device must be designated as the primary adapter for the team.
l The device's MAC address is overridden by the Locally Administered Address advanced setting.
l Fibre Channel over Ethernet (FCoE) Boot has been enabled on the device.
l The device has “OS Controlled” selected on the Data Center tab.
l The device has a virtual NIC bound to it.
l The device is part of a Microsoft* Load Balancing and Failover (LBFO) team.
Configuration Notes
l Not all team types are available on all operating systems.
l Be sure to use the latest available drivers on all adapters.
l NDIS 6.2 introduced new RSS data structures and interfaces. Due to this, you cannot enable RSS on teams that contain a mix of adapters that support NDIS 6.2 RSS and adapters that do not.
l If you are using an Intel® 10GbE Server Adapter and an Intel® Gigabit adapter in the same machine, the driver for the Gigabit adapter must be updated with the version on the Intel 10GbE CD or respective download package.
l If a team is bound to a Hyper-V virtual NIC, the Primary or Secondary adapter cannot be changed.
l Some advanced features, including hardware offloading, are automatically disabled when non-Intel adapters are team members to assure a common feature set.
l TOE (TCP Offload Engine) enabled devices cannot be added to an ANS team and will not appear in the list of available adapters.
To enable teaming using Broadcom Advanced Control Suite 2:
1. Load base drivers and Broadcom Advanced Control Suite 2 (always use the latest software releases from www.support.dell.com)
2. Select the Broadcom device and go to the Advanced Tab
3. Disable Receive Side Scaling
4. Go to Resource Allocations and select TCP Offload Engine (TOE)
5. Click on Configure and uncheck TCP Offload Engine (TOE) from the NDIS Configuration section
To enable teaming using Broadcom Advanced Control Suite 3:
1. Load base drivers and Broadcom Advanced Control Suite 3 (always use the latest software releases from www.support.dell.com)
2. Select the Broadcom device and uncheck TOE from the Configurations Tab
3. Click on Apply
4. Choose NDIS entry of Broadcom device and disable Receive Side Scaling from Configurations Tab
5. Click on Apply
NOTE: Multivendor teaming is not supported in Windows Server 2008 x64 versions.
l Spanning tree protocol (STP) should be disabled on switch ports connected to team adapters in order to prevent data loss when the primary adapter is returned to service (failback). Activation Delay is disabled by default. Alternatively, an activation delay may be configured on the adapters to prevent data loss when spanning tree is used. Set the Activation Delay on the advanced tab of team properties.
l Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) will be automatically disabled when an adapter is added to a team with non-FCoE/DCB capable adapters.
l ANS teaming of VF devices inside a Windows 2008 R2 guest running on an open source hypervisor is supported.
l An Intel® Active Management Technology (Intel AMT) enabled device can be added to Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), and Adaptive Load Balancing (ALB) teams. All other team types are not supported. The Intel AMT enabled device must be designated as the primary adapter for the team.
l Before creating a team, adding or removing team members, or changing advanced settings of a team member, make sure each team member has been configured similarly. Settings to check include VLANs and QoS Packet Tagging, Jumbo Packets, and the various offloads. These settings are available on the Advanced Settings tab. Pay particular attention when using different adapter models or adapter versions, as adapter capabilities vary.
l If team members implement Intel ANS features differently, failover and team functionality may be affected. To avoid team implementation issues:
  l Create teams that use similar adapter types and models.
  l Reload the team after adding an adapter or changing any Advanced features. One way to reload the team is to select a new preferred primary adapter. Although there will be a temporary loss of network connectivity as the team reconfigures, the team will maintain its network addressing schema.
l ANS allows you to create teams of one adapter. A one-adapter team will not take advantage of teaming features, but it will allow you to "hot-add" another adapter to the team without the loss of network connectivity that occurs when you create a new team.
l Before hot-adding a new member to a team, make sure that new member's link is down. When a port is added to a switch channel before the adapter is hot-added to the ANS team, disconnections will occur because the switch will start forwarding traffic to the port before the new team member is actually configured. The opposite, where the member is first hot-added to the ANS team and then added to the switch channel, is also problematic because ANS will forward traffic to the member before the port is added to the switch channel, and disconnection will occur.
l Intel 10 Gigabit Server Adapters can team with Intel Gigabit adapters and certain server-oriented models from other manufacturers. If you are using a Windows-based computer, adapters that appear in the Intel® PROSet teaming wizard may be included in a team.
l Network ports using OS2BMC should not be teamed with ports that have OS2BMC disabled.
l A reboot is required when any changes are made, such as modifying an advanced parameter setting of the base driver or creating a team or VLAN, on the network port that was used for a RIS install.
l Intel adapters that do not support Intel PROSet may still be included in a team. However, they are restricted in the same way non-Intel adapters are. See Multi-Vendor Teaming for more information.
l If you create a Multi-Vendor Team, you must manually verify that the RSS settings for all adapters in the team are the same.
l The table below provides a summary of support for Multi-Vendor Teaming.
Multi-vendor Teaming using Intel Teaming Driver (iANS/PROSet)
Intel PCI Express teamed with a Broadcom Device with TOE disabled:
  Teaming Mode Supported - AFT: Yes; SFT: Yes; ALB/RLB: Yes; SLA: Yes; LACP: Yes
  Offload Support - LSO: Yes; CSO: Yes; TOE: No
  Other Offload and RSS Support - RSS: No
Intel PCI Express teamed with a Broadcom Device with TOE enabled:
  Teaming Mode Supported - AFT: No; SFT: No; ALB/RLB: No; SLA: No; LACP: No
  Offload Support - LSO: No; CSO: No; TOE: No
  Other Offload and RSS Support - RSS: No
Microsoft* Load Balancing and Failover (LBFO) teams
Intel ANS teaming and VLANs are not compatible with Microsoft's LBFO teams. Intel® PROSet will block a member of an LBFO team from being added to an Intel ANS team or VLAN. You should not add a port that is already part of an Intel ANS team or VLAN to an LBFO team, as this may cause system instability. If you use an ANS team member or VLAN in an LBFO team, perform the following procedure to restore your configuration:
1. Reboot the machine.
2. Remove LBFO team. Even though LBFO team creation failed, after a reboot Server Manager will report that LBFO is Enabled, and the LBFO interface is present in the ‘NIC Teaming’ GUI.
3. Remove the ANS teams and VLANs involved in the LBFO team and recreate them. This step is optional (all bindings are restored when the LBFO team is removed), but it is strongly recommended.
NOTE: If you add an Intel AMT enabled port to an LBFO team, do not set the port to Standby in the LBFO team. If you set the port to Standby you may lose AMT functionality.
Teaming Modes
There are several teaming modes, and they can be grouped into these categories:
Fault Tolerance
Provides network connection redundancy by designating a primary controller and utilizing the remaining controllers as backups. Designed to ensure server availability to the network. When the user-specified primary adapter loses link, the iANS driver will "fail over" the traffic to the available secondary adapter. When the link of the primary adapter resumes, the iANS driver will "fail back" the traffic to the primary adapter. See Primary and Secondary Adapters for more information. The iANS driver uses link-based tolerance and probe packets to detect network connection failures.
l Link-based tolerance - The teaming driver checks the link status of the local network interfaces belonging to the team members. Link-based tolerance provides fail over and fail back for the immediate link failures only.
l Probing - Probing is another mechanism used to maintain the status of the adapters in a fault tolerant team. Probe packets are sent to establish known, minimal traffic between adapters in a team. At each probe interval, each adapter in the team sends a probe packet to other adapters in the team. Probing provides fail over and fail back for immediate link failures as well as external network failures in the single network path of the probe packets between the team members.
Fault Tolerance teams include Adapter Fault Tolerance (AFT) and Switch Fault Tolerance (SFT).
Load Balancing
Provides transmission load balancing by dividing outgoing traffic among all the NICs, with the ability to shift traffic away from any NIC that goes out of service. Receive Load Balancing balances receive traffic.
Load Balancing teams include Adaptive Load Balancing (ALB) teams.
NOTE: If your network is configured to use a VLAN, make sure the load balancing team is configured to use the same VLAN.
Link Aggregation
Combines several physical channels into one logical channel. Link Aggregation is similar to Load Balancing.
Link Aggregation teams include Static Link Aggregation and IEEE 802.3ad: dynamic mode.
IMPORTANT
l For optimal performance, you must disable the Spanning Tree Protocol (STP) on all the switches in the network when using AFT, ALB, or Static Link Aggregation teaming.
l When you create a team, a virtual adapter instance is created. In Windows, the virtual adapter appears in both the Device Manager and Network and Dial-up Connections. Each virtual adapter instance appears as "Intel Advanced Network Services Virtual Adapter." Do not attempt to modify (except to change protocol configuration) or remove these virtual adapter instances using Device Manager or Network and Dial-up Connections. Doing so might result in system anomalies.
l Before creating a team, adding or removing team members, or changing advanced settings of a team member, make sure each team member has been configured similarly. Settings to check include VLANs and QoS Packet Tagging, Jumbo Packets, and the various offloads. These settings are available in Intel PROSet's Advanced tab. Pay particular attention when using different adapter models or adapter versions, as adapter capabilities vary.
If team members implement Advanced Features differently, failover and team functionality will be affected. To avoid team implementation issues:
l Use the latest available drivers on all adapters.
l Create teams that use similar adapter types and models.
l Reload the team after adding an adapter or changing any Advanced Features. One way to reload the team is to select a new preferred primary adapter. Although there will be a temporary loss of network connectivity as the team reconfigures, the team will maintain its network addressing schema.
Primary and Secondary Adapters
Teaming modes that do not require a switch with the same capabilities (AFT, SFT, ALB (with RLB)) use a primary adapter. In all of these modes except RLB, the primary is the only adapter that receives traffic. RLB is enabled by default on an ALB team.
If the primary adapter fails, another adapter will take over its duties. If you are using more than two adapters, and you want a specific adapter to take over if the primary fails, you must specify a secondary adapter. If an Intel AMT enabled device is part of a team, it must be designated as the primary adapter for the team.
There are two types of primary and secondary adapters:
l
Default primary adapter: If you do not specify a preferred primary adapter, the software will choose an adapter of the highest capability (model and speed) to act as the default primary. If a failover occurs, another adapter becomes the primary. Once the problem with the original primary is resolved, the traffic will not automatically restore to the default (original) primary adapter in most modes. The adapter will, however, rejoin the team as a non-primary.
l
Preferred Primary/Secondary adapters: You can specify a preferred adapter in Intel PROSet. Under normal conditions, the Primary adapter handles all traffic. The Secondary adapter will receive fallback traffic if the primary fails. If the Preferred Primary adapter fails, but is later restored to an active status, control is automatically switched back to the Preferred Primary adapter. Specifying primary and secondary adapters adds no benefit to SLA and IEEE 802.3ad dynamic teams, but doing so forces the team to use the primary adapter's MAC address.
To specify a preferred primary or secondary adapter in Windows
1. In the Team Properties dialog box's Settings tab, click Modify Team.
2. On the Adapters tab, select an adapter.
3. Click Set Primary or Set Secondary.
NOTE: You must specify a primary adapter before you can specify a secondary adapter.
4. Click OK.
The adapter's preferred setting appears in the Priority column on Intel PROSet's Team Configuration tab. A "1" indicates a preferred primary adapter, and a "2" indicates a preferred secondary adapter.
Failover and Failback
When a link fails, either because of port or cable failure, team types that provide fault tolerance will continue to send and receive traffic. Failover is the initial transfer of traffic from the failed link to a good link. Failback occurs when the original adapter regains link. You can use the Activation Delay setting (located on the Advanced tab of the team's properties in Device Manager) to specify how long the failover adapter waits before becoming active. If you don't want your team to failback when the original adapter gets link back, you can set the Allow Failback setting to disabled (located on the Advanced tab of the team's properties in Device Manager).
Adapter Fault Tolerance (AFT)
Adapter Fault Tolerance (AFT) provides automatic recovery from a link failure caused by a failure in an adapter, cable, switch, or port by redistributing the traffic load across a backup adapter.
Failures are detected automatically, and traffic rerouting takes place as soon as the failure is detected. The goal of AFT is to ensure that load redistribution takes place fast enough to prevent user sessions from being disconnected. AFT supports two to eight adapters per team. Only one active team member transmits and receives traffic. If this primary connection (cable, adapter, or port) fails, a secondary, or backup, adapter takes over. After a failover, if the connection to the user-specified primary adapter is restored, control passes automatically back to that primary adapter. For more information, see Primary and Secondary Adapters.
AFT is the default mode when a team is created. This mode does not provide load balancing.
NOTES
l
AFT teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned off for the switch port connected to the NIC or LOM on the server.
l
All members of an AFT team must be connected to the same subnet.
Switch Fault Tolerance (SFT)
Switch Fault Tolerance (SFT) supports only two NICs in a team connected to two different switches. In SFT, one adapter is the primary adapter and one adapter is the secondary adapter. During normal operation, the secondary adapter is in standby mode. In standby, the adapter is inactive and waiting for failover to occur. It does not transmit or receive network traffic. If the primary adapter loses connectivity, the secondary adapter automatically takes over. When SFT teams are created, the Activation Delay is automatically set to 60 seconds.
In SFT mode, the two adapters creating the team can operate at different speeds.
NOTE: SFT teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned on.
Configuration Monitoring
You can set up monitoring between an SFT team and up to five IP addresses. This allows you to detect link failure beyond the switch. You can ensure connection availability for several clients that you consider critical. If the connection between the primary adapter and all of the monitored IP addresses is lost, the team will failover to the secondary adapter.
Adaptive/Receive Load Balancing (ALB/RLB)
Adaptive Load Balancing (ALB) is a method for dynamic distribution of data traffic load among multiple physical channels. The purpose of ALB is to improve overall bandwidth and end station performance. In ALB, multiple links are provided from the server to the switch, and the intermediate driver running on the server performs the load balancing function. The ALB architecture utilizes knowledge of Layer 3 information to achieve optimum distribution of the server transmission load.
ALB is implemented by assigning one of the physical channels as Primary and all other physical channels as Secondary. Packets leaving the server can use any one of the physical channels, but incoming packets can only use the Primary Channel. With Receive Load Balancing (RLB) enabled, incoming IP traffic is balanced as well. The intermediate driver analyzes the send and transmit loading on each adapter and balances the rate across the adapters based on destination address. Adapter teams configured for ALB and RLB also provide the benefits of fault tolerance.
NOTES:
l
ALB teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned off for the switch port connected to the network adapter in the server.
l
ALB does not balance traffic when protocols such as NetBEUI and IPX* are used.
l
You may create an ALB team with mixed speed adapters. The load is balanced according to the adapter's capabilities and bandwidth of the channel.
l
All members of ALB and RLB teams must be connected to the same subnet.
Virtual Machine Load Balancing
Virtual Machine Load Balancing (VMLB) provides transmit and receive traffic load balancing across Virtual Machines bound to the team interface, as well as fault tolerance in the event of switch port, cable, or adapter failure.
The driver analyzes the transmit and receive load on each member adapter and balances the traffic across member adapters. In a VMLB team, each Virtual Machine is associated with one team member for its TX and RX traffic.
If only one virtual NIC is bound to the team, or if Hyper-V is removed, then the VMLB team will act like an AFT team.
NOTES:
l
VMLB does not load balance non-routed protocols such as NetBEUI and some IPX* traffic.
l
VMLB supports from two to eight adapter ports per team.
l
You can create a VMLB team with mixed speed adapters. The load is balanced according to the lowest common denominator of adapter capabilities and the bandwidth of the channel.
l
An Intel AMT enabled adapter cannot be used in a VMLB team.
Static Link Aggregation
Static Link Aggregation (SLA) is very similar to ALB, taking several physical channels and combining them into a single logical channel.
This mode works with:
l
Cisco EtherChannel capable switches with channeling mode set to "on"
l
Intel switches capable of Link Aggregation
l
Other switches capable of static 802.3ad
The Intel teaming driver supports Static Link Aggregation for:
l
Fast EtherChannel (FEC): FEC is a trunking technology developed mainly to aggregate bandwidth between switches working in Fast Ethernet. Multiple switch ports can be grouped together to provide extra bandwidth. These aggregated ports together are called Fast EtherChannel. Switch software treats the grouped ports as a single logical port. An end node, such as a high-speed end server, can be connected to the switch using FEC.
FEC link aggregation provides load balancing in a way which is very similar to ALB, including use of the same algorithm in the transmit flow. Receive load balancing is a function of the switch.
The transmission speed will never exceed the adapter base speed to any single address (per specification). Teams must match the capability of the switch. Adapter teams configured for static Link Aggregation also provide the benefits of fault tolerance and load balancing. You do not need to set a primary adapter in this mode.
l
Gigabit EtherChannel (GEC): GEC link aggregation is essentially the same as FEC link aggregation.
NOTES:
l
All adapters in a Static Link Aggregation team must run at the same speed and must be connected to a Static Link Aggregation-capable switch. If the speed capabilities of adapters in a Static Link Aggregation team are different, the speed of the team is dependent on the switch.
l
Static Link Aggregation teaming requires that the switch be set up for Static Link Aggregation teaming and that spanning tree protocol is turned off.
l
An Intel AMT enabled adapter cannot be used in an SLA team.
IEEE 802.3ad: Dynamic Link Aggregation
IEEE 802.3ad is the IEEE standard for link aggregation. Teams can contain two to eight adapters. You must use 802.3ad switches (in dynamic mode, aggregation can go across switches). Adapter teams configured for IEEE 802.3ad also provide the benefits of fault tolerance and load balancing. Under 802.3ad, all protocols can be load balanced.
Dynamic mode supports multiple aggregators. Aggregators are formed by port speed connected to a switch. For example, a team can contain adapters running at 1 Gbps and 10 Gbps, but two aggregators will be formed, one for each speed. Also, if a team contains 1 Gbps ports connected to one switch, and a combination of 1 Gbps and 10 Gbps ports connected to a second switch, three aggregators would be formed: one containing all the ports connected to the first switch, one containing the 1 Gbps ports connected to the second switch, and the third containing the 10 Gbps ports connected to the second switch.
NOTES:
l
IEEE 802.3ad teaming requires that the switch be set up for IEEE 802.3ad (link aggregation) teaming and that spanning tree protocol is turned off.
l
Once you choose an aggregator, it remains in force until all adapters in that aggregation team lose link.
l
In some switches, copper and fiber adapters cannot belong to the same aggregator in an IEEE 802.3ad configuration. If there are copper and fiber adapters installed in a system, the switch might configure the copper adapters in one aggregator and the fiber-based adapters in another. If you experience this behavior, for best performance you should use either only copper-based or only fiber-based adapters in a system.
l
An Intel AMT enabled adapter cannot be used in a DLA team.
Before you begin
l
Verify that the switch fully supports the IEEE 802.3ad standard.
l
Check your switch documentation for port dependencies. Some switches require pairing to start on a primary port.
l
Check your speed and duplex settings to ensure the adapter and switch are running at full duplex, either forced or set to auto-negotiate. Both the adapter and the switch must have the same speed and duplex configuration. The full-duplex requirement is part of the IEEE 802.3ad specification: http://standards.ieee.org/. If needed, change your speed or duplex setting before you link the adapter to the switch. Although you can change speed and duplex settings after the team is created, Intel recommends you disconnect the cables until settings are in effect. In some cases, switches or servers might not appropriately recognize modified speed or duplex settings if settings are changed when there is an active link to the network.
l
If you are configuring a VLAN, check your switch documentation for VLAN compatibility notes. Not all switches support simultaneous dynamic 802.3ad teams and VLANs. If you do choose to set up VLANs, configure teaming and VLAN settings on the adapter before you link the adapter to the switch. Setting up VLANs after the switch has created an active aggregator affects VLAN functionality.
Multi-Vendor Teaming
Multi-Vendor Teaming (MVT) allows teaming with a combination of Intel and non-Intel adapters. This feature is currently available under Windows Server 2008 and Windows Server 2008 R2.
NOTE: MVT is not supported on Windows Server 2008 x64.
If you are using a Windows-based computer, adapters that appear in the Intel PROSet teaming wizard can be included in a team.
MVT Design Considerations
l
In order to activate MVT, you must have at least one Intel adapter or integrated connection in the team, which must be designated as the primary adapter.
l
A multi-vendor team can be created for any team type.
l
All members in an MVT must operate on a common feature set (lowest common denominator).
l
For MVT teams, manually verify that the frame setting for the non-Intel adapter is the same as the frame settings for the Intel adapters.
l
If a non-Intel adapter is added to a team, its RSS settings must match the Intel adapters in the team.

Virtual LANs

Overview
NOTE: Windows* users must install Intel® PROSet for Windows Device Manager and Advanced Networking Services in order to use VLANs.
The term VLAN (Virtual Local Area Network) refers to a collection of devices that communicate as if they were on the same physical LAN. Any set of ports (including all ports on the switch) can be considered a VLAN. LAN segments are not restricted by the hardware that physically connects them.
VLANs offer the ability to group computers together into logical workgroups. This can simplify network administration when connecting clients to servers that are geographically dispersed across the building, campus, or enterprise network.
Typically, VLANs consist of co-workers within the same department but in different locations, groups of users running the same network protocol, or a cross-functional team working on a joint project.
By using VLANs on your network, you can:
l
Improve network performance
l
Limit broadcast storms
l
Improve LAN configuration updates (adds, moves, and changes)
l
Minimize security problems
l
Ease your management task
Supported Operating Systems
IEEE VLANs are supported in the following operating systems. Configuration details are contained in the following links:
l Windows Server 2012
l Windows Server 2008
l Linux
NOTE: Native VLANs are now available in supported Linux kernels.
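For reference, on a Linux system with VLAN support in the kernel, a tagged VLAN interface is typically created with the iproute2 utility. This is a general sketch; the interface name eth0 and VLAN ID 100 below are placeholders, not values taken from this guide, and the procedure supported by the Intel Linux drivers is described in the Linux configuration details linked above.
ip link add link eth0 name eth0.100 type vlan id 100
ip link set dev eth0.100 up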
Other Implementation Considerations
l
To set up IEEE VLAN membership (multiple VLANs), the adapter must be attached to a switch with IEEE 802.1Q VLAN capability.
l
VLANs can co-exist with teaming (if the adapter supports both). If you do this, the team must be defined first, then you can set up your VLAN.
l
The Intel PRO/100 VE and VM Desktop Adapters and Network Connections can be used in a switch based VLAN but do not support IEEE Tagging.
l
You can set up only one untagged VLAN per adapter or team. You must have at least one tagged VLAN before you can set up an untagged VLAN.
IMPORTANT: When using IEEE 802.1Q VLANs, VLAN ID settings must match between the switch and those adapters using the VLANs.
NOTE: Intel ANS VLANs are not compatible with Microsoft's Load Balancing and Failover (LBFO) teams. Intel® PROSet will block a member of an LBFO team from being added to an Intel ANS VLAN. You should not add a port that is already part of an Intel ANS VLAN to an LBFO team, as this may cause system instability.
Configuring VLANs in Microsoft* Windows*
In Microsoft* Windows*, you must use Intel® PROSet to set up and configure VLANs. For more information, select Intel PROSet in the Table of Contents (left pane) of this window.
NOTES:
l
If you change a setting under the Advanced tab for one VLAN, it changes the settings for all VLANs using that port.
l
In most environments, a maximum of 64 VLANs per network port or team are supported by Intel PROSet.
l
ANS VLANs are not supported on adapters and teams that have VMQ enabled. However, VLAN filtering with VMQ is supported via the Microsoft Hyper-V VLAN interface. For more information see Microsoft Hyper-V virtual NICs on teams and VLANs.
l
You can have different VLAN tags on a child partition and its parent. Those settings are separate from one another, and can be different or the same. The only instance where the VLAN tag on the parent and child MUST be the same is if you want the parent and child partitions to be able to communicate with each other through that VLAN. For more information see Microsoft Hyper-V virtual NICs on teams and VLANs.

Advanced Features

Jumbo Frames
Jumbo Frames are Ethernet frames that are larger than 1518 bytes. You can use Jumbo Frames to reduce server CPU utilization and increase throughput. However, additional latency may be introduced.
NOTES:
l
Jumbo Frames are supported at 1000 Mbps and higher. Using Jumbo Frames at 10 or 100 Mbps is not supported and may result in poor performance or loss of link.
l
End-to-end network hardware must support this capability; otherwise, packets will be dropped.
l
Intel adapters that support Jumbo Frames have a frame size limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
Jumbo Frames can be implemented simultaneously with VLANs and teaming.
NOTE: If an adapter that has Jumbo Frames enabled is added to an existing team that has Jumbo Frames disabled, the new adapter will operate with Jumbo Frames disabled. The new adapter's Jumbo Frames setting in Intel PROSet will not change, but it will assume the Jumbo Frames setting of the other adapters in the team.
To configure Jumbo Frames at the switch, consult your network administrator or switch user's guide.
Restrictions:
l
Jumbo Frames are not supported in multi-vendor team configurations.
l
Supported protocols are limited to IP (TCP, UDP).
l
Jumbo Frames require compatible switch connections that forward Jumbo Frames. Contact your switch vendor for more information.
l
The Jumbo Frame setting inside a virtual machine must be the same, or lower than, the setting on the physical port.
l
When standard sized Ethernet frames (64 to 1518 bytes) are used, there is no benefit to configuring Jumbo Frames.
l
The Jumbo Frames setting on the switch must be set to at least 8 bytes larger than the adapter setting for Microsoft* Windows* operating systems, and at least 22 bytes larger for all other operating systems.
For information on configuring Jumbo Frames in Windows, see the Intel PROSet for Windows Device Manager online help.
For information on configuring Jumbo Frames in Linux*, see the Linux Driver for the Intel Network Adapters.
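As a point of reference, on most Linux distributions the Jumbo Frame size is controlled through the interface MTU, which can be raised with the iproute2 utility. The interface name eth0 and the 9000-byte MTU below are illustrative values only; consult the Linux driver documentation above for the limits that apply to your adapter.
ip link set dev eth0 mtu 9000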
Quality of Service
Quality of Service (QoS) allows the adapter to send and receive IEEE 802.3ac tagged frames. 802.3ac tagged frames include 802.1p priority-tagged frames and 802.1Q VLAN-tagged frames. In order to implement QoS, the adapter must be connected to a switch that supports and is configured for QoS. Priority-tagged frames allow programs that deal with real-time events to make the most efficient use of network bandwidth. High priority packets are processed before lower priority packets.
To implement QoS, the adapter must be connected to a switch that supports and is configured for 802.1p QoS.
QoS Tagging is enabled and disabled in the Advanced tab of Intel PROSet for Windows Device Manager.
Once QoS is enabled in Intel PROSet, you can specify levels of priority based on IEEE 802.1p/802.1Q frame tagging.
Supported Operating Systems
l
Windows Server 2012
l
Windows Server 2008
l
RHEL 6.5 (Intel® 64)
l
SLES 11 SP3 (Intel® 64 only)
Saving and Restoring an Adapter's Configuration Settings
The Save and Restore Command Line Tool is a VBScript (SavResDX.vbs) that allows you to copy the current adapter and team settings into a standalone file (such as on a USB drive) as a backup measure. In the event of a hard drive failure, you can reinstate most of your former settings.
The system on which you restore network configuration settings must have the same configuration as the one on which the save was performed.
NOTES:
l
Only adapter settings are saved (these include ANS teaming and VLANs). The adapter's driver is not saved.
l
Restore using the script only once. Restoring multiple times may result in unstable configuration.
l
The Restore operation requires the same OS as when the configuration was Saved.
Command Line Syntax
cscript SavResDX.vbs save|restore [filename] [/bdf]
SavResDX.vbs has the following command line options:
save
Saves adapter and team settings that have been changed from the default settings. When you restore with the resulting file, any settings not contained in the file are assumed to be the default.
restore
Restores the settings.
filename
The file to save settings to or restore settings from. If no filename is specified, the script defaults to WmiConf.txt.
NOTE: The static IP address and WINS configuration are saved to separate files (StaticIP.txt and WINS.txt). You cannot choose the path or names for these files. If you wish to restore these settings, the files must be in the same directory as the SavResDX.vbs script.
/bdf
If you specify /bdf during a restore, the script attempts to restore the configuration based on the PCI Bus:Device:Function:Segment values of the saved configuration. If you removed, added, or moved a NIC to a different slot, this may result in the script applying the saved settings to a different device.
NOTES:
l
If the restore system is not identical to the saved system, the script may not restore any settings when the /bdf option is specified.
l
Virtual Function devices do not support the /bdf option.
Examples
Save Example
To save the adapter settings to a file on a removable media device, do the following.
1. Open a Windows Command Prompt.
2. Navigate to the directory where SavResDX.vbs is located (generally c:\Program Files\Intel\DMIX).
3. Type the following:
cscript SavResDX.vbs save e:\settings.txt
Restore Example
To restore the adapter settings from a file on removable media, do the following:
1. Open a Windows Command Prompt.
2. Navigate to the directory where SavResDX.vbs is located (generally c:\Program Files\Intel\DMIX).
3. Type the following:
cscript SavResDX.vbs restore e:\settings.txt
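If you want the script to match settings to devices by PCI location, as described for the /bdf option above, append the switch to the restore command. This example assumes the same removable media path used above:
cscript SavResDX.vbs restore e:\settings.txt /bdf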
System Management Architecture for Server Hardware (SMASH): Saving and Restoring an Adapter's Configuration Settings
The System Management Architecture for Server Hardware (SMASH), version 2.0, supports a suite of specifications that include architectural semantics, industry standard protocols, and profiles to unify the management of the data center. SMASH provides a feature-rich system management environment, allowing you to manage hardware through the CIM standard interface.
The SMASH Provider exposes information about network adapters and allows you to update firmware through the CIM Object Manager (CIMOM). A client application, running a layer above CIMOM, can query the standard CIM interface for enumerating network adapters and executing modification methods. SMASH allows the client application to:
l
Discover, configure and display network adapter information
l
Display PCI structure organization
l
Display physical assets of hardware
l
Display software and firmware version information
l
Display boot configuration settings
l
View progress and control software updates
l
View and manage software update results

Remote Wake-Up

About Remote Wake-up
The ability to remotely wake servers is an important development in server management. This feature has evolved over the last few years from a simple remote power-on capability to a complex system interacting with a variety of device and operating system power states.
Except where specifically stated otherwise in the following list of supported adapters, WOL is supported only from S5.
Supported Adapters
l
Intel® PRO/1000 PT Server Adapter
l
Intel® PRO/1000 PT Dual Port Server Adapter
l
Intel® PRO/1000 PF Server Adapter
l
Intel® Gigabit ET Quad Port Mezzanine Card (port A only)
l
Intel® Gigabit ET Dual Port Server Adapter (port A only)
l
Intel® Gigabit ET Quad Port Server Adapter (port A only)
l
Intel® Gigabit 2P I350-t Adapter (port A only)
l
Intel® Gigabit 4P I350-t Adapter (port A only)
l
Intel® Gigabit 4P I350-t rNDC
l
Intel® Gigabit 4P X540/I350 rNDC
l
Intel® Gigabit 4P X520/I350 rNDC
l
Intel® Gigabit 4P I350-t Mezz
l
Intel® Gigabit 2P I350-t LOM
l
Intel® Ethernet 10G 4P X540/I350 rNDC
l
Intel® Ethernet 10G 4P X520/I350 rNDC
l
Intel® Ethernet 10G 2P X520-k bNDC
l
Intel® Ethernet Connection I354 1.0 GbE Backplane
l
Intel® Gigabit 4P I350 bNDC
l
Intel® Gigabit 2P I350 LOM
NOTE: For 82599, I350, and I354-based adapters, Wake on LAN is enabled through the uEFI environment. To do this:
1. Go to System Setup.
2. Choose a port and go to configuration.
3. Specify Wake on LAN.
NOTE: Not all systems support every wake setting. There may be BIOS or operating system settings that need to be enabled for your system to wake up. In particular, this is true for Wake from S5 (also referred to as Wake from power off).
Wake on Magic Packet
In early implementations of Remote Wake-up, the computer could be started from a power-off state by sending a Magic Packet. A Magic Packet is an Ethernet packet that contains an adapter's MAC address repeated 16 times in the data field. When an adapter receives a Magic Packet containing its own MAC address, it activates the computer's power. This enables network administrators to perform off-hours maintenance at remote locations without sending a technician out.
This early implementation did not require an OS that was aware of remote wake-up. However, it did require a computer that was equipped with a standby power supply and had the necessary circuitry to allow the remote power control. These computers were typically equipped with a feature named Advanced Power Management (APM). APM provided BIOS-based power control.
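The sketch below illustrates how a Magic Packet is commonly constructed and broadcast. It is provided for illustration only and is not part of the Intel software; the six 0xFF synchronization bytes and UDP port 9 follow the widely used Magic Packet convention rather than anything stated in this guide, and the MAC address shown is a placeholder.
import socket

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    # Convert "00:1B:21:3A:4C:5D" style notation into 6 raw bytes.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    # Payload: 6 synchronization bytes of 0xFF, then the MAC address repeated 16 times.
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

send_magic_packet("00:1B:21:3A:4C:5D")  # placeholder MAC address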
APM Power States
Power State Description
Ready On and fully operational
Stand-by CPU is idle, and no device activity has occurred recently
Suspended System is at the lowest level of power consumption available that preserves data
Hibernation Power is off, but system state is preserved
Off Power off
Advanced Configuration and Power Interface (ACPI)
Newer computers feature ACPI, which extends the APM concept to enable the OS to selectively control power. ACPI supports a variety of power states. Each state represents a different level of power, from fully powered up to completely powered down, with partial levels of power in each intermediate state.
ACPI Power States
Power State Description
S0 On and fully operational
S1 System is in low-power mode (sleep mode). The CPU clock is stopped, but RAM is powered on and being refreshed.
S2 Similar to S1, but power is removed from the CPU.
S3 Suspend to RAM (standby mode). Most components are shut down. RAM remains operational.
S4 Suspend to disk (hibernate mode). The memory contents are swapped to the disk drive and then reloaded into RAM when the system is awakened.
S5 Power off
Some newer machines do not support being woken up from a powered-off state.
Remote wake-up can be initiated by a variety of user selectable packet types and is not limited to the Magic Packet format. For more information about supported packet types, see the operating system settings section.
See the System Documentation for information on supported power states.
NOTE: Dell supports all modes, S0 through S5. However, Wake on LAN is only supported in the S4 mode.
Wake-Up Address Patterns
The wake-up capability of Intel adapters is based on patterns sent by the OS. You can configure the driver to the following settings using Intel PROSet for Windows. For Linux*, WoL is provided through the ethtool* utility. For more information on ethtool, see the following Web site: http://sourceforge.net/projects/gkernel.
l
Wake on Directed Packet - accepts only patterns containing the adapter's Ethernet address in the Ethernet header or containing the IP address assigned to the adapter in the IP header.
l
Wake on Magic Packet - accepts only patterns containing 16 consecutive repetitions of the adapter's MAC address.
l
Wake on Directed Packet and Wake on Magic Packet - accepts the patterns of both directed packets and magic packets.
Choosing "Wake on directed packet" will also allow the adapter to accept patterns of the Address Resolution Protocol (ARP) querying the IP address assigned to the adapter. If multiple IP addresses are assigned to an adapter, the oper­ating system may request to wake up on ARP patterns querying any of the assigned addresses. However, the adapter will only awaken in response to ARP packets querying the first IP address in the list, usually the first address assigned to the adapter.
NOTE: The Intel PRO/1000 PT Dual Port Server Adapter, PRO/1000 PT Server Adapter, and PRO/1000 PF Server Adapter do not support Directed Packets.
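On Linux, the wake-up behavior described above is configured with the ethtool utility rather than Intel PROSet. The commands below are typical examples only and assume an interface named eth0:
ethtool eth0
(The "Supports Wake-on" and "Wake-on" lines of the output show the adapter's wake-up capabilities and current setting.)
ethtool -s eth0 wol g
(Enables wake on Magic Packet only; use "wol d" to disable wake-up.)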
Physical Installation Issues
Slot
Some motherboards will only support remote wake-up (or remote wake-up from S5 state) in a particular slot. See the documentation that came with your system for details on remote wake-up support.
Power
Newer Intel PRO adapters are 3.3 volt and some are 12 volt. They are keyed to fit either type of slot.
The 3.3 volt standby supply must be capable of supplying at least 0.2 amps for each Intel PRO adapter installed. Turning off the remote wake-up capability on the adapter using the BootUtil utility reduces the power draw to around 50 milliamps (.05 amps) per adapter.
Operating System Settings
Microsoft Windows Products
Windows Server 2008 and later versions are ACPI-capable. These operating systems do not support remote wake-up from a powered off state (S5), only from standby. When shutting down the system, they shut down ACPI devices including the Intel PRO adapters. This disarms the adapters' remote wake-up capability. However, in some ACPI-capable computers, the BIOS may have a setting that allows you to override the OS and wake from an S5 state anyway. If there is no support for wake from S5 state in your BIOS settings, you are limited to wake from standby when using these operating systems in ACPI computers.
The Power Management tab in Intel PROSet includes a setting called Wake on Magic Packet from power off state for some adapters. To explicitly allow wake-up with a Magic Packet from shutdown under APM power management mode, check this box to enable this setting. See Intel PROSet help for more details.
In ACPI-capable versions of Windows, the Intel PROSet advanced settings include a setting called Wake on Settings. This setting controls the type of packets that wake the system from standby. See Intel PROSet help for more details.
In ACPI computers running ACPI-aware operating systems with only base drivers installed, make sure the wake on standby option is enabled. To enable wake on standby, open the Device Manager, then navigate to the Power Management tab setting. Check the setting Allow this device to bring the computer out of standby.
NOTE: To use the Wake on LAN features on the Intel® PRO/1000 PT Dual Port Server Adapter, Intel® PRO/1000 PT Server Adapter or Intel® PRO/1000 PF Server Adapter, WOL must first be enabled in the EEPROM using BootUtil.
Other Operating Systems
Remote Wake-Up is also supported in Linux.
Using Power Management Features
You must use Intel PROSet to configure the adapter's power management settings. For more information, see Power Management Settings for Windows Drivers.

Optimizing Performance

You can configure Intel network adapter advanced settings to help optimize server performance.
The examples below provide guidance for three server usage models:
l
Optimized for quick response and low latency – useful for video, audio, and High Performance Computing
Cluster (HPCC) servers
l
Optimized for throughput – useful for data backup/retrieval and file servers
l
Optimized for CPU utilization – useful for application, web, mail, and database servers
NOTES:
l
The recommendations below are guidelines and should be treated as such. Additional factors such as installed applications, bus type, network topology, and operating system also affect system performance.
l
These adjustments should be performed by a highly skilled network administrator. They are not guaranteed to improve performance. Not all settings shown here may be available through your BIOS, operating system or network driver configuration. Linux users, see the README file in the Linux driver package for Linux-specific performance enhancement details.
l
When using performance test software, refer to the documentation of the application for optimal results.
1. Install the adapter in a PCI Express bus slot.
2. Use the proper fiber cabling for the adapter you have.
3. Enable Jumbo Packets, if your other network components can also be configured for it.
4. Increase the number of TCP and Socket resources from the default value. For Windows based systems, we have not identified system parameters other than the TCP Window Size which significantly impact performance.
5. Increase the allocation size of Driver Resources (transmit/receive buffers). However, most TCP traffic patterns work best with the transmit buffer set to its default value, and the receive buffer set to its minimum value.
For specific information on any advanced settings, see Advanced Settings for Windows* Drivers or Linux* Driver for the Intel® Network Server Adapters.
Optimized for quick response and low latency
l
Minimize or disable Interrupt Moderation Rate.
l
Disable Offload TCP Segmentation.
l
Disable Jumbo Packets.
l
Increase Transmit Descriptors.
l
Increase Receive Descriptors.
l
Increase RSS Queues.
Optimized for throughput
l
Enable Jumbo Packets.
l
Increase Transmit Descriptors.
l
Increase Receive Descriptors.
l
On systems that support NUMA, set the Preferred NUMA Node on each adapter to achieve better scaling across NUMA nodes.
Optimized for CPU utilization
l
Maximize Interrupt Moderation Rate.
l
Keep the default setting for the number of Receive Descriptors; avoid setting large numbers of Receive Descriptors.
l
Decrease RSS Queues.
l
In Hyper-V environments, decrease the Max number of RSS CPUs.
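On Linux, several of the settings listed above have ethtool equivalents. The commands below are examples only; they assume an interface named eth0 and an adapter that supports the values shown. See the README in the Linux driver package for the ranges supported by your adapter.
ethtool -G eth0 rx 4096 tx 4096
(Increases the receive and transmit descriptor ring sizes.)
ethtool -C eth0 rx-usecs 0
(Effectively disables interrupt moderation for the lowest latency.)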

Windows Drivers

Installing Windows* Drivers

Installing the Drivers
The drivers can be installed using the Found New Hardware wizard.
Installing Drivers on Windows Server Using the Found New Hardware Wizard
NOTES:
l
When Windows Server detects a new adapter, it attempts to find an acceptable Windows driver already installed on the computer. If the operating system finds a driver, it installs this driver without any user intervention. However, this Windows driver may not be the most current one and may provide only basic functionality. Update the driver to make sure you have access to all the base driver's features.
l
The Roll Back Driver feature of Windows Server 2008 or Windows Server 2012 (available on the Adapter Properties dialog's Driver tab) will not work correctly if an adapter team or Intel PROSet are present on the system. Before you use the Roll Back Driver feature, use Intel PROSet to remove any teams, then remove Intel PROSet using Programs and Features from the Control Panel of Windows.
1. Install the adapter in the computer and turn on the computer.
2. When Windows discovers the new adapter, the Found New Hardware Wizard starts.
3. Extract the Dell Driver Update Package to a specified path.
4. Open a DOS command box and go to the specified path.
5. Type "setup -a" at the command prompt to extract the drivers.
6. Type in the directory path where you want the files saved. The default path is c:\Program Files\Intel\Drivers.
7. The Wizard Welcome screen asks whether you want to connect to Windows Update to search for software. Click No, not this time. Click Next.
8. Click Install from a list or specific location, then click Next.
9. On the next screen, type in the directory path where you saved the driver files and click Next.
10. Windows searches for a driver. When the search is complete, a message indicates a driver was found.
11. Click Next. The necessary files are copied to your computer. The wizard displays a Completed message.
12. Click Finish.
If Windows does not detect the adapter, see Troubleshooting.
Installing Drivers Using the Windows Command Line
You can also use the Windows command line to install the drivers. The driver install utility (setup.exe) allows unattended install of the drivers.
For complete information, see Command Line Installation for Base Drivers and Intel® PROSet.
Installing Additional Adapters
When you use the Found New Hardware Wizard to install drivers, Windows installs the driver for the first adapter and then automatically installs drivers for additional adapters.
There are no special instructions for installing drivers of non-Intel adapters (e.g., for multi-vendor teaming). Follow the instructions that came with that adapter.
Updating the Drivers
NOTE: If you update the adapter driver and are using Intel PROSet, you should also update Intel PROSet. To update the application, double-click setup.exe and make sure the option for Intel® PROSet for Windows Device Manager is checked.
Drivers can be updated using the Update Device Driver Wizard.
Updating Windows Server Using the Device Manager
1. Extract the Dell Driver Update Package to a specified path.
2. From the Control Panel, double-click the System icon and click Device Manager.
3. Double-click Network Adapters and right-click on the Intel adapter listing to display its menu.
4. Click the Update Driver... menu option. The Update Driver Software page appears.
5. Select Browse my computer for driver software.
6. Type in the directory path to the specified drive or browse to the location.
7. Select Next.
8. After the system finds and installs the file, click Close.
Removing the Drivers
You should uninstall the Intel driver if you are permanently removing all Intel adapters, or if you need to perform a clean installation of new drivers. This procedure removes the driver for all Intel adapters that use it as well as Intel PROSet and Advanced Networking Services.
WARNING: Removing an adapter driver results in a disruption of all network traffic through that adapter.
NOTE: Before you remove the driver, make sure the adapter is not a member of a team. If the adapter is a member of a team, remove the adapter from the team in Intel PROSet.
To uninstall the drivers and software from Windows Server, select Intel(R) Network Connections from Programs and Features in the Control Panel. To uninstall the adapter drivers, double-click on it or click the Remove button.
NOTE: The Device Manager should not be used to uninstall drivers. If the Device Manager is used to uninstall drivers, base drivers will not reinstall using the Modify option through Add/Remove Programs in the Control Panel.
Temporarily Disabling an Adapter
If you are testing your system to determine network faults, particularly in a multi-adapter environment, it is recommended that you temporarily disable the adapter.
1. From the Control Panel, double-click the System icon, click the Hardware tab, and click Device Manager.
2. Right-click the icon of the adapter you want to disable, then click Disable.
3. Click Yes on the confirmation dialog box.
To enable the adapter, right-click its icon, then click Enable.
NOTE: You can also disable an adapter by right-clicking its icon in the Network Connections control panel and selecting Disable.
Replacing an Adapter
After installing an adapter in a specific slot, Windows treats any other adapter of the same type as a new adapter. Also, if you remove the installed adapter and insert it into a different slot, Windows recognizes it as a new adapter. Make sure that you follow the instructions below carefully.
1. Open Intel PROSet.
2. If the adapter is part of a team, remove the adapter from the team.
3. Shut down the server and unplug the power cable.
4. Disconnect the network cable from the adapter.
5. Open the case and remove the adapter.
6. Insert the replacement adapter. (Use the same slot, otherwise Windows assumes that there is a new adapter.)
7. Reconnect the network cable.
8. Close the case, reattach the power cable, and power-up the server.
9. Open Intel PROSet and check to see that the adapter is available.
10. If the former adapter was part of a team, follow the instructions in Configuring ANS Teams to add the new adapter to the team.
11. If the former adapter was tagged with a VLAN, follow the instructions in Creating IEEE VLANs to tag the new adapter.
Removing an Adapter
Before physically removing an adapter from the system, be sure to complete these steps:
1. Use Intel PROSet to remove the adapter from any team or VLAN.
2. Uninstall the adapter drivers.
After you have completed these steps, power down the system, unplug the power cable and remove the adapter.
Using Advanced Features
You must use Intel PROSet to configure advanced features such as teaming or VLANs. Settings can be configured under the Intel PROSet for Windows Device Manager's Advanced tab. Some settings can also be configured using the Device Manager's adapter properties dialog box.

Using Intel® PROSet for Windows* Device Manager

Overview
Intel® PROSet for Windows* Device Manager is an extension to the Windows Device Manager. When you install the Intel PROSet software, additional tabs are automatically added to the supported Intel adapters in Device Manager. These features allow you to test and configure Intel wired network adapters. You can install Intel PROSet on computers running Microsoft Windows Server 2008 or later.
Tips for Intel PROSet Users
If you have used previous versions of Intel PROSet, you should be aware of the following changes with Intel PROSet for Windows Device Manager:
l
There is no system tray icon.
l
The configuration utility is not accessible from Control Panels or the Start menu.
l
All Intel PROSet features are now accessed from Device Manager. To access features, simply open the Device Manager and double-click the Intel adapter you would like to configure.
Installing Intel PROSet for Windows Device Manager
Intel PROSet for Windows Device Manager is installed from the Product CD with the same process used to install drivers.
NOTES:
l
You must have administrator rights to install or use Intel PROSet for Windows Device Manager.
l
Upgrading PROSet for Windows Device Manager may take a few minutes.
1. On the autorun, click Install Base Drivers and Software.
NOTE: You can also run setup.exe from the installation CD or files downloaded from Customer Support.
2. Proceed with the installation wizard until the Custom Setup page appears.
3. Select the features to install.
4. Follow the instructions to complete the installation.
If Intel PROSet for Windows Device Manager was installed without ANS support, you can install support by clicking Install Base Drivers and Software on the autorun, or running setup.exe, and then selecting the Modify option when prompted. From the Intel® Network Connections window, select Advanced Network Services then click Next to continue with the installation wizard.
Using Intel PROSet for Windows Device Manager
The main Intel PROSet for Windows Device Manager window is similar to the illustration below. For more information about features on the custom Intel tabs, see the online help, which is integrated into the Properties dialog.
The Link Speed tab allows you to change the adapter's speed and duplex setting, run diagnostics, and use the identify adapter feature.
The Advanced tab allows you to change the adapter's advanced settings. These settings will vary on the type and model of adapter.
The Teaming tab allows you to create, modify, and delete adapter teams. You must install Advanced Network Services in order to see this tab and use the feature. See Installing Intel PROSet for Windows Device Manager for more information.
The VLANs tab allows you to create, modify, and delete VLANs. You must install Advanced Network Services in order to see this tab and use the feature. See Installing Intel PROSet for Windows Device Manager for more information.
The Boot Options tab allows you to configure Intel Boot Agent settings for the adapter.
NOTE: This tab will not appear if the Boot Agent has not been enabled on the adapter.
The Power Management tab allows you to configure power consumption settings for the adapter.
Configuring ANS Teams
Advanced Network Services (ANS) Teaming, a feature of the Advanced Network Services component, lets you take advantage of multiple adapters in a system by grouping them together. ANS teaming can use features like fault tolerance and load balancing to increase throughput and reliability.
Before you can set up ANS teaming in Windows*, you must install Intel® PROSet software. See Installing Intel PROSet for Windows Device Manager for more information.
NOTES:
l
NLB will not work when Receive Load Balancing (RLB) is enabled. This occurs because NLB and iANS both attempt to set the server's multicast MAC address, resulting in an ARP table mismatch.
l
Teaming with the Intel® 10 Gigabit AF DA Dual Port Server Adapter is only supported with similar adapter types and models or with switches using a Direct Attach connection.
Creating a team
1. Launch Windows Device Manager
2. Expand Network Adapters.
3. Double-click on one of the adapters that will be a member of the team. The adapter properties dialog box appears.
4. Click the Teaming tab.
5. Click Team with other adapters.
6. Click New Team.
7. Type a name for the team, then click Next.
8. Click the checkbox of any adapter you want to include in the team, then click Next.
9. Select a teaming mode, then click Next. For more information on team types, see Set Up Adapter Teaming.
10. Click Finish.
The Team Properties window appears, showing team properties and settings.
Once a team has been created, it appears in the Network Adapters category in the Computer Management window as a virtual adapter. The team name also precedes the adapter name of any adapter that is a member of the team.
NOTE: If you want to set up VLANs on a team, you must first create the team.
Adding or Removing an Adapter from an Existing Team
NOTE: A team member should be removed from the team with link down. See the Configuration Notes in Adapter Teaming for more information.
1. Open the Team Properties dialog box by double-clicking on a team listing in the Computer Management window.
2. Click the Settings tab.
3. Click Modify Team, then click the Adapters tab.
4. Select the adapters that will be members of the team.
l
Click the checkbox of any adapter that you want to add to the team.
l
Clear the checkbox of any adapter that you want to remove from the team.
5. Click OK.
Renaming a Team
1. Open the Team Properties dialog box by double-clicking on a team listing in the Computer Management window.
2. Click the Settings tab.
3. Click Modify Team, then click the Name tab.
4. Type a new team name, then click OK.
Removing a Team
1. Open the Team Properties dialog box by double-clicking on a team listing in the Computer Management window.
2. Click the Settings tab.
3. Select the team you want to remove, then click Remove Team.
4. Click Yes when prompted.
NOTE: If you defined a VLAN or QoS Prioritization on an adapter joining a team, you may have to redefine it when it is returned to a stand-alone mode.
Configuring IEEE VLANs
Before you can set up VLANs in Windows*, you must install Intel® PROSet software. See Installing Intel PROSet for Windows Device Manager for more information.
A maximum of 64 VLANs can be used on a server.
CAUTION:
l VLANs cannot be used on teams that contain non-Intel network adapters
l Use Intel PROSet to add or remove a VLAN. Do not use the Network and Dial-up Connections dialog box to enable or disable VLANs. Otherwise, the VLAN driver may not be correctly enabled or disabled.
NOTES:
l
If you will be using both teaming and VLANs, be sure to set up teaming first.
l
If you change a setting under the Advanced tab for one VLAN, it changes the settings for all VLANs using that port.
Setting Up an IEEE tagged VLAN
1. In the adapter properties window, click the VLANs tab.
2. Click New.
3. Type a name and ID number for the VLAN you are creating. The VLAN ID must match the VLAN ID on the switch. Valid ID range is from 1 to 4094, though your switch might not support this many IDs. The VLAN Name is for information only and does not have to match the name on the switch. The VLAN Name is limited to 256 characters.
NOTE: VLAN IDs 0 and 1 are often reserved for other uses.
4. Click OK.
The VLAN entry will appear under Network Adapters in the Computer Management window.
Complete these steps for each adapter you want to add to a VLAN.
NOTE: If you configure a team to use VLANs, the team object icon in the Network Connections Panel will indicate that the team is disconnected. You will not be able to make any TCP/IP changes, such as changing an IP address or subnet mask. You will, however, be able to configure the team (add or remove team members, change team type, etc.) through Device Manager.
Setting Up an Untagged VLAN
You can set up only one untagged VLAN per adapter or team.
NOTE: An untagged VLAN cannot be created unless at least one tagged VLAN already exists.
1. In the adapter properties window, click the VLANs tab.
2. Click New.
3. Check the Untagged VLAN box.
4. Type a name for the VLAN you are creating. The VLAN name is for information only and does not have to match the name on the switch. It is limited to 256 characters.
5. Click OK.
Removing a VLAN
1. On the VLANs tab, select the VLAN you want to remove.
2. Click Remove.
3. Click Yes to confirm.
Removing Phantom Teams and Phantom VLANs
If you physically remove all adapters that are part of a team or VLAN from the system without removing them via the Device Manager first, a phantom team or phantom VLAN will appear in Device Manager. There are two methods to remove the phantom team or phantom VLAN.
Removing the Phantom Team or Phantom VLAN through the Device Manager
Follow these instructions to remove a phantom team or phantom VLAN from the Device Manager:
1. In the Device Manager, double-click on the phantom team or phantom VLAN.
2. Click the Settings tab.
3. Select Remove Team or Remove VLAN.
Removing the Phantom Team or Phantom VLAN using the savresdx.vbs Script
For Windows Server, the savresdx.vbs script is located on the CD in the WMI directory of the appropriate Windows folder. From the DOS command box type: "cscript savresdx.vbs removephantoms".
Preventing the Creation of Phantom Devices
To prevent the creation of phantom devices, make sure you perform these steps before physically removing an adapter from the system:
1. Remove the adapter from any teams using the Settings tab on the team properties dialog box.
2. Remove any VLANs from the adapter using the VLANs tab on the adapter properties dialog box.
3. Uninstall the adapter from Device Manager.
You do not need to follow these steps in hot-replace scenarios.
Removing Intel PROSet for Windows Device Manager
To uninstall the extensions to Windows Device Manager provided by Intel PROSet for Windows Device Manager, select Intel(R) PRO Network Connections from Programs and Features in the Control Panel for Windows Server 2008 or later.
NOTES:
l
This process removes all Intel PRO adapter drivers and software.
l
It is suggested that you uninstall VLANS and teams before removing adapters.
l
The setup -u command can also be used from the command line to remove Intel PROSet.
Changing Intel PROSet Settings Under Windows Server Core
You can use the command line utility prosetcl.exe to change most Intel PROSet settings under Windows Server Core. Please refer to the help file prosetcl.txt located in the \Program Files\Intel\DMIX\CL directory. For iSCSI Crash Dump configuration, use the CrashDmp.exe utility and refer to the CrashDmp.txt help file.

Advanced Settings for Windows* Drivers

The settings listed on Intel PROSet for Windows Device Manager's Advanced tab allow you to customize how the adapter handles QoS packet tagging, Jumbo Packets, Offloading, and other capabilities. Some of the following features might not be available depending on the operating system you are running, the specific adapters installed, and the specific platform you are using.
Gigabit Master Slave Mode
Determines whether the adapter or link partner is designated as the master. The other device is designated as the slave. By default, the IEEE 802.3ab specification defines how conflicts are handled. Multi-port devices such as switches have higher priority over single port devices and are assigned as the master. If both devices are multi-port devices, the
one with higher seed bits becomes the master. This default setting is called "Hardware Default."
NOTE: In most scenarios, it is recommended to keep the default value of this feature.
Setting this to either "Force Master Mode" or "Force Slave Mode" overrides the hardware default.
Default Auto Detect
Range
l Force Master Mode
l Force Slave Mode
l Auto Detect
NOTE: Some multi-port devices may be forced to Master Mode. If the adapter is connected to such a device and is configured to "Force Master Mode," link is not established.
Jumbo Frames
Enables or disables Jumbo Packet capability. The standard Ethernet frame size is about 1514 bytes, while Jumbo Packets are larger than this. Jumbo Packets can increase throughput and decrease CPU utilization. However, additional latency may be introduced.
Enable Jumbo Packets only if ALL devices across the network support them and are configured to use the same frame size. When setting up Jumbo Packets on other network devices, be aware that network devices calculate Jumbo Packet sizes differently. Some devices include the frame size in the header information while others do not. Intel adapters do not include frame size in the header information.
Jumbo Packets can be implemented simultaneously with VLANs and teaming. If a team contains one or more non-Intel adapters, the Jumbo Packets feature for the team is not supported. Before adding a non-Intel adapter to a team, make sure that you disable Jumbo Packets for all non-Intel adapters using the software shipped with the adapter.
Restrictions
l
Jumbo frames are not supported in multi-vendor team configurations.
l
Supported protocols are limited to IP (TCP, UDP).
l
Jumbo frames require compatible switch connections that forward Jumbo Frames. Contact your switch vendor for more information.
l
When standard-sized Ethernet frames (64 to 1518 bytes) are used, there is no benefit to configuring Jumbo Frames.
l
The Jumbo Packets setting on the switch must be set to at least 8 bytes larger than the adapter setting for Microsoft Windows operating systems, and at least 22 bytes larger for all other operating systems.
Default Disabled
Range Disabled (1514), 4088, or 9014 bytes. (Set the switch 4 bytes higher for CRC, plus 4 bytes if using VLANs.)
NOTES:
l
Jumbo Packets are supported at 10 Gbps and 1 Gbps only. Using Jumbo Packets at 10 or 100 Mbps may result in poor performance or loss of link.
l
End-to-end hardware must support this capability; otherwise, packets will be dropped.
l
Intel adapters that support Jumbo Packets have a frame size limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
Locally Administered Address
Overrides the initial MAC address with a user-assigned MAC address. To enter a new network address, type a 12-digit hexadecimal number in this box.
Default None
Range 0000 0000 0001 - FFFF FFFF FFFD
Exceptions:
l
Do not use a multicast address (Least Significant Bit of the high byte = 1). For example, in the address 0Y123456789A, "Y" cannot be an odd number. (Y must be 0, 2, 4, 6, 8, A, C, or E.)
l
Do not use all zeros or all Fs.
If you do not enter an address, the address is the original network address of the adapter.
For example:
Multicast: 0123 4567 8999
Broadcast: FFFF FFFF FFFF
Unicast (legal): 0070 4567 8999
NOTE: In a team, Intel PROSet uses either:
l
The primary adapter's permanent MAC address if the team does not have an LAA configured, or
l
The team's LAA if the team has an LAA configured.
Intel PROSet does not use an adapter's LAA if the adapter is the primary adapter in a team and the team has an LAA.
Log Link State Event
This setting is used to enable/disable the logging of link state changes. If enabled, a link up change event or a link down change event generates a message that is displayed in the system event logger. This message contains the link's speed and duplex. Administrators view the event message from the system event log.
The following events are logged.
l
The link is up.
l
The link is down.
l
Mismatch in duplex.
l
Spanning Tree Protocol detected.
Default Enabled
Range Enabled, Disabled
Priority & VLAN Tagging
Enables the adapter to offload the insertion and removal of priority and VLAN tags for transmit and receive.
Default Priority & VLAN Enabled
Range
l
Priority & VLAN Disabled
l
Priority Enabled
l
VLAN Enabled
l
Priority & VLAN Enabled
Receive Side Scaling
When Receive Side Scaling (RSS) is enabled, all of the receive data processing for a particular TCP connection is shared across multiple processors or processor cores. Without RSS all of the processing is performed by a single pro­cessor, resulting in less efficient system cache utilization. RSS can be enabled for a LAN or for FCoE. In the first case, it is called "LAN RSS". In the second, it is called "FCoE RSS".
LAN RSS
LAN RSS applies to a particular TCP connection.
NOTE: This setting has no effect if your system has only one processing unit.
LAN RSS Configuration
RSS is enabled on the Advanced tab of the adapter property sheet. If your adapter does not support RSS, or if the Scalable Networking Pack (SNP) or Service Pack 2 (SP2) is not installed, the RSS setting will not be displayed. If RSS is supported in your system environment, the following will be displayed:
l
Port NUMA Node. This is the NUMA node number of a device.
l
Starting RSS CPU. This setting allows you to set the preferred starting RSS processor. Change this setting if the current processor is dedicated to other processes. The setting range is from 0 to the number of logical CPUs - 1. In Server 2008 R2, RSS will only use CPUs in group 0 (CPUs 0 through 63).
l
Max number of RSS CPU. This setting allows you to set the maximum number of CPUs assigned to an adapter and is primarily used in a Hyper-V environment. By decreasing this setting in a Hyper-V environment, the total number of interrupts is reduced which lowers CPU utilization. The default is 8 for Gigabit adapters and 16 for 10 Gigabit adapters.
l
Preferred NUMA Node. This setting allows you to choose the preferred NUMA (Non-Uniform Memory Access) node to be used for memory allocations made by the network adapter. In addition the system will attempt to use the CPUs from the preferred NUMA node first for the purposes of RSS. On NUMA platforms, memory access latency is dependent on the memory location. Allocation of memory from the closest node helps improve per­formance. The Windows Task Manager shows the NUMA Node ID for each processor.
NOTE: This setting only affects NUMA systems. It will have no effect on non-NUMA systems.
l
Receive Side Scaling Queues. This setting configures the number of RSS queues, which determine the space to buffer transactions between the network adapter and CPU(s).
Default 2 queues for the Intel® 10 Gigabit Server Adapters
1 queue for the following adapters:
l
Intel® Gigabit ET Dual Port Server Adapter
l
Intel® Gigabit ET Quad Port Server Adapter
l
Intel® Gigabit ET Quad Port Mezzanine Card
Range 1 queue is used when low CPU utilization is required.
2 queues are used when good throughput and low CPU utilization are required.
4 queues are used for applications that demand maximum throughput and transactions per second.
8 and 16 queues are supported on the Intel® 82598-based and 82599-based adapters.
NOTES:
l
The 8 and 16 queues are only available when PROSet for Windows Device Manager is installed. If PROSet is not installed, only 4 queues are available.
l
Using 8 or more queues will require the system to reboot.
LAN RSS and Teaming
l
If RSS is not enabled for all adapters in a team, RSS will be disabled for the team.
l
If an adapter that does not support RSS is added to a team, RSS will be disabled for the team.
l
If you create a multi-vendor team, you must manually verify that the RSS settings for all adapters in the team are the same.
NOTE: Not all settings are available on all adapters.
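On Linux, the equivalent receive-queue count is usually adjusted with ethtool rather than a property sheet; a sketch, assuming an interface named eth0 and a driver/kernel combination that supports the channels interface:
ethtool -l eth0
ethtool -L eth0 combined 4
ethtool -x eth0
The first command shows the current and maximum queue counts, the second requests 4 combined queues, and the third displays the RSS indirection table.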
FCoE RSS
If FCoE is installed, FCoE RSS is enabled and applies to FCoE receive processing that is shared across processor cores.
FCoE RSS Configuration
If your adapter supports FCoE RSS, the following configuration settings can be viewed and changed on the base driver Advanced Performance tab:
l
FCoE NUMA Node Count. This setting specifies the number of consecutive NUMA Nodes where the allocated FCoE queues will be evenly distributed.
l
FCoE Starting NUMA Node. This setting specifies the NUMA node representing the first node within the FCoE NUMA Node Count.
l
FCoE Starting Core Offset. This setting specifies the offset to the first NUMA Node CPU core that will be assigned to an FCoE queue.
l
FCoE Port NUMA Node. This setting is an indication from the platform of the optimal closest NUMA Node to the physical port, if available. It is read-only and cannot be configured.
Performance Tuning
The Intel Network Controller provides a new set of advanced FCoE performance tuning options. These options will dir­ect how FCoE transmit/receive queues are allocated in NUMA platforms. Specifically, they direct what target set of NUMA node CPUs can be selected from to assign individual queue affinity. Selecting a specific CPU has two main effects:
l
It sets the desired interrupt location for processing queue packet indications.
l
It sets the relative locality of the queue to available memory.
As indicated, these are intended as advanced tuning options for those platform managers attempting to maximize sys­tem performance. They are generally expected to be used to maximize performance for multi-port platform con­figurations. Since all ports share the same default installation directives (the .inf file, etc.), the FCoE queues for every port will be associated with the same set of NUMA CPUs which may result in CPU contention.
The software exporting these tuning options defines a NUMA Node to be equivalent to an individual processor (socket). Platform ACPI information presented by the BIOS to the operating system helps define the relation of PCI devices to individual processors. However, this detail is not currently reliably provided in all platforms. Therefore, using the tuning options may produce unexpected results. Consistent or predictable results when using the performance options cannot be guaranteed.
The performance tuning options are listed in the LAN RSS Configuration section.
Example 1: A platform with two physical sockets, each socket processor providing 8 core CPUs (16 when hyper thread­ing is enabled), and a dual port Intel adapter with FCoE enabled.
By default 8 FCoE queues will be allocated per NIC port. Also, by default the first (non-hyper thread) CPU cores of the first processor will be assigned affinity to these queues resulting in the allocation model pictured below. In this scen­ario, both ports would be competing for CPU cycles from the same set of CPUs on socket 0.
Socket Queue to CPU Allocation
Using performance tuning options, the association of the FCoE queues for the second port can be directed to a dif­ferent non-competing set of CPU cores. The following settings would direct SW to use CPUs on the other processor socket:
l
FCoE NUMA Node Count = 1: Assign queues to cores from a single NUMA node (or processor socket).
l
FCoE Starting NUMA Node = 1: Use CPU cores from the second NUMA node (or processor socket) in the sys­tem.
l
FCoE Starting Core Offset = 0: SW will start at the first CPU core of the NUMA node (or processor socket).
The following settings would direct SW to use a different set of CPUs on the same processor socket. This assumes a processor that supports 16 non-hyperthreading cores.
l
FCoE NUMA Node Count = 1
l
FCoE Starting NUMA Node = 0
l
FCoE Starting Core Offset = 8
Example 2: Using one or more ports with queues allocated across multiple NUMA nodes. In this case, for each NIC port the FCoE NUMA Node Count is set to that number of NUMA nodes. By default the queues will be allocated evenly from each NUMA node:
l
FCoE NUMA Node Count = 2
l
FCoE Starting NUMA Node = 0
l
FCoE Starting Core Offset = 0
Example 3: The display shows FCoE Port NUMA Node setting is 2 for a given adapter port. This is a read-only indic­ation from SW that the optimal nearest NUMA node to the PCI device is the third logical NUMA node in the system. By default SW has allocated that port's queues to NUMA node 0. The following settings would direct SW to use CPUs on the optimal processor socket:
l
FCoE NUMA Node Count = 1
l
FCoE Starting NUMA Node = 2
l
FCoE Starting Core Offset = 0
This example highlights the fact that platform architectures can vary in the number of PCI buses and where they are attached. The figures below show two simplified platform architectures. The first is the older common FSB style archi­tecture in which multiple CPUs share access to a single MCH and/or ESB that provides PCI bus and memory con­nectivity. The second is a more recent architecture in which multiple CPU processors are interconnected via QPI, and each processor itself supports integrated MCH and PCI connectivity directly.
There is a perceived advantage in keeping the allocation of port objects, such as queues, as close as possible to the NUMA node or collection of CPUs where it would most likely be accessed. If the port queues are using CPUs and memory from one socket when the PCI device is actually hanging off of another socket, the result may be undesirable QPI processor-to-processor bus bandwidth being consumed. It is important to understand the platform architecture when using these performance options.
Shared Single Root PCI/Memory Architecture
Distributed Multi-Root PCI/Memory Architecture
Example 4: The number of available NUMA node CPUs is not sufficient for queue allocation. If your platform has a processor that does not support a power-of-two number of CPUs (for example, it supports 6 cores), then during queue allocation, if SW runs out of CPUs on one socket, it will by default reduce the number of queues to a power of 2 until allocation is achieved. For example, if a 6-core processor is being used, the SW will only allocate 4 FCoE queues if there is only a single NUMA node. If there are multiple NUMA nodes, the NUMA node count can be changed to a value greater than or equal to 2 in order to have all 8 queues created.
Determining Active Queue Location
The user of these performance options will want to determine the affinity of FCoE queues to CPUs in order to verify their actual effect on queue allocation. This is easily done by using a small packet workload and an I/O application such as IoMeter. IoMeter monitors the CPU utilization of each CPU using the built-in performance monitor provided by the operating system. The CPUs supporting the queue activity should stand out. They should be the first non-hyper thread CPUs available on the processor unless the allocation is specifically directed to be shifted via the performance options discussed above.
To make the locality of the FCoE queues even more obvious, the application affinity can be assigned to an isolated set of CPUs on the same or another processor socket. For example, the IoMeter application can be set to run only on a finite number of hyper thread CPUs on any processor. If the performance options have been set to direct queue alloc­ation on a specific NUMA node, the application affinity can be set to a different NUMA node. The FCoE queues should not move and the activity should remain on those CPUs even though the application CPU activity moves to the other processor CPUs selected.
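On Linux systems, a rough analogue for confirming which CPUs are servicing a port's queues is to watch the interrupt counters while the workload runs; a sketch, assuming an interface named eth0:
grep eth0 /proc/interrupts
The per-CPU columns that increment during the test identify the cores handling that port's queue interrupts.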
Wait for Link
Determines whether the driver waits for auto-negotiation to be successful before reporting the link state. If this feature is off, the driver does not wait for auto-negotiation. If the feature is on, the driver does wait for auto-negotiation.
If this feature is on, and the speed is not set to auto-negotiation, the driver will wait for a short time for link to complete before reporting the link state.
If the feature is set to Auto Detect, this feature is automatically set to On or Off depending on speed and adapter type when the driver is installed. The setting is:
l
Off for copper Intel gigabit adapters with a speed of "Auto".
l
On for copper Intel gigabit adapters with a forced speed and duplex.
l
On for fiber Intel gigabit adapters with a speed of "Auto".
Default Auto Detect
Range
l
On
l
Off
l
Auto Detect
Performance Options
Adaptive Inter-Frame Spacing
Compensates for excessive Ethernet packet collisions on the network.
The default setting works best for most computers and networks. By enabling this feature, the network adapter dynam­ically adapts to the network traffic conditions. However, in some rare cases you might obtain better performance by dis­abling this feature. This setting forces a static gap between packets.
Default Disabled
Range
l
Enabled
l
Disabled
Direct Memory Access (DMA) Coalescing
DMA (Direct Memory Access) allows the network device to move packet data directly to the system's memory, reducing CPU utilization. However, the frequency and random intervals at which packets arrive do not allow the system to enter a lower power state. DMA Coalescing allows the NIC to collect packets before it initiates a DMA event. This may increase network latency but also increases the chances that the system will consume less energy. Adapters and network devices based on the Intel® Ethernet Controller I350 (and later controllers) support DMA Coalescing.
Higher DMA Coalescing values result in more energy saved but may increase your system's network latency. If you enable DMA Coalescing, you should also set the Interrupt Moderation Rate to 'Minimal'. This minimizes the latency impact imposed by DMA Coalescing and results in better peak network throughput performance. You must enable DMA Coalescing on all active ports in the system. You may not gain any energy savings if it is enabled only on some of the ports in your system. There are also several BIOS, platform, and application settings that will affect your potential energy savings. A white paper containing information on how to best configure your platform is available on the Intel website.
Flow Control
Enables adapters to more effectively regulate traffic. Adapters generate flow control frames when their receive queues reach a pre-defined limit. Generating flow control frames signals the transmitter to slow transmission. Adapters respond to flow control frames by pausing packet transmission for the time specified in the flow control frame.
By enabling adapters to adjust packet transmission, flow control helps prevent dropped packets.
NOTE: For adapters to benefit from this feature, link partners must support flow control frames.
Default RX & TX Enabled
Range
l
Disabled
l
RX Enabled
l
TX Enabled
l
RX & TX Enabled
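On Linux, pause-frame behavior for the same adapters is typically inspected and changed with ethtool; a sketch, assuming an interface named eth0:
ethtool -a eth0
ethtool -A eth0 rx on tx on
The first command shows the current pause settings; the second enables both receive and transmit flow control, provided the link partner also supports it.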
Interrupt Moderation Rate
Sets the Interrupt Throttle Rate (ITR). This setting moderates the rate at which Transmit and Receive interrupts are gen­erated.
When an event such as a packet arrival occurs, the adapter generates an interrupt. The interrupt preempts the CPU and any application running at the time, and calls on the driver to handle the packet. At greater link speeds, more interrupts are created and the CPU interrupt rate also increases, resulting in lower system performance. When you use a higher ITR setting, the interrupt rate is lower and the result is better CPU performance.
NOTE: A higher ITR rate also means that the driver has more latency in handling packets. If the adapter is hand­ling many small packets, it is better to lower the ITR so that the driver can be more responsive to incoming and outgoing packets.
Altering this setting may improve traffic throughput for certain network and system configurations, however the default setting is optimal for common network and system configurations. Do not change this setting without verifying that the desired change will have a positive effect on network performance.
Default Adaptive
Range
l
Adaptive
l
Extreme
l
High
l
Medium
l
Low
l
Minimal
l
Off
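On Linux, the closest equivalent to the ITR setting is the interrupt coalescing interface exposed through ethtool (the e1000e driver also accepts an InterruptThrottleRate module parameter, described later in this guide); a sketch, assuming an interface named eth0 and driver support for coalescing:
ethtool -c eth0
ethtool -C eth0 rx-usecs 100
Lower rx-usecs values favor latency; higher values reduce interrupt load, mirroring the trade-off described above.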
Low Latency Interrupts
LLI enables the network device to bypass the configured interrupt moderation scheme based on the type of data being received. It configures which arriving TCP packets trigger an immediate interrupt, enabling the system to handle the packet more quickly. Reduced data latency enables some applications to gain faster access to network data.
NOTE: When LLI is enabled, system CPU utilization may increase.
LLI can be used for data packets containing a TCP PSH flag in the header or for specified TCP ports.
l
Packets with TCP PSH Flag - Any incoming packet with the TCP PSH flag will trigger an immediate interrupt. The PSH flag is set by the sending device.
l
TCP Ports - Every packet received on the specified ports will trigger an immediate interrupt. Up to eight ports may be specified.
Default Disabled
Range
l
Disabled
l
PSH Flag-Based
l
Port-Based
Receive Buffers
Defines the number of Receive Buffers, which are data segments. They are allocated in the host memory and used to store the received packets. Each received packet requires at least one Receive Buffer, and each buffer uses 2KB of memory.
You might choose to increase the number of Receive Buffers if you notice a significant decrease in the performance of received traffic. If receive performance is not an issue, use the default setting appropriate to the adapter.
Default 512, for the 10 Gigabit Server Adapters.
256, for all other adapters depending on the features selected.
Range 128-4096, in intervals of 64, for the 10 Gigabit Server Adapters.
80-2048, in intervals of 8, for all other adapters.
Recommended Value Teamed adapter: 256
Using IPSec and/or multiple features: 352
Transmit Buffers
Defines the number of Transmit Buffers, which are data segments that enable the adapter to track transmit packets in the system memory. Depending on the size of the packet, each transmit packet requires one or more Transmit Buffers.
You might choose to increase the number of Transmit Buffers if you notice a possible problem with transmit per­formance. Although increasing the number of Transmit Buffers can enhance transmit performance, Transmit Buffers do consume system memory. If transmit performance is not an issue, use the default setting. This default setting varies with the type of adapter.
View the Adapter Specifications topic for help identifying your adapter.
Default 512, depending on the requirements of the adapter
Range 128-16384, in intervals of 64, for 10 Gigabit Server Adapters.
80-2048, in intervals of 8, for all other adapters.
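On Linux, the receive and transmit descriptor rings that correspond to these buffer counts are usually resized with ethtool; a sketch, assuming an interface named eth0 and an adapter whose driver allows 4096-entry rings:
ethtool -g eth0
ethtool -G eth0 rx 4096 tx 4096
The first command reports the pre-set maximums for the adapter; requests above those maximums are rejected.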
Performance Profile
Performance Profiles are supported on Intel® 10GbE adapters and allow you to quickly optimize the performance of your Intel® Ethernet Adapter. Selecting a performance profile will automatically adjust some Advanced Settings to their optimum setting for the selected application. For example, a standard server has optimal performance with only two RSS (Receive-Side Scaling) queues, but a web server requires more RSS queues for better scalability.
You must install Intel® PROSet for Windows Device Manager to use Performance profiles. Profiles are selected on the Advanced tab of the adapter's property sheet.
Profiles
l
Standard Server – This profile is optimized for typical servers.
l
Web Server – This profile is optimized for IIS and HTTP-based web servers.
l
Virtualization Server – This profile is optimized for Microsoft’s Hyper-V virtualization environment.
l
Storage Server – This profile is optimized for Fibre Channel over Ethernet or for iSCSI over DCB performance. Selecting this profile will disable SR-IOV and VMQ.
l
Storage + Virtualization – This profile is optimized for a combination of storage and virtualization requirements.
l
Low Latency – This profile is optimized to minimize network latency.
NOTES:
l
Not all options are available on all adapter/operating system combinations.
l
If you have selected the Virtualization Server profile or the Storage + Virtualization profile, and you unin­stall the Hyper-V role, you should select a new profile.
Teaming Considerations
When you create a team with all members of the team supporting Performance Profiles, you will be asked which profile to use at the time of team creation. The profile will be synchronized across the team. If there is not a profile that is sup­ported by all team members then the only option will be Use Current Settings. The team will be created normally. Adding an adapter to an existing team works in much the same way.
If you attempt to team an adapter that supports performance profiles with an adapter that doesn't, the profile on the sup­porting adapter will be set to Custom Settings and the team will be created normally.
TCP/IP Offloading Options
IPv4 Checksum Offload
This allows the adapter to compute the IPv4 checksum of incoming and outgoing packets. This feature enhances IPv4 receive and transmit performance and reduces CPU utilization.
With Offloading off, the operating system verifies the IPv4 checksum.
With Offloading on, the adapter completes the verification for the operating system.
Default RX & TX Enabled
Range
l
Disabled
l
RX Enabled
l
TX Enabled
l
RX & TX Enabled
Large Send Offload (IPv4 and IPv6)
Sets the adapter to offload the task of segmenting TCP messages into valid Ethernet frames. The maximum frame size limit for large send offload is set to 64,000 bytes.
Since the adapter hardware is able to complete data segmentation much faster than operating system software, this fea­ture may improve transmission performance. In addition, the adapter uses fewer CPU resources.
Default Enabled; Disabled in Windows Server 2008
Range
l
Enabled
l
Disabled
TCP Checksum Offload (IPv4 and IPv6)
Allows the adapter to verify the TCP checksum of incoming packets and compute the TCP checksum of outgoing pack­ets. This feature enhances receive and transmit performance and reduces CPU utilization.
With Offloading off, the operating system verifies the TCP checksum.
With Offloading on, the adapter completes the verification for the operating system.
Default RX & TX Enabled
Range
l
Disabled
l
RX Enabled
l
TX Enabled
l
RX & TX Enabled
UDP Checksum Offload (IPv4 and IPv6)
Allows the adapter to verify the UDP checksum of incoming packets and compute the UDP checksum of outgoing pack­ets. This feature enhances receive and transmit performance and reduces CPU utilization.
With Offloading off, the operating system verifies the UDP checksum.
With Offloading on, the adapter completes the verification for the operating system.
Default RX & TX Enabled
Range
l
Disabled
l
RX Enabled
l
TX Enabled
l
RX & TX Enabled
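On Linux, the same checksum and segmentation offloads are toggled per interface with ethtool; a sketch, assuming an interface named eth0 and hardware support for each offload:
ethtool -k eth0
ethtool -K eth0 rx on tx on tso on
The first command lists which offloads the driver currently enables; the second turns on receive checksumming, transmit checksumming, and TCP segmentation offload (the Linux counterpart of Large Send Offload).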
Thermal Monitoring
Adapters and network controllers based on the Intel® Ethernet Controller I350 (and later controllers) can display tem­perature data and automatically reduce the link speed if the controller temperature gets too hot.
NOTE: This feature is enabled and configured by the equipment manufacturer. It is not available on all adapters and network controllers. There are no user configurable settings.
Monitoring and Reporting
Temperature information is displayed on the Link tab in Intel® PROSet for Windows* Device Manager. There are three possible conditions:
l
Temperature: Normal. Indicates normal operation.
l
Temperature: Overheated, Link Reduced. Indicates that the device has reduced link speed to lower power consumption and heat.
l
Temperature: Overheated, Adapter Stopped. Indicates that the device is too hot and has stopped passing traffic so it is not damaged.
If either of the overheated events occurs, the device driver writes a message to the system event log.

Power Management Settings for Windows* Drivers

The Intel® PROSet Power Management tab replaces the standard Microsoft* Windows* Power Management tab in Device Manager. It includes the Power Saver options that were previously included on the Advanced tab. The standard Windows power management functionality is incorporated on the Intel PROSet tab.
NOTES:
l
The Intel® 10 Gigabit Network Adapters do not support power management.
l
If your system has a Manageability Engine, the Link LED may stay lit even if WoL is disabled.
Power Saver Options
The Intel PROSet Power Management tab includes several settings that control the adapter's power consumption. For example, you can set the adapter to reduce its power consumption if the cable is disconnected.
Reduce Power if Cable Disconnected & Reduce Link Speed During Standby
Enables the adapter to reduce power consumption when the LAN cable is disconnected from the adapter and there is no link. When the adapter regains a valid link, adapter power usage returns to its normal state (full power usage).
The Hardware Default option is available on some adapters. If this option is selected, the feature is disabled or enabled based on the system hardware.
Default The default varies with the operating system and adapter.
Range The range varies with the operating system and adapter.
Energy Efficient Ethernet
The Energy Efficient Ethernet (EEE) feature allows a capable device to enter Low-Power Idle between bursts of net­work traffic. Both ends of a link must have EEE enabled for any power to be saved. Both ends of the link will resume full power when data needs to be transmitted. This transition may introduce a small amount of network latency.
NOTES:
l
Both ends of the EEE link must automatically negotiate link speed.
l
EEE is not supported at 10Mbps.
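On Linux, EEE negotiation can be checked and toggled with a recent ethtool build (the --show-eee and --set-eee options are not present in older versions); a sketch, assuming an interface named eth0:
ethtool --show-eee eth0
ethtool --set-eee eth0 eee on
Both link partners must still advertise EEE for any power to be saved, as noted above.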
Wake on LAN Options
The ability to remotely wake computers is an important development in computer management. This feature has evolved over the last few years from a simple remote power-on capability to a complex system interacting with a variety of device and operating system power states. More details are available here.
Microsoft Windows Server 2008 is ACPI-capable. Windows does not support waking from a power-off (S5) state, only from standby (S3) or hibernate (S4). When shutting down the system, these states shut down ACPI devices, including Intel adapters. This disarms the adapter's remote wake-up capability. However, in some ACPI-capable computers, the BIOS may have a setting that allows you to override the operating system and wake from an S5 state anyway. If there is no support for wake from S5 state in your BIOS settings, you are limited to Wake From Standby when using these oper­ating systems in ACPI computers.
For some adapters, the Power Management tab in Intel PROSet includes a setting called Wake on Magic Packet from power off state. Enable this setting to explicitly allow wake-up with a Magic Packet* from shutdown under APM power management mode.
In ACPI-capable versions of Windows, the Intel PROSet Power Management tab includes Wake on Magic Packet and
Wake on directed packet settings. These control the type of packets that wake up the system from standby.
NOTES:
l
To use the Wake on Directed Packet feature, WOL must first be enabled in the EEPROM using BootUtil.
l
Wake on LAN is only supported on the following adapters:
Gigabit Adapters Adapter Port(s) supporting WOL
Intel® PRO/1000 PT Server Adapter
Intel® PRO/1000 PT Dual Port Server Adapter both ports A and B
Intel® PRO/1000 PF Server Adapter
Intel® Gigabit ET Dual Port Server Adapter port A
Intel® Gigabit ET Quad Port Server Adapter port A
Intel® Gigabit ET Quad Port Mezzanine Card all ports
Intel® Gigabit 2P I350-t Adapter port A
Intel® Gigabit 4P I350-t Adapter port A
Intel® Gigabit 4P I350-t rNDC all ports
Intel® Gigabit 4P X540/I350 rNDC all ports
Intel® Gigabit 4P X520/I350 rNDC all ports
Intel® Gigabit 4P I350 bNDC all ports
Intel® Gigabit 4P I350-t Mezz all ports
Intel® Gigabit 2P I350-t LOM all ports
Intel® Gigabit 2P I350 LOM all ports
10 Gigabit Adapters
Intel® Ethernet 10G 4P X540/I350 rNDC both 10G ports
Intel® Ethernet 10G 4P X520/I350 rNDC both 10G ports
Intel® Ethernet 10G 2P X520-k bNDC all ports
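On Linux, Wake on Magic Packet support and state for these ports can be queried and set with ethtool; a sketch, assuming an interface named eth0:
ethtool eth0
ethtool -s eth0 wol g
The output of the first command lists "Supports Wake-on" and the current "Wake-on" value; "g" enables waking on a Magic Packet, and "d" disables wake-up.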
Wake on Link Settings
Wakes the computer if the network connection establishes link while the computer is in standby mode. You can enable the feature, disable it, or let the operating system use its default.
NOTES:
l
To use the Wake on Link feature with the Intel® PRO/1000 PT Dual Port Server Adapter, Intel® PRO/1000 PT Server Adapter or Intel® PRO/1000 PF Server Adapter, WOL must first be enabled in the EEPROM using BootUtil.
l
If a copper-based Intel adapter is advertising a speed of one gigabit only, this feature does not work because the adapter cannot identify a gigabit link in a D3 state.
l
The network cable must be disconnected when entering S3/S4 in order to wake the system with a link up event.
Default Disabled
Range Disabled, OS Controlled, Forced

Microsoft* Hyper-V* Overview

Microsoft* Hyper-V* makes it possible for one or more operating systems to run simultaneously on the same physical
system as virtual machines. This allows you to consolidate several servers onto one system, even if they are running dif­ferent operating systems. Intel® Network Adapters work with, and within, Microsoft Hyper-V virtual machines with their standard drivers and software.
NOTES:
l
Some virtualization options are not available on some adapter/operating system combinations.
l
The jumbo frame setting inside a virtual machine must be the same, or lower than, the setting on the physical port.
l
See http://www.intel.com/technology/advanced_comm/virtualization.htm for more information on using Intel Network Adapters in virtualized environments.
Using Intel® Network Adapters in a Hyper-V Environment
When a Hyper-V Virtual NIC (VNIC) interface is created in the parent partition, the VNIC takes on the MAC address of the underlying physical NIC. The same is true when a VNIC is created on a team or VLAN. Since the VNIC uses the MAC address of the underlying interface, any operation that changes the MAC address of the interface (for example, setting LAA on the interface, changing the primary adapter on a team, etc.), will cause the VNIC to lose connectivity. In order to prevent this loss of connectivity, Intel® PROSet will not allow you to change settings that change the MAC address.
NOTES:
l
On 10Gb Ethernet devices, if Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) is present on the port, configuring the device in Virtual Machine Queue (VMQ) + DCB mode reduces the number of VMQ VPorts available for guest OSes.
l
When sent from inside a virtual machine, LLDP and LACP packets may be a security risk. The Intel® Virtual Function driver blocks the transmission of such packets.
l
The Virtualization setting on the Advanced tab of the adapter's Device Manager property sheet is not available if the Hyper-V role is not installed.
l
While Microsoft supports Hyper-V on the Windows* 8 client OS, Intel® Ethernet adapters do not sup­port virtualization settings (VMQ, SR-IOV) on Windows 8 client.
l
ANS teaming of VF devices inside a Windows 2008 R2 guest running on an open source hypervisor is supported.
The Virtual Machine Switch
The virtual machine switch is part of the network I/O data path. It sits between the physical NIC and the virtual machine NICs and routes packets to the correct MAC address. Enabling Virtual Machine Queue (VMQ) offloading in Intel(R) ProSet will automatically enable VMQ in the virtual machine switch. For driver-only installations, you must manually enable VMQ in the virtual machine switch.
Using ANS VLANs
If you create ANS VLANs in the parent partition, and you then create a Hyper-V Virtual NIC interface on an ANS VLAN, then the Virtual NIC interface *must* have the same VLAN ID as the ANS VLAN. Using a different VLAN ID or not set­ting a VLAN ID on the Virtual NIC interface will result in loss of communication on that interface.
Virtual Switches bound to an ANS VLAN will have the same MAC address as the VLAN, which will have the same address as the underlying NIC or team. If you have several VLANs bound to a team and bind a virtual switch to each VLAN, all of the virtual switches will have the same MAC address. Clustering the virtual switches together will cause a network error in Microsoft's cluster validation tool. In some cases, ignoring this error will not impact the performance of the cluster. However, such a cluster is not supported by Microsoft. Using Device Manager to give each of the virtual switches a unique address will resolve the issue. See the Microsoft Technet article Configure MAC Address Spoofing for Virtual Network Adapters for more information.
Virtual Machine Queues (VMQ) and SR-IOV cannot be enabled on a Hyper-V Virtual NIC interface bound to a VLAN configured using the VLANs tab in Windows Device Manager.
Using an ANS Team or VLAN as a Virtual NIC
If you want to use a team or VLAN as a virtual NIC you must follow these steps:
NOTE: This applies only to virtual NICs created on a team or VLAN. Virtual NICs created on a physical adapter do not require these steps.
1. Use Intel® PROSet to create the team or VLAN.
2. Open the Network Control Panel.
3. Open the team or VLAN.
4. On the General Tab, uncheck all of the protocol bindings and click OK.
5. Create the virtual NIC. (If you check the "Allow management operating system to share the network adapter." box you can do the following step in the parent partition.)
6. Open the Network Control Panel for the Virtual NIC.
7. On the General Tab, check the protocol bindings that you desire.
NOTE: This step is not required for the team. When the Virtual NIC is created, its protocols are correctly bound.
Command Line for Microsoft Windows Server* Core
Microsoft Windows Server* Core does not have a GUI interface. If you want to use an ANS Team or VLAN as a Virtual NIC, you must use the prosetcl.exe utility, and may need the nvspbind.exe utility, to set up the configuration. Use the prosetcl.exe utility to create the team or VLAN. See the prosetcl.txt file for installation and usage details. Use the nvsp­bind.exe utility to unbind the protocols on the team or VLAN. The following is an example of the steps necessary to set up the configuration.
NOTE: The nvspbind.exe utility is not needed in Windows Server 2008 R2 or later.
1. Use prosetcl.exe to create a team.
prosetcl.exe Team_Create 1,2,3 TeamNew VMLB
(VMLB is a dedicated teaming mode for load balancing under Hyper-V.)
2. Use nvspbind to get the team’s GUID
nvspbind.exe -n
3. Use nvspbind to disable the team’s bindings
nvspbind.exe -d aaaaaaaa-bbbb-cccc-dddddddddddddddd *
4. Create the virtual NIC by running a remote Hyper-V manager on a different machine. Please see Microsoft's doc­umentation for instructions on how to do this.
5. Use nvspbind to get the Virtual NIC’s GUID.
6. Use nvspbind to enable protocol bindings on the Virtual NIC.
nvspbind.exe -e tttttttt-uuuu-wwww-xxxxxxxxxxxxxxxx ms_netbios
nvspbind.exe -e tttttttt-uuuu-wwww-xxxxxxxxxxxxxxxx ms_tcpip
nvspbind.exe -e tttttttt-uuuu-wwww-xxxxxxxxxxxxxxxx ms_server
Virtual Machine Queue Offloading
Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware is able to perform these tasks faster than the operating system. Offloading also frees up CPU resources. Filtering is based on MAC and/or VLAN filters. For devices that support it, VMQ offloading is enabled in the host partition on the adapter's Device Man­ager property sheet, under Virtualization on the Advanced Tab.
Each Intel® Ethernet Adapter has a pool of virtual ports that are split between the various features, such as VMQ Off­loading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing the number of vir­tual ports used for one feature decreases the number available for other features. On devices that support it, enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further reduces the total pool to 24.
Intel PROSet displays the number of virtual ports available for virtual functions under Virtualization properties on the device's Advanced Tab. It also allows you to set how the available virtual ports are distributed between VMQ and SR-IOV.
Teaming Considerations
l
If VMQ is not enabled for all adapters in a team, VMQ will be disabled for the team.
l
If an adapter that does not support VMQ is added to a team, VMQ will be disabled for the team.
l
Virtual NICs cannot be created on a team with Receive Load Balancing enabled. Receive Load Balancing is automatically disabled if you create a virtual NIC on a team.
l
If a team is bound to a Hyper-V virtual NIC, you cannot change the Primary or Secondary adapter.
SR-IOV (Single Root I/O Virtualization)
SR-IOV lets a single network port appear to be several virtual functions in a virtualized environment. If you have an SR-IOV capable NIC, each port on that NIC can assign a virtual function to several guest partitions. The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a guest partition's memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was added in Microsoft Windows Server 2012. See your operating system documentation for system requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab. Some devices may need to have SR-IOV enabled in a preboot environment.
NOTES:
l
You must enable VMQ for SR-IOV to function.
l
SR-IOV is not supported with ANS teams.
l
Due to chipset limitations, not all systems or slots support SR-IOV. Below is a chart summarizing SR-IOV support on Dell server platforms.
NDC or LOM 10Gbe 1Gbe
Intel X520 DP 10Gb DA/SFP+, + I350 DP 1Gb Ethernet, Network Daughter Card Yes No
Intel Ethernet X540 DP 10Gb BT + I350 1Gb BT DP Network Daughter Card Yes No
Intel Ethernet I350 QP 1Gb Network Daughter Card Yes
PowerEdge T620 LOMs No
PowerEdge T630 LOMs No
Rack NDC PCI Express Slot
Dell Platform 10 GbE Adapter 1 GbE Adapter 1 2 3 4 5 6 7 8 9 10
R320 no yes
R420 1 x CPU no yes
2 x CPU yes yes
R520 1 x CPU no yes yes yes
2 x CPU yes yes yes yes
R620 yes yes yes
R720XD yes no yes yes yes yes yes yes
R720 yes no yes yes yes yes yes yes yes
R820 yes no yes yes yes yes yes yes yes
R920 yes no yes yes yes yes yes yes yes yes yes yes
T320 no no yes yes yes
T420 no no yes yes yes yes
T620 yes yes no yes yes yes yes
Blade NDC Mezzanine Slot
Dell Platform 10 GbE Adapter 1 GbE Adapter B C
M420 yes yes yes
M520 no yes yes
M620 yes yes yes
M820 yes yes yes
Supported platforms or slots are indicated by "yes."

Linux* Drivers for the Intel® Gigabit and 10 Gigabit Adapters

Overview

This release includes Linux Base Drivers for Intel® Network Connections. These drivers are named e1000e, igb, and ixgbe. Specific information on building and installation, configuration, and command line parameters for these drivers is located in the following sections:
l
e1000e Linux Driver for the Intel® Gigabit Family of Adapters (82571- and 82572-based adapters)
l
igb Linux Driver for the Intel® Gigabit Family of Adapters (82575-, 82576-, I350-, and I354-based adapters)
l
ixgbe Linux Driver for the Intel® 10 Gigabit Family of Adapters (82598-, 82599-, and X540-based adapters)
See the Supported Adapters section below to determine which driver to use.
These drivers are only supported as a loadable module. Intel is not supplying patches against the kernel source to allow for static linking of the driver. For questions related to hardware requirements, refer to System Requirements. All hardware requirements listed apply to use with Linux.
This release also includes support for Single Root I/O Virtualization (SR-IOV) drivers. More detail on SR-IOV can be found here. Intel recommends test-mode environments until industry hypervisors release production level support. The following drivers support the listed virtual function devices that can only be activated on kernels that support SR-IOV. SR-IOV requires the correct platform and OS support.
l
igbvf Linux Driver for the Intel® Gigabit Family of Adapters (82575-, 82576-, I350-, and I354-based adapters)
l
ixgbevf Linux Driver for the Intel® 10 Gigabit Family of Adapters (82599- and X540-based adapters)
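The way virtual functions are created depends on the kernel and driver version; a hedged sketch, assuming a port exposed as eth0, kernel sysfs support for sriov_numvfs, and an ixgbe-based device (older driver releases used a max_vfs module parameter instead):
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
modprobe ixgbe max_vfs=4
Use one method or the other, not both; the virtual function devices that appear are then driven by the igbvf or ixgbevf driver listed above.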

Supported Network Connections

The following Intel network adapters are compatible with the drivers in this release:
Controller Adapter Name Board IDs Linux Base
Driver
82571EB Intel® PRO/1000 PT Dual Port Server Adapter C57721-xxx e1000e
82572EI Intel® PRO/1000 PT Server Adapter D28777-xxx and E55239-xxx e1000e
82572EI Intel® PRO/1000 PF Server Adapter D28779-xxx e1000e
82576 Intel® ET Quad Port Mezzanine Card G19945-xxx igb
82576 Intel® ET Dual Port Server Adapter G18758-xxx and G20882-xxx igb
82576 Intel® ET Quad Port Server Adapter G18771-xxx and G20885-xxx igb
I350 Intel® Gigabit 2P I350-t Adapter G15136-xxx and G32101-xxx igb
I350 Intel® Gigabit 4P I350-t Adapter G13158-xxx and G32100-xxx igb
I350 Intel® Gigabit 4P I350-t rNDC G10565-xxx igb
I350 Intel® Gigabit 4P X540/I350 rNDC G14843-xxx and G59298-xxx igb
I350 Intel® Gigabit 4P X520/I350 rNDC G61346-xxx igb
I350 Intel® Gigabit 4P I350-t Mezz G27581-xxx igb
I350 Intel® Gigabit 2P I350-t LOM n/a igb
I354 Intel® Ethernet Connection I354 1.0 GbE Backplane
82598GB Intel® 10 Gigabit XF SR Server Adapter D99083-xxx ixgbe
82598GB Intel® 10 Gigabit AT Server Adapter D79893-xxx and E84436-xxx ixgbe
82598EB Intel® 10 Gigabit AF DA Dual Port Server Adapter
82599 Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
82599 Intel® Ethernet Server Adapter X520-2 G18786-xxx and G20891-xxx
82599 Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
82599 Intel® Ethernet Server Adapter X520-T2 E76986-xxx and E92016-xxx
X540 Intel® Ethernet 10G 2P X540-t Adapter G35632-xxx
82599
X540 Intel® Ethernet 10G 4P X540/I350 rNDC G14843-xxx, G59298-xxx, and
82599 Intel® Ethernet 10G 4P X520/I350 rNDC G61346-xxx and G63668-xxx
82599 Intel® Ethernet 10G 2P X520-k bNDC G19030-xxx
Intel® Ethernet 10G 2P X520 Adapter
n/a igb
E45329-xxx, E92325-xxx and E45320-xxx
E62954-xxx
G18846-xxx
G28774-xxx and G38004-xxx
G33388-xxx
Linux Base Driver for the 82599- and X540-based adapters listed above: ixgbe or ixgbevf
I350 Intel® Gigabit 4P I350 bNDC H23791-xxx igb
I350 Intel® Gigabit 4P I350-t LOM n/a igb
I350 Intel® Gigabit 4P I350 LOM n/a igb
To verify your adapter is supported, find the board ID number on the adapter. Look for a label that has a barcode and a number in the format 123456-001 (six digits hyphen three digits). Match this to the list of numbers above.
For more information on how to identify your adapter or for the latest network drivers for Linux, see Customer Support.

Supported Linux Versions

Linux drivers are provided for the following versions:
Red Hat Enterprise Linux (RHEL):
l
RHEL 6.5 (Intel® 64 only)
SUSE Linux Enterprise Server (SUSE):
l
SLES 11 SP3 (Intel® 64 only)

Support

For general information and support, check with Customer Support.
If an issue is identified with the released source code on supported kernels with a supported adapter, email the specific information related to the issue to e1000e-devel@lists.sf.net.

e1000e Linux* Driver for the Intel® Gigabit Adapters

e1000e Overview
This file describes the Linux* Base Driver for the Gigabit Intel® Network Connections based on the Intel® 82571EB and 82572EI. This driver supports the 2.6.x and 3.x kernels.
This driver is only supported as a loadable module. Intel is not supplying patches against the kernel source to allow for static linking of the driver. For questions related to hardware requirements, refer to System Requirements. All hardware requirements listed apply to use with Linux.
The following features are now available in supported kernels:
l
Native VLANs
l
Channel Bonding (teaming)
l
SNMP
Adapter teaming is now implemented using the native Linux Channel bonding module. This is included in supported Linux kernels. Channel Bonding documentation can be found in the Linux kernel source: /Documentation/networking/bonding.txt
Use ethtool, lspci, or ifconfig to obtain driver information. Instructions on updating the ethtool can be found in the Additional Configurations section later in this page.
e1000e Linux Base Driver Supported Adapters
The following Intel network adapters are compatible with the e1000e driver in this release:
Controller Adapter Name Board IDs
82571EB Intel PRO/1000 PT Dual Port Server Adapter C57721-xxx
82572EI Intel PRO/1000 PT Server Adapter D28777-xxx and E55239-xxx
82572EI Intel PRO/1000 PF Server Adapter D28779-xxx
To verify your adapter is supported, find the board ID number on the adapter. Look for a label that has a barcode and a number in the format 123456-001 (six digits hyphen three digits). Match this to the list of numbers above.
For more information on how to identify your adapter or for the latest network drivers for Linux, see Customer Support.
Building and Installation
There are three methods for installing the e1000e driver:
l Install from Source Code
l Install Using KMP RPM
l Install Using KMOD RPM
Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
NOTES:
l
For the build to work properly it is important that the currently running kernel MATCH the version and con­figuration of the installed kernel source. If you have just recompiled your kernel, reboot the system and choose the correct kernel to boot.
l
RPM functionality has only been tested in Red Hat distributions.
1. Copy the base driver tar file from 'Linux/Source/base_driver/e1000e-<x.x.x>.tar.gz' on the driver CD, where <x.x.x> is the version number for the driver tar file, to the directory of your choice. For example, use '/home/username/e1000e' or '/usr/local/src/e1000e'.
2. Untar/unzip the archive, where <x.x.x> is the version number for the driver tar:
tar zxf e1000e-<x.x.x>.tar.gz
3. Change to the driver src directory, where <x.x.x> is the version number for the driver tar:
cd e1000e-<x.x.x>/src/
4. Compile the driver module:
# make install
The binary will be installed as:
/lib/modules/<KERNEL VERSION>/kernel/drivers/net/e1000e/e1000e.ko
The install locations listed above are the default location. This might differ for various Linux distributions. For more information, see the ldistrib.txt file included in the driver tar.
5. Install the module using the modprobe command:
modprobe e1000e
For 2.6 based kernels, make sure that the older e1000e drivers are removed from the kernel, before loading the new module:
rmmod e1000e; modprobe e1000e
6. Assign an IP address to and activate the Ethernet interface by entering the following, where <x> is the interface number:
ifconfig eth<x> <IP_address>
7. Verify that the interface works. Enter the following, where <IP_address> is the IP address for another machine on the same subnet as the interface that is being tested:
ping <IP_address>
NOTE: Some systems have trouble supporting MSI and/or MSI-X interrupts. If your system needs to disable this type of interrupt, the driver can be built and installed with the command:
# make CFLAGS_EXTRA=-DDISABLE_PCI_MSI install
Normally, the driver generates an interrupt every two seconds. If interrupts are not received in cat /proc/interrupts for the ethX e1000e device, then this workaround may be necessary.
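To confirm which driver and version an interface is actually using after installation, ethtool's driver query can help; a sketch, assuming the interface came up as eth0:
ethtool -i eth0
The output should list e1000e as the driver along with the version that was just built.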
Install Using KMP RPM
NOTE: KMP is only supported on RHEL 6 and SLES11.
The KMP RPMs update existing e1000e RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install.
The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
intel-<component name>-<component version>.<arch type>.rpm
For example, intel-e1000e-1.3.8.6-1.x86_64.rpm: e1000e is the component name; 1.3.8.6-1 is the component version; and x86_64 is the architecture type.
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
For example, intel-e1000e-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: e1000e is the component name; default is the kernel type; 1.3.8.6 is the component version; 2.6.27.19_5-1 is the kernel version; and x86_64 is the architecture type.
To install the KMP RPM, type the following two commands:
rpm -i <rpm filename>
rpm -i <kmp rpm filename>
For example, to install the e1000e KMP RPM package, type the following:
rpm -i intel-e1000e-1.3.8.6-1.x86_64.rpm
rpm -i intel-e1000e-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm
Install Using KMOD RPM
The KMOD RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
kmod-<driver name>-<version>-1.<arch type>.rpm
For example, kmod-e1000e-2.3.4-1.x86_64.rpm:
l
e1000e is the driver name
l
2.3.4 is the version
l
x86_64 is the architecture type
To install the KMOD RPM, go to the directory of the RPM and type the following command:
rpm -i <rpm filename>
For example, to install the e1000e KMOD RPM package, type the following:
rpm -i kmod-e1000e-2.3.4-1.x86_64.rpm
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
modprobe e1000e [<option>=<VAL1>,<VAL2>,...]
A value (<VAL#>) must be assigned to each network port in the system supported by this driver. The values are applied to each instance, in function order. For example:
modprobe e1000e InterruptThrottleRate=16000,16000
In this case, there are two network ports supported by e1000e in the system. The default value for each parameter is generally the recommended setting, unless otherwise noted.
The following parameters and possible values are available for modprobe commands:

InterruptThrottleRate
Valid Range/Settings: 0, 1, 3, 100-100000 (0=off, 1=dynamic, 3=dynamic conservative). InterruptThrottleRate is not supported on Intel 82542, 82543, or 82544-based adapters.
Default: 3
Description: The driver can limit the number of interrupts per second that the adapter will generate for incoming packets. It does this by writing a value to the adapter that is based on the maximum number of interrupts that the adapter will generate per second.
Setting InterruptThrottleRate to a value greater or equal to 100 will program the adapter to send out a maximum of that many interrupts per second, even if more packets have come in. This reduces interrupt load on the system and can lower CPU utilization under heavy load, but will increase latency as packets are not processed as quickly.
The default behavior of the driver previously assumed a static InterruptThrottleRate value of 8000, providing a good fallback value for all traffic types, but lacking in small packet performance and latency.
In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 for traffic that falls in class "Bulk traffic". If traffic falls in the "Low latency" or "Lowest latency" class, the InterruptThrottleRate is increased stepwise to 20000. This default mode is suitable for most applications.
For situations where low latency is vital such as cluster or grid computing, the algorithm can reduce latency even more when InterruptThrottleRate is set to mode 1. In this mode, which operates the same as mode 3, the InterruptThrottleRate will be increased stepwise to 70000 for traffic in class "Lowest latency".
In simplified mode the interrupt rate is based on the ratio of tx and rx traffic. If the bytes-per-second rates are approximately equal, the interrupt rate will drop as low as 2000 interrupts per second. If the traffic is mostly transmit or mostly receive, the interrupt rate could be as high as 8000.
Setting InterruptThrottleRate to 0 turns off any interrupt moderation and may improve small packet latency, but is generally not suitable for bulk throughput traffic.
NOTE: When e1000e is loaded with default settings and multiple adapters are in use simultaneously, the CPU utilization may increase non-linearly. In order to limit the CPU utilization without impacting the overall throughput, load the driver as follows:
modprobe e1000e InterruptThrottleRate=3000,3000,3000
This sets the InterruptThrottleRate to 3000 interrupts/sec for the first, second, and third instances of the driver. The range of 2000 to 3000 interrupts per second works on a majority of systems and is a good starting point, but the optimal value will be platform-specific. If CPU utilization is not a concern, use RX_POLLING (NAPI) and default driver settings.
NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and RxAbsIntDelay parameters. In other words, minimizing the receive and/or transmit absolute delays does not force the controller to generate more interrupts than what the Interrupt Throttle Rate allows.

RxIntDelay
Valid Range/Settings: 0-65535 (0=off)
Default: 0
Description: This value delays the generation of receive interrupts in units of 1.024 microseconds. Receive interrupt reduction can improve CPU efficiency if properly tuned for specific network traffic. Increasing this value adds extra latency to frame reception and can end up decreasing the throughput of TCP traffic. If the system is reporting dropped receives, this value may be set too high, causing the driver to run out of available receive descriptors.
CAUTION: When setting RxIntDelay to a value other than 0, adapters may hang (stop transmitting) under certain network conditions. If this occurs a NETDEV WATCHDOG message is logged in the system event log. In addition, the controller is automatically reset, restoring the network connection. To eliminate the potential for the hang ensure that RxIntDelay is set to zero.

RxAbsIntDelay
Valid Range/Settings: 0-65535 (0=off)
Default: 8
Description: This value, in units of 1.024 microseconds, limits the delay in which a receive interrupt is generated. Useful only if RxIntDelay is non-zero, this value ensures that an interrupt is generated after the initial packet is received within the set amount of time. Proper tuning, along with RxIntDelay, may improve traffic throughput in specific network conditions. (Supported on Intel 82540, 82545, and later adapters only.)

TxIntDelay
Valid Range/Settings: 0-65535 (0=off)
Default: 8
Description: This value delays the generation of transmit interrupts in units of 1.024 microseconds. Transmit interrupt reduction can improve CPU efficiency if properly tuned for specific network traffic. If the system is reporting dropped transmits, this value may be set too high causing the driver to run out of available transmit descriptors.

TxAbsIntDelay
Valid Range/Settings: 0-65535 (0=off)
Default: 32
Description: This value, in units of 1.024 microseconds, limits the delay in which a transmit interrupt is generated. Useful only if TxIntDelay is non-zero, this value ensures that an interrupt is generated after the initial packet is sent on the wire within the set amount of time. Proper tuning, along with TxIntDelay, may improve traffic throughput in specific network conditions. (Supported on Intel 82540, 82545 and later adapters only.)

copybreak
Valid Range/Settings: 0-xxxxxxx (0=off)
Default: 256
Usage: modprobe e1000e copybreak=128
Description: The driver copies all packets below or equaling this size to a fresh receive buffer before handing it up the stack. This parameter is different than other parameters, in that it is a single (not 1,1,1, etc.) parameter applied to all driver instances and it is also available during runtime at /sys/module/e1000e/parameters/copybreak.

SmartPowerDownEnable
Valid Range/Settings: 0-1
Default: 0 (disabled)
Description: This value allows the PHY to turn off in lower power states. This parameter can be turned off in supported chipsets.

KumeranLockLoss
Valid Range/Settings: 0-1
Default: 1 (enabled)
Description: This value skips resetting the PHY at shutdown for the initial silicon releases of ICH8 systems.

IntMode
Valid Range/Settings: 0-2 (0=legacy, 1=MSI, 2=MSI-X)
Default: 2 (MSI-X)
Description: IntMode allows changing the interrupt mode at module load time without requiring a recompile. If the driver load fails to enable a specific interrupt mode, the driver will try other interrupt modes, from least to most compatible. The interrupt order is MSI-X, MSI, Legacy. If specifying MSI interrupts (IntMode=1), only MSI and Legacy will be attempted.

CrcStripping
Valid Range/Settings: 0-1
Default: 1 (enabled)
Description: This strips the CRC from received packets before sending them up the network stack. If you have a system with BMC enabled but cannot receive IPMI traffic after loading or enabling the driver, try disabling this feature.

EEE
Valid Range/Settings: 0-1
Default: 1 (enabled for parts supporting EEE)
Description: This option allows for the ability of IEEE 802.3az, Energy Efficient Ethernet (EEE), to be advertised to the link partner on parts supporting EEE. EEE saves energy by putting the device into a low-power state when the link is idle, but only when the link partner also supports EEE and after the feature has been enabled during link negotiation. It is not necessary to disable the advertisement of EEE when connected with a link partner that does not support EEE.
NOTE: EEE is disabled by default on all I350-based adapters.

Node
Valid Range/Settings: 0-n, where n is the number of the NUMA node that should be used to allocate memory for this adapter port. -1 uses the driver default of allocating memory on whichever processor is running modprobe.
Default: -1 (off)
Description: The Node parameter allows you to choose which NUMA node you want to have the adapter allocate memory from. All driver structures, in-memory queues, and receive buffers will be allocated on the node specified. This parameter is only useful when interrupt affinity is specified, otherwise some portion of the time the interrupt could run on a different core than the memory is allocated on, causing slower memory access and impacting throughput, CPU, or both.
Additional Configurations
Configuring the Driver on Different Distributions
Configuring a network driver to load properly when the system is started is distribution-dependent. Typically, the con­figuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well as editing other sys­tem startup scripts and/or configuration files. Many Linux distributions ship with tools to make these changes for you. To learn the proper way to configure a network device for your system, refer to your distribution documentation. If during this process you are asked for the driver or module name, the name for the Linux Base Driver for the Intel Gigabit Fam­ily of Adapters is e1000e.
As an example, if you install the e1000e driver for two Intel Gigabit adapters (eth0 and eth1) and want to set the interrupt mode to MSI-X and MSI, respectively, add the following to modules.conf:
alias eth0 e1000e
alias eth1 e1000e
options e1000e IntMode=2,1
Viewing Link Messages
Link messages will not be displayed to the console if the distribution is restricting system messages. In order to see net­work driver link messages on your console, set dmesg to eight by entering the following:
dmesg -n 8
NOTE: This setting is not saved across reboots.
Jumbo Frames
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU) to a value larger than the default of 1500 bytes. Use the ifconfig command to increase the MTU size. For example:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots. The setting change can be made permanent by adding MTU = 9000 to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>, in Red Hat distributions. Other distributions may store this set-
ting in a different location.
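A minimal sketch of such an interface configuration file, assuming a Red Hat style ifcfg file with placeholder interface name and addresses, might look like the following:
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values only)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
MTU=9000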
NOTES:
l
Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or loss of link.
l
To enable Jumbo Frames, increase the MTU size on the interface beyond 1500.
l
The maximum Jumbo Frames size is 9238 bytes, with a corresponding MTU size of 9216 bytes. The adapters with this limitation are based on the Intel® 82571EB and 82572EI LAN controllers. These cor­respond to the following product names:
Intel PRO/1000 PT Dual Port Server Adapter
Intel PRO/1000 PT Server Adapter
Intel PRO/1000 PF Server Adapter
ethtool
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical inform­ation. ethtool version 3.0 or later is required for this functionality, although we strongly recommend downloading the latest version at: http://ftp.kernel.org/pub/software/network/ethtool/.
NOTE: When validating enable/disable tests on some parts (for example, 82578) you need to add a few seconds between tests when working with ethtool.
Speed and Duplex
Speed and Duplex are configured through the ethtool utility. ethtool is included with all Red Hat versions 7.2 or later. For other Linux distributions, download and install ethtool from the following website: http://sourceforge.net/projects/gkernel.
Enabling Wake on LAN*
Wake on LAN (WOL) is configured through the ethtool* utility. ethtool is included with all versions of Red Hat after Red Hat 7.2. For other Linux distributions, download and install ethtool from the following website: http://sourceforge.net/projects/gkernel.
For instructions on enabling WOL with ethtool, refer to the website listed above.
WOL will be enabled on the system during the next shutdown or reboot. For this driver version, in order to enable WOL, the e1000e driver must be loaded prior to shutting down or suspending the system.
NOTE: Wake on LAN is only supported on port A of multi-port devices.
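As an illustrative sketch (the interface name is a placeholder and the supported wake options vary by adapter), WOL for magic packets can typically be enabled and then verified with ethtool as follows:
ethtool -s eth0 wol g
ethtool eth0 | grep Wake-on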
NAPI
NAPI (Rx polling mode) is supported in the e1000e driver. NAPI can reduce the overhead of packet receiving by using polling instead of interrupt-based processing. NAPI is enabled by default. To override the default, use the following compile-time flags.
To disable NAPI, specify this additional compiler flag when compiling the driver module:
# make CFLAGS_EXTRA=-DE1000E_NO_NAPI install
To enable NAPI, specify this additional compiler flag when compiling the driver module:
# make CFLAGS_EXTRA=-DE1000E_NAPI install
See http://www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
Enabling a Separate Vector for TX
# make CFLAGS_EXTRA=-DCONFIG_E1000E_SEPARATE_TX_HANDLER
This allows a separate handler for transmit cleanups. This may be useful if you have many CPU cores under heavy load and want to distribute the processing load.
With this option, three MSI-X vectors are used: one for TX, one for RX and one for link.
Known Issues
Detected Tx Unit Hang in Quad Port Adapters
In some cases ports 3 and 4 don't pass traffic and report 'Detected Tx Unit Hang' followed by 'NETDEV WATCHDOG: ethX: transmit timed out' errors. Ports 1 and 2 don't show any errors and will pass traffic.
This issue may be resolved by updating to the latest kernel and BIOS. You are encouraged to run an operating system that fully supports MSI interrupts. You can check your system's BIOS by downloading the Linux Firmware Developer Kit at http://www.linuxfirmwarekit.org/.
Dropped Receive Packets on Half-duplex 10/100 Networks
If you have an Intel PCI Express adapter running at 10 Mbps or 100 Mbps, half-duplex, you may observe occasional dropped receive packets. There are no workarounds for this problem in this network configuration. The network must be updated to operate in full-duplex and/or 1000 Mbps only.
Compiling the Driver
When trying to compile the driver by running make install, the following error may occur:
"Linux kernel source not configured - missing version.h"
To solve this issue, create the version.h file by going to the Linux Kernel source tree and entering:
# make include/linux/version.h
Performance Degradation with Jumbo Frames
Degradation in throughput performance may be observed in some Jumbo frames environments. If this is observed, increasing the application's socket buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help. For more details, see the specific application documentation and the text file located in /usr/src/linux*/Documentation/networking/ip-sysctl.txt.
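For example, a hedged sketch of raising the TCP read and write buffer limits with sysctl (the values shown are placeholders and should be tuned for your workload and memory size):
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"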
Jumbo frames on Foundry BigIron 8000 Switch
There is a known issue using Jumbo frames when connected to a Foundry BigIron 8000 switch. This is a 3rd party lim­itation. If you experience loss of packets, lower the MTU size.
Allocating Rx Buffers when Using Jumbo Frames
Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if the available memory is heavily frag­mented. This issue may be seen with PCI-X adapters or with packet split disabled. This can be reduced or eliminated by changing the amount of available memory for receive buffer allocation, by increasing /proc/sys/vm/min_free_kbytes.
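For example (the value shown is only an illustrative starting point, not a recommendation):
echo 65536 > /proc/sys/vm/min_free_kbytes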
Multiple Interfaces on Same Ethernet Broadcast Network
Due to the default ARP behavior on Linux, it is not possible to have one system on two IP networks in the same Eth­ernet broadcast domain (non-partitioned switch) behave as expected. All Ethernet interfaces will respond to IP traffic for any IP address assigned to the system. This results in unbalanced receive traffic.
If you have multiple interfaces in a server, turn on ARP filtering by entering:
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
(this only works if your kernel's version is higher than 2.4.5).
NOTE: This setting is not saved across reboots. However, this configuration change can be made permanent through one of the following methods:
l
Add the following line to /etc/sysctl.conf:
net.ipv4.conf.all.arp_filter = 1
l
Install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs).
Disable Rx Flow Control with ethtool
In order to disable receive flow control using ethtool, you must turn off auto-negotiation on the same command line. For example:
ethtool -A eth? autoneg off rx off
Unplugging Network Cable While ethtool -p is Running
In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging the network cable while ethtool -p is running will cause the system to become unresponsive to keyboard commands, except for control-alt-delete. Restarting the system appears to be the only remedy.
MSI-X Issues with Kernels Between 2.6.19 and 2.6.21 (inclusive)
Kernel panics and instability may be observed on any MSI-X hardware if you use irqbalance with kernels between
2.6.19 and 2.6.21. If these types of problems are encountered, you may disable the irqbalance daemon or upgrade to a newer kernel.
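For example, on distributions of that era using SysV init scripts (the exact service management commands vary by distribution), the daemon can be stopped and prevented from starting at boot with:
service irqbalance stop
chkconfig irqbalance off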
Rx Page Allocation Errors
Page allocation failure order:0 errors may occur under stress with kernels 2.6.25 and above. This is caused by the way the Linux kernel reports this stressed condition.
Activity LED Blinks Unexpectedly
If a system based on the 82577, 82578, or 82579 controller is connected to a hub, the Activity LED will blink for all net­work traffic present on the hub. Connecting the system to a switch or router will filter out most traffic not addressed to the local port.
Link May Take Longer Than Expected
With some PHY and switch combinations, link can take longer than expected. This can be an issue on Linux dis­tributions that timeout when checking for link prior to acquiring a DHCP address; however, there is usually a way to work around this (for example, set LINKDELAY in the interface configuration on RHEL).
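As a sketch for a Red Hat style configuration (the interface name and delay value are placeholders), add a line such as the following to /etc/sysconfig/network-scripts/ifcfg-eth0:
LINKDELAY=30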

igb Linux* Driver for the Intel® Gigabit Adapters

igb Overview
This file describes the Linux* Base Driver for the Gigabit Intel® Network Connections based on the Intel® 82575EB, the Intel® 82576, the Intel® I350, and the Intel® I354. This driver supports the 2.6.x and 3.x kernels.
This driver is only supported as a loadable module. Intel is not supplying patches against the kernel source to allow for static linking of the driver. For questions related to hardware requirements, refer to System Requirements. All hardware requirements listed apply to use with Linux.
The following features are now available in supported kernels:
l
Native VLANs
l
Channel Bonding (teaming)
l
SNMP
Adapter teaming is now implemented using the native Linux Channel bonding module. This is included in supported Linux kernels. Channel Bonding documentation can be found in the Linux kernel source: /doc­umentation/networking/bonding.txt
The igb driver supports IEEE time stamping for kernels 2.6.30 and above. A basic tutorial for the technology can be found here.
Use ethtool, lspci, or ifconfig to obtain driver information. Instructions on updating ethtool can be found in the Additional Configurations section later in this page.
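For example, to display the driver name, version, and firmware information for an interface (the interface name is a placeholder):
ethtool -i eth0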
igb Linux Base Driver Supported Network Connections
The following Intel network adapters are compatible with the igb driver in this release:
Controller Adapter Name Board IDs
82576 Intel® Gigabit ET Quad Port Mezzanine Card G19945-xxx
82576 Intel® Gigabit ET Dual Port Server Adapter G18758-xxx and G20882-xxx
82576 Intel® Gigabit ET Quad Port Server Adapter G18771-xxx and G20885-xxx
I350 Intel® Gigabit 2P I350-t Adapter G15136-xxx and G32101-xxx
I350 Intel® Gigabit 4P I350-t Adapter G13158-xxx and G32100-xxx
I350 Intel® Gigabit 4P I350-t rNDC G10565-xxx
I350 Intel® Gigabit 4P X540/I350 rNDC G14843-xxx and G59298-xxx
I350 Intel® Gigabit 4P X520/I350 rNDC G61346-xxx
I350 Intel® Gigabit 4P I350-t Mezz G27581-xxx
I350 Intel® Gigabit 2P I350-t LOM n/a
I350 Intel® Gigabit 2P I350 LOM n/a
I350 Intel® Gigabit 4P X710/I350 rNDC H20674-xxx
I350 Intel® Gigabit 4P I350 bNDC H23791-xxx
To verify your adapter is supported, find the board ID number on the adapter. Look for a label that has a barcode and a number in the format 123456-001 (six digits hyphen three digits). Match this to the list of numbers above.
For more information on how to identify your adapter or for the latest network drivers for Linux, see Customer Support.
Building and Installation
There are three methods for installing the igb driver:
l Install from Source Code
l Install Using KMP RPM
l Install Using KMOD RPM
Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
NOTES:
l
For the build to work properly it is important that the currently running kernel MATCH the version and con­figuration of the installed kernel source. If you have just recompiled your kernel, reboot the system.
l
RPM functionality has only been tested in Red Hat distributions.
1. Copy the base driver tar file from 'Linux/Source/base_driver/igb-<x.x.x>.tar.gz' on the driver CD, where <x.x.x> is the version number for the driver tar file, to the directory of your choice. For example, use '/home/username/igb' or '/usr/local/src/igb'.
2. Untar/unzip the archive, where <x.x.x> is the version number for the driver tar:
tar zxf igb-<x.x.x>.tar.gz
3. Change to the driver src directory, where <x.x.x> is the version number for the driver tar:
cd igb-<x.x.x>/src/
4. Compile the driver module:
# make install
The binary will be installed as:
/lib/modules/<KERNEL VERSION>/kernel/drivers/net/igb/igb.ko
The install locations listed above are the default locations. This might differ for various Linux distributions. For more information, see the ldistrib.txt file included in the driver tar.
5. Install the module using the modprobe command:
modprobe igb
For 2.6 based kernels, make sure that the older igb drivers are removed from the kernel, before loading the new module:
rmmod igb; modprobe igb
6. Assign an IP address to and activate the Ethernet interface by entering the following, where <x> is the interface number:
ifconfig eth<x> <IP_address> up
7. Verify that the interface works. Enter the following, where <IP_address> is the IP address for another machine on the same subnet as the interface that is being tested:
ping <IP_address>
NOTE: Some systems have trouble supporting MSI and/or MSI-X interrupts. If your system needs to disable this type of interrupt, the driver can be built and installed with the command:
# make CFLAGS_EXTRA=-DDISABLE_PCI_MSI install
Normally, the driver generates an interrupt every two seconds. If interrupts are not received in cat /proc/interrupts for the ethX igb device, then this workaround may be necessary.
To build igb driver with DCA
If your kernel supports DCA, the driver will build by default with DCA enabled.
Install Using KMP RPM
NOTE: KMP is only supported on RHEL 6 and SLES11.
The KMP RPMs update existing igb RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install.
The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
intel-<component name>-<component version>.<arch type>.rpm
For example, intel-igb-1.3.8.6-1.x86_64.rpm: igb is the component name; 1.3.8.6-1 is the component version; and x86_ 64 is the architecture type.
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
For example, intel-igb-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: igb is the component name; default is the kernel type; 1.3.8.6 is the component version; 2.6.27.19_5-1 is the kernel version; and x86_64 is the architecture type.
To install the KMP RPM, type the following two commands:
rpm -i <rpm filename>
rpm -i <kmp rpm filename>
For example, to install the igb KMP RPM package, type the following:
rpm -i intel-igb-1.3.8.6-1.x86_64.rpm
rpm -i intel-igb-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm
Install Using KMOD RPM
The KMOD RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
kmod-<driver name>-<version>-1.<arch type>.rpm
For example, kmod-igb-2.3.4-1.x86_64.rpm:
l
igb is the driver name
l
2.3.4 is the version
l
x86_64 is the architecture type
To install the KMOD RPM, go to the directory of the RPM and type the following command:
rpm -i <rpm filename>
For example, to install the igb KMOD RPM package from RHEL 6.4, type the following:
rpm -i kmod-igb-2.3.4-1.x86_64.rpm
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
modprobe igb [<option>=<VAL1>,<VAL2>,...]
A value (<VAL#>) must be assigned to each network port in the system supported by this driver. The values are applied to each instance, in function order. For example:
modprobe igb InterruptThrottleRate=16000,16000
In this case, there are two network ports supported by igb in the system. The default value for each parameter is gen­erally the recommended setting, unless otherwise noted.
The following table contains parameters and possible values for modprobe commands:
Parameter Name Valid Range/Settings Default Description
InterruptThrottleRate 0, 1, 3, 100-100000 (0=off, 1=dynamic, 3=dynamic conservative) 3 The driver can limit the number of interrupts per second that the
adapter will generate for incoming packets. It does this by writ­ing a value to the adapter that is based on the maximum num­ber of interrupts that the adapter will generate per second.
Setting InterruptThrottleRate to a value greater or equal to 100 will program the adapter to send out a maximum of that many interrupts per second, even if more packets have come in. This reduces interrupt load on the system and can lower CPU util­ization under heavy load, but will increase latency as packets are not processed as quickly.
The default behavior of the driver previously assumed a static InterruptThrottleRate value of 8000, providing a good fallback value for all traffic types, but lacking in small packet per­formance and latency.
The driver has two adaptive modes (setting 1 or 3) in which it dynamically adjusts the InterruptThrottleRate value based on the traffic that it receives. After determining the type of incoming traffic in the last timeframe, it will adjust the Inter­ruptThrottleRate to an appropriate value for that traffic.
The algorithm classifies the incoming traffic every interval into classes. Once the class is determined, the InterruptThrottleRate value is adjusted to suit that traffic type the best. There are three classes defined: "Bulk traffic", for large amounts of packets of normal size; "Low latency", for small amounts of traffic and/or a significant percentage of small packets; and "Lowest latency", for almost completely small packets or minimal traffic.
In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 for traffic that falls in class "Bulk traffic". If traffic falls in the "Low latency" or "Lowest latency" class, the Inter­ruptThrottleRate is increased stepwise to 20000. This default mode is suitable for most applications.
For situations where low latency is vital such as cluster or grid computing, the algorithm can reduce latency even more when InterruptThrottleRate is set to mode 1. In this mode, which oper­ates the same as mode 3, the InterruptThrottleRate will be increased stepwise to 70000 for traffic in class "Lowest latency".
Setting InterruptThrottleRate to 0 turns off any interrupt mod­eration and may improve small packet latency, but is generally not suitable for bulk throughput traffic.
NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and RxAbsIntDelay parameters. In other words, minimizing the receive and/or transmit absolute delays does not force the controller to generate more interrupts than what the Interrupt Throttle Rate allows.
Parameter Name Valid Range/Settings Default Description
LLIPort 0-65535 0 (disabled) LLIPort configures the port for Low Latency Interrupts (LLI). Low Latency Interrupts allow for immediate generation of an interrupt upon processing receive packets that match certain criteria as set by the parameters described below. LLI parameters are not enabled when Legacy interrupts are used. You must be using MSI or MSI-X (see cat /proc/interrupts) to successfully use LLI.
For example, using LLIPort=80 would cause the board to generate an immediate interrupt upon receipt of any packet sent to TCP port 80 on the local machine.
CAUTION: Enabling LLI can result in an excessive number of interrupts/second that may cause problems with the system and in some cases may cause a kernel panic.
LLIPush 0-1 0 (disabled) LLIPush can be set to enabled or disabled (default). It is most effective in an environment with many small transactions.
NOTE: Enabling LLIPush may allow a denial of service attack.
LLISize 0-1500 0 (disabled) LLISize causes an immediate interrupt if the board receives a packet smaller than the specified size.
IntMode 0-2 2 This allows load time control over the interrupt type registered by the driver. MSI-X is required for multiple queue support. Some kernels and combinations of kernel .config options will force a lower level of interrupt support. 'cat /proc/interrupts' will show different values for each type of interrupt.
0 = Legacy Interrupts. 1 = MSI Interrupts. 2 = MSI-X interrupts (default).
RSS 0-8 1 0 = Assign up to the lesser of the number of CPUs or the number of queues. X = Assign X queues, where X is less than or equal to the maximum number of queues. The driver allows the maximum supported queue value. For example, I350-based adapters allow RSS=8, where 8 is the maximum number of queues allowed.
NOTE: For 82575-based adapters, the maximum number of queues is 4. For 82576-based and newer adapters, it is 8.
This parameter is also affected by the VMDQ parameter in that it will limit the queues more, as shown in the following table (maximum RSS queues for each VMDQ setting):
Model 0 1 2 3+
82575 4 4 3 1
82576 8 2 2 2
Parameter Name Valid Range/Settings Default Description
VMDQ 0-4 for 82575-based adapters; 0-8 for 82576-based adapters 0 This supports enabling VMDq pools, which is needed to support SR-IOV.
This parameter is forced to 1 or more if the max_vfs module parameter is used. In addition, the number of queues available for RSS is limited if this is set to 1 or greater.
0 = Disabled
1 = Sets the netdev as pool 0
2 or greater = Add additional queues. However, these are currently not used.
NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled.
max_vfs 0-7 0 This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs worth of virtual functions.
If the value is greater than 0, it will force the VMDQ parameter to equal 1 or more.
NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled. Please remove the old VLAN filter before the new VLAN filter is added. For example,
ip link set eth0 vf 0 vlan 100 // set VLAN 100 for VF 0
ip link set eth0 vf 0 vlan 0 // delete VLAN 100
ip link set eth0 vf 0 vlan 200 // set a new VLAN 200 for VF 0
QueuePairs 0-1 1 This option can be overridden to 1 if there are not sufficient inter-
rupts available. This can occur if any combination of RSS, VMDQ and max_vfs results in more than 4 queues being used.
0 = When MSI-X is enabled, the TX and RX will attempt to occupy separate vectors. 1 = TX and RX are paired onto one interrupt vector (default).
Node
0-n, where n is the number of the NUMA node that should be used to allocate memory for this adapter port.
-1, uses the driver default of allocating memory on whichever pro­cessor is run­ning modprobe.
-1 (off) The Node parameter allows you to choose which NUMA node you want to have the adapter allocate memory from. All driver structures, in-memory queues, and receive buffers will be alloc­ated on the node specified. This parameter is only useful when interrupt affinity is specified, otherwise some portion of the time the interrupt could run on a different core than the memory is allocated on, causing slower memory access and impacting throughput, CPU, or both.
Parameter Name Valid Range/Settings Default Description
EEE 0-1 1 (enabled) This option allows IEEE 802.3az, Energy Efficient Ethernet (EEE), to be advertised to the link partner on parts supporting EEE.
A link between two EEE-compliant devices will result in periodic bursts of data followed by periods where the link is in an idle state. This Low Power Idle (LPI) state is supported at both 1 Gbps and 100 Mbps link speeds.
NOTES:
l EEE support requires auto-negotiation.
l EEE is disabled by default on all I350-based adapters.
DMAC 0, 250, 500, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000 0 (disabled) Enables or disables the DMA Coalescing feature. Values are in microseconds and set the DMA Coalescing feature's internal timer. Direct Memory Access (DMA) allows the network device to move packet data directly to the system's memory, reducing CPU utilization. However, the frequency and random intervals at which packets arrive do not allow the system to enter a lower power state. DMA Coalescing allows the adapter to collect packets before it initiates a DMA event. This may increase network latency but also increases the chances that the system will enter a lower power state.
Turning on DMA Coalescing may save energy with kernel 2.6.32 and later. This gives the greatest chance for your system to consume less power. DMA Coalescing is effective in helping to save platform power only when it is enabled across all active ports.
InterruptThrottleRate (ITR) should be set to dynamic. When ITR=0, DMA Coalescing is automatically disabled.
A whitepaper containing information on how to best configure your platform is available on the Intel website.
MDD 0-1 1 (enabled) The Malicious Driver Detection (MDD) parameter is only relevant for I350 devices operating in SR-IOV mode. When this parameter is set, the driver detects a malicious VF driver and disables its TX/RX queues until a VF driver reset occurs.
Additional Configurations
Configuring the Driver on Different Distributions
Configuring a network driver to load properly when the system is started is distribution dependent. Typically, the con­figuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well as editing other sys­tem startup scripts and/or configuration files. Many Linux distributions ship with tools to make these changes for you. To learn the proper way to configure a network device for your system, refer to your distribution documentation. If during this process you are asked for the driver or module name, the name for the Linux Base Driver for the Intel Gigabit Fam­ily of Adapters is igb.
As an example, if you install the igb driver for two Intel Gigabit adapters (eth0 and eth1) and want to set the interrupt mode to MSI-X and MSI, respectively, add the following to modules.conf:
alias eth0 igb
alias eth1 igb
options igb IntMode=2,1
Viewing Link Messages
Link messages will not be displayed to the console if the distribution is restricting system messages. In order to see net­work driver link messages on your console, set dmesg to eight by entering the following:
dmesg -n 8
NOTE: This setting is not saved across reboots.
Jumbo Frames
Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500 bytes. Use the ifcon­fig command to increase the MTU size. For example:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots. The setting change can be made permanent by adding MTU = 9000 to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>, in Red Hat distributions. Other distributions may store this set-
ting in a different location.
NOTES:
l
Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or loss of link.
l
To enable Jumbo Frames, increase the MTU size on the interface beyond 1500.
l
The maximum Jumbo Frames size is 9234 bytes, with a corresponding MTU size of 9216 bytes.
ethtool
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical inform­ation. ethtool version 3 or later is required for this functionality, although we strongly recommend downloading the latest version at: http://ftp.kernel.org/pub/software/network/ethtool/.
Speed and Duplex Configuration
In the default mode, an Intel® Network Adapter using copper connections will attempt to auto-negotiate with its link part­ner to determine the best setting. If the adapter cannot establish link with the link partner using auto-negotiation, you may need to manually configure the adapter and link partner to identical settings to establish link and pass packets. This should only be needed when attempting to link with an older switch that does not support auto-negotiation or one that has been forced to a specific speed or duplex mode.
Your link partner must match the setting you choose. Fiber-based adapters operate only in full duplex, and only at their native speed.
Speed and Duplex are configured through the ethtool* utility. ethtool is included with all versions of Red Hat after Red Hat 6.2. For other Linux distributions, download and install ethtool from the following website: http://ftp.kernel.org/pub/software/network/ethtool/.
CAUTION: Only experienced network administrators should force speed and duplex manually. The settings at the switch must always match the adapter settings. Adapter performance may suffer or your adapter may not operate if you configure the adapter differently from your switch.
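As an illustrative sketch only (the interface name and values are placeholders), forcing 100 Mbps full duplex with auto-negotiation disabled might look like:
ethtool -s eth0 speed 100 duplex full autoneg off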
Enabling Wake on LAN*
Wake on LAN (WOL) is configured through the ethtool* utility. ethtool is included with all versions of Red Hat after Red Hat 7.2. For other Linux distributions, download and install ethtool from the following website: http://ftp.kernel.org/pub/software/network/ethtool/.
For instructions on enabling WOL with ethtool, refer to the website listed above.
WOL will be enabled on the system during the next shut down or reboot. For this driver version, in order to enable WOL, the igb driver must be loaded prior to shutting down or suspending the system.
NOTE: Wake on LAN is only supported on port A of multi-port devices.
Multiqueue
In this mode, a separate MSI-X vector is allocated for each queue and one for “other” interrupts such as link status change and errors. All interrupts are throttled via interrupt moderation. Interrupt moderation must be used to avoid inter­rupt storms while the driver is processing one interrupt. The moderation value should be at least as large as the expec­ted time for the driver to process an interrupt. Multiqueue is off by default.
MSI-X support is required for Multiqueue. If MSI-X is not found, the system will fallback to MSI or to Legacy interrupts. This driver supports multiqueue in kernel versions 2.6.24 and greater and supports receive multiqueue on all kernels supporting MSI-X.
NOTES:
l
Do not use MSI-X with the 2.6.19 or 2.6.20 kernels. It is recommended to use the 2.6.21 or later ker­nel.
l
Some kernels require a reboot to switch between single queue mode and multiqueue modes or vice-versa.
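To confirm which interrupt vectors are in use for an interface (the interface name is a placeholder), you can inspect /proc/interrupts, for example:
cat /proc/interrupts | grep eth0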
Large Receive Offload (LRO)
Large Receive Offload (LRO) is a technique for increasing inbound throughput of high-bandwidth network connections by reducing CPU overhead. It works by aggregating multiple incoming packets from a single stream into a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed. LRO combines multiple Ethernet frames into a single receive in the stack, thereby potentially decreasing CPU util­ization for receives.
NOTE: LRO requires 2.6.22 or later kernel version.
IGB_LRO is a compile time flag. It can be enabled at compile time to add support for LRO from the driver. The flag is used by adding CFLAGS_EXTRA="-DIGB_LRO" to the make file when it is being compiled. For example:
# make CFLAGS_EXTRA="-DIGB_LRO" install
You can verify that the driver is using LRO by looking at these counters in ethtool:
l
lro_aggregated - count of total packets that were combined
l
lro_flushed - counts the number of packets flushed out of LRO
l
lro_no_desc - counts the number of times an LRO descriptor was not available for the LRO packet
NOTE: IPv6 and UDP are not supported by LRO.
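For example (the interface name is a placeholder), the LRO counters can be displayed with the ethtool statistics command:
ethtool -S eth0 | grep lro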
IEEE 1588 Precision Time Protocol (PTP) Hardware Clock (PHC)
Precision Time Protocol (PTP) is an implementation of the IEEE 1588 specification allowing network cards to syn­chronize their clocks over a PTP-enabled network. It works through a series of synchronization and delay notification transactions that allow a software daemon to implement a PID controller to synchronize the network card clocks.
NOTE: PTP requires a 3.0.0 or later kernel version with PTP support enabled in the kernel and a user-space soft­ware daemon.
IGB_PTP is a compile time flag. The user can enable it at compile time to add support for PTP from the driver. The flag is used by adding CFLAGS_EXTRA="-DIGB_PTP" to the make file when it's being compiled:
make CFLAGS_EXTRA="-DIGB_PTP" install
NOTE: The driver will fail to compile if your kernel does not support PTP.
You can verify that the driver is using PTP by looking at the system log to see whether a PHC was attempted to be registered or not. If you have a kernel and version of ethtool with PTP support, you can check the PTP support in the driver by executing:
ethtool -T ethX
MAC and VLAN anti-spoofing feature
When a malicious driver attempts to send a spoofed packet, it is dropped by the hardware and not transmitted. An inter­rupt is sent to the PF driver notifying it of the spoof attempt.
When a spoofed packet is detected the PF driver will send the following message to the system log (displayed by the "dmesg" command):
Spoof event(s) detected on VF(n)
Where n=the VF that attempted to do the spoofing.
Setting MAC Address, VLAN and Rate Limit Using IProute2 Tool
You can set a MAC address of a Virtual Function (VF), a default VLAN and the rate limit using the IProute2 tool. Down­load the latest version of the iproute2 tool from Sourceforge if your version does not have all the features you require.
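As a sketch only (the interface name, MAC address, VLAN ID, and rate value are placeholders), recent iproute2 versions support commands of the following form:
ip link set eth0 vf 0 mac 00:1B:21:12:34:56
ip link set eth0 vf 0 vlan 100
ip link set eth0 vf 0 rate 1000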
Known Issues
Using the igb Driver on 2.4 or Older 2.6 Based Kernels
Due to limited support for PCI Express in 2.4 kernels and older 2.6 kernels, the igb driver may run into interrupt related problems on some systems, such as no link or hang when bringing up the device.
It is recommended to use the newer 2.6 based kernels, as these kernels correctly configure the PCI Express configuration space of the adapter and all intervening bridges. If you are required to use a 2.4 kernel, use a 2.4 kernel newer than 2.4.30. For 2.6 kernels, use the 2.6.21 kernel or newer.
Alternatively, on 2.6 kernels you may disable MSI support in the kernel by booting with the "pci=nomsi" option or per­manently disable MSI support in your kernel by configuring your kernel with CONFIG_PCI_MSI unset.
Compiling the Driver
When trying to compile the driver by running make install, the following error may occur:
"Linux kernel source not configured - missing version.h"
To solve this issue, create the version.h file by going to the Linux Kernel source tree and entering:
# make include/linux/version.h
Performance Degradation with Jumbo Frames
Degradation in throughput performance may be observed in some Jumbo frames environments. If this is observed, increasing the application's socket buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help. For more details, see the specific application documentation and the text file /usr/src/linux*/Documentation/networking/ip-sysctl.txt.
Jumbo Frames on Foundry BigIron 8000 switch
There is a known issue using Jumbo frames when connected to a Foundry BigIron 8000 switch. This is a 3rd party lim­itation. If you experience loss of packets, lower the MTU size.
Multiple Interfaces on Same Ethernet Broadcast Network
Due to the default ARP behavior on Linux, it is not possible to have one system on two IP networks in the same Eth­ernet broadcast domain (non-partitioned switch) behave as expected. All Ethernet interfaces will respond to IP traffic for any IP address assigned to the system. This results in unbalanced receive traffic.
If you have multiple interfaces in a server, turn on ARP filtering by entering:
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
(this only works if your kernel's version is higher than 2.4.5).
NOTE: This setting is not saved across reboots. However this configuration change can be made permanent through one of the following methods:
l
Add the following line to /etc/sysctl.conf:
net.ipv4.conf.all.arp_filter = 1
l
Install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs).
Disable Rx Flow Control with ethtool
In order to disable receive flow control using ethtool, you must turn off auto-negotiation on the same command line. For example:
ethtool -A eth? autoneg off rx off
Unplugging Network Cable While ethtool -p is Running
In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging the network cable while ethtool -p is running will cause the system to become unresponsive to keyboard commands, except for control-alt-delete. Restarting the system appears to be the only remedy.
Detected Tx Unit Hang in Quad Port Adapters
In some cases, ports 3 and 4 do not pass traffic and report "Detected Tx Unit Hang" followed by "NETDEV WATCHDOG: ethX: transmit timed out" errors. Ports 1 and 2 do not show any errors and will pass traffic.
This issue may be resolved by updating to the latest kernel and BIOS. You are encouraged to run an operating system that fully supports MSI interrupts. You can check your system's BIOS by downloading the Linux Firmware Developer Kit at http://www.linuxfirmwarekit.org/.
Do Not Use LRO when Routing Packets
Due to a known general compatibility issue with LRO and routing, do not use LRO when routing packets.
MSI-X Issues with Kernels Between 2.6.19 and 2.6.21 (inclusive)
Kernel panics and instability may be observed on any MSI-X hardware if you use irqbalance with kernels between
2.6.19 and 2.6.21. If these types of problems are encountered, you may disable the irqbalance daemon or upgrade to a newer kernel.
Rx Page Allocation Errors
Page allocation failure order:0 errors may occur under stress with kernels 2.6.25 and above. This is caused by the way the Linux kernel reports this stressed condition.
Enabling SR-IOV in a 32-bit Microsoft* Windows* Server 2008 Guest OS Using Intel® 82576-based GbE or Intel® 82599-based 10GbE Controller Under KVM
KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This includes traditional PCIe devices, as well as SR-IOV-capable devices using Intel 82576-based and 82599-based controllers.
While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF) to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a known issue with Microsoft Windows Server 2008 VMs that results in a "yellow bang" error. The problem is not within the Intel driver or the SR-IOV logic of the VMM, but within KVM itself: KVM emulates an older CPU model for the guests, and this older CPU model does not support MSI-X interrupts, which is a requirement for Intel SR-IOV.
If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode with KVM and a Microsoft Windows Server 2008 guest try the following workaround. The workaround is to tell KVM to emulate a different model of CPU when using qemu to create the KVM guest:
"-cpu qemu64,model=13"
Host May Reboot after Removing PF when VF is Active in Guest
Using kernel versions earlier than 3.2, do not unload the PF driver with active VFs. Doing this will cause your VFs to stop working until you reload the PF driver and may cause a spontaneous reboot of your system.

igbvf Linux* Driver for the Intel® Gigabit Adapters

igbvf Overview
This driver supports upstream kernel versions 2.6.30 (or higher) on x86_64.
The igbvf driver supports 82576-based and I350-based virtual function devices that can only be activated on kernels that support SR-IOV. SR-IOV requires the correct platform and OS support.
The igbvf driver requires the igb driver, version 2.0 or later. The igbvf driver supports virtual functions generated by the igb driver with a max_vfs value of 1 or greater. For more information on the max_vfs parameter refer to the section on the igb driver.
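For example (a sketch only; the number of VFs is arbitrary), the host loads the igb driver with virtual functions enabled, and the guest then loads igbvf:
modprobe igb max_vfs=2
modprobe igbvf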
The guest OS loading the igbvf driver must support MSI-X interrupts.
This driver is only supported as a loadable module at this time. Intel is not supplying patches against the kernel source to allow for static linking of the driver. For questions related to hardware requirements, refer to the documentation sup­plied with your Intel Gigabit adapter. All hardware requirements listed apply to use with Linux.
Instructions on updating ethtool can be found in the section Additional Configurations later in this document.
NOTE: For VLANs, there is a limit of a total of 32 shared VLANs to one or more VFs.
igbvf Linux Base Driver Supported Network Connections
The following Intel network adapters are compatible with the igbvf driver in this release:
Controller Adapter Name Board IDs
82576 Intel® Gigabit ET Quad Port Mezzanine Card E62941-xxx
82576 Intel® Gigabit ET Dual Port Server Adapter E66292-xxx and E65438-xxx
82576 Intel® Gigabit ET Quad Port Server Adapter E66339-xxx and E65439-xxx
I350 Intel® Gigabit 2P I350-t Adapter G15136-xxx and G32101-xxx
I350 Intel® Gigabit 4P I350-t Adapter G13158-xxx and G32100-xxx
I350 Intel® Gigabit 4P I350-t rNDC G10565-xxx
I350 Intel® Gigabit 4P X540/I350 rNDC G14843-xxx and G59298-xxx
I350 Intel Gigabit 4P X520/I350 rNDC G61346-xxx
I350 Intel® Gigabit 4P I350-t Mezz G27581-xxx
I350 Intel® Gigabit 2P I350-t LOM n/a
I350 Intel® Gigabit 2P I350 LOM n/a
To verify your adapter is supported, find the board ID number on the adapter. Look for a label that has a barcode and a number in the format 123456-001 (six digits hyphen three digits). Match this to the list of numbers above.
For more information on how to identify your adapter or for the latest network drivers for Linux, see Customer Support.
Building and Installation
There are two methods for installing the igbvf driver:
l Install from Source Code
l Install Using KMP RPM
Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
1. Copy the base driver tar file from 'Linux/Source/base_driver/igbvf-<x.x.x>.tar.gz' on the driver CD, where <x.x.x> is the version number for the driver tar file, to the directory of your choice. For example, use '/home/username/igbvf' or '/usr/local/src/igbvf'.
2. Untar/unzip the archive, where <x.x.x> is the version number for the driver tar:
tar zxf igbvf-<x.x.x>.tar.gz
3. Change to the driver src directory, where <x.x.x> is the version number for the driver tar:
cd igbvf-<x.x.x>/src/
4. Compile the driver module:
# make install
The binary will be installed as:
/lib/modules/<KERNEL VERSION>/kernel/drivers/net/igbvf/igbvf.ko
The install locations listed above are the default locations. This might differ for various Linux distributions. For more information, see the ldistrib.txt file included in the driver tar.
5. Install the module using the modprobe command:
modprobe igbvf
For 2.6 based kernels, make sure that the older igbvf drivers are removed from the kernel, before loading the new module:
rmmod igbvf; modprobe igbvf
6. Assign an IP address to and activate the Ethernet interface by entering the following, where <x> is the interface number:
ifconfig eth<x> <IP_address> up
7. Verify that the interface works. Enter the following, where <IP_address> is the IP address for another machine on the same subnet as the interface that is being tested:
ping <IP_address>
NOTE: Some systems have trouble supporting MSI and/or MSI-X interrupts. If your system needs to disable this type of interrupt, the driver can be built and installed with the command:
# make CFLAGS_EXTRA=-DDISABLE_PCI_MSI install
Normally, the driver generates an interrupt every two seconds. If interrupts are not received in cat /proc/interrupts for the ethX igbvf device, then this workaround may be necessary.
To build igbvf driver with DCA
If your kernel supports DCA, the driver will build by default with DCA enabled.
Install Using KMP RPM
NOTE: KMP is only supported on SLES11.
The KMP RPMs update existing igbvf RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install.
The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
intel-<component name>-<component version>.<arch type>.rpm
For example, intel-igbvf-1.3.8.6-1.x86_64.rpm: igbvf is the component name; 1.3.8.6-1 is the component version; and x86_64 is the architecture type.
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
For example, intel-igbvf-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: igbvf is the component name; default is the kernel type; 1.3.8.6 is the component version; 2.6.27.19_5-1 is the kernel version; and x86_64 is the architecture type.
To install the KMP RPM, type the following two commands:
rpm -i <rpm filename>
rpm -i <kmp rpm filename>
For example, to install the igbvf KMP RPM package, type the following:
rpm -i intel-igbvf-1.3.8.6-1.x86_64.rpm
rpm -i intel-igbvf-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
modprobe igbvf [<option>=<VAL1>,<VAL2>,...]
A value (<VAL#>) must be assigned to each network port in the system supported by this driver. The values are applied to each instance, in function order. For example:
modprobe igbvf InterruptThrottleRate=16000,16000
In this case, there are two network ports supported by igbvf in the system. The default value for each parameter is generally the recommended setting, unless otherwise noted.
The following table contains parameters and possible values for modprobe commands:
Parameter Name Valid Range/Settings Default Description
InterruptThrottleRate 0, 1, 3, 100-100000 (0=off, 1=dynamic, 3=dynamic conservative) 3 The driver can limit the number of interrupts per second
that the adapter will generate for incoming packets. It does this by writing a value to the adapter that is based on the maximum number of interrupts that the adapter will generate per second.
Setting InterruptThrottleRate to a value greater or equal to 100 will program the adapter to send out a maximum of that many interrupts per second, even if more packets have come in. This reduces interrupt load on the system and can lower CPU utilization under heavy load, but will increase latency as packets are not processed as quickly.
The default behavior of the driver previously assumed a static InterruptThrottleRate value of 8000, providing a good fallback value for all traffic types, but lacking in small packet performance and latency. The driver has two adaptive modes (setting 1 or 3) in which it dynamically adjusts the InterruptThrottleRate value based on the traffic that it receives.
The algorithm classifies the incoming traffic every inter­val into classes. Once the class is determined, the Inter­ruptThrottleRate value is adjusted to suit that traffic type the best. There are three classes defined: "Bulk traffic", for large amounts of packets of normal size; "Low latency", for small amounts of traffic and/or a significant percentage of small packets; and "Lowest latency", for almost completely small packets or minimal traffic.
In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 for traffic that falls in class "Bulk traffic". If traffic falls in the "Low latency" or "Lowest latency" class, the InterruptThrottleRate is increased stepwise to 20000. This default mode is suitable for most applications.
For situations where low latency is vital such as cluster or grid computing, the algorithm can reduce latency even more when InterruptThrottleRate is set to mode 1. In this mode, which operates the same as mode 3, the InterruptThrottleRate will be increased stepwise to 70000 for traffic in class "Lowest latency".
Setting InterruptThrottleRate to 0 turns off any interrupt moderation and may improve small packet latency, but is generally not suitable for bulk throughput traffic.
NOTES:
l Dynamic interrupt throttling is only applicable to adapters operating in MSI or Legacy interrupt mode, using a single receive queue.
l When igbvf is loaded with default settings and multiple adapters are in use simultaneously, the CPU utilization may increase non-linearly. In order to limit the CPU utilization without impacting the overall throughput, it is recommended to load the driver as follows:
modprobe igbvf InterruptThrottleRate=3000,3000,3000
This sets the InterruptThrottleRate to 3000 interrupts/sec for the first, second, and third instances of the driver. The range of 2000 to 3000 interrupts per second works on a majority of systems and is a good starting point, but the optimal value will be platform-specific. If CPU utilization is not a concern, use default driver settings.
Additional Configurations
Configuring the Driver on Different Distributions
Configuring a network driver to load properly when the system is started is distribution dependent. Typically, the con­figuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well as editing other sys­tem startup scripts and/or configuration files. Many Linux distributions ship with tools to make these changes for you. To learn the proper way to configure a network device for your system, refer to your distribution documentation. If during this process you are asked for the driver or module name, the name for the Linux Base Driver for the Intel Gigabit Fam­ily of Adapters is igbvf.
As an example, if you install the igbvf driver for two Intel Gigabit adapters (eth0 and eth1) and want to set the InterruptThrottleRate to dynamic conservative and dynamic mode, respectively, add the following to modules.conf or /etc/modprobe.conf:
alias eth0 igbvf
alias eth1 igbvf
options igbvf InterruptThrottleRate=3,1
Viewing Link Messages
Link messages will not be displayed to the console if the distribution is restricting system messages. In order to see net­work driver link messages on your console, set dmesg to eight by entering the following:
dmesg -n 8
NOTE: This setting is not saved across reboots.
Jumbo Frames
Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500 bytes. Use the ifcon­fig command to increase the MTU size. For example:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots. The setting change can be made permanent by adding MTU = 9000 to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>, in Red Hat distributions. Other distributions may store this set-
ting in a different location.
NOTES:
l
Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or loss of link.
l
To enable Jumbo Frames, increase the MTU size on the interface beyond 1500.
l
The maximum Jumbo Frames size is 9234 bytes, with a corresponding MTU size of 9216 bytes.
ethtool
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical inform­ation. ethtool version 3 or later is required for this functionality, although we strongly recommend downloading the latest version at: http://ftp.kernel.org/pub/software/network/ethtool/.
Known Issues
Compiling the Driver
When trying to compile the driver by running make install, the following error may occur:
"Linux kernel source not configured - missing version.h"
To solve this issue, create the version.h file by going to the Linux Kernel source tree and entering:
# make include/linux/version.h
Multiple Interfaces on Same Ethernet Broadcast Network
Due to the default ARP behavior on Linux, it is not possible to have one system on two IP networks in the same Eth­ernet broadcast domain (non-partitioned switch) behave as expected. All Ethernet interfaces will respond to IP traffic for any IP address assigned to the system. This results in unbalanced receive traffic.
If you have multiple interfaces in a server, turn on ARP filtering by entering:
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
(this only works if your kernel's version is higher than 2.4.5).
NOTE: This setting is not saved across reboots. However this configuration change can be made permanent through one of the following methods:
l
Add the following line to /etc/sysctl.conf:
net.ipv4.conf.all.arp_filter = 1
l
Install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs).
Do Not Use LRO when Routing Packets
Due to a known general compatibility issue with LRO and routing, do not use LRO when routing packets.
MSI-X Issues with Kernels Between 2.6.19 and 2.6.21 (inclusive)
Kernel panics and instability may be observed on any MSI-X hardware if you use irqbalance with kernels between
2.6.19 and 2.6.21. If these types of problems are encountered, you may disable the irqbalance daemon or upgrade to a newer kernel.
Rx Page Allocation Errors
Page allocation failure order:0 errors may occur under stress with kernels 2.6.25 and above. This is caused by the way the Linux kernel reports this stressed condition.
Unloading Physical Function (PF) Driver Causes System Reboots when VM is Run­ning and VF is Loaded on the VM
Do not unload the PF driver (igb) while VFs are assigned to guests.
Host May Reboot after Removing PF when VF is Active in Guest
Using kernel versions earlier than 3.2, do not unload the PF driver with active VFs. Doing this will cause your VFs to stop working until you reload the PF driver and may cause a spontaneous reboot of your system.

ixgbe Linux* Driver for the Intel® 10 Gigabit Server Adapters

ixgbe Overview
WARNING: By default, the ixgbe driver compiles with the Large Receive Offload (LRO) feature enabled. This option offers the lowest CPU utilization for receives but is incompatible with routing/IP forwarding and bridging. If enabling IP forwarding or bridging is a requirement, it is necessary to disable LRO using compile time options as noted in the LRO section later in this section. Not disabling LRO when combined with IP forwarding or bridging can result in low throughput or even a kernel panic.
This file describes the Linux* Base Driver for the 10 Gigabit Intel® Network Connections. This driver supports the 2.6.x kernels and includes support for any Linux supported system, including X86_64, i686 and PPC.
This driver is only supported as a loadable module. Intel is not supplying patches against the kernel source to allow for static linking of the driver. A version of the driver may already be included by your distribution or the kernel. For ques­tions related to hardware requirements, refer to System Requirements. All hardware requirements listed apply to use with Linux.
The following features are now available in supported kernels:
l
Native VLANs
l
Channel Bonding (teaming)
l
SNMP
l
Generic Receive Offload
l
Data Center Bridging
Adapter teaming is now implemented using the native Linux Channel bonding module. This is included in supported Linux kernels. Channel Bonding documentation can be found in the Linux kernel source: /doc­umentation/networking/bonding.txt
Use ethtool, lspci, or ifconfig to obtain driver information. Instructions on updating ethtool can be found in the Additional Configurations section later in this page.
ixgbe Linux Base Driver Supported Adapters
The following Intel network adapters are compatible with the Linux driver in this release:
Controller Adapter Name Board IDs
82598EB Intel® 10 Gigabit AF DA Dual Port Server Adapter E45329-xxx, E92325-xxx and E45320-xxx
82598GB Intel® 10 Gigabit AT Server Adapter D79893-xxx and E84436-xxx
82598GB Intel® 10 Gigabit XF SR Server Adapter D99083-xxx
82599 Intel® Ethernet X520 10GbE Dual Port KX4 Mezz E62954-xxx
82599 Intel® Ethernet Server Adapter X520-2 G18786-xxx and G20891-xxx
82599 Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz G18846-xxx
82599 Intel® Ethernet Server Adapter X520-T2 E76986-xxx and E92016-xxx
82599 Intel® Ethernet 10G 2P X520 Adapter G28774-xxx and G38004-xxx
82599 Intel® Ethernet 10G 4P X520/I350 rNDC G61346-xxx and G63668-xxx
82599 Intel® Ethernet 10G 2P X520-k bNDC G19030-xxx
X540 Intel® Ethernet 10G 2P X540-t Adapter G35632-xxx
X540 Intel® Ethernet 10G 4P X540/I350 rNDC G14843-xxx, G59298-xxx, and G33388-xxx
To verify your adapter is supported, find the board ID number on the adapter. Look for a label that has a barcode and a number in the format 123456-001 (six digits hyphen three digits). Match this to the list of numbers above.
For more information on how to identify your adapter or for the latest network drivers for Linux, see Customer Support.
SFP+ Devices with Pluggable Optics
NOTE: For 82599-based SFP+ fiber adapters, using "ifconfig down" turns off the laser. "ifconfig up" turns the laser on.
For information on using SFP+ devices with pluggable optics, click here.
Building and Installation
There are three methods for installing the Linux driver:
l Install from Source Code
l Install Using KMP RPM
l Install Using KMOD RPM
Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
NOTES:
l For the build to work properly it is important that the currently running kernel MATCH the version and configuration of the installed kernel source. If you have just recompiled your kernel, reboot the system.
l RPM functionality has only been tested in Red Hat distributions.
1. Copy the base driver tar file from 'Linux/Source/base_driver/ixgbe-<x.x.x>tar.gz' on the driver CD, where <x.x.x> is the version number for the driver tar file, to the directory of your choice. For example, use '/home/username/ixgbe' or '/usr/local/src/ixgbe'.
2. Untar/unzip the archive, where <x.x.x> is the version number for the driver tar:
tar zxf ixgbe-<x.x.x>.tar.gz
3. Change to the driver src directory, where <x.x.x> is the version number for the driver tar:
cd ixgbe-<x.x.x>/src/
4. Compile the driver module:
make install
The binary will be installed as: /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgbe/ixgbe.ko
The install locations listed above are the default locations. This might differ for various Linux distributions. For more information, see the ldistrib.txt file included in the driver tar.
NOTE: IXGBE_NO_LRO is a compile time flag. The user can enable it at compile time to remove support for LRO from the driver. The flag is used by adding CFLAGS_EXTRA="-DIXGBE_NO_LRO" to the make file when it is being compiled. For example:
make CFLAGS_EXTRA="-DIXGBE_NO_LRO" install
5. Install the module using the modprobe command for kernel 2.6.x:
modprobe ixgbe <parameter>=<value>
For 2.6 based kernels, make sure that the older ixgbe drivers are removed from the kernel, before loading the new module:
rmmod ixgbe; modprobe ixgbe
6. Assign an IP address to and activate the Ethernet interface by entering the following, where <x> is the interface number:
ifconfig eth<x> <IP_address> netmask <netmask>
7. Verify that the interface works. Enter the following, where <IP_address> is the IP address for another machine on the same subnet as the interface that is being tested:
ping <IP_address>
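For reference, a hypothetical end-to-end run of the steps above, assuming driver version 3.19.1, interface eth2, and addresses on an example 192.168.10.0/24 subnet (none of these values are requirements):
# Steps 2-4: unpack, build, and install the driver
tar zxf ixgbe-3.19.1.tar.gz
cd ixgbe-3.19.1/src/
make install
# Step 5: replace any loaded ixgbe module with the new one
rmmod ixgbe; modprobe ixgbe
# Steps 6-7: assign an address and verify connectivity to a peer on the same subnet
ifconfig eth2 192.168.10.1 netmask 255.255.255.0
ping 192.168.10.2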
Install Using KMP RPM
NOTE: KMP is only supported on RHEL 6 and SLES11.
The KMP RPMs update existing ixgbe RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install.
The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
intel-<component name>-<component version>.<arch type>.rpm
For example, intel-ixgbe-1.3.8.6-1.x86_64.rpm: ixgbe is the component name; 1.3.8.6-1 is the component version; and x86_64 is the architecture type.
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
For example, intel-ixgbe-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: ixgbe is the component name; default is the kernel type; 1.3.8.6 is the component version; 2.6.27.19_5-1 is the kernel version; and x86_64 is the architecture type.
To install the KMP RPM, type the following two commands:
rpm -i <rpm filename>
rpm -i <kmp rpm filename>
For example, to install the ixgbe KMP RPM package, type the following:
rpm -i intel-ixgbe-1.3.8.6-1.x86_64.rpm
rpm -i intel-ixgbe-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm
Install Using KMOD RPM
The KMOD RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
kmod-<driver name>-<version>-1.<arch type>.rpm
For example, kmod-ixgbe-2.3.4-1.x86_64.rpm:
l ixgbe is the driver name
l 2.3.4 is the version
l x86_64 is the architecture type
To install the KMOD RPM, go to the directory of the RPM and type the following command:
rpm -i <rpm filename>
For example, to install the ixgbe KMOD RPM package from RHEL 6.4, type the following:
rpm -i kmod-ixgbe-2.3.4-1.x86_64.rpm
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
modprobe ixgbe [<option>=<VAL1>,<VAL2>,...]
For example:
modprobe ixgbe InterruptThrottleRate=16000,16000
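If the parameters should persist across driver reloads, a common approach on most distributions is an options line in a file under /etc/modprobe.d/ (a sketch; the file name ixgbe.conf is only an example):
# /etc/modprobe.d/ixgbe.conf
options ixgbe InterruptThrottleRate=16000,16000
The options take effect the next time the module is loaded, for example after rmmod ixgbe; modprobe ixgbe.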
The default value for each parameter is generally the recommended setting, unless otherwise noted.
The following table contains parameters and possible values for modprobe commands:
RSS
Valid Range: 0 - 16   Default: 1
Receive Side Scaling allows multiple queues for receiving data.
0 = Sets the descriptor queue count to the lower value of either the number of CPUs or 16.
1 - 16 = Sets the descriptor queue count to 1 - 16.
RSS also affects the number of transmit queues allocated on 2.6.23 and newer kernels with CONFIG_NET_MULTIQUEUE set in the kernel .config file. CONFIG_NETDEVICES_MULTIQUEUE is only supported in kernels 2.6.23 to 2.6.26. For kernels 2.6.27 or newer, other options enable multiqueue.
NOTE: The RSS parameter has no effect on 82599-based adapters unless the FdirMode parameter is simultaneously used to disable Flow Director. See the Intel® Ethernet Flow Director section for more detail.

MQ
Valid Range: 0, 1   Default: 1
Multi Queue support.
0 = Disables Multiple Queue support.
1 = Enables Multiple Queue support (a prerequisite for RSS).

IntMode
Valid Range: 0 - 2   Default: 2
Interrupt mode controls the allowed load time control over the type of interrupt registered for by the driver. MSI-X is required for multiple queue support, and some kernels and combinations of kernel .config options will force a lower level of interrupt support. 'cat /proc/interrupts' will show different values for each type of interrupt.
0 = Legacy interrupt
1 = MSI
2 = MSI-X
InterruptThrottleRate
Valid Range: 956 - 488,281 (0=off, 1=dynamic)   Default: 1
Interrupt Throttle Rate (interrupts/second). The ITR parameter controls how many interrupts each interrupt vector can generate per second. Increasing ITR lowers latency at the cost of increased CPU utilization, though it may help throughput in some circumstances.
0 = Turns off any interrupt moderation and may improve small packet latency. However, it is generally not suitable for bulk throughput traffic due to the increased CPU utilization of the higher interrupt rate.
NOTES:
l For 82599-based adapters, disabling InterruptThrottleRate will also result in the driver disabling HW RSC.
l For 82598-based adapters, disabling InterruptThrottleRate will also result in disabling LRO.
1 = Dynamic mode attempts to moderate interrupts per vector while maintaining very low latency. This can sometimes cause extra CPU utilization. If planning to deploy ixgbe in a latency-sensitive environment, consider this parameter.
LLI
Low Latency Interrupts allow for immediate generation of an interrupt upon processing receive packets that match certain criteria as set by the parameters described below. LLI parameters are not enabled when Legacy interrupts are used. You must be using MSI or MSI-X (see cat /proc/interrupts) to successfully use LLI.

LLIPort
Valid Range: 0 - 65535   Default: 0 (disabled)
LLI is configured with the LLIPort command-line parameter, which specifies which TCP port should generate Low Latency Interrupts. For example, using LLIPort=80 would cause the board to generate an immediate interrupt upon receipt of any packet sent to TCP port 80 on the local machine.
WARNING: Enabling LLI can result in an excessive number of interrupts/second that may cause problems with the system and in some cases may cause a kernel panic.

LLIPush
Valid Range: 0 - 1   Default: 0 (disabled)
LLIPush can be set to enabled or disabled (default). It is most effective in an environment with many small transactions.
NOTE: Enabling LLIPush may allow a denial of service attack.

LLISize
Valid Range: 0 - 1500   Default: 0 (disabled)
LLISize causes an immediate interrupt if the board receives a packet smaller than the specified size.

LLIEType
Valid Range: 0 - x8FFF   Default: 0 (disabled)
Low Latency Interrupt Ethernet Protocol Type.

LLIVLANP
Valid Range: 0 - 7   Default: 0 (disabled)
Low Latency Interrupt on VLAN Priority Threshold.

Flow Control
Flow Control is enabled by default. If you want to disable a flow control capable link partner, use ethtool:
ethtool -A eth? autoneg off rx off tx off
NOTE: For 82598 backplane cards entering 1 Gbps mode, the flow control default behavior is changed to off. Flow control in 1 Gbps mode on these devices can lead to transmit hangs.

Intel® Ethernet Flow Director
NOTE: Flow Director parameters are only supported on kernel versions 2.6.30 or later.
This supports advanced filters that direct receive packets by their flows to different queues and enables tight control on rout­ing a flow in the platform. It matches flows and CPU cores for flow affinity and supports multiple parameters for flexible flow classification and load balancing.
The flow director is enabled only if the kernel is multiple TX queue capable. An included script (set_irq_affinity.sh) auto­mates setting the IRQ to CPU affinity. To verify that the driver is using Flow Director, look at the counter in ethtool: fdir_miss and fdir_match.
Other ethtool Commands:
To enable Flow Director
ethtool -K ethX ntuple on
To add a filter, use the -U switch
ethtool -U ethX flow-type tcp4 src-ip 192.168.0.100 action 1
To see the list of filters currently present
ethtool -u ethX
Perfect Filter: Perfect filter is an interface to load the filter table that funnels all flow into queue_0 unless an alternative queue is specified using "action." In that case, any flow that matches the filter criteria will be directed to the appropriate queue.
Support for Virtual Function (VF) is via the user-data field. You must update to the version of ethtool built for the 2.6.40 kernel. Perfect Filter is supported on all kernels 2.6.30 and later. Rules may be deleted from the table itself. This is done via "ethtool -U ethX delete N" where N is the rule number to be deleted.
NOTE: Flow Director Perfect Filters can run in single queue mode, when SR-IOV is enabled, or when DCB is enabled.
If the queue is defined as -1, the filter will drop matching pack­ets.
To account for filter matches and misses, there are two stats in ethtool: fdir_match and fdir_miss. In addition, rx_queue_N_ packets shows the number of packets processed by the Nth queue.
NOTES:
l Receive Packet Steering (RPS) and Receive Flow Steering (RFS) are not compatible with Flow Director. If Flow Director is enabled, these will be disabled.
l For VLAN Masks only 4 masks are supported.
l Once a rule is defined, you must supply the same fields and masks (if masks are specified).
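As a quick check of the counters mentioned above (the interface name eth2 is only an example), the standard ethtool statistics command can be used:
# Show the Flow Director match/miss counters reported by the driver
ethtool -S eth2 | grep fdir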
Support for UDP RSS
This feature adds an ON/OFF switch for hashing over certain flow types. Only UDP hashing can be turned on or off; the default setting is disabled. Enabling or disabling hashing on ports is supported only for UDP over IPv4 (udp4) and UDP over IPv6 (udp6).
NOTE: Fragmented packets may arrive out of order when RSS UDP support is configured.
Supported ethtool Commands and Options
-n --show-nfc
Retrieves the receive network flow classification configurations.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6
Retrieves the hash options for the specified network traffic type.
-N --config-nfc
Configures the receive network flow classification.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r...
Configures the hash options for the specified network traffic type.
udp4 UDP over IPv4
udp6 UDP over IPv6
f Hash on bytes 0 and 1 of the Layer 4 header of the rx packet.
n Hash on bytes 2 and 3 of the Layer 4 header of the rx packet.
The following is an example using udp4 (UDP over IPv4):
To include UDP port numbers in RSS hashing run:
ethtool -N eth1 rx-flow-hash udp4 sdfn
To exclude UDP port numbers from RSS hashing run:
ethtool -N eth1 rx-flow-hash udp4 sd
To display UDP hashing current configuration run:
ethtool -n eth1 rx-flow-hash udp4
If UDP hashing is enabled, the results of running that call will be the following:
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]
If UDP hashing is disabled, the results would be:
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
The following two parameters impact Flow Director: FdirPballoc and AtrSampleRate.

FdirPballoc
Valid Range: 0 - 2   Default: 0 (64k)
Flow Director allocated packet buffer size.
0 = 64k
1 = 128k
2 = 256k

AtrSampleRate
Valid Range: 1 - 100   Default: 20
Software ATR Tx Packet Sample Rate. For example, when set to 20, every 20th packet is sampled to determine if the packet will create a new flow.
max_vfs
Valid Range: 1 - 63   Default: 0
This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs worth of virtual functions.
If the value is greater than 0, it will also force the VMDq parameter to be 1 or more.
NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled. Remove the old VLAN filter before the new VLAN filter is added. For example:
ip link set eth0 vf 0 vlan 100 // set vlan 100 for VF 0
ip link set eth0 vf 0 vlan 0 // Delete vlan 100
ip link set eth0 vf 0 vlan 200 // set a new vlan 200 for VF 0
The parameters for the driver are referenced by position. So, if you have a dual port 82599-based adapter and you want N virtual functions per port, you must specify a number for each port, with each parameter separated by a comma.
For example: modprobe ixgbe max_vfs=63,63
NOTE: If both 82598 and 82599-based adapters are installed on the same machine, you must be careful in loading the driver with the parameters. Depending on system configuration, number of slots, etc., it is impossible to predict in all cases where the positions would be on the command line, and the user will have to specify zero in those positions occupied by an 82598 port.
With kernel 3.6, the driver supports the simultaneous usage of max_vfs and DCB features, subject to the constraints described below. Prior to kernel 3.6, the driver did not support the simultaneous operation of max_vfs > 0 and the DCB features (multiple traffic classes utilizing Priority Flow Control and Extended Transmission Selection).
When DCB is enabled, network traffic is transmitted and received through multiple traffic classes (packet buffers in the NIC). The traffic is associated with a specific class based on priority, which has a value of 0 through 7 used in the VLAN tag. When SR-IOV is not enabled, each traffic class is associated with a set of RX/TX descriptor queue pairs. The number of queue pairs for a given traffic class depends on the hardware configuration. When SR-IOV is enabled, the descriptor queue pairs are grouped into pools. The Physical Function (PF) and each Virtual Function (VF) is allocated a pool of RX/TX descriptor queue pairs. When multiple traffic classes are configured (for example, DCB is enabled), each pool contains a queue pair from each traffic class. When a single traffic class is configured in the hardware, the pools contain multiple queue pairs from the single traffic class.
The number of VFs that can be allocated depends on the number of traffic classes that can be enabled. The configurable number of traffic classes for each enabled VF is as follows:
0 - 15 VFs = Up to 8 traffic classes, depending on device support
16 - 31 VFs = Up to 4 traffic classes
32 - 63 VFs = 1 traffic class
When VFs are configured, the PF is allocated one pool as well. The PF supports the DCB features with the constraint that each traffic class will only use a single queue pair. When zero VFs are configured, the PF can support multiple queue pairs per traffic class.
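As a minimal sketch of enabling and checking VFs (the VF counts and grep pattern below are examples; the exact PCI description string can vary by adapter):
# Load the driver requesting 4 VFs on each of two ports
modprobe ixgbe max_vfs=4,4
# List the Virtual Functions exposed on the PCI bus
lspci | grep -i "Virtual Function"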
VMDQ
Valid Range: 1 - 16   Default: 1 (disabled)
This provides the option for turning VMDQ on or off. Values 2 through 16 enable VMDQ with the descriptor queues set to the specified value.

L2LBen
Valid Range: 0 - 1   Default: 1 (enabled)
This parameter controls the internal switch (L2 loopback between PF and VF). By default the switch is enabled.
Additional Configurations
Configuring the Driver on Different Distributions
Configuring a network driver to load properly when the system is started is distribution dependent. Typically, the configuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well as editing other system startup scripts and/or configuration files. Many Linux distributions ship with tools to make these changes for you. To learn the proper way to configure a network device for your system, refer to your distribution documentation. If during this process you are asked for the driver or module name, the name for the Linux Base Driver for the Intel® 10 Gigabit PCI Express Family of Adapters is ixgbe.
Viewing Link Messages
Link messages will not be displayed to the console if the distribution is restricting system messages. In order to see network driver link messages on your console, set dmesg to eight by entering the following:
dmesg -n 8
NOTE: This setting is not saved across reboots.
Jumbo Frames
Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500 bytes. The maximum value for the MTU is 9710. Use the ifconfig command to increase the MTU size. For example, enter the following where <x> is the interface number:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots. The setting change can be made permanent by adding MTU = 9000 to the file /etc/sysconfig/network-scripts/ifcfg-eth<x> for RHEL or to the file /etc/sysconfig/network/<config_file> for SLES.
The maximum MTU setting for Jumbo Frames is 9710. This value coincides with the maximum Jumbo Frames size of 9728. This driver will attempt to use multiple page sized buffers to receive each jumbo packet. This should help to avoid buffer starvation issues when allocating receive packets.
For 82599-based network connections, if you are enabling jumbo frames in a virtual function (VF), jumbo frames must first be enabled in the physical function (PF). The VF MTU setting cannot be larger than the PF MTU.
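For reference, a minimal RHEL-style ifcfg file with the MTU entry added might look like the following sketch (device name, addressing method, and values are placeholders only):
# /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=dhcp
ONBOOT=yes
MTU=9000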
ethtool
The driver uses the ethtool interface for driver configuration and diagnostics, as well as displaying statistical information. The latest ethtool version is required for this functionality.
The latest release of ethtool can be found at: http://sourceforge.net/projects/gkernel.
NAPI
NAPI (Rx polling mode) is supported in the ixgbe driver.
See ftp://robur.slu.se/pub/Linux/net-development/NAPI/usenix-paper.tgz for more information on NAPI.
Large Receive Offload (LRO)
Large Receive Offload (LRO) is a technique for increasing inbound throughput of high-bandwidth network connections by reducing CPU overhead. It works by aggregating multiple incoming packets from a single stream into a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed. LRO combines multiple Ethernet frames into a single receive in the stack, thereby potentially decreasing CPU utilization for receives.
IXGBE_NO_LRO is a compile time flag. The user can enable it at compile time to remove support for LRO from the driver. The flag is used by adding CFLAGS_EXTRA="-DIXGBE_NO_LRO" to the make file when it's being compiled.
make CFLAGS_EXTRA="-DIXGBE_NO_LRO" install
You can verify that the driver is using LRO by looking at these counters in ethtool:
l lro_flushed - the total number of receives using LRO.
l lro_coal - counts the total number of Ethernet packets that were combined.
HW RSC
82599-based adapters support hardware-based receive side coalescing (RSC), which can merge multiple frames from the same IPv4 TCP/IP flow into a single structure that can span one or more descriptors. It works similarly to the software large receive offload technique. By default HW RSC is enabled, and SW LRO cannot be used for 82599-based adapters unless HW RSC is disabled.
IXGBE_NO_HW_RSC is a compile time flag that can be enabled at compile time to remove support for HW RSC from the driver. The flag is used by adding CFLAGS_EXTRA="-DIXGBE_NO_HW_RSC" to the make file when it is being compiled.
make CFLAGS_EXTRA="-DIXGBE_NO_HW_RSC" install
You can verify that the driver is using HW RSC by looking at the counter in ethtool:
hw_rsc_count - counts the total number of Ethernet packets that were combined.
rx_dropped_backlog
When in a non-NAPI (or Interrupt) mode, this counter indicates that the stack is dropping packets. There is an adjustable parameter in the stack that allows you to adjust the amount of backlog. We recommend increasing netdev_max_backlog if the counter goes up.
# sysctl -a |grep netdev_max_backlog
net.core.netdev_max_backlog = 1000
# sysctl -e net.core.netdev_max_backlog=10000
net.core.netdev_max_backlog = 10000
Flow Control
Flow control is disabled by default. To enable it, use ethtool:
ethtool -A eth? autoneg off rx on tx on
NOTE: You must have a flow control capable link partner.
MAC and VLAN Anti-spoofing Feature
When a malicious driver attempts to send a spoofed packet, it is dropped by the hardware and not transmitted. An interrupt is sent to the PF driver notifying it of the spoof attempt. When a spoofed packet is detected, the PF driver will send the following message to the system log (displayed by the "dmesg" command):
ixgbe ethx: ixgbe_spoof_check: n spoofed packets detected
Where x=the PF interface# and n=the VF that attempted to do the spoofing.
NOTE: This feature can be disabled for a specific Virtual Function (VF).
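A hedged sketch of toggling the anti-spoof check for a single VF with iproute2 (support depends on your kernel and iproute2 versions; the interface name and VF index are examples):
# Disable the MAC/VLAN spoof check for VF 0 on eth0
ip link set eth0 vf 0 spoofchk off
# Re-enable it
ip link set eth0 vf 0 spoofchk on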
Support for UDP RSS
This feature adds an ON/OFF switch for hashing over certain flow types. The default setting is disabled.
NOTE: Fragmented packets may arrive out of order when RSS UDP support is configured.
Supported ethtool Commands and Options
-n --show-nfc
Retrieves the receive network flow classification configurations.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6
Retrieves the hash options for the specified network traffic type.
-N --config-nfc
Configures the receive network flow classification.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r...
Configures the hash options for the specified network traffic type.
udp4 UDP over IPv4
udp6 UDP over IPv6
f Hash on bytes 0 and 1 of the Layer 4 header of the rx packet.
n Hash on bytes 2 and 3 of the Layer 4 header of the rx packet.
Known Issues
MSI-X Issues with 82598-based Intel(R) 10GbE-LR/LRM/SR/AT Server Adapters
Kernel panics and instability may be observed on some platforms when running 82598-based Intel(R) 10GbE-LR/LRM/SR/AT Server Adapters with MSI-X in a stress environment. Symptoms of this issue include observing "APIC 40 Error" or "no irq handler for vector" error messages on the console or in "dmesg."
If such problems are encountered, you may disable the irqbalance daemon. If the problems persist, compile the driver in pin interrupt mode by running:
make CFLAGS_EXTRA=-DDISABLE_PCI_MSI
Or you can load the module with modprobe ixgbe InterruptType=0.
Driver Compilation
When trying to compile the driver by running make install, the following error may occur: "Linux kernel source not configured - missing version.h"
To solve this issue, create the version.h file by going to the Linux source tree and entering:
make include/linux/version.h
Do Not Use LRO when Routing Packets
Due to a known general compatibility issue with LRO and routing, do not use LRO when routing packets.
Performance Degradation with Jumbo Frames
Degradation in throughput performance may be observed in some Jumbo frames environments. If this is observed, increasing the application's socket buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help. For more details, see the specific application documentation in the text file ip-sysctl.txt in your kernel documentation.
Multiple Interfaces on Same Ethernet Broadcast Network
Due to the default ARP behavior on Linux, it is not possible to have one system on two IP networks in the same Ethernet broadcast domain (non-partitioned switch) behave as expected. All Ethernet interfaces will respond to IP traffic for any IP address assigned to the system. This results in unbalanced receive traffic.
If you have multiple interfaces in a server, turn on ARP filtering by entering:
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
(this only works if your kernel's version is higher than 2.4.5), or install the interfaces in separate broadcast domains.
UDP Stress Test Dropped Packet Issue
During a small-packet UDP stress test with the 10GbE driver, the Linux system may drop UDP packets because the socket buffers are full. You may want to change the driver's Flow Control variables to the minimum value for controlling packet reception.
Another option is to increase the kernel's default buffer sizes for UDP by changing the values in /proc/sys/net/core/rmem_default and rmem_max.
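As a minimal sketch (the sizes below are examples only, not tuned recommendations):
# Inspect the current defaults
sysctl net.core.rmem_default net.core.rmem_max
# Raise the default and maximum receive buffer sizes for the running kernel
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.rmem_max=16777216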
Unplugging Network Cable While ethtool -p is Running
In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging the network cable while ethtool -p is running will cause the system to become unresponsive to keyboard commands, except for control-alt-delete. Restarting the system appears to be the only remedy.
Cisco Catalyst 4948-10GE Switch Running ethtool -g May Cause Switch to Shut Down Ports
82598-based hardware can re-establish link quickly and when connected to some switches, rapid resets within the driver may cause the switch port to become isolated due to "link flap". This is typically indicated by a yellow instead of a green link light. Several operations may cause this problem, such as repeatedly running ethtool commands that cause a reset.
A potential workaround is to use the Cisco IOS command "no errdisable detect cause all" from the Global Configuration prompt which enables the switch to keep the interfaces up, regardless of errors.
MSI-X Issues with Kernels Between 2.6.19 and 2.6.21 (inclusive)
Kernel panics and instability may be observed on any MSI-X hardware if you use irqbalance with kernels between
2.6.19 and 2.6.21. If these types of problems are encountered, you may disable the irqbalance daemon or upgrade to a newer kernel.