Dell Intel PRO Family of Adapters User Manual

Intel® Network Adapters User Guide

Information for the Intel® Boot Agent, Intel® Ethernet iSCSI Boot, or Intel® FCoE/DCB can be found in their respective user guides.
Information in this document is subject to change without notice. Copyright © 2008-2014, Intel Corporation. All rights reserved.
* Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Intel Corporation disclaims any proprietary interest in trademarks and trade names other than its own.

Restrictions and Disclaimers

The information contained in this document, including all instructions, cautions, and regulatory approvals and certifications, is provided by the supplier and has not been independently verified or tested by Dell. Dell cannot be responsible for damage caused as a result of either following or failing to follow these instructions.
All statements or claims regarding the properties, capabilities, speeds or qualifications of the part referenced in this document are made by the supplier and not by Dell. Dell specifically disclaims knowledge of the accuracy, completeness or substantiation for any such statements. All questions or comments relating to such statements or claims should be directed to the supplier.

Export Regulations

Customer acknowledges that these Products, which may include technology and software, are subject to the customs and export control laws and regulations of the United States (U.S.) and may also be subject to the customs and export laws and regulations of the country in which the Products are manufactured and/or received. Customer agrees to abide by those laws and regulations. Further, under U.S. law, the Products may not be sold, leased or otherwise transferred to restricted end users or to restricted countries. In addition, the Products may not be sold, leased or otherwise transferred to, or utilized by an end-user engaged in activities related to weapons of mass destruction, including without limitation, activities related to the design, development, production or use of nuclear weapons, materials, or facilities, missiles or the support of missile projects, and chemical or biological weapons.
Last revised: 29 April 2014

Overview

Welcome to the User's Guide for Intel® Ethernet Adapters and devices. This guide covers hardware and software installation, setup procedures, and troubleshooting tips for the Intel® Gigabit Server Adapters and Intel® 10 Gigabit Server Adapters. In addition to supporting 32-bit operating systems, this software release also supports Intel® 64 Architecture (Intel® 64).
Supported 10 Gigabit Network Adapters
- Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
- Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
- Intel® Ethernet Server Adapter X520-2
- Intel® Ethernet 10G 2P X540-t Adapter
- Intel® Ethernet 10G 2P X520 Adapter
- Intel® Ethernet 10G 4P X540/I350 rNDC
- Intel® Ethernet 10G 4P X520/I350 rNDC
- Intel® Ethernet 10G 2P X520-k bNDC
Supported Gigabit Network Adapters and Devices
- Intel® PRO/1000 PT Server Adapter
- Intel® PRO/1000 PT Dual Port Server Adapter
- Intel® PRO/1000 PF Server Adapter
- Intel® Gigabit ET Dual Port Server Adapter
- Intel® Gigabit ET Quad Port Server Adapter
- Intel® Gigabit ET Quad Port Mezzanine Card
- Intel® Gigabit 2P I350-t Adapter
- Intel® Gigabit 4P I350-t Adapter
- Intel® Gigabit 4P I350-t rNDC
- Intel® Gigabit 4P X540/I350 rNDC
- Intel® Gigabit 4P X520/I350 rNDC
- Intel® Ethernet Connection I354 1.0 GbE Backplane
- Intel® Gigabit 4P I350-t Mezz
- Intel® Gigabit 2P I350-t LOM
- Intel® Gigabit 2P I350 LOM
- Intel® Gigabit 4P I350 bNDC

Installing the Network Adapter

If you are installing a network adapter, follow this procedure from step 1. If you are upgrading the driver software, start with step 6.
1. Review system requirements.
2. Follow the procedure in Insert the PCI Express Adapter in the Server.
3. Carefully connect the network copper cable(s) or fiber cable(s).
4. After the network adapter is in the server, install the network drivers.
5. For Windows*, install the Intel® PROSet software.
6. Test the adapter.
System Requirements
Hardware Compatibility
Before installing the adapter, check your system for the following minimum configuration requirements:
- An IA-32-based (32-bit x86 compatible) system
- One open PCI Express* slot (v1.0a or newer) operating at x1, x4, x8, or x16
- The latest BIOS for your system
- A supported operating system environment: see Installing Network Drivers
Supported Operating Systems
32-bit Operating Systems
Basic software and drivers are supported on the following operating systems:
- Microsoft Windows Server 2008
64-bit Operating Systems
Software and drivers are supported on the following 64-bit operating systems:
- Microsoft Windows Server 2012
- Microsoft Windows Server 2008
- RHEL 6.5
- SLES 11 SP3
Cabling Requirements
Intel Gigabit Adapters
- 1000BASE-SX on 850 nanometer optical fiber:
  - Using 50 micron multimode, length is 550 meters max.
  - Using 62.5 micron multimode, length is 275 meters max.
- 1000BASE-T or 100BASE-TX on Category 5 or Category 5e wiring, twisted 4-pair copper:
  - Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
  - Length is 100 meters max.
  - Category 3 wiring supports only 10 Mbps.
Intel 10 Gigabit Adapters
- 10GBASE-SR/LC on 850 nanometer optical fiber:
  - Using 50 micron multimode, length is 300 meters max.
  - Using 62.5 micron multimode, length is 33 meters max.
- 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
  - Length is 55 meters max for Category 6.
  - Length is 100 meters max for Category 6a.
  - Length is 100 meters max for Category 7.
- 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial):
  - Length is 10 meters max.
OS Updates
Some features require specific versions of an operating system. You can find more information in the sections that describe those features. You can download the necessary software patches from support sites, as listed here:
- Microsoft Windows* Service Packs: support.microsoft.com
- Red Hat Linux*: www.redhat.com
- SUSE Linux: http://www.novell.com/linux/suse/
Ethernet MAC Addresses
Single-Port Adapters
The Ethernet address should be printed on the identification sticker on the front of the card.
Multi-Port Adapters
Dual port adapters have two Ethernet addresses. The address for the first port (port A or 1) is printed on a label on the component side of the adapter. Add one to this address to obtain the value for the second port (port B or 2).
In other words:
- Port A = X (where X is the last digit of the address printed on the label)
- Port B = X + 1
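The Port B = X + 1 rule above can be computed directly. A minimal shell sketch; the address shown is an example value only, not a real device, and the arithmetic assumes the last octet does not wrap past FF (the label scheme implies it will not):

```shell
# Port A address as printed on the adapter label (example value, not a real device)
port_a="00:1B:21:3A:4C:0A"

# Port B = Port A + 1: split off the last octet and increment it in hex.
prefix=${port_a%:*}
last=${port_a##*:}
port_b=$(printf '%s:%02X' "$prefix" $(( 0x$last + 1 )))

echo "$port_b"   # -> 00:1B:21:3A:4C:0B
```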

Intel® Network Adapters Quick Installation Guide

Install the Intel PCI Express Adapter

1. Turn off the computer and unplug the power cord.
2. Remove the computer cover and the adapter slot cover from the slot that matches your adapter.
3. Insert the adapter edge connector into the PCI Express slot and secure the bracket to the chassis.
4. Replace the computer cover, then plug in the power cord.
NOTE: For information on identifying PCI Express slots that support your adapters, see your Dell system guide.

Attach the Network Cable

1. Attach the network connector.
2. Attach the other end of the cable to the compatible link partner.
3. Start your computer and follow the driver installation instructions for your operating system.

Install the Drivers

Windows* Operating Systems
You must have administrative rights to the operating system to install the drivers.
To install drivers using setup.exe:
1. Install the adapter in the computer and turn on the computer.
2. Insert the installation CD in the CD-ROM drive. If the autorun program on the CD appears, ignore it.
3. Go to the root directory of the CD and double-click on setup.exe.
4. Follow the onscreen instructions.
Linux*
There are three methods for installing the Linux drivers:
- Install from Source Code
- Install from KMOD
- Install from KMP RPM
NOTE: This release includes Linux Base Drivers for the Intel® Network Adapters. These drivers are named e1000e, igb, and ixgbe. The ixgbe driver must be installed to support 10 Gigabit 82598- and 82599-based network connections. The igb driver must be installed to support any 82575- and 82576-based network connections. All other network connections require the e1000e driver. Please refer to the user guide for more specific information.
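The controller-to-driver rule in the note can be captured in a small lookup, useful in provisioning scripts. A sketch only; the `driver_for` helper is illustrative and not part of the driver packages:

```shell
# Map an Intel controller family to the Linux base driver named in the note above.
driver_for() {
  case "$1" in
    82598|82599) echo ixgbe ;;   # 10 Gigabit 82598/82599-based connections
    82575|82576) echo igb ;;     # 82575/82576-based gigabit connections
    *)           echo e1000e ;;  # all other supported connections
  esac
}

driver_for 82599   # -> ixgbe
```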
Other Operating Systems
To install other drivers, visit the Customer Support web site: http://www.support.dell.com.

Installing the Adapter

Insert the PCI Express Adapter in the Server

NOTE: If you are replacing an existing adapter with a new adapter, you must re-install the driver.
1. Turn off the server and unplug the power cord, then remove the server's cover.
CAUTION: Turn off and unplug the server before removing the server's cover. Failure to do so could endanger you and may damage the adapter or server.
2. Remove the cover bracket from a PCI Express slot (v1.0a or later). PCI Express slots and adapters vary in the number of connectors present, depending on the data lanes being supported.
NOTE: The following adapters will only fit into x4 or larger PCI Express slots.
- Intel® PRO/1000 PT Dual Port Server Adapter
- Intel® PRO/1000 PF Server Adapter
- Intel® Gigabit ET Dual Port Server Adapter
- Intel® Gigabit ET Quad Port Server Adapter
The following adapters will only fit into x8 or larger PCI Express slots.
- Intel® Ethernet Server Adapter X520-2
- Intel® Ethernet Server Adapter X520-T2
- Intel® Ethernet 10G 2P X540-t Adapter
- Intel® Ethernet 10G 2P X520 Adapter
Some systems have physical x8 PCI Express slots that actually support lower speeds. Please check your system manual to identify the slot.
3. Insert the adapter in an available, compatible PCI Express slot. Push the adapter into the slot until the adapter is firmly seated. You can install a smaller PCI Express adapter in a larger PCI Express slot.
CAUTION: Some PCI Express adapters may have a short connector, making them more fragile than PCI adapters. Excessive force could break the connector. Use caution when pressing the board in the slot.
4. Repeat steps 2 through 3 for each adapter you want to install.
5. Replace the server cover and plug in the power cord.
6. Turn the power on.

Connecting Network Cables

Connect the appropriate network cable, as described in the following sections.
Connect the UTP Network Cable
Insert the twisted pair, RJ-45 network cable as shown below.
(Figure captions: Single-port Adapter, Dual-port Adapter, Quad-port Adapter)
Type of cabling to use:
- 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
  - Length is 55 meters max for Category 6.
  - Length is 100 meters max for Category 6a.
  - Length is 100 meters max for Category 7.
  NOTE: For the Intel® 10 Gigabit AT Server Adapter, to ensure compliance with CISPR 24 and the EU's EN55024, this product should be used only with Category 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.
- For 1000BASE-T or 100BASE-TX, use Category 5 or Category 5e wiring, twisted 4-pair copper:
  - Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
  - Length is 100 meters max.
  - Category 3 wiring supports only 10 Mbps.
CAUTION: If using less than 4-pair cabling, you must manually configure the speed and duplex setting of the adapter and the link partner. In addition, with 2- and 3-pair cabling the adapter can only achieve speeds of up to 100 Mbps.
- For 100BASE-TX, use Category 5 wiring.
- For 10BASE-T, use Category 3 or 5 wiring.
- If you want to use this adapter in a residential environment (at any speed), use Category 5 wiring. If the cable runs between rooms or through walls and/or ceilings, it should be plenum-rated for fire safety.
In all cases:
- The adapter must be connected to a compatible link partner, preferably set to auto-negotiate speed and duplex for Intel gigabit adapters.
- Intel Gigabit and 10 Gigabit Server Adapters using copper connections automatically accommodate either MDI or MDI-X connections. The auto-MDI-X feature of Intel gigabit copper adapters allows you to directly connect two adapters without using a crossover cable.
Connect the Fiber Optic Network Cable
CAUTION: The fiber optic ports contain a Class 1 laser device. When the ports are disconnected, always cover them with the provided plug. If an abnormal fault occurs, skin or eye damage may result if in close proximity to the exposed ports.
Remove and save the fiber optic connector cover. Insert a fiber optic cable into the ports on the network adapter bracket as shown below.
Most connectors and ports are keyed for proper orientation. If the cable you are using is not keyed, check to be sure the connector is oriented properly (transmit port connected to receive port on the link partner, and vice versa).
The adapter must be connected to a compatible link partner, such as an IEEE 802.3z-compliant gigabit switch, which is operating at the same laser wavelength as the adapter.
Conversion cables to other connector types (such as SC-to-LC) may be used if the cabling matches the optical specifications of the adapter, including length limitations.
The Intel® 10 Gigabit XF SR and Intel® PRO/1000 PF Server Adapters use an LC connection. Insert the fiber optic cable as shown below.
The Intel® 10 Gigabit XF SR Server Adapter uses an 850 nanometer laser wavelength (10GBASE-SR/LC). The Intel® PRO/1000 PF Server Adapter uses an 850 nanometer laser wavelength (1000BASE-SX).
Connection requirements
- 10GBASE-SR/LC on 850 nanometer optical fiber:
  - Using 50 micron multimode, length is 300 meters max.
  - Using 62.5 micron multimode, length is 33 meters max.
- 1000BASE-SX/LC on 850 nanometer optical fiber:
  - Using 50 micron multimode, length is 550 meters max.
  - Using 62.5 micron multimode, length is 275 meters max.
SFP+ Devices with Pluggable Optics
82599-based adapters
The Intel® Ethernet Server Adapter X520-2 only supports Intel optics and/or the direct attach cables listed below. When 82599-based SFP+ devices are connected back to back, they should be set to the same Speed setting using Intel PROSet for Windows or ethtool. Results may vary if you mix speed settings.
NOTE: 82599-based adapters support all passive and active limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Supplier  Type                               Part Numbers
Intel     Dual Rate 1G/10G SFP+ SR (bailed)  AFBR-703SDZ-IN2 / FTLX8571D3BCV-IT
82598-based adapters
The following is a list of SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.
NOTES:
- Intel® Network Adapters that support removable optical modules only support their original module type. If you plug in a different type of module, the driver will not load.
- 82598-based adapters support all passive direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables are not supported.
- Hot swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
Supplier  Type                                          Part Numbers
Finisar   SFP+ SR bailed, 10g single rate               FTLX8571D3BCL
Avago     SFP+ SR bailed, 10g single rate               AFBR-700SDZ
Finisar   SFP+ LR bailed, 10g single rate               FTLX1471D3BCL
Dell      1m - Force 10 Assy, Cbl, SFP+, CU, 10GE, DAC  C4D08, V250M, NMMT9
Dell      3m - Force 10 Assy, Cbl, SFP+, CU, 10GE, DAC  53HVN, F1VT9
Dell      5m - Force 10 Assy, Cbl, SFP+, CU, 10GE, DAC  5CN56, 358vv, W25W9
Cisco     1m - Twin-ax cable                            SFP-H10GB-CU1M
Cisco     3m - Twin-ax cable                            SFP-H10GB-CU3M
Cisco     5m - Twin-ax cable                            SFP-H10GB-CU5M
Cisco     7m - Twin-ax cable                            SFP-H10GB-CU7M
Molex     1m - Twin-ax cable                            74752-1101, 74752-9093
Molex     3m - Twin-ax cable                            74752-2301, 74752-9094
Molex     5m - Twin-ax cable                            74752-3501, 74752-9096
Molex     7m - Twin-ax cable                            74752-9098
Molex     10m - Twin-ax cable                           74752-9004
Tyco      1m - Twin-ax cable                            2032237-2
Tyco      3m - Twin-ax cable                            2032237-4
Tyco      5m - Twin-ax cable                            2032237-6
Tyco      10m - Twin-ax cable                           1-2032237-1
THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF ANY THIRD PARTY’S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE ABOVE SPECIFICATIONS. THERE MAY BE OTHER MANUFACTURERS OR SUPPLIERS, PRODUCING OR SUPPLYING OPTIC MODULES AND CABLES WITH SIMILAR OR MATCHING DESCRIPTIONS. CUSTOMERS MUST USE THEIR OWN DISCRETION AND DILIGENCE TO PURCHASE OPTIC MODULES AND CABLES FROM ANY THIRD PARTY OF THEIR CHOICE. CUSTOMERS ARE SOLELY RESPONSIBLE FOR ASSESSING THE SUITABILITY OF THE PRODUCT AND/OR DEVICES AND FOR THE SELECTION OF THE VENDOR FOR PURCHASING ANY PRODUCT. THE OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.
Connect the Twinaxial Cable
Insert the twinaxial network cable as shown below.
Type of cabling:
- 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial):
  - Length is 10 meters max.

Insert the Mezzanine Card in the Blade Server

1. Turn off the blade server and pull it out of the chassis, then remove its cover.
CAUTION: Failure to turn off the blade server could endanger you and may damage the card or server.
2. Lift the locking lever and insert the card in an available, compatible mezzanine card socket. Push the card into the socket until it is firmly seated.
NOTE: A switch or pass-through module must be present on the same fabric as the card in the chassis to provide a physical connection. For example, if the mezzanine card is inserted in fabric B, a switch must also be present in fabric B of the chassis.
3. Repeat step 2 for each card you want to install.
4. Lower the locking lever until it clicks into place over the card or cards.
5. Replace the blade server cover and put the blade back into the server chassis.
6. Turn the power on.

Setup

Installing Network Drivers

Before you begin
To successfully install drivers or software, you must have administrative privileges on the computer.
To install drivers
For directions on how to install drivers for a specific operating system, select an operating system link below. You can download the driver files from Customer Support.
- Windows Server 2012
- Windows Server 2008
- Linux
NOTES:
- If you are installing a driver in a computer with existing Intel adapters, be sure to update all the adapters and ports with the same driver and Intel® PROSet software. This ensures that all adapters will function correctly.
- If you are using an Intel 10GbE Server Adapter and an Intel Gigabit adapter in the same machine, the Gigabit adapter must run the gigabit driver found in its respective download package.
- If you have Fibre Channel over Ethernet (FCoE) boot enabled on any devices in the system, you will not be able to upgrade your drivers. You must disable FCoE boot before upgrading your Ethernet drivers.
Installing Multiple Adapters
Windows Server users: Follow the procedure in Installing Windows Drivers. After the first adapter is detected, you may be prompted to insert the installation media supplied with your system. After the first adapter driver finishes installing, the next new adapter is detected and Windows automatically installs the driver. (You must manually update the drivers for any existing adapters.) For more information, see Updating the Drivers.
Linux users: For more information, see Linux* Driver for the Intel® Gigabit Family of Adapters.
Updating drivers for multiple adapters or ports: If you are updating or installing a driver in a server with existing Intel adapters, be sure to update all the adapters and ports with the same new software. This ensures that all adapters will function correctly.
Installing Intel PROSet
Intel PROSet for Windows Device Manager is an advanced configuration utility that incorporates additional configuration and diagnostic features into the device manager. For information on installation and usage, see Using Intel® PROSet for Windows Device Manager.
NOTE: You must install Intel® PROSet for Windows Device Manager if you want to use adapter teams or VLANs.
Push Installation for Windows
An unattended install or "Push" of Windows enables you to automate Windows installation when several computers on a network require a fresh install of a Windows operating system.
To automate the process, a bootable disk logs each computer requiring installation or update onto a central server that contains the install executable. After the remote computer logs on, the central server then pushes the operating system to the computer.
Supported operating systems
- Windows Server 2012
- Windows Server 2008
Setting Speed and Duplex
Overview
The Link Speed and Duplex setting lets you choose how the adapter sends and receives data packets over the network.
In the default mode, an Intel network adapter using copper connections will attempt to auto-negotiate with its link partner to determine the best setting. If the adapter cannot establish link with the link partner using auto-negotiation, you may need to manually configure the adapter and link partner to the identical setting to establish link and pass packets. This should only be needed when attempting to link with an older switch that does not support auto-negotiation, or with one that has been forced to a specific speed or duplex mode.
Auto-negotiation is disabled by selecting a discrete speed and duplex mode in the adapter properties sheet.
NOTES:
- Configuring speed and duplex can only be done on Intel gigabit copper-based adapters.
- Fiber-based adapters operate only in full duplex at their native speed.
- The Intel Gigabit ET Quad Port Mezzanine Card only operates at 1 Gbps full duplex.
- The following adapters operate at either 10 Gbps or 1 Gbps full duplex:
  - Intel® 10 Gigabit AT Server Adapter
  - Intel® Ethernet Server Adapter X520-T2
  - Intel® Ethernet Server Adapter X520-2
  - Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
  - Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
  - Intel® Ethernet 10G 2P X520 Adapter
  - Intel® Ethernet 10G 4P X520/I350 rNDC
  - Intel® Ethernet 10G 2P X520-k bNDC
- The following adapters operate at either 10 Gbps, 1 Gbps, or 100 Mbps full duplex:
  - Intel® Ethernet 10G 2P X540-t Adapter
  - Intel® Ethernet 10G 4P X540/I350 rNDC
  NOTE: X540 ports will support 100 Mbps only when both link partners are set to auto-negotiate.
- The Intel® Ethernet Connection I354 1.0 GbE Backplane only operates at 1 Gbps full duplex.
Per IEEE specification, 10 gigabit and gigabit speeds are available only in full duplex.
The settings available when auto-negotiation is disabled are:
- 10 Gbps full duplex (requires a full duplex link partner set to full duplex). The adapter can send and receive packets at the same time.
- 1 Gbps full duplex (requires a full duplex link partner set to full duplex). The adapter can send and receive packets at the same time. You must set this mode manually (see below).
- 10 Mbps or 100 Mbps full duplex (requires a link partner set to full duplex). The adapter can send and receive packets at the same time. You must set this mode manually (see below).
- 10 Mbps or 100 Mbps half duplex (requires a link partner set to half duplex). The adapter performs one operation at a time; it either sends or receives. You must set this mode manually (see below).
Your link partner must match the setting you choose.
NOTES:
- Although some adapter property sheets (driver property settings) list 10 Mbps and 100 Mbps in full or half duplex as options, using those settings is not recommended.
- Only experienced network administrators should force speed and duplex manually.
- You cannot change the speed or duplex of Intel adapters that use fiber cabling.
Intel 10 Gigabit adapters that support 1 gigabit speed allow you to configure the speed setting. If this option is not present, your adapter only runs at its native speed.
Manually Configuring Duplex and Speed Settings
If your switch supports the NWay* standard, and both the adapter and switch are set to auto-negotiate, full duplex con­figuration is automatic, and no action is required on your part. Not all switches support auto-negotiation. Check with your network administrator or switch documentation to verify whether your switch supports this feature.
Configuration is specific to the driver you are loading for your network operating system. To set a specific Link Speed and Duplex mode, refer to the section below that corresponds to your operating system.
CAUTION: The settings at the switch must always match the adapter settings. Adapter performance may suffer, or your adapter might not operate correctly if you configure the adapter differently from your switch.
Windows
The default setting is for auto-negotiation to be enabled. Only change this setting to match your link partner's speed and duplex setting if you are having trouble connecting.
1. In Windows Device Manager, double-click the adapter you want to configure.
2. On the Link Speed tab, select a speed and duplex option from the Speed and Duplex drop-down menu.
3. Click OK.
More specific instructions are available in the Intel PROSet help.
Linux
See Linux* Driver for the Intel® Gigabit Family of Adapters for information on configuring Speed and Duplex on Linux systems.
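On Linux, the base drivers expose the speed and duplex settings through the standard ethtool interface. A minimal sketch; the interface name eth0 is an assumption (substitute your own device), and the command is built as a string here so the example itself has no side effects — run the command directly, as root, on a real system:

```shell
# Interface name is an assumption; replace with your device (e.g. em1, p1p1).
IFACE=eth0

# Force 100 Mbps full duplex and disable auto-negotiation, matching a forced link partner.
CMD="ethtool -s $IFACE speed 100 duplex full autoneg off"

echo "$CMD"
```

As the CAUTION above notes, the link partner (switch port) must be forced to the identical setting, or the link may not come up or may perform poorly.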

Windows* Server Push Install

Introduction
A "Push", or unattended, installation provides a means for network administrators to easily install a Microsoft Windows* operating system on similarly equipped systems. The network administrator can create bootable media that will automatically log into a central server and install the operating system from an image of the Windows installation directory stored on that server. This document provides instructions for a basic unattended installation that includes the installation of drivers for Intel® Networking Devices.
As part of the unattended installation, you can create teams and VLANs. If you wish to create one or more teams/VLANs as part of the unattended installation, you must also follow the instructions in the "Instructions for Creating Teams and VLANs (Optional)" section of this document.
The elements necessary for the Windows Server unattended installation are:
- A Windows Server system with a shared image of the Windows installation CD.
- If you want to create teams/VLANs as part of the unattended installation, you need to create a configuration file with the team/VLAN information in it. To create this file, you need a sample system that has the same type of adapters that will be in the systems receiving the push installation. On the sample system, use Intel® PROSet for Windows Device Manager to set up the adapters in the teaming/VLAN configuration you want. This system could also be the Windows Server mentioned above. For clarity, this system is referred to in this document as the configured system.
- An unattended installation configuration file that provides Windows setup with information it needs to complete the installation. The name of this file is UNATTEND.XML.
NOTE: Intel® 10GbE Network Adapters do not support unattended driver installation.
Setting up an Install Directory on the File Server
The server must be set up with a distribution folder that holds the required Windows files. Clients must also be able to read this folder when connecting via TCP/IP or IPX.
For illustration purposes, the examples in this document use the network share D:\WINPUSH. To create this share:
1. Create a directory on the server (EX: D:\WINPUSH).
2. Use the My Computer applet in Windows to locate the D:\WINPUSH folder.
3. Right-click the folder and select Sharing.
4. Select Share this folder, then give it a share name (EX: WINPUSH). This share name will be used to connect to this directory from the remote target systems. By default, the permissions for this share are for Everyone to have Read access.
5. Adjust permissions as necessary and click OK.
To prepare the distribution folder:
1. Copy the entire contents from the Windows Server DVD to D:\WINPUSH. Use Windows Explorer or XCOPY to maintain the same directory structure as on the Windows Server DVD. When the copy is complete, the Windows Server installation files should be in the D:\WINPUSH directory.
2. Use the Windows System Image Manager to edit/generate the Unattend.xml file and save it to the D:\WINPUSH directory. See sample below for example of Unattend.xml.
3. Create the driver install directory structure and copy the driver files to it.
NOTE: The PUSHCOPY.BAT file provided with the drivers in the PUSH directory copies the appropriate files for the installation. PUSHCOPY also copies the components needed to perform the automated installations contained in the [GuiRunOnce] section of the sample UNATTEND.XML file. These include an unattended installation of the Intel® PROSet for Windows Device Manager.
Example: From a Windows command prompt, where e: is the drive letter of your CD-ROM drive:

e:
cd \PUSH
(You must be in the PUSH directory to run PUSHCOPY.)
pushcopy D:\WINPUSH WS8

The above command creates the $OEM$ directory structure and copies all the necessary files to install the driver and Intel® PROSet for Windows Device Manager. However, Intel® PROSet is not installed unless the FirstLogonCommands section is added, as in the example below.
[Microsoft-Windows-Shell-Setup\FirstLogonCommands\SynchronousCommand]
CommandLine= %systemdrive%\WMIScr\Install.bat
Description= Begins silent unattended install of Intel PROSet for Windows Device Manager
Order= 1
Instructions for Creating Teams and VLANs (Optional)
NOTE: If you used PUSHCOPY as described in "To prepare the distribution folder:", the directory structure will already be created and you can skip step 4.
1. Prepare the distribution folder on the file server as detailed in the preceding section.
2. Copy SavResDX.vbs from the Intel CD to the configured system. The file is located in the \WMI directory on the Intel CD.
3. Open a command prompt on the configured system and navigate to the directory containing SavResDX.vbs.
4. Run the following command: cscript SavResDX.vbs save. A configuration file called WmiConf.txt is created in the same directory.
5. Copy the SavResDX.vbs and WmiConf.txt files to the $OEM$\$1\WMIScr directory on the file server.
6. Locate the batch file, Install.bat, in $OEM$\$1\WMIScr. Edit the batch file by removing the comment that precedes the second START command. The file should look like the following when finished:

Start /wait %systemdrive%\drivers\net\INTEL\APPS\ProsetDX\Win32\PROSetDX.msi /qn /li %temp%\PROSetDX.log
REM Uncomment the next line if VLANs or Teams are to be installed.
Start /wait /b cscript %systemdrive%\wmiscr\SavResDX.vbs restore %systemdrive%\wmiscr\wmiconf.txt > %systemdrive%\wmiscr\output.txt
exit
NOTE: If you are adding a team or VLAN, run ImageX.exe afterwards to create the Intel.wim containing the team or VLAN.
Deployment Methods
Boot using your WinPE 2.0 media and connect to the server containing your Windows Server 2008 installation share.
Run the command from the \\Server\WINPUSH prompt:
setup /unattend:<full path to answer file>
NOTE: In the above procedure, setup runs the installation in unattended mode and also detects the plug and play network adapters. All driver files are copied from the shared directory to the target system directories and installation of the OS and Network Adapters continues without user intervention.
If you installed teams/VLANs as part of the unattended installation, view the results of the script execution in the output.txt file. This file is in the same directory as the SavResDX.vbs file.
Microsoft Documentation for Unattended Installations of Windows Server 2008 and Windows Server 2012
For a complete description of the parameters supported in Unattend.XML, visit support.microsoft.com to view the Windows Automated Installation Kit (WAIK) documentation.
Sample Unattend.XML file for Windows Server 2008
Add the following Components to the specified Configuration Pass:
Pass: 1 WindowsPE
Component:
Microsoft-Windows-Setup\DiskConfiguration\Disk\CreatePartitions\CreatePartition
Microsoft-Windows-Setup\DiskConfiguration\Disk\ModifyPartitions\ModifyPartition
Microsoft-Windows-Setup\ImageInstall\OSImage\InstallTo
Microsoft-Windows-Setup\ImageInstall\DataImage\InstallTo
Microsoft-Windows-Setup\ImageInstall\DataImage\InstallFrom
Microsoft-Windows-Setup\UserData
Microsoft-Windows-International-Core-WinPE

Pass: 4 Specialize
Component:
Microsoft-Windows-Deployment\RunSynchronous\RunSynchronousCommand

Pass: 7 oobeSystem
Component:
Microsoft-Windows-Shell-Setup\OOBE
Microsoft-Windows-Shell-Setup\AutoLogon
Microsoft-Windows-Shell-Setup\FirstLogonCommands
Use the following values to populate the UNATTEND.XML:
[Microsoft-Windows-Setup\DiskConfiguration]
WillShowUI = OnError

[Microsoft-Windows-Setup\DiskConfiguration\Disk]
DiskID = 0
WillWipeDisk = true (if false, you only need the Modify section; adjust it to your system)

[Microsoft-Windows-Setup\DiskConfiguration\Disk\CreatePartitions\CreatePartition]
Extend = false
Order = 1
Size = 20000 (NOTE: This example creates a 20-GB partition.)
Type = Primary

[Microsoft-Windows-Setup\DiskConfiguration\Disk\ModifyPartitions\ModifyPartition]
Active = true
Extend = false
Format = NTFS
Label = OS_Install
Letter = C
Order = 1
PartitionID = 1

[Microsoft-Windows-Setup\ImageInstall\OSImage]
WillShowUI = OnError

[Microsoft-Windows-Setup\ImageInstall\OSImage\InstallTo]
DiskID = 0
PartitionID = 1

[Microsoft-Windows-Setup\ImageInstall\DataImage\InstallFrom]
Path = \\Server\PushWS8\intel.wim

[Microsoft-Windows-Setup\ImageInstall\DataImage\InstallTo]
DiskID = 0
PartitionID = 1

[Microsoft-Windows-Setup\UserData]
AcceptEula = true
FullName = LADV
Organization = Intel Corporation

[Microsoft-Windows-Setup\UserData\ProductKey]
Key = <enter appropriate key>
WillShowUI = OnError

[Microsoft-Windows-Shell-Setup\OOBE]
HideEULAPage = true
ProtectYourPC = 3
SkipMachineOOBE = true
SkipUserOOBE = true

[Microsoft-Windows-International-Core-WinPE]
InputLocale = en-us
SystemLocale = en-us
UILanguage = en-us
UserLocale = en-us

[Microsoft-Windows-International-Core-WinPE\SetupUILanguage]
UILanguage = en-us

[Microsoft-Windows-Deployment\RunSynchronous\RunSynchronousCommand]
Description = Enable built-in administrator account
Order = 1
Path = net user administrator /active:Yes

[Microsoft-Windows-Shell-Setup\AutoLogon]
Enabled = true
LogonCount = 5
Username = Administrator

[Microsoft-Windows-Shell-Setup\AutoLogon\Password]
<strongpassword>

[Microsoft-Windows-Shell-Setup\FirstLogonCommands]
Description = Begins silent unattended install of Intel PROSet for Windows Device Manager
Order = 1
CommandLine = %systemdrive%\WMIScr\Install.bat
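For orientation, the bracketed FirstLogonCommands values above map into Unattend.XML markup roughly as in the following sketch. The processorArchitecture, publicKeyToken, and wcm:action attributes shown here are standard unattend boilerplate assumed for illustration, not values taken from this guide; generate the exact component attributes with Windows System Image Manager.

```xml
<!-- Sketch only: oobeSystem pass, Microsoft-Windows-Shell-Setup component -->
<settings pass="oobeSystem">
  <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
    <FirstLogonCommands>
      <SynchronousCommand wcm:action="add">
        <CommandLine>%systemdrive%\WMIScr\Install.bat</CommandLine>
        <Description>Begins silent unattended install of Intel PROSet for Windows Device Manager</Description>
        <Order>1</Order>
      </SynchronousCommand>
    </FirstLogonCommands>
  </component>
</settings>
```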

Command Line Installation for Base Drivers and Intel® PROSet

Driver Installation
The driver install utility setup.exe allows unattended installation of drivers from a command line. It can be used to install the base driver, intermediate driver, and all management applications for supported devices.
NOTE: You must run setup.exe in a DOS (command prompt) window within Windows Server 2008. Setup.exe will not run on a computer booted to DOS.
Setup.exe Command Line Options
By setting the parameters in the command line, you can enable and disable management applications. If parameters are not specified, only existing components are updated.
Setup.exe supports the following command line parameters:
Parameter Definition
BD Base Driver
"0", do not install the base driver.
"1", install the base driver (default).
ANS Advanced Network Services
"0", do not install ANS (default). If ANS is already installed, it will be uninstalled.
"1", install ANS. The ANS property requires DMIX=1.
NOTE: If the ANS parameter is set to ANS=1, both Intel PROSet and ANS will be installed.
DMIX PROSet for Windows Device Manager
"0", do not install Intel PROSet feature (default). If the Intel PROSet feature is already installed, it will be uninstalled.
"1", install Intel PROSet feature. The DMIX property requires BD=1.
NOTE: If DMIX=0, ANS will not be installed. If DMIX=0 and Intel PROSet, ANS, SMASHv2 and FCoE are already installed, Intel PROSet, ANS, SMASHv2 and FCoE will be uninstalled.
SNMP Intel SNMP Agent
"0", do not install SNMP. If SNMP is already installed, it will be uninstalled.
"1", install SNMP (default). The SNMP property requires BD=1.
NOTE: Although the default value for the SNMP parameter is 1 (install), the SNMP agent will only be installed if:
l The Intel SNMP Agent is already installed. In this case, the SNMP agent will be updated.
l The Windows SNMP service is installed. In this case, the SNMP window will pop up and you may cancel the installation if you do not want it installed.
FCOE Fibre Channel over Ethernet
"0", do not install FCoE (default). If FCoE is already installed, it will be uninstalled.
"1", install FCoE. The FCoE property requires DMIX=1.
NOTE: Even if FCOE=1 is passed, FCoE will not be installed if the operating system and installed adapters do not support FCoE.
ISCSI iSCSI
"0", do not install iSCSI (default). If iSCSI is already installed, it will be uninstalled.
"1", install iSCSI. The iSCSI property requires DMIX=1.
LOG [log file name]
LOG allows you to enter a file name for the installer log file. The default name is C:\UmbInst.log.
XML [XML file name]
XML allows you to enter a file name for the XML output file.
-a Extract the components required for installing the base driver to C:\Program Files\Intel\Drivers. The directory where these files will be extracted to can be modified unless silent mode (/qn) is specified. If this parameter is specified, the installer will exit after the base driver is extracted. Any other parameters will be ignored.
-f Force a downgrade of the components being installed.
NOTE: If the installed version is newer than the current version, this parameter needs to be set.
-v Display the current install package version.
/q[r|n] /q --- silent install options
r Reduced GUI Install (only displays critical warning messages)
n Silent install
/l[i|w|e|a] /l --- log file option for DMIX and SNMP installation. Following are log switches:
i log status messages.
w log non-fatal warnings.
e log error messages.
a log the start of all actions.
-u Uninstall the drivers.
NOTE: You must include a space between parameters.
Command line install examples
This section describes some examples used in command line installs.
Assume that setup.exe is in the root directory of the CD, D:\. Modify the paths as needed for other operating systems and CD layouts, then apply the command line examples.
1. How to install the base driver on Windows Server 2008:
D:\Setup DMIX=0 ANS=0 SNMP=0
2. How to install the base driver on Windows Server 2008 using the LOG option:
D:\Setup LOG=C:\installBD.log DMIX=0 ANS=0 SNMP=0
3. How to install Intel PROSet and ANS silently on Windows Server 2008:
D:\Setup DMIX=1 ANS=1 /qn
4. How to install Intel PROSet without ANS silently on Windows Server 2008:
D:\Setup DMIX=1 ANS=0 /qn
5. How to install components but deselect ANS for Windows Server 2008:
D:\Setup DMIX=1 ANS=0 /qn /liew C:\install.log
The /liew log option provides a log file for the DMIX installation.
NOTE: To install teaming and VLAN support on a system that has adapter base drivers and Intel PROSet for Windows Device Manager installed, type the command line D:\Setup ANS=1.
Windows Server 2008 Server Core
In Windows Server 2008 Server Core, the base driver can be installed using the Plug and Play Utility, PnPUtil.exe. For more information on this utility, see http://technet2.microsoft.com/windowsserver2008/en/library/c265eb4d-f579-42ca-b82a-02130d33db531033.mspx?mfr=true.
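As a minimal sketch, assuming the extracted driver package has already been copied to the Server Core system (the directory and .inf file name below are placeholders, not files named in this guide), the PnPUtil invocation looks like:

```
C:\> pnputil -i -a c:\drivers\net\mydriver.inf
```

The -a switch adds the driver package to the driver store and -i installs it on matching devices.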

Using the Adapter

Testing the Adapter

Intel's diagnostic software lets you test the adapter to see if there are problems with the adapter hardware, the cabling, or the network connection.
Tests for Windows
Intel PROSet allows you to run three types of diagnostic tests.
l Connection Test: Verifies network connectivity by pinging the DHCP server, WINS server, and gateway.
l Cable Tests: Provide information about cable properties.
NOTE: The Cable Test is not supported on:
l Intel® Ethernet Server Adapter X520-2
l Intel® Ethernet X520 10GbE Dual Port KX4 Mezz
l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
l Intel® Ethernet 10G 2P X520 Adapter
l Intel® Ethernet 10G 2P X520-k bNDC
l Intel® Ethernet 10G 4P X520/I350 rNDC
l Intel® Ethernet 10G 2P X540-t Adapter
l Intel® Ethernet 10G 4P X540/I350 rNDC
l Intel® Gigabit 4P I350-t Mezz
l Intel® Gigabit 4P X520/I350 rNDC
l Intel® Ethernet Connection I354 1.0 GbE Backplane
l Intel® Gigabit 4P I350 bNDC
l Intel® Gigabit 2P I350 LOM
l Hardware Tests: Determine if the adapter is functioning properly.
NOTE: Hardware tests will fail if the adapter is configured for iSCSI Boot.
To access these tests, select the adapter in Windows Device Manager, click the Link tab, and click Diagnostics. A Diagnostics window displays tabs for each type of test. Click the appropriate tab and run the test.
The availability of these tests is hardware and operating system dependent.
DOS Diagnostics
Use the DIAGS test utility to test adapters under DOS.
Linux Diagnostics
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical information. ethtool version 1.6 or later is required for this functionality.
The latest release of ethtool can be found at: http://sourceforge.net/projects/gkernel.
NOTE: ethtool 1.6 only supports a limited set of ethtool options. Support for a more complete ethtool feature set can be enabled by upgrading ethtool to the latest version.
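For example, assuming an interface named eth0 (a placeholder; substitute your interface name), the adapter self-test and driver statistics can be run with standard ethtool commands:

```
# Run the adapter self-test (the interface goes offline during the test)
ethtool -t eth0 offline
# Display driver statistics counters
ethtool -S eth0
# Show link settings and link status
ethtool eth0
```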
Responder Testing
The Intel adapter can send test messages to another Ethernet adapter on the same network. This testing is available in DOS via the diags.exe utility found in the \DOSUtilities\UserDiag\ directory on the installation CD or downloaded from Customer Support.

Adapter Teaming

ANS Teaming, a feature of the Intel Advanced Network Services (ANS) component, lets you take advantage of multiple adapters in a system by grouping them together. ANS can use features like fault tolerance and load balancing to increase throughput and reliability.
Teaming functionality is provided through the intermediate driver, ANS. Teaming uses the intermediate driver to group physical adapters into a team that acts as a single virtual adapter. ANS serves as a wrapper around one or more base drivers, providing an interface between the base driver and the network protocol stack. By doing so, the intermediate driver gains control over which packets are sent to which physical interface as well as control over other properties essential to teaming.
There are several teaming modes you can configure ANS adapter teams to use.
Setting Up Adapter Teaming
Before you can set up adapter teaming in Windows*, you must install Intel® PROSet software. For more information on setting up teaming, see the information for your operating system.
Operating Systems Supported
The following links provide information on setting up teaming with your operating system:
l Windows
NOTE: To configure teams in Linux, use Channel Bonding, available in supported Linux kernels. For more information, see the channel bonding documentation within the kernel source, located at Documentation/networking/bonding.txt.
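As a hedged sketch of the Channel Bonding approach described in bonding.txt, a failover (active-backup) bond can be configured by loading the bonding module with options like the following; the bond name and option values here are illustrative assumptions, not settings taken from this guide:

```
# /etc/modprobe.d/bonding.conf -- load the bonding driver in
# active-backup (failover) mode with 100 ms link monitoring
alias bond0 bonding
options bonding mode=active-backup miimon=100
```

After the module loads, the bond0 interface is assigned an address and physical ports are enslaved to it (for example, with ifenslave on older distributions); see bonding.txt for the procedure that matches your kernel.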
Supported Adapters
Teaming options are supported on Intel server adapters. Selected adapters from other manufacturers are also supported. If you are using a Windows-based computer, adapters that appear in Intel PROSet may be included in a team.
NOTE: In order to use adapter teaming, you must have at least one Intel gigabit or 10 gigabit server adapter in your system. Furthermore, all adapters must be linked to the same switch or hub.
Conditions that may prevent you from teaming a device
During team creation or modification, the list of available team types or list of available devices may not include all team types or devices. This may be caused by any of several conditions, including:
l The operating system does not support the desired team type.
l The device does not support the desired team type or does not support teaming at all.
l The devices you want to team together use different driver versions.
l You can add Intel® Active Management Technology (Intel® AMT) enabled devices to Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), and Adaptive Load Balancing (ALB) teams. All other team types are not supported. The Intel AMT enabled device must be designated as the primary adapter for the team.
l You are trying to team an Intel PRO/100 device with an Intel 10GbE device.
l The device's MAC address is overridden by the Locally Administered Address advanced setting.
l Fibre Channel over Ethernet (FCoE) Boot has been enabled on the device.
l The device has “OS Controlled” selected on the Data Center tab.
l The device has a virtual NIC bound to it.
l The device is part of a Microsoft* Load Balancing and Failover (LBFO) team.
Configuration Notes
l Not all team types are available on all operating systems.
l Be sure to use the latest available drivers on all adapters.
l NDIS 6.2 introduced new RSS data structures and interfaces. Due to this, you cannot enable RSS on teams that contain a mix of adapters that support NDIS 6.2 RSS and adapters that do not.
l If you are using an Intel® 10GbE Server Adapter and an Intel® Gigabit adapter in the same machine, the driver for the Gigabit adapter must be updated with the version on the Intel 10GbE CD or respective download package.
l If a team is bound to a Hyper-V virtual NIC, the Primary or Secondary adapter cannot be changed.
l Some advanced features, including hardware offloading, are automatically disabled when non-Intel adapters are team members to assure a common feature set.
l TOE (TCP Offload Engine) enabled devices cannot be added to an ANS team and will not appear in the list of available adapters.
To enable teaming using Broadcom Advanced Control Suite 2:
1. Load base drivers and Broadcom Advanced Control Suite 2 (always use the latest software releases from www.support.dell.com)
2. Select the Broadcom device and go to the Advanced Tab
3. Disable Receive Side Scaling
4. Go to Resource Allocations and select TCP Offload Engine (TOE)
5. Click on Configure and uncheck TCP Offload Engine (TOE) from the NDIS Configuration section
To enable teaming using Broadcom Advanced Control Suite 3:
1. Load base drivers and Broadcom Advanced Control Suite 3 (always use the latest software releases from www.support.dell.com)
2. Select the Broadcom device and uncheck TOE from the Configurations Tab
3. Click on Apply
4. Choose NDIS entry of Broadcom device and disable Receive Side Scaling from Configurations Tab
5. Click on Apply
NOTE: Multivendor teaming is not supported in Windows Server 2008 x64 versions.
l Spanning tree protocol (STP) should be disabled on switch ports connected to team adapters in order to prevent data loss when the primary adapter is returned to service (failback). Activation Delay is disabled by default. Alternatively, an activation delay may be configured on the adapters to prevent data loss when spanning tree is used. Set the Activation Delay on the advanced tab of team properties.
l Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) will be automatically disabled when an adapter is added to a team with non-FCoE/DCB capable adapters.
l ANS teaming of VF devices inside a Windows 2008 R2 guest running on an open source hypervisor is supported.
l An Intel® Active Management Technology (Intel AMT) enabled device can be added to Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), and Adaptive Load Balancing (ALB) teams. All other team types are not supported. The Intel AMT enabled device must be designated as the primary adapter for the team.
l Before creating a team, adding or removing team members, or changing advanced settings of a team member, make sure each team member has been configured similarly. Settings to check include VLANs and QoS Packet Tagging, Jumbo Packets, and the various offloads. These settings are available on the Advanced Settings tab. Pay particular attention when using different adapter models or adapter versions, as adapter capabilities vary.
l If team members implement Intel ANS features differently, failover and team functionality may be affected. To avoid team implementation issues:
  l Create teams that use similar adapter types and models.
  l Reload the team after adding an adapter or changing any Advanced features. One way to reload the team is to select a new preferred primary adapter. Although there will be a temporary loss of network connectivity as the team reconfigures, the team will maintain its network addressing schema.
l ANS allows you to create teams of one adapter. A one-adapter team will not take advantage of teaming features, but it will allow you to "hot-add" another adapter to the team without the loss of network connectivity that occurs when you create a new team.
l Before hot-adding a new member to a team, make sure that new member's link is down. When a port is added to a switch channel before the adapter is hot-added to the ANS team, disconnections will occur because the switch will start forwarding traffic to the port before the new team member is actually configured. The opposite, where the member is first hot-added to the ANS team and then added to the switch channel, is also problematic because ANS will forward traffic to the member before the port is added to the switch channel, and disconnection will occur.
l Intel 10 Gigabit Server Adapters can team with Intel Gigabit adapters and certain server-oriented models from other manufacturers. If you are using a Windows-based computer, adapters that appear in the Intel® PROSet teaming wizard may be included in a team.
l Network ports using OS2BMC should not be teamed with ports that have OS2BMC disabled.
l A reboot is required when any changes are made, such as modifying an advanced parameter setting of the base driver or creating a team or VLAN, on the network port that was used for a RIS install.
l Intel adapters that do not support Intel PROSet may still be included in a team. However, they are restricted in the same way non-Intel adapters are. See Multi-Vendor Teaming for more information.
l If you create a Multi-Vendor Team, you must manually verify that the RSS settings for all adapters in the team are the same.
l The table below provides a summary of support for Multi-Vendor Teaming.
Multi-vendor Teaming using Intel Teaming Driver (iANS/PROSet)

                                                        Teaming Mode Supported       Offload Support  Other Offload and RSS Support
Intel              Broadcom                             AFT  SFT  ALB/RLB  SLA  LACP  LSO  CSO        TOE  RSS
Intel PCI Express  Broadcom Device with TOE disabled    Yes  Yes  Yes      Yes  Yes   Yes  Yes        No   No
Intel PCI Express  Broadcom Device with TOE enabled     No   No   No       No   No    No   No         No   No
Microsoft* Load Balancing and Failover (LBFO) teams
Intel ANS teaming and VLANs are not compatible with Microsoft's LBFO teams. Intel® PROSet will block a member of an LBFO team from being added to an Intel ANS team or VLAN. You should not add a port that is already part of an Intel ANS team or VLAN to an LBFO team, as this may cause system instability. If you use an ANS team member or VLAN in an LBFO team, perform the following procedure to restore your configuration:
1. Reboot the machine.
2. Remove the LBFO team. Even though LBFO team creation failed, after a reboot Server Manager will report that LBFO is Enabled, and the LBFO interface is present in the ‘NIC Teaming’ GUI.
3. Remove the ANS teams and VLANs involved in the LBFO team and recreate them. This step is optional (all bindings are restored when the LBFO team is removed), but strongly recommended.
NOTE: If you add an Intel AMT enabled port to an LBFO team, do not set the port to Standby in the LBFO team. If you set the port to Standby you may lose AMT functionality.
Teaming Modes
There are several teaming modes, and they can be grouped into these categories:
Fault Tolerance
Provides network connection redundancy by designating a primary controller and utilizing the remaining controllers as backups. Designed to ensure server availability to the network. When the user-specified primary adapter loses link, the iANS driver will "fail over" the traffic to the available secondary adapter. When the link of the primary adapter resumes, the iANS driver will "fail back" the traffic to the primary adapter. See Primary and Secondary Adapters for more information. The iANS driver uses link-based tolerance and probe packets to detect the network connection failures.
l Link-based tolerance - The teaming driver checks the link status of the local network interfaces belonging to the team members. Link-based tolerance provides fail over and fail back for the immediate link failures only.
l Probing - Probing is another mechanism used to maintain the status of the adapters in a fault tolerant team. Probe packets are sent to establish known, minimal traffic between adapters in a team. At each probe interval, each adapter in the team sends a probe packet to other adapters in the team. Probing provides fail over and fail back for immediate link failures as well as external network failures in the single network path of the probe packets between the team members.
Fault Tolerance teams include Adapter Fault Tolerance (AFT) and Switch Fault Tolerance (SFT).
Load Balancing
Provides transmission load balancing by dividing outgoing traffic among all the NICs, with the ability to shift traffic away from any NIC that goes out of service. Receive Load Balancing balances receive traffic.
Load Balancing teams include Adaptive Load Balancing (ALB) teams.
NOTE: If your network is configured to use a VLAN, make sure the load balancing team is configured to use the same VLAN.
Link Aggregation
Combines several physical channels into one logical channel. Link Aggregation is similar to Load Balancing.
Link Aggregation teams include Static Link Aggregation and IEEE 802.3ad: dynamic mode.
IMPORTANT
l For optimal performance, you must disable the Spanning Tree Protocol (STP) on all the switches in the network when using AFT, ALB, or Static Link Aggregation teaming.
l When you create a team, a virtual adapter instance is created. In Windows, the virtual adapter appears in both the Device Manager and Network and Dial-up Connections. Each virtual adapter instance appears as "Intel Advanced Network Services Virtual Adapter." Do not attempt to modify (except to change protocol configuration) or remove these virtual adapter instances using Device Manager or Network and Dial-up Connections. Doing so might result in system anomalies.
l Before creating a team, adding or removing team members, or changing advanced settings of a team member, make sure each team member has been configured similarly. Settings to check include VLANs and QoS Packet Tagging, Jumbo Packets, and the various offloads. These settings are available in Intel PROSet's Advanced tab. Pay particular attention when using different adapter models or adapter versions, as adapter capabilities vary.
If team members implement Advanced Features differently, failover and team functionality will be affected. To avoid team implementation issues:
l Use the latest available drivers on all adapters.
l Create teams that use similar adapter types and models.
l Reload the team after adding an adapter or changing any Advanced Features. One way to reload the team is to select a new preferred primary adapter. Although there will be a temporary loss of network connectivity as the team reconfigures, the team will maintain its network addressing schema.
Primary and Secondary Adapters
Teaming modes that do not require a switch with the same capabilities (AFT, SFT, ALB (with RLB)) use a primary adapter. In all of these modes except RLB, the primary is the only adapter that receives traffic. RLB is enabled by default on an ALB team.
If the primary adapter fails, another adapter will take over its duties. If you are using more than two adapters, and you want a specific adapter to take over if the primary fails, you must specify a secondary adapter. If an Intel AMT enabled device is part of a team, it must be designated as the primary adapter for the team.
There are two types of primary and secondary adapters:
l Default primary adapter: If you do not specify a preferred primary adapter, the software will choose an adapter of the highest capability (model and speed) to act as the default primary. If a failover occurs, another adapter becomes the primary. Once the problem with the original primary is resolved, the traffic will not automatically restore to the default (original) primary adapter in most modes. The adapter will, however, rejoin the team as a non-primary.
l Preferred Primary/Secondary adapters: You can specify a preferred adapter in Intel PROSet. Under normal conditions, the Primary adapter handles all traffic. The Secondary adapter will receive fallback traffic if the primary fails. If the Preferred Primary adapter fails, but is later restored to an active status, control is automatically switched back to the Preferred Primary adapter. Specifying primary and secondary adapters adds no benefit to SLA and IEEE 802.3ad dynamic teams, but doing so forces the team to use the primary adapter's MAC address.
To specify a preferred primary or secondary adapter in Windows
1. In the Team Properties dialog box's Settings tab, click Modify Team.
2. On the Adapters tab, select an adapter.
3. Click Set Primary or Set Secondary.
NOTE: You must specify a primary adapter before you can specify a secondary adapter.
4. Click OK.
The adapter's preferred setting appears in the Priority column on Intel PROSet's Team Configuration tab. A "1" indicates a preferred primary adapter, and a "2" indicates a preferred secondary adapter.
Failover and Failback
When a link fails, either because of port or cable failure, team types that provide fault tolerance will continue to send and receive traffic. Failover is the initial transfer of traffic from the failed link to a good link. Failback occurs when the original adapter regains link. You can use the Activation Delay setting (located on the Advanced tab of the team's properties in Device Manager) to specify how long the failover adapter waits before becoming active. If you don't want your team to fail back when the original adapter gets link back, you can set the Allow Failback setting to disabled (located on the Advanced tab of the team's properties in Device Manager).
Adapter Fault Tolerance (AFT)
Adapter Fault Tolerance (AFT) provides automatic recovery from a link failure caused by a failure in an adapter, cable, switch, or port by redistributing the traffic load to a backup adapter.
Failures are detected automatically, and traffic rerouting takes place as soon as the failure is detected. The goal of AFT is to ensure that load redistribution takes place fast enough to prevent user sessions from being disconnected. AFT supports two to eight adapters per team. Only one active team member transmits and receives traffic. If this primary connection (cable, adapter, or port) fails, a secondary, or backup, adapter takes over. After a failover, if the connection to the user-specified primary adapter is restored, control passes automatically back to that primary adapter. For more information, see Primary and Secondary Adapters.
AFT is the default mode when a team is created. This mode does not provide load balancing.
NOTES
l AFT teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned off for the switch port connected to the NIC or LOM on the server.
l All members of an AFT team must be connected to the same subnet.
Switch Fault Tolerance (SFT)
Switch Fault Tolerance (SFT) supports only two NICs in a team connected to two different switches. In SFT, one adapter is the primary adapter and one adapter is the secondary adapter. During normal operation, the secondary adapter is in standby mode. In standby, the adapter is inactive and waiting for failover to occur. It does not transmit or receive network traffic. If the primary adapter loses connectivity, the secondary adapter automatically takes over. When SFT teams are created, the Activation Delay is automatically set to 60 seconds.
In SFT mode, the two adapters creating the team can operate at different speeds.
NOTE: SFT teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned on.
Configuration Monitoring
You can set up monitoring between an SFT team and up to five IP addresses. This allows you to detect link failure beyond the switch. You can ensure connection availability for several clients that you consider critical. If the connection between the primary adapter and all of the monitored IP addresses is lost, the team will fail over to the secondary adapter.
Adaptive/Receive Load Balancing (ALB/RLB)
Adaptive Load Balancing (ALB) is a method for dynamic distribution of data traffic load among multiple physical channels. The purpose of ALB is to improve overall bandwidth and end station performance. In ALB, multiple links are provided from the server to the switch, and the intermediate driver running on the server performs the load balancing function. The ALB architecture utilizes knowledge of Layer 3 information to achieve optimum distribution of the server transmission load.
ALB is implemented by assigning one of the physical channels as Primary and all other physical channels as Secondary. Packets leaving the server can use any one of the physical channels, but incoming packets can only use the Primary Channel. With Receive Load Balancing (RLB) enabled, incoming IP traffic is balanced as well. The intermediate driver analyzes the send and transmit loading on each adapter and balances the rate across the adapters based on destination address. Adapter teams configured for ALB and RLB also provide the benefits of fault tolerance.
NOTES:
l ALB teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned off for the switch port connected to the network adapter in the server.
l ALB does not balance traffic when protocols such as NetBEUI and IPX* are used.
l You may create an ALB team with mixed speed adapters. The load is balanced according to the adapter's capabilities and bandwidth of the channel.
l All members of ALB and RLB teams must be connected to the same subnet.
Virtual Machine Load Balancing
Virtual Machine Load Balancing (VMLB) provides transmit and receive traffic load balancing across Virtual Machines bound to the team interface, as well as fault tolerance in the event of switch port, cable, or adapter failure.
The driver analyzes the transmit and receive load on each member adapter and balances the traffic across member adapters. In a VMLB team, each Virtual Machine is associated with one team member for its TX and RX traffic.
If only one virtual NIC is bound to the team, or if Hyper-V is removed, then the VMLB team will act like an AFT team.
NOTES:
l VMLB does not load balance non-routed protocols such as NetBEUI and some IPX* traffic.
l VMLB supports from two to eight adapter ports per team.
l You can create a VMLB team with mixed speed adapters. The load is balanced according to the lowest common denominator of adapter capabilities and the bandwidth of the channel.
l An Intel AMT enabled adapter cannot be used in a VMLB team.
Static Link Aggregation
Static Link Aggregation (SLA) is very similar to ALB, taking several physical channels and combining them into a single logical channel.
This mode works with:
l Cisco EtherChannel capable switches with channeling mode set to "on"
l Intel switches capable of Link Aggregation
l Other switches capable of static 802.3ad
The Intel teaming driver supports Static Link Aggregation for:
l Fast EtherChannel (FEC): FEC is a trunking technology developed mainly to aggregate bandwidth between switches working in Fast Ethernet. Multiple switch ports can be grouped together to provide extra bandwidth. These aggregated ports together are called Fast EtherChannel. Switch software treats the grouped ports as a single logical port. An end node, such as a high-speed end server, can be connected to the switch using FEC.
FEC link aggregation provides load balancing in a way which is very similar to ALB, including use of the same algorithm in the transmit flow. Receive load balancing is a function of the switch.
The transmission speed will never exceed the adapter base speed to any single address (per specification). Teams must match the capability of the switch. Adapter teams configured for static Link Aggregation also provide the benefits of fault tolerance and load balancing. You do not need to set a primary adapter in this mode.
l Gigabit EtherChannel (GEC): GEC link aggregation is essentially the same as FEC link aggregation.
NOTES:
l All adapters in a Static Link Aggregation team must run at the same speed and must be connected to a Static Link Aggregation-capable switch. If the speed capabilities of adapters in a Static Link Aggregation team are different, the speed of the team is dependent on the switch.
l Static Link Aggregation teaming requires that the switch be set up for Static Link Aggregation teaming and that spanning tree protocol is turned off.
l An Intel AMT enabled adapter cannot be used in an SLA team.
IEEE 802.3ad: Dynamic Link Aggregation
IEEE 802.3ad is the IEEE standard for link aggregation. Teams can contain two to eight adapters. You must use 802.3ad switches (in dynamic mode, aggregation can go across switches). Adapter teams configured for IEEE 802.3ad also provide the benefits of fault tolerance and load balancing. Under 802.3ad, all protocols can be load balanced.
Dynamic mode supports multiple aggregators. Aggregators are formed by port speed and the switch to which the ports are connected. For example, a team can contain adapters running at 1 Gbps and 10 Gbps, but two aggregators will be formed, one for each speed. Also, if a team contains 1 Gbps ports connected to one switch, and a combination of 1 Gbps and 10 Gbps ports connected to a second switch, three aggregators would be formed: one containing all the ports connected to the first switch, one containing the 1 Gbps ports connected to the second switch, and a third containing the 10 Gbps ports connected to the second switch.
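The aggregator-formation rule in the example above can be sketched by grouping ports on the (switch, speed) pair. The port and switch names below are hypothetical; the grouping logic simply mirrors the three-aggregator example in the text.

```python
from collections import defaultdict

def form_aggregators(ports):
    """Group team ports into 802.3ad aggregators keyed by
    (switch, speed), mirroring the example in the text."""
    groups = defaultdict(list)
    for name, switch, speed_gbps in ports:
        groups[(switch, speed_gbps)].append(name)
    return dict(groups)

# Hypothetical team: 1 Gbps ports on switch 1, a mix on switch 2.
ports = [
    ("p1", "sw1", 1), ("p2", "sw1", 1),
    ("p3", "sw2", 1),
    ("p4", "sw2", 10), ("p5", "sw2", 10),
]
aggs = form_aggregators(ports)
# Three aggregators result: (sw1, 1), (sw2, 1), and (sw2, 10).
```

Ports in the same aggregator share load; ports in different aggregators do not, which is why mixing speeds or switches divides the team as described.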
NOTES:
l IEEE 802.3ad teaming requires that the switch be set up for IEEE 802.3ad (link aggregation) teaming and that spanning tree protocol is turned off.
l Once you choose an aggregator, it remains in force until all adapters in that aggregation team lose link.
l In some switches, copper and fiber adapters cannot belong to the same aggregator in an IEEE 802.3ad configuration. If there are copper and fiber adapters installed in a system, the switch might configure the copper adapters in one aggregator and the fiber-based adapters in another. If you experience this behavior, for best performance you should use either only copper-based or only fiber-based adapters in a system.
l An Intel AMT enabled adapter cannot be used in a DLA team.
Before you begin
l Verify that the switch fully supports the IEEE 802.3ad standard.
l Check your switch documentation for port dependencies. Some switches require pairing to start on a primary port.
l Check your speed and duplex settings to ensure the adapter and switch are running at full duplex, either forced or set to auto-negotiate. Both the adapter and the switch must have the same speed and duplex configuration. The full-duplex requirement is part of the IEEE 802.3ad specification: http://standards.ieee.org/. If needed, change your speed or duplex setting before you link the adapter to the switch. Although you can change speed and duplex settings after the team is created, Intel recommends you disconnect the cables until the settings take effect. In some cases, switches or servers might not appropriately recognize modified speed or duplex settings if settings are changed while there is an active link to the network.
l If you are configuring a VLAN, check your switch documentation for VLAN compatibility notes. Not all switches support simultaneous dynamic 802.3ad teams and VLANs. If you do choose to set up VLANs, configure teaming and VLAN settings on the adapter before you link the adapter to the switch. Setting up VLANs after the switch has created an active aggregator affects VLAN functionality.
Multi-Vendor Teaming
Multi-Vendor Teaming (MVT) allows teaming with a combination of Intel and non-Intel adapters. This feature is currently available under Windows Server 2008 and Windows Server 2008 R2.
NOTE: MVT is not supported on Windows Server 2008 x64.
If you are using a Windows-based computer, adapters that appear in the Intel PROSet teaming wizard can be included in a team.
MVT Design Considerations
l In order to activate MVT, you must have at least one Intel adapter or integrated connection in the team, which must be designated as the primary adapter.
l A multi-vendor team can be created for any team type.
l All members in an MVT must operate on a common feature set (lowest common denominator).
l For MVT teams, manually verify that the frame setting for the non-Intel adapter is the same as the frame settings for the Intel adapters.
l If a non-Intel adapter is added to a team, its RSS settings must match the Intel adapters in the team.

Virtual LANs

Overview
NOTE: Windows* users must install Intel® PROSet for Windows Device Manager and Advanced Networking Services in order to use VLANs.
The term VLAN (Virtual Local Area Network) refers to a collection of devices that communicate as if they were on the same physical LAN. Any set of ports (including all ports on the switch) can be considered a VLAN. LAN segments are not restricted by the hardware that physically connects them.
VLANs offer the ability to group computers together into logical workgroups. This can simplify network administration when connecting clients to servers that are geographically dispersed across the building, campus, or enterprise network.
Typically, VLANs consist of co-workers within the same department but in different locations, groups of users running the same network protocol, or a cross-functional team working on a joint project.
By using VLANs on your network, you can:
l Improve network performance
l Limit broadcast storms
l Improve LAN configuration updates (adds, moves, and changes)
l Minimize security problems
l Ease your management tasks
Supported Operating Systems
IEEE VLANs are supported in the following operating systems. Configuration details are contained in the following links:
l Windows Server 2012
l Windows Server 2008
l Linux
NOTE: Native VLANs are now available in supported Linux kernels.
Other Implementation Considerations
l To set up IEEE VLAN membership (multiple VLANs), the adapter must be attached to a switch with IEEE 802.1Q VLAN capability.
l VLANs can co-exist with teaming (if the adapter supports both). If you do this, the team must be defined first, then you can set up your VLAN.
l The Intel PRO/100 VE and VM Desktop Adapters and Network Connections can be used in a switch-based VLAN but do not support IEEE tagging.
l You can set up only one untagged VLAN per adapter or team. You must have at least one tagged VLAN before you can set up an untagged VLAN.
IMPORTANT: When using IEEE 802.1Q VLANs, VLAN ID settings must match between the switch and those adapters using the VLANs.
NOTE: Intel ANS VLANs are not compatible with Microsoft's Load Balancing and Failover (LBFO) teams. Intel® PROSet will block a member of an LBFO team from being added to an Intel ANS VLAN. You should not add a port that is already part of an Intel ANS VLAN to an LBFO team, as this may cause system instability.
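The IEEE 802.1Q tagging referenced throughout this section inserts a 4-byte tag into each Ethernet frame: a 16-bit Tag Protocol Identifier (0x8100) followed by a 16-bit Tag Control Information field carrying the priority bits and the 12-bit VLAN ID. A minimal sketch of building that tag, to show why the VLAN ID must match on both the adapter and the switch:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def build_vlan_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID, then TCI (3-bit priority,
    1-bit DEI = 0, 12-bit VLAN ID). The VID written here must match
    the VLAN configured on the switch port, or frames are dropped."""
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", TPID, tci)

tag = build_vlan_tag(100)
# tag == b"\x81\x00\x00\x64" for VLAN 100 at priority 0
```

Because only 12 bits carry the VLAN ID, valid IDs range from 0 to 4095, with 0 and 4095 reserved in practice.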
Configuring VLANs in Microsoft* Windows*
In Microsoft* Windows*, you must use Intel® PROSet to set up and configure VLANs. For more information, select Intel PROSet in the Table of Contents (left pane) of this window.
NOTES:
l If you change a setting under the Advanced tab for one VLAN, it changes the settings for all VLANs using that port.
l In most environments, a maximum of 64 VLANs per network port or team are supported by Intel PROSet.
l ANS VLANs are not supported on adapters and teams that have VMQ enabled. However, VLAN filtering with VMQ is supported via the Microsoft Hyper-V VLAN interface. For more information see Microsoft Hyper-V virtual NICs on teams and VLANs.
l You can have different VLAN tags on a child partition and its parent. Those settings are separate from one another, and can be different or the same. The only instance where the VLAN tag on the parent and child MUST be the same is if you want the parent and child partitions to be able to communicate with each other through that VLAN. For more information see Microsoft Hyper-V virtual NICs on teams and VLANs.

Advanced Features

Jumbo Frames
Jumbo Frames are Ethernet frames that are larger than 1518 bytes. You can use Jumbo Frames to reduce server CPU utilization and increase throughput. However, additional latency may be introduced.
NOTES:
l Jumbo Frames are supported at 1000 Mbps and higher. Using Jumbo Frames at 10 or 100 Mbps is not supported and may result in poor performance or loss of link.
l End-to-end network hardware must support this capability; otherwise, packets will be dropped.
l Intel adapters that support Jumbo Frames have a frame size limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
Jumbo Frames can be implemented simultaneously with VLANs and teaming.
NOTE: If an adapter that has Jumbo Frames enabled is added to an existing team that has Jumbo Frames disabled, the new adapter will operate with Jumbo Frames disabled. The new adapter's Jumbo Frames setting in Intel PROSet will not change, but it will assume the Jumbo Frames setting of the other adapters in the team.
To configure Jumbo Frames at the switch, consult your network administrator or switch user's guide.
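The relationship between the 9238-byte frame limit and the 9216-byte MTU limit quoted above can be checked arithmetically. Assuming the frame limit accounts for a standard 14-byte Ethernet header, a 4-byte 802.1Q VLAN tag, and the 4-byte frame check sequence (an assumption about how Intel counts the overhead, not stated in this guide), the numbers line up exactly:

```python
# Assumed per-frame overhead on top of the MTU payload:
ETH_HEADER = 14  # destination MAC + source MAC + EtherType
VLAN_TAG = 4     # optional 802.1Q tag
FCS = 4          # frame check sequence

MTU_LIMIT = 9216
frame_limit = MTU_LIMIT + ETH_HEADER + VLAN_TAG + FCS
assert frame_limit == 9238  # matches the documented frame size limit

# For comparison, a standard untagged frame: 1500 + 14 + 4 = 1518 bytes,
# the threshold above which a frame counts as "jumbo".
assert 1500 + ETH_HEADER + FCS == 1518
```

This is why the MTU you configure on the host is 22 bytes smaller than the adapter's maximum frame size.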