Intel I350-T4 User Manual

Intel® Ethernet Adapters and Devices User Guide

Overview

Welcome to the User's Guide for Intel® Ethernet Adapters and devices. This guide covers hardware and software installation, setup procedures, and troubleshooting tips for Intel network adapters, connections, and other devices.
If you are installing a network adapter, follow this procedure from step 1. If you are upgrading the driver software, start with step 5.
NOTE: If you update the firmware, you must update the driver software to the same family version.
1. Make sure that you are installing the latest driver software for your adapter. Visit Intel's support website to download the latest drivers.
2. Review system requirements.
3. Insert the adapter(s) in the computer.
4. Attach the copper or fiber network cable(s).
5. Install the driver.
6. For Windows systems, install the Intel® PROSet software.
If you have any problems with basic installation, see Troubleshooting.
You can now set up advanced features, if necessary. The available features and the configuration process vary with the adapter and your operating system.

Before You Begin

Supported Devices
For help identifying your network device and finding supported devices, click the link below:
http://www.intel.com/support
Compatibility Notes
In order for an adapter based on the XL710 controller to reach its full potential, you must install it in a PCIe Gen3 x8 slot. Installing it in a shorter slot, or a Gen2 or Gen1 slot, will limit the throughput of the adapter.
Some older Intel® Ethernet Adapters do not have full software support for the most recent versions of Microsoft Windows*. Many older Intel Ethernet Adapters have base drivers supplied by Microsoft Windows. Lists of supported devices per OS are available at
http://www.intel.com/support/go/network/adapter/nicoscomp.htm
Supported Operating Systems
Supported 32-bit Operating Systems
NOTE: Microsoft* Windows* 32-bit operating systems are only supported on Intel 1GbE Ethernet
Adapters and slower devices. All adapters support 32-bit versions of Linux* and FreeBSD*.
Basic software and drivers are supported on the following operating systems:
l DOS
l SunSoft* Solaris* (drivers and support are provided by the operating system vendor)
Advanced software and drivers are supported on the following operating systems:
l Microsoft Windows 7
l Microsoft Windows 8.1
l Microsoft Windows 10
l Linux*, v2.4 kernel or higher
l FreeBSD*
Supported Intel® 64 Architecture Operating Systems
l Microsoft* Windows* 7
l Microsoft Windows 8.1
l Microsoft Windows 10
l Microsoft* Windows Server* 2008 R2
l Microsoft Windows Server 2012
l Microsoft Windows Server 2012 R2
l Microsoft Windows Server 2016
l Microsoft Windows Server 2016 Nano Server
l VMware* ESXi* 5.5
l VMware ESXi 6.0
l VMware ESXi 6.5 U1 ‡
l Ubuntu 14.04
l Red Hat* Linux*
l Novell* SUSE* Linux
l FreeBSD*
‡ If you upgrade to VMware ESXi 6.5 U1 or later, you must use native mode drivers (available in the latest driver software package) instead of VMKLinux legacy mode drivers.
Supported Operating Systems for Itanium-based Systems
l Linux, v2.x kernel and higher, except v2.6
Hardware Compatibility
Before installing the adapter, check your system for the following:
l The latest BIOS for your system
l One open PCI Express slot
NOTE: The Intel® 10 Gigabit AT Server Adapter will only fit into x8 or larger PCI Express slots.
Some systems have physical x8 PCI Express slots that actually support lower speeds. Please check your system manual to identify the slot.
Cabling Requirements
Intel Gigabit Adapters
Fiber Optic Cables
l Laser wavelength: 850 nanometer (not visible).
l SC Cable type:
l Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
l Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
l Connector type: SC.
l LC Cable type:
l Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
l Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
l Connector type: LC.
Copper Cables
l 1000BASE-T or 100BASE-TX on Category 5 or Category 5e wiring, twisted 4-pair copper:
l Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For
more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
l Maximum Length is 100 meters.
l Category 3 wiring supports only 10 Mbps.
NOTE: To ensure compliance with CISPR 24 and the EU’s EN55024, devices based on the 82576 controller should be used only with CAT 5E shielded cables that are properly terminated according to the recommendations in EN50174-2.
Intel 10 Gigabit Adapters
Fiber Optic Cables
l Laser wavelength: 850 nanometer (not visible).
l SC Cable type:
l Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
l Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
l Connector type: SC.
l LC Cable type:
l Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
l Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
l Connector type: LC.
Copper Cables
l Maximum lengths for Intel® 10 Gigabit Server Adapters and Connections that use 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
l Maximum length for Category 6 is 55 meters.
l Maximum length for Category 6a is 100 meters.
l Maximum length for Category 7 is 100 meters.
l To ensure compliance with CISPR 24 and the EU's EN55024, Intel® 10 Gigabit Server
Adapters and Connections should be used only with CAT 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.
l 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
l Length is 10 meters max.
Intel 40 Gigabit Adapters
Fiber Optic Cables
l Laser wavelength: 850 nanometer (not visible).
l SC Cable type:
l Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
l Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
l Connector type: SC.
l LC Cable type:
l Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
l Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
l Connector type: LC.
Copper Cables
l 40 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
l Length is 7 meters max.

Installation Overview

Installing the Adapter
1. Turn off the computer and unplug the power cord.
2. Remove the computer cover and the adapter slot cover from the slot that matches your adapter.
3. Insert the adapter edge connector into the slot and secure the bracket to the chassis.
4. Replace the computer cover, then plug in the power cord.
Install Drivers and Software
Windows* Operating Systems
You must have administrative rights to the operating system to install the drivers.
1. Download the latest drivers from the support website and transfer them to the system.
2. If the Found New Hardware Wizard screen is displayed, click Cancel.
3. Start the autorun located in the downloaded software package. The autorun may automatically start after you have extracted the files.
4. Click Install Drivers and Software.
5. Follow the instructions in the install wizard.
Installing Linux* Drivers from Source Code
1. Download and expand the base driver tar file.
2. Compile the driver module.
3. Install the module using the modprobe command.
4. Assign an IP address using the ifconfig command.

Optimizing Performance

You can configure Intel network adapter advanced settings to help optimize server performance.
The examples below provide guidance for three server usage models:
l Optimized for quick response and low latency – useful for video, audio, and High Performance Computing Cluster (HPCC) servers
l Optimized for throughput – useful for data backup/retrieval and file servers
l Optimized for CPU utilization – useful for application, web, mail, and database servers
NOTES:
l The recommendations below are guidelines and should be treated as such. Additional factors
such as installed applications, bus type, network topology, and operating system also affect system performance.
l These adjustments should be performed by a highly skilled network administrator. They are
not guaranteed to improve performance. Not all settings shown here may be available through network driver configuration, operating system or system BIOS. Linux users, see the README file in the Linux driver package for Linux-specific performance enhancement details.
l When using performance test software, refer to the documentation of the application for
optimal results.
General Optimization
l Install the adapter in an appropriate slot.
NOTE: Some PCIe x8 slots are actually configured as x4 slots. These slots have insufficient bandwidth for full line rate with some dual port devices. The driver can detect this situation and will write the following message in the system log: “PCI-Express bandwidth available for this card is not sufficient for optimal performance. For optimal performance a x8 PCI-Express slot is required.” If this error occurs, moving your adapter to a true x8 slot will resolve the issue.
l In order for an Intel® X710/XL710 based Network Adapter to reach its full potential, you must install it
in a PCIe Gen3 x8 slot. Installing it in a shorter slot, or a Gen2 or Gen1 slot, will impact the throughput the adapter can attain.
l Use the proper cabling for your device.
l Enable Jumbo Packets, if your other network components can also be configured for it.
l Increase the number of TCP and Socket resources from the default value. For Windows based systems, we have not identified system parameters other than the TCP Window Size which significantly impact performance.
l Increase the allocation size of Driver Resources (transmit/receive buffers). However, most TCP traffic patterns work best with the transmit buffer set to its default value, and the receive buffer set to its minimum value.
l When passing traffic on multiple network ports using an I/O application that runs on most or all of the
cores in your system, consider setting the CPU Affinity for that application to fewer cores. This should reduce CPU utilization and in some cases may increase throughput for the device. The cores selected for CPU Affinity must be local to the affected network device's Processor Node/Group. You can use the PowerShell command Get-NetAdapterRSS to list the cores that are local to a device. You may need to increase the number of cores assigned to the application to maximize throughput. Refer to your operating system documentation for more details on setting the CPU Affinity.
l If you have multiple 10 Gbps (or faster) ports installed in a system, the RSS queues of each adapter
port can be adjusted to use non-overlapping sets of processors within the adapter's local NUMA Node/Socket. Change the RSS Base Processor Number for each adapter port so that the combination of the base processor and the max number of RSS processors settings ensure non-overlapping cores.
1. Identify the adapter ports to be adjusted and inspect their RssProcessorArray using the Get-NetAdapterRSS PowerShell cmdlet.
2. Identify the processors with NUMA distance 0. These are the cores in the adapter's local NUMA Node/Socket and will provide the best performance.
3. Adjust the RSS Base processor on each port to use a non-overlapping set of processors within the local set of processors. You can do this manually or using the following PowerShell command:
Set-NetAdapterAdvancedProperty -Name <Adapter Name> -DisplayName "RSS Base Processor Number" -DisplayValue <RSS Base Proc Value>
4. Use the Get-NetAdapterAdvancedProperty cmdlet to check that the right values have been set:
Get-NetAdapterAdvancedProperty -Name <Adapter Name>
For Example: For a 4 port adapter with Local processors 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, and 'Max RSS processor' of 8, set the RSS base processors to 0, 8, 16 and 24.
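A minimal PowerShell sketch of the four-port example above. The port names below are placeholders; substitute the names reported by Get-NetAdapter on your system, and adjust the base values to match your 'Max RSS processor' setting.
# List the processors that are local to each adapter port (NUMA distance 0).
Get-NetAdapterRSS | Format-List Name, RssProcessorArray
# Placeholder port names for a four-port adapter and non-overlapping base processors.
$ports = "Ethernet 1", "Ethernet 2", "Ethernet 3", "Ethernet 4"
$baseProcs = 0, 8, 16, 24
for ($i = 0; $i -lt $ports.Count; $i++) {
    Set-NetAdapterAdvancedProperty -Name $ports[$i] -DisplayName "RSS Base Processor Number" -DisplayValue $baseProcs[$i]
}
# Verify the values that were set.
foreach ($p in $ports) {
    Get-NetAdapterAdvancedProperty -Name $p -DisplayName "RSS Base Processor Number"
}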
Optimized for quick response and low latency
l Minimize or disable Interrupt Moderation Rate.
l Disable Offload TCP Segmentation.
l Disable Jumbo Packets.
l Increase Transmit Descriptors.
l Increase Receive Descriptors.
l Increase RSS Queues.
Optimized for throughput
l Enable Jumbo Packets.
l Increase Transmit Descriptors.
l Increase Receive Descriptors.
l On systems that support NUMA, set the Preferred NUMA Node on each adapter to achieve better scaling across NUMA nodes.
Optimized for CPU utilization
l Maximize Interrupt Moderation Rate.
l Keep the default setting for the number of Receive Descriptors; avoid setting large numbers of Receive
Descriptors.
l Decrease RSS Queues.
l In Hyper-V environments, decrease the Max number of RSS CPUs.
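On Windows, many of the settings listed above can also be changed from PowerShell with Set-NetAdapterAdvancedProperty. The lines below are a sketch only: the adapter name is a placeholder, and the display names and accepted values vary by adapter and driver, so list them first with Get-NetAdapterAdvancedProperty.
# List every advanced setting the driver exposes, with its current and valid values.
Get-NetAdapterAdvancedProperty -Name "Ethernet 1" | Format-Table DisplayName, DisplayValue, ValidDisplayValues
# Example throughput-oriented changes (display names are assumptions; confirm against the list above).
Set-NetAdapterAdvancedProperty -Name "Ethernet 1" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
Set-NetAdapterAdvancedProperty -Name "Ethernet 1" -DisplayName "Receive Buffers" -DisplayValue 2048
Set-NetAdapterAdvancedProperty -Name "Ethernet 1" -DisplayName "Transmit Buffers" -DisplayValue 2048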

Remote Storage

The remote storage features allow you to access a SAN or other networked storage using Ethernet protocols. This includes Data Center Bridging (DCB), iSCSI over DCB, and Fibre Channel over Ethernet (FCoE).
DCB (Data Center Bridging)
Data Center Bridging (DCB) is a collection of standards-based extensions to classical Ethernet. It provides a lossless data center transport layer that enables the convergence of LANs and SANs onto a single unified fabric.
Furthermore, DCB is a configuration Quality of Service implementation in hardware. It uses the VLAN priority tag (802.1p) to filter traffic. That means that there are 8 different priorities that traffic can be filtered into. It also enables priority flow control (802.1Qbb) which can limit or eliminate the number of dropped packets during network stress. Bandwidth can be allocated to each of these priorities, which is enforced at the hardware level (802.1Qaz).
Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and 802.1Qaz respectively. The firmware based DCBX agent runs in willing mode only and can accept settings from a DCBX capable peer. Software configuration of DCBX parameters via dcbtool/lldptool is not supported.
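On Microsoft Windows, the resulting DCB state can be inspected with the operating system's standard NetQos cmdlets. This is an illustrative check only, and the adapter name is a placeholder.
# Show whether the adapter reports hardware QoS/DCB capabilities and its operational traffic classes.
Get-NetAdapterQos -Name "Ethernet 1"
# Show the host DCBX willing setting (the firmware agent itself always runs in willing mode).
Get-NetQosDcbxSetting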
iSCSI Over DCB
Intel® Ethernet adapters support iSCSI software initiators that are native to the underlying operating system. In the case of Windows, the Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage array using an Intel Ethernet adapter.
In the case of Open Source distributions, virtually all distributions include support for an Open iSCSI Software Initiator and Intel® Ethernet adapters will support them. Please consult your distribution documentation for additional configuration details on their particular Open iSCSI initiator.
Intel® 82599 and X540-based adapters support iSCSI within a Data Center Bridging cloud. Used in conjunction with switches and targets that support the iSCSI/DCB application TLV, this solution can provide guaranteed minimum bandwidth for iSCSI traffic between the host and target. This solution enables storage administrators to segment iSCSI traffic from LAN traffic, similar to how they can currently segment FCoE from LAN traffic. Previously, iSCSI traffic within a DCB supported environment was treated as LAN traffic by switch vendors. Please consult your switch and target vendors to ensure that they support the iSCSI/DCB application TLV.
Intel® Ethernet FCoE (Fibre Channel over Ethernet)
Fibre Channel over Ethernet (FCoE) is the encapsulation of standard Fibre Channel (FC) protocol frames as data within standard Ethernet frames. This link-level encapsulation, teamed with an FCoE-aware Ethernet-to-FC gateway, acts to extend an FC fabric to include Ethernet-based host connectivity. The FCoE specification focuses on encapsulation of FC frames specific to storage class traffic, as defined by the Fibre Channel FC-4 FCP specification.
NOTE: Support for new operating systems will not be added to FCoE. The last operating system versions that support FCoE are as follows:
l Microsoft*Windows Server* 2012 R2
l RHEL 7.2
l RHEL 6.7
l SLES 12 SP1
l SLES 11 SP4
l VMware* ESX 6.0 U3
Jumbo Frames
The base driver supports FCoE mini-Jumbo Frames (2.5k bytes) independent of the LAN Jumbo Frames setting.
FCoE VN to VN (VN2VN) Support
FCoE VN to VN, also called VN2VN, is a standard for connecting two end-nodes (ENodes) directly using FCoE. An ENode can create a VN2VN virtual link with another remote ENode without connecting to FC or FCoE switches (FCFs) in between, so neither port zoning nor advanced Fibre Channel services are required. The storage software controls access to, and security of, LUNs using LUN masking. The VN2VN fabric may have a lossless Ethernet switch between the ENodes. This allows multiple ENodes to participate in creating more than one VN2VN virtual link in the VN2VN fabric. VN2VN has two operational modes: Point to Point (PT2PT) and Multipoint.
NOTE: The mode of operation is used only during initialization.
Point to Point (PT2PT) Mode
In Point to Point mode, there are only two ENodes, and they are connected either directly or through a lossless Ethernet switch:
MultiPoint Mode
If more than two ENodes are detected in the VN2VN fabric, then all nodes should operate in Multipoint mode:
Enabling VN2VN in Microsoft Windows
To enable VN2VN in Microsoft Windows:
1. Start Windows Device Manager.
2. Open the appropriate FCoE miniport property sheet (generally under Storage controllers) and click on the Advanced tab.
3. Select the VN2VN setting and choose "Enable."

Remote Boot

Remote Boot allows you to boot a system using only an Ethernet adapter. You connect to a server that contains an operating system image and use that to boot your local system.
Intel® Boot Agent
The Intel® Boot Agent is a software product that allows your networked client computer to boot using a program code image supplied by a remote server. Intel Boot Agent complies with the Pre-boot eXecution Environment (PXE) Version 2.1 Specification. It is compatible with legacy boot agent environments that use BOOTP protocol.
Supported Devices
Intel Boot Agent supports all Intel 10 Gigabit Ethernet, 1 Gigabit Ethernet, and PRO/100 Ethernet Adapters.
Intel® Ethernet iSCSI Boot
Intel® Ethernet iSCSI Boot provides the capability to boot a client system from a remote iSCSI disk volume located on an iSCSI-based Storage Area Network (SAN).
NOTE: Release 20.6 is the last release in which Intel® Ethernet iSCSI Boot supports Intel® Ethernet Desktop Adapters and Network Connections. Starting with Release 20.7, Intel Ethernet iSCSI Boot no longer supports Intel Ethernet Desktop Adapters and Network Connections.
Intel® Ethernet FCoE Boot
Intel® Ethernet FCoE Boot provides the capability to boot a client system from a remote disk volume located on a Fibre Channel Storage Area Network (SAN).
Using Intel® PROSet for Windows Device Manager
There are two ways to navigate to the FCoE properties in Windows Device Manager: by using the "Data Center" tab on the adapter property sheet or by using the Intel® "Ethernet Virtual Storage Miniport Driver for FCoE Storage Controllers" property sheet.
Supported Devices
A list of Intel Ethernet Adapters that support FCoE can be found at
http://www.intel.com/support/go/network/adapter/fcoefaq.htm

Virtualization Support

Virtualization makes it possible for one or more operating systems to run simultaneously on the same physical system as virtual machines. This allows you to consolidate several servers onto one system, even if they are running different operating systems. Intel® Network Adapters work with, and within, virtual machines with their standard drivers and software.
NOTES:
l Some virtualization options are not available on some adapter/operating system combinations.
l The jumbo frame setting inside a virtual machine must be the same as, or lower than, the setting on the physical port.
l When you attach a Virtual Machine to a tenant overlay network through the Virtual NIC ports
on a Virtual Switch, the encapsulation headers increase the Maximum Transmission Unit (MTU) size on the virtual port. The Encapsulation Overhead feature automatically adjusts the physical port's MTU size to compensate for this increase.
l See http://www.intel.com/technology/advanced_comm/virtualization.htm for more information on using Intel Network Adapters in virtualized environments.
Using Intel® Network Adapters in a Microsoft* Hyper-V* Environment
When a Hyper-V Virtual NIC (VNIC) interface is created in the parent partition, the VNIC takes on the MAC address of the underlying physical NIC. The same is true when a VNIC is created on a team or VLAN. Since the VNIC uses the MAC address of the underlying interface, any operation that changes the MAC address of
the interface (for example, setting LAA on the interface, changing the primary adapter on a team, etc.), will cause the VNIC to lose connectivity. In order to prevent this loss of connectivity, Intel® PROSet will not allow you to change settings that change the MAC address.
NOTES:
l If Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) is present on the port,
configuring the device in Virtual Machine Queue (VMQ) + DCB mode reduces the number of VMQ VPorts available for guest OSes. This does not apply to Intel® Ethernet Controller X710 based devices.
l When sent from inside a virtual machine, LLDP and LACP packets may be a security risk.
The Intel® Virtual Function driver blocks the transmission of such packets.
l The Virtualization setting on the Advanced tab of the adapter's Device Manager property
sheet is not available if the Hyper-V role is not installed.
l While Microsoft supports Hyper-V on the Windows* 8.1 client OS, Intel® Ethernet adapters
do not support virtualization settings (VMQ, SR-IOV) on Windows 8.1 client.
l ANS teaming of VF devices inside a Windows 2008 R2 guest running on an open source
hypervisor is supported.
The Virtual Machine Switch
The virtual machine switch is part of the network I/O data path. It sits between the physical NIC and the virtual machine NICs and routes packets to the correct MAC address. Enabling Virtual Machine Queue (VMQ) offloading in Intel® PROSet will automatically enable VMQ in the virtual machine switch. For driver-only installations, you must manually enable VMQ in the virtual machine switch.
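For a driver-only installation, the VMQ pieces can be checked and enabled from PowerShell with the standard NetAdapter and Hyper-V cmdlets. This is a minimal sketch; the adapter name "Ethernet 1" and the VM name "VM01" are placeholders.
# Enable VMQ on the physical adapter and confirm it is operational.
Enable-NetAdapterVmq -Name "Ethernet 1"
Get-NetAdapterVmq -Name "Ethernet 1"
# Give the VM's virtual NIC a non-zero VMQ weight so the virtual machine switch will assign it a queue.
Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100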
Using ANS VLANs
If you create ANS VLANs in the parent partition, and you then create a Hyper-V Virtual NIC interface on an ANS VLAN, then the Virtual NIC interface *must* have the same VLAN ID as the ANS VLAN. Using a different VLAN ID or not setting a VLAN ID on the Virtual NIC interface will result in loss of communication on that interface.
Virtual Switches bound to an ANS VLAN will have the same MAC address as the VLAN, which will have the same address as the underlying NIC or team. If you have several VLANs bound to a team and bind a virtual switch to each VLAN, all of the virtual switches will have the same MAC address. Clustering the virtual switches together will cause a network error in Microsoft’s cluster validation tool. In some cases, ignoring this error will not impact the performance of the cluster. However, such a cluster is not supported by Microsoft. Using Device Manager to give each of the virtual switches a unique address will resolve the issue. See the Microsoft TechNet article Configure MAC Address Spoofing for Virtual Network Adapters for more information.
Virtual Machine Queues (VMQ) and SR-IOV cannot be enabled on a Hyper-V Virtual NIC interface bound to a VLAN configured using the VLANs tab in Windows Device Manager.
Using an ANS Team or VLAN as a Virtual NIC
If you want to use a team or VLAN as a virtual NIC you must follow these steps:
NOTES:
l This applies only to virtual NICs created on a team or VLAN. Virtual NICs created on a
physical adapter do not require these steps.
l Receive Load Balancing (RLB) is not supported in Hyper-V. Disable RLB when using
Hyper-V.
1. Use Intel® PROSet to create the team or VLAN.
2. Open the Network Control Panel.
3. Open the team or VLAN.
4. On the General Tab, uncheck all of the protocol bindings and click OK.
5. Create the virtual NIC. (If you check the "Allow management operating system to share the network adapter" box, you can do the following step in the parent partition.)
6. Open the Network Control Panel for the Virtual NIC.
7. On the General Tab, check the protocol bindings that you desire.
NOTE: This step is not required for the team. When the Virtual NIC is created, its protocols are correctly bound.
Command Line for Microsoft Windows Server* Core
Microsoft Windows Server* Core does not have a GUI interface. If you want to use an ANS Team or VLAN as a Virtual NIC, you must use Microsoft* Windows PowerShell* to set up the configuration. Use Windows PowerShell to create the team or VLAN.
NOTE: Support for the Intel PROSet command line utilities (prosetcl.exe and crashdmp.exe) has been removed, and is no longer installed. This functionality has been replaced by the Intel Netcmdlets for Microsoft* Windows PowerShell*. Please transition all of your scripts and processes to use the Intel Netcmdlets for Microsoft Windows PowerShell.
The following is an example of how to set up the configuration using Microsoft* Windows PowerShell*.
1. Get all the adapters on the system and store them into a variable.
$a = Get-IntelNetAdapter
2. Create a team by referencing the indexes of the stored adapter array.
New-IntelNetTeam -TeamMembers $a[1],$a[2] -TeamMode VirtualMachineLoadBalancing -TeamName "Team1"
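If you then want to expose the new team to Hyper-V as a virtual switch, the standard Hyper-V cmdlets can be used. This is a sketch only: the team interface name shown below is an assumption, so confirm the actual name with Get-NetAdapter first.
# Find the interface that represents the new team, then bind an external virtual switch to it.
Get-NetAdapter
New-VMSwitch -Name "vSwitch1" -NetAdapterName "TEAM: Team1" -AllowManagementOS $true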
Virtual Machine Queue Offloading
Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware is able to perform these tasks faster than the operating system. Offloading also frees up CPU resources. Filtering is based on MAC and/or VLAN filters. For devices that support it, VMQ offloading is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.
Each Intel® Ethernet Adapter has a pool of virtual ports that are split between the various features, such as VMQ Offloading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing the number of virtual ports used for one feature decreases the number available for other features. On devices that support it, enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further reduces the total pool to 24.
NOTE: This does not apply to devices based on the Intel® Ethernet X710 or XL710 controllers.
Intel PROSet displays the number of virtual ports available for virtual functions under Virtualization properties
on the device's Advanced Tab. It also allows you to set how the available virtual ports are distributed between VMQ and SR-IOV.
Teaming Considerations
l If VMQ is not enabled for all adapters in a team, VMQ will be disabled for the team.
l If an adapter that does not support VMQ is added to a team, VMQ will be disabled for the team.
l Virtual NICs cannot be created on a team with Receive Load Balancing enabled. Receive Load Balancing is automatically disabled if you create a virtual NIC on a team.
l If a team is bound to a Hyper-V virtual NIC, you cannot change the Primary or Secondary adapter.
Virtual Machine Multiple Queues
Virtual Machine Multiple Queues (VMMQ)enables Receive Side Scaling (RSS) for virtual ports attached to a physical port. This allows RSS to be used with SR-IOV and inside a VMQ virtual machine, and offloads the RSS processing to the network adapter. RSS balances receive traffic across multiple CPUs or CPU cores. This setting has no effect if your system has only one processing unit.
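Where the operating system supports it (Windows Server 2016 and later Hyper-V), VMMQ is typically turned on per virtual NIC from PowerShell. A hedged sketch; the VM name and queue-pair count are examples only, and parameter availability depends on your OS version.
# Enable VMMQ on a VM's virtual NIC and request a number of queue pairs.
Set-VMNetworkAdapter -VMName "VM01" -VmmqEnabled $true -VmmqQueuePairs 8
# Confirm the resulting settings.
Get-VMNetworkAdapter -VMName "VM01" | Format-List Name, Vmmq*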
SR-IOV Overview
Single Root IO Virtualization (SR-IOV) is a PCI SIG specification allowing PCI Express devices to appear as multiple separate physical PCI Express devices. SR-IOV allows efficient sharing of PCI devices among Virtual Machines (VMs). It manages and transports data without the use of a hypervisor by providing independent memory space, interrupts, and DMA streams for each virtual machine.
SR-IOV architecture includes two functions:
l Physical Function (PF) is a full featured PCI Express function that can be discovered, managed and
configured like any other PCI Express device.
l Virtual Function (VF) is similar to PF but cannot be configured and only has the ability to transfer data in
and out. The VF is assigned to a Virtual Machine.
NOTES:
l SR-IOV must be enabled in the BIOS.
l In Windows Server 2012, SR-IOV is not supported with teaming and VLANS. This occurs
because the Hyper-V virtual switch does not enable SR-IOV on virtual interfaces such as teaming or VLANs. To enable SR-IOV, remove all teams and VLANs.
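On Windows Server with Hyper-V, SR-IOV is commonly enabled end to end with the standard cmdlets shown below. This is a minimal sketch: the adapter, switch, and VM names are placeholders, and SR-IOV must already be enabled in the BIOS and on the adapter.
# Confirm the adapter and platform support SR-IOV.
Get-NetAdapterSriov -Name "Ethernet 1"
# Create an IOV-enabled external virtual switch bound to the physical adapter.
New-VMSwitch -Name "IovSwitch" -NetAdapterName "Ethernet 1" -EnableIov $true
# Give the VM's virtual NIC an IOV weight so it is assigned a Virtual Function (VF).
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100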
SR-IOV Benefits
SR-IOV has the ability to increase the number of virtual machines supported per physical host, improving I/O device sharing among virtual machines for higher overall performance:
l Provides near native performance due to direct connectivity to each VM through a virtual function
l Preserves VM migration
l Increases VM scalability on a virtualized server
l Provides data protection

iWARP (Internet Wide Area RDMA Protocol)

Remote Direct Memory Access, or RDMA, allows a computer to access another computer's memory without interacting with either computer's operating system data buffers, thus increasing networking speed and throughput. Internet Wide Area RDMA Protocol (iWARP) is a protocol for implementing RDMA across Internet Protocol networks.
Microsoft* Windows* provides two forms of RDMA: Network Direct (ND) and Network Direct Kernel (NDK). ND allows user-mode applications to use iWARP features. NDK allows kernel mode Windows components (such as File Manager) to use iWARP features. NDK functionality is included in the Intel base networking drivers. ND functionality is a separate option available during Intel driver and networking software installation. If you plan to make use of iWARP features in applications you are developing, you will need to install the user-mode Network Direct (ND) feature when you install the drivers. (See Installation below.)
NOTE: Even though NDK functionality is included in the base drivers, if you want to allow NDK's RDMA feature across subnets, you will need to select "Enable iWARP routing across IP Subnets" on the iWARP Configuration Options screen during base driver installation (see Installation below).
Requirements
The Intel® Ethernet User Mode iWARP Provider is supported on Linux* operating systems and Microsoft* Windows Server* 2012 R2 or later. For Windows installations, Microsoft HPC Pack or Intel MPI Library must be installed.
Installation
NOTE: For installation on Windows Server 2016 Nano Server, see Installing on Nano Server below.
Network Direct Kernel (NDK) features are included in the Intel base drivers. Follow the steps below to install user-mode Network Direct (ND) iWARP features.
1. From the installation media, run Autorun.exe to launch the installer, then choose "Install Drivers and Software" and accept the license agreement.
2. On the Setup Options screen, select "Intel® Ethernet User Mode iWARP Provider".
3. On the iWARP Configuration Options screen, select "Enable iWARP routing across IP Subnets" if desired. Note that this option is displayed during base driver installation even if user mode iWARP was not selected, as this option is applicable to Network Direct Kernel functionality as well.
4. If Windows Firewall is installed and active, select "Create an Intel® Ethernet iWARP Port Mapping Service rule in Windows Firewall" and the networks to which to apply the rule. If Windows Firewall is disabled or you are using a third party firewall, you will need to manually add this rule.
5. Continue with driver and software installation.
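After installation, you can verify from PowerShell that the interface is exposed as RDMA-capable before testing with Microsoft HPC Pack or the Intel MPI Library. This is a minimal check, and the interface name is an example.
# Confirm the adapter advertises RDMA (NDK) capability.
Get-NetAdapterRdma -Name "Ethernet 1"
# SMB also reports RDMA-capable interfaces once NDK is working.
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable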
Installing on Nano Server
Follow the steps below to install the Intel® Ethernet User Mode iWARP Provider on Microsoft Windows Server 2016 Nano Server.
1. Create a directory from which to install the iWARP files. For example, C:\Nano\iwarp.
2. Copy the following files into your new directory:
l \Disk\APPS\PROSETDX\Winx64\DRIVERS\i40wb.dll
l \Disk\APPS\PROSETDX\Winx64\DRIVERS\i40wbmsg.dll
l \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.cat
l \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.inf
l \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.sys
3. Run the DISM command to inject the iWARP files into your Nano Server image, using the directory you created in step 1 for the AddDriver path parameter. For example, "DISM .../Add-Driver C:\Nano\iwarp"
4. Create an inbound firewall rule for UDP port 3935.
5. If desired, use the Windows PowerShell commands below to enable iWARP routing across IP Subnets.
l Set-NetOffloadGlobalSetting -NetworkDirectAcrossIPSubnets Allow
l Disable Adapter
l Enable Adapter
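Steps 4 and 5 can be scripted as shown below. This is a sketch only: the firewall rule name and adapter name are placeholders.
# Step 4: inbound firewall rule for the iWARP port mapper (UDP 3935).
New-NetFirewallRule -DisplayName "iWARP Port Mapper" -Direction Inbound -Protocol UDP -LocalPort 3935 -Action Allow
# Step 5: allow iWARP routing across IP subnets, then restart the adapter so the change takes effect.
Set-NetOffloadGlobalSetting -NetworkDirectAcrossIPSubnets Allow
Disable-NetAdapter -Name "Ethernet 1" -Confirm:$false
Enable-NetAdapter -Name "Ethernet 1"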
Customer Support
l Main Intel web support site: http://support.intel.com
l Network products information: http://www.intel.com/network
Legal / Disclaimers
Copyright (C) 2016, Intel Corporation. All rights reserved.
Intel Corporation assumes no responsibility for errors or omissions in this document. Nor does Intel make any commitment to update the information contained herein.
Intel is a trademark of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
This software is furnished under license and may only be used or copied in accordance with the terms of the license. The information in this manual is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability
for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express written consent of Intel Corporation.

Installing the Adapter

Select the Correct Slot

One open PCI-Express slot, x4, x8, or x16, depending on your adapter.
NOTE: Some systems have physical x8 PCI Express slots that actually only support lower speeds. Please check your system manual to identify the slot.

Insert the Adapter into the Computer

1. If your computer supports PCI Hot Plug, see your computer documentation for special installation instructions.
2. Turn off and unplug your computer. Then remove the cover.
CAUTION: Turn off and unplug the power before removing the computer's cover. Failure to do so could endanger you and may damage the adapter or computer.
3. Remove the cover bracket from an available slot.
4. Insert the adapter, pushing it into the slot until the adapter is firmly seated. You can install a smaller PCI Express adapter in a larger PCI Express slot.
CAUTION: Some PCI Express adapters may have a short connector, making them more fragile than PCI adapters. Excessive force could break the connector. Use caution when pressing the board in the slot.
5. Secure the adapter bracket with a screw, if required.
6. Replace the computer cover and plug in the power cord.
7. Power on the computer.

Connecting Network Cables

Connect the appropriate network cable, as described in the following sections.
Connect the RJ-45 Network Cable
Connect the RJ-45 network cable as shown:
Type of cabling to use:
l 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
l Length is 55 meters max for Category 6.
l Length is 100 meters max for Category 6a.
l Length is 100 meters max for Category 7.
NOTE: For the Intel® 10 Gigabit AT Server Adapter, to ensure compliance with CISPR 24 and the EU’s EN55024, this product should be used only with Category 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.
l For 1000BASE-T or 100BASE-TX, use Category 5 or Category 5e wiring, twisted 4-pair copper:
l Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For
more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
l Length is 100 meters max.
l Category 3 wiring supports only 10 Mbps.
CAUTION: If using less than 4-pair cabling, you must manually configure the speed and duplex setting of the adapter and the link partner. In addition, with 2- and 3-pair cabling the adapter can only achieve speeds of up to 100Mbps.
l For 100BASE-TX, use Category 5 wiring.
l For 10Base-T, use Category 3 or 5 wiring.
l If you want to use this adapter in a residential environment (at any speed), use Category 5 wiring. If the
cable runs between rooms or through walls and/or ceilings, it should be plenum-rated for fire safety.
In all cases:
l The adapter must be connected to a compatible link partner, preferably set to auto-negotiate speed and
duplex for Intel gigabit adapters.
l Intel Gigabit and 10 Gigabit Server Adapters using copper connections automatically accommodate
either MDI or MDI-X connections. The auto-MDI-X feature of Intel gigabit copper adapters allows you to directly connect two adapters without using a cross-over cable.
Connect the Fiber Optic Network Cable
CAUTION: The fiber optic ports contain a Class 1 laser device. When the ports are disconnected, always cover them with the provided plug. If an abnormal fault occurs, skin or eye damage may result if in close proximity to the exposed ports.
Remove and save the fiber optic connector cover. Insert a fiber optic cable into the ports on the network adapter bracket as shown below.
Most connectors and ports are keyed for proper orientation. If the cable you are using is not keyed, check to be sure the connector is oriented properly (transmit port connected to receive port on the link partner, and vice versa).
The adapter must be connected to a compatible link partner operating at the same laser wavelength as the adapter.
Conversion cables to other connector types (such as SC-to-LC) may be used if the cabling matches the optical specifications of the adapter, including length limitations.
Insert the fiber optic cable as shown below.
Connection requirements
l 40GBASE-SR4/MPO on 850 nanometer optical fiber:
l Utilizing 50/125 micron OM3, length is 100 meters max.
l Utilizing 50/125 micron OM4, length is 150 meters max.
l 25GBASE-SR/LC on 850 nanometer optical fiber:
l Utilizing 50 micron multimode, length is 300 meters max.
l Utilizing 62.5 micron multimode, length is 33 meters max.
l 10GBASE-SR/LC on 850 nanometer optical fiber:
l Utilizing 50 micron multimode, length is 300 meters max.
l Utilizing 62.5 micron multimode, length is 33 meters max.
l 1000BASE-SX/LC on 850 nanometer optical fiber:
l Utilizing 50 micron multimode, length is 550 meters max.
l Utilizing 62.5 micron multimode, length is 275 meters max.
Supported SFP+ and QSFP+ Modules
Adapters Based on the 710 Series of Controllers
For information on supported media, see the following link:
http://www.intel.com/content/dam/www/public/us/en/documents/release-notes/xl710-ethernet-controller-feature-matrix.pdf
NOTES:
l Some Intel branded network adapters based on the X710/XL710 controller only support Intel
branded modules. On these adapters, other modules are not supported and will not function.
l For connections based on the 710 series of controllers, support is dependent on your system
board. Please see your vendor for details.
l In all cases Intel recommends using Intel optics; other modules may function but are not validated by Intel. Contact Intel for supported media types.
l In systems that do not have adequate airflow to cool the adapter and optical modules, you
must use high temperature optical modules.
l For XXV710 based SFP+ adapters Intel recommends using Intel optics and cables. Other
modules may function but are not validated by Intel. Contact Intel for supported media types.
82599-Based Adapters
NOTES:
l If your 82599-based Intel® Network Adapter came with Intel optics, or is an Intel® Ethernet
Server Adapter X520-2, then it only supports Intel optics and/or the direct attach cables listed below.
l 82599-Based adapters support all passive and active limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Supplier Type Part Numbers
SR Modules
Intel DUAL RATE 1G/10G SFP+ SR (bailed) AFBR-703SDZ-IN2
Intel DUAL RATE 1G/10G SFP+ SR (bailed) FTLX8571D3BCV-IT
Intel DUAL RATE 1G/10G SFP+ SR (bailed) AFBR-703SDDZ-IN1
LR Modules
Intel DUAL RATE 1G/10G SFP+ LR (bailed) FTLX1471D3BCV-IT
Intel DUAL RATE 1G/10G SFP+ LR (bailed) AFCT-701SDZ-IN2
Intel DUAL RATE 1G/10G SFP+ LR (bailed) AFCT-701SDDZ-IN1
QSFP Modules
Intel TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed) (40G not supported on 82599) E40GQSFPSR
The following is a list of 3rd party SFP+ modules that have received some testing. Not all modules are applicable to all devices.
Supplier Type Part Numbers
Finisar SFP+ SR bailed, 10G single rate FTLX8571D3BCL
Avago SFP+ SR bailed, 10G single rate AFBR-700SDZ
Finisar SFP+ LR bailed, 10G single rate FTLX1471D3BCL
Finisar DUAL RATE 1G/10G SFP+ SR (No Bail) FTLX8571D3QCV-IT
Avago DUAL RATE 1G/10G SFP+ SR (No Bail) AFBR-703SDZ-IN1
Finisar DUAL RATE 1G/10G SFP+ LR (No Bail) FTLX1471D3QCV-IT
Avago DUAL RATE 1G/10G SFP+ LR (No Bail) AFCT-701SDZ-IN1
Finisar 1000BASE-T SFP FCLF8522P2BTL
Avago 1000BASE-T SFP ABCU-5710RZ
HP 1000BASE-SX SFP 453153-001
82598-Based Adapters
NOTES:
l Intel® Network Adapters that support removable optical modules only support their original
module type (i.e., the Intel® 10 Gigabit SR Dual Port Express Module only supports SR optical modules). If you plug in a different type of module, the driver will not load.
l 82598-Based adapters support all passive direct attach cables that comply with SFF-8431
v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables are not supported.
l Hot Swapping/hot plugging optical modules is not supported.
l Only single speed, 10 Gigabit modules are supported.
l LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module types are
not supported. Please see your system documentation for details.
The following is a list of SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.
Supplier Type Part Numbers
Finisar SFP+ SR bailed, 10G single rate FTLX8571D3BCL
Avago SFP+ SR bailed, 10G single rate AFBR-700SDZ
Finisar SFP+ LR bailed, 10G single rate FTLX1471D3BCL
Molex 1m - Twin-ax cable 74752-1101
Molex 3m - Twin-ax cable 74752-2301
Molex 5m - Twin-ax cable 74752-3501
Molex 10m - Twin-ax cable 74752-9004
Tyco 1m - Twin-ax cable 2032237-2
Tyco 3m - Twin-ax cable 2032237-4
Tyco 5m - Twin-ax cable 2032237-6
Tyco 10m - Twin-ax cable 1-2032237-1
THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF ANY THIRD PARTY'S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE ABOVE SPECIFICATIONS. THERE MAY BE OTHER MANUFACTURERS OR SUPPLIERS, PRODUCING OR SUPPLYING OPTIC MODULES AND CABLES WITH SIMILAR OR MATCHING DESCRIPTIONS. CUSTOMERS MUST USE THEIR OWN DISCRETION AND DILIGENCE TO PURCHASE OPTIC MODULES AND CABLES FROM ANY THIRD PARTY OF THEIR CHOICE. CUSTOMERS ARE SOLELY RESPONSIBLE FOR ASSESSING THE SUITABILITY OF THE PRODUCT AND/OR DEVICES AND FOR THE SELECTION OF THE VENDOR FOR PURCHASING ANY PRODUCT. THE OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.
Connect the Direct Attach Cable
Insert the Direct Attach network cable as shown below.
Type of cabling:
l 40 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
l Length is 7 meters max.
l 25 Gigabit Ethernet over SFP28 Direct Attached Cable (Twinaxial)
l Length is 5 meters max.
l For optimal performance, you must use CA-25G-L cables with RS-FEC and 25GBASE-CR.
l 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
l Length is 10 meters max.

PCI Hot Plug Support

Most Intel® Ethernet Server Adapters are enabled for use in selected servers equipped with Hot Plug support. Exceptions: Intel Gigabit Quad Port Server adapters do not support Hot Plug operations.
If you replace an adapter in a Hot Plug slot, do not place the removed adapter back into the same network until the server has rebooted (unless you return it to the same slot and same team as before). This prevents a conflict in having two of the same Ethernet addresses on the same network.
The system will require a reboot if you:
l Change the primary adapter designator.
l Add a new adapter to an existing team and make the new adapter the primary adapter.
l Remove the primary adapter from the system and replace it with a different type of adapter.
NOTE: To replace an existing SLA-teamed adapter in a Hot Plug slot, first unplug the adapter cable. When the adapter is replaced, reconnect the cable.
PCI Hot Plug Support for Microsoft* Windows* Operating Systems
Intel® network adapters are enabled for use in selected servers equipped with PCI Hot Plug support and running Microsoft* Windows* operating systems. For more information on setting up and using PCI Hot Plug support in your server, see your hardware and/or Hot Plug support documentation. PCI Hot Plug only works when you hot plug an identical Intel network adapter.
NOTES:
l The MAC address and driver from the removed adapter will be used by the replacement
adapter unless you remove the adapter from the team and add it back in. If you do not remove and restore the replacement adapter from the team, and the original adapter is used elsewhere on your network, a MAC address conflict will occur.
l For SLA teams, ensure that the replacement NIC is a member of the team before connecting
it to the switch.

Microsoft* Windows* Installation and Configuration

Installing Windows Drivers and Software
NOTE: To successfully install or uninstall the drivers or software, you must have administrative
privileges on the computer completing installation.
Install the Drivers
NOTES:
l This will update the drivers for all supported Intel® network adapters in your system.
l The Roll Back Driver feature of Windows Server (available on the Adapter Properties dialog's Driver tab) will not work correctly if an adapter team or Intel PROSet is present on the system. Before you use the Roll Back Driver feature, use Intel PROSet to remove any teams, then remove Intel PROSet using Programs and Features from the Control Panel of Windows.
l Using Microsoft Windows Update to upgrade or downgrade your Ethernet network drivers is
not supported. Please download the latest driver package from the support website.
Before installing or updating the drivers, insert your adapter(s) in the computer and plug in the network cable. When Windows discovers the new adapter, it attempts to find an acceptable Windows driver already installed with the operating system.
If found, the driver is installed without any user intervention. If Windows cannot find the driver, the Found New Hardware Wizard window is displayed.
Regardless of whether Windows finds the driver, it is recommended that you follow the procedure below to install the driver. Drivers for all Intel adapters supported by this software release are installed.
1. Download the latest drivers from the support website and transfer them to the system.
2. If the Found New Hardware Wizard screen is displayed, click Cancel.
3. Start the autorun located in the downloaded software package. The autorun may automatically start after you have extracted the files.
4. Click Install Drivers and Software.
5. Follow the instructions in the install wizard.
NOTE: Intel® PROSet is installed by default when you install the device drivers.
Installing the Base Driver and Intel® PROSet on Nano Server
Driver Installation
NOTE: Installing drivers requires administrator rights to the operating system.
To install drivers on Microsoft* Windows Server* Nano Server:
1. Identify which drivers to inject into the operating system.
2. Create a directory from which to install the drivers. For example, C:\Nano\Drivers
3. Copy the appropriate drivers for the operating system and hardware. For example, "copy D:\PROXGB\Winx64\NDIS65\*.* c:\Nano\Drivers /y"
4. If you are using the New-NanoServerImage module, use the above path for the -DriversPath parameter. For example, "New-NanoServerImage ...-DriversPath C:\Nano\Drivers"
5. If you are using DISM.exe as well, use the above path for the /AddDriver parameter. For example, "DISM .../Add-Driver C:\Nano\Drivers"
Intel PROSet Installation
To install Intel PROSet on Microsoft* Windows Server* Nano Server:
1. Use the New-NanoServerImage cmdlet to add the PROSetNS.zip file from the .\Disk\APPS\PROSETDX\NanoServer directory to your -CopyPath parameter.
2. Append the NanoSetup.ps1 file (located in the same directory) to your -SetupCompleteCommands parameter.
For example:
New-NanoServerImage ...
-CopyPath "<PATH>\PROSetNS.zip", "<PATH>\NanoSetup.ps1" `
-SetupCompleteCommands "PowerShell ""C:\NanoSetup.ps1"""
See the link below for more information on deploying a Nano Server image and using the cmdlet:
https://msdn.microsoft.com/en-us/library/mt126167.aspx
Installing Intel PROSet
Intel PROSet for Windows Device Manager is an advanced configuration utility that incorporates additional configuration and diagnostic features into the device manager.
NOTES:
l You must install Intel® PROSet for Windows Device Manager if you want to use Intel® ANS
teams or VLANs.
l Intel PROSet for Windows Device Manager is installed by default when you install the
device drivers. For information on usage, see Using Intel® PROSet for Windows Device
Manager.
Intel PROSet for Windows Device Manager is installed with the same process used to install drivers.
NOTES:
l You must have administrator rights to install or use Intel PROSet for Windows Device
Manager.
l Upgrading PROSet for Windows Device Manager may take a few minutes.
1. On the autorun, click Install Base Drivers and Software.
NOTE: You can also run setup64.exe from the files downloaded from Customer Support.
2. Proceed with the installation wizard until the Custom Setup page appears.
3. Select the features to install.
4. Follow the instructions to complete the installation.
If Intel PROSet for Windows Device Manager was installed without ANS support, you can install support by clicking Install Base Drivers and Software on the autorun, or running setup64.exe, and then selecting the
Modify option when prompted. From the Intel® Network Connections window, select Advanced Network Services then click Next to continue with the installation wizard.
Command Line Installation for Base Drivers and Intel® PROSet
Driver Installation
The driver install utility DxSetup.exe allows unattended installation of drivers from a command line.
NOTES:
l Intel® 10GbE Network Adapters do not support unattended driver installation.
l Intel PROSet cannot be installed with msiexec.exe. You must use
DxSetup.exe.
This utility can be used to install the base driver, intermediate driver, and all management applications for supported devices.
DxSetup.exe Command Line Options
By setting the parameters in the command line, you can enable and disable management applications. If parameters are not specified, only existing components are updated.
DxSetup.exe supports the following command line parameters:
Parameter Definition
BD Base Driver
"0", do not install the base driver.
"1", install the base driver.
ANS Advanced Network Services
"0", do not install ANS. If ANS is already installed, it will be uninstalled.
"1", install ANS. The ANS property requires DMIX=1.
NOTE: If the ANS parameter is set to ANS=1, both Intel PROSet and ANS will be installed.
DMIX PROSet for Windows Device Manager
"1", install the Intel PROSet feature. The DMIX property requires BD=1.
NOTE: If DMIX=0, ANS will not be installed. If DMIX=0 and Intel PROSet, ANS, and FCoE are already installed, Intel PROSet, ANS, and FCoE will be uninstalled.
FCOE Fibre Channel over Ethernet
"0", do not install FCoE. If FCoE is already installed, it will be uninstalled.
"1", install FCoE. The FCOE property requires DMIX=1.
NOTE: Even if FCOE=1 is passed, FCoE will not be installed if the operating system and installed adapters do not support FCoE.
ISCSI iSCSI
"0", do not install iSCSI. If iSCSI is already installed, it will be uninstalled.
"1", install iSCSI. The ISCSI property requires DMIX=1.
IWARP_ROUTING iWARP routing
"0", do not install iWARP routing.
"1", install iWARP routing.
IWARP_FIREWALL Installs the iWARP firewall rule. For more information, see the iWARP (Internet Wide Area RDMA Protocol) section.
"0", do not install the iWARP firewall rule.
"1", install the iWARP firewall rule. If "1" is selected, the following parameters are allowed in addition to IWARP_FIREWALL.
l IWARP_FIREWALL_DOMAIN [0|1] - Applies firewall rule to corporate domains.
l IWARP_FIREWALL_PUBLIC [0|1] - Applies firewall rule to public networks.
l IWARP_FIREWALL_PRIVATE [0|1] - Applies firewall rule to private networks.
FORCE "0", check that the installed device supports a feature (FCOE, ISCSI) and only install the feature if such a device is found.
"1", install the specified features regardless of the presence of supporting devices.
/q[r|n] Silent install options:
r Reduced GUI Install (only displays critical warning messages)
n Silent install
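The command lines below illustrate how these properties combine. They are sketches only, not an exhaustive list of supported combinations.
DxSetup.exe BD=1 DMIX=1 ANS=1 /qn
DxSetup.exe BD=1 DMIX=1 ANS=0 /qr
The first command silently installs the base driver, Intel PROSet, and ANS. The second installs the base driver and Intel PROSet without ANS, displaying only critical warning messages.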