Copyright 2018, Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard
Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett
Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use,
or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise
website.
Acknowledgments
Intel®, Itanium®, Pentium®, Xeon®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation
in the U.S. and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
The QLogic 8400/3400 Series adapters are based on a new class of Gigabit Ethernet (GbE) and 10GbE
converged network interface controller (C-NIC) that can simultaneously perform accelerated data networking
and storage networking on a standard Ethernet network. The C-NIC offers acceleration for popular protocols
used in the data center, such as:
•Internet Small Computer Systems Interface (iSCSI) offload for accelerating network storage access
featuring centralized boot (iSCSI boot)
•Fibre Channel over Ethernet (FCoE) offload and acceleration for Fibre Channel block storage
Enterprise networks that use multiple protocols and multiple network fabrics benefit from the C-NIC's ability to combine data communications, storage, and clustering over a single Ethernet fabric, boosting server CPU processing performance and memory utilization while alleviating I/O bottlenecks.
The QLogic 8400/3400 Series adapters include a 100/1000Mbps or 10Gbps Ethernet MAC with both half-duplex and full-duplex capability and a 100/1000Mbps or 10Gbps physical layer (PHY). The transceiver is fully compatible with the IEEE 802.3 standard for auto-negotiation of speed.
Using the teaming software, you can split your networking virtual LANs (VLANs) and group multiple network
adapters together into teams to provide network load balancing and fault tolerance.
•See Teaming Services and Configuring Teaming in Windows Server for detailed information about teaming.
•See Using Virtual LANs in Windows for a description of VLANs.
•See Configuring Teaming for instructions about configuring teaming and creating VLANs on Windows
operating systems.
Features
The following is a list of the QLogic 8400/3400 Series adapter features. Some features might not be available on all adapters.
◦Data center bridging capability exchange protocol (DCBX; CEE version 1.01)
•Single-chip solution (excluding QLE3442-RJ)
•100M/1G/10G triple-speed MAC (QLE3442-RJ)
•IEEE 802.3az Energy Efficient Ethernet (EEE) (QLE3442-RJ)
•1G/10G speed supported on SFP (optics/DAC) interface
•SerDes interface for optical transceiver connection
•PCIe® Gen3x8 (10GE)
•Zero copy capable hardware
•Other offload performance features:
◦TCP, IP, user datagram protocol (UDP) checksum
◦TCP segmentation
◦Adaptive interrupts
◦Receive side scaling (RSS)
◦Transmit side scaling (TSS)
◦Hardware transparent packet aggregation (TPA)
•Manageability
◦QCC GUI for Windows and Linux. For information, see the Installation Guide: QConvergeConsole GUI
(part number SN0051105-00) and QConvergeConsole GUI Help system.
◦QLogic Control Suite CLI for Windows and Linux. For information, see User’s Guide: QLogic Control
Suite CLI. (part number BC0054511-00)
•QCC GUI Plug-ins for vSphere® through VMware vCenter™ Server software. For information, see the
User’s Guide: QConvergeConsole Plug-ins for vSphere (part number SN0054677-00).
•QCC ESXCLI Plug-in for VMware. For information, see the User’s Guide: FastLinQ ESXCLI VMware Plug-in (part number BC0151101-00).
•QCC PowerKit for Windows and Linux. For information, see the User’s Guide: PowerShell (part number
BC0054518-00).
•QLogic Comprehensive Configuration Management Preboot Utility. For more information, see the User’s
Guide: Comprehensive Configuration Management, QLogic 3400 and 8400 Series Adapters (part number
BC0054512-00).
•Supports the pre-execution environment (PXE) 1.0 and 2.0 specifications
•Universal management port (UMP)
•System management bus (SMBus) controller
•Advanced configuration and power interface (ACPI) 1.1a compliant (multiple power modes)
•Intelligent platform management interface (IPMI) support
•Advanced network features
◦Jumbo frames (up to 9,600 bytes). The OS and the link partner must support jumbo frames.
◦Virtual LANs
◦IEEE Std 802.3ad Teaming
◦Smart Load Balancing™ (SLB) teaming supported by the QLogic QLASP NIC teaming driver on 32-
bit/64-bit Windows Server 2008, 64-bit Windows Server 2008 R2/2012/2012 R2 operating systems
◦Flow control (IEEE Std 802.3x)
◦LiveLink™ (supported by QLogic QLASP NIC teaming driver on 32-bit/64-bit Windows Server 2008, 64-
bit Windows Server 2008 R2/2012/2012 R2 )
•Logical link control (IEEE Std 802.2)
•High-speed on-chip reduced instruction set computer (RISC) processor
•Integrated 96KB frame buffer memory
•Quality of service (QoS)
•Serial gigabit media independent interface (SGMII)/gigabit media independent interface (GMII)/media independent interface (MII)
•256 unique MAC unicast addresses
•Support for multicast addresses through the 128-bit hashing hardware function
•Serial flash NVRAM memory
•JTAG support
•PCI power management interface (v1.1)
•64-bit base address register (BAR) support
•EM64T processor support
•iSCSI and FCoE boot support
•Virtualization
◦Microsoft®
◦VMware®
◦Linux
◦XenServer®
•Single root I/O virtualization (SR-IOV)
iSCSI
The Internet Engineering Task Force (IETF) has standardized iSCSI. SCSI is a popular protocol that enables systems to communicate with storage devices using block-level transfers (addressing data on a storage device in blocks rather than as whole files). iSCSI maps the SCSI request/response application protocol and its standardized command set onto TCP/IP networks.
As iSCSI uses TCP as its sole transport protocol, it greatly benefits from hardware acceleration of the TCP
processing. However, iSCSI as a layer 5 protocol has additional mechanisms beyond the TCP layer. iSCSI
processing can also be offloaded, thereby reducing CPU use even further.
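On a Linux host, the offloaded iSCSI path is driven through the standard initiator tooling. The following is a hedged sketch using the open-iscsi utilities; the portal address and target IQN are placeholders, not values from this guide, and the commands require a reachable iSCSI target:

```shell
# Discover targets exposed by a storage portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to a discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2000-01.com.example:storage.lun1 \
    -p 192.168.10.50:3260 --login

# Confirm the active session
iscsiadm -m session
```

After login, the target's LUNs appear as ordinary SCSI block devices to the host.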
The QLogic 8400/3400 Series adapters target best system performance, maintain flexibility to accommodate system changes, and support current and future OS convergence and integration. Therefore, the adapter's iSCSI offload architecture is unique in its split between hardware and host processing.
FCoE
FCoE allows Fibre Channel protocol to be transferred over Ethernet. FCoE preserves existing Fibre Channel
infrastructure and capital investments. The following FCoE features are supported:
•Full stateful hardware FCoE offload
•Receiver classification of FCoE and Fibre Channel initialization protocol (FIP) frames. FIP is the FCoE
initialization protocol used to establish and maintain connections.
•Receiver CRC offload
•Transmitter CRC offload
•Dedicated queue set for Fibre Channel traffic
•DCB provides lossless behavior with PFC
•DCB allocates a share of link bandwidth to FCoE traffic with ETS
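On Linux hosts, these FCoE functions are typically exercised through the open-fcoe userspace tools. The sketch below is illustrative only; the interface name is a placeholder, and tool availability depends on the distribution and a connected FCoE fabric:

```shell
# Discover FCoE VLANs on the converged interface via FIP (placeholder name)
fipvlan -a

# Show FCoE interface state and discovered Fibre Channel targets
fcoeadm -i eth2
fcoeadm -t eth2
```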
Power management
Wake on LAN (WOL) is supported on ALOMs, BLOMs, and Synergy Mezzanine cards.
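On Linux, WOL support and state can be inspected and set with ethtool. A minimal sketch, assuming the adapter appears as eth0 (a placeholder name):

```shell
# Show supported and currently enabled Wake-on modes
ethtool eth0 | grep -i wake-on

# Enable wake on Magic Packet ("g"); making this persist across reboots
# is distribution-specific (udev rule or network configuration file)
ethtool -s eth0 wol g
```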
Adaptive Interrupt Frequency
The adapter driver intelligently adjusts host interrupt frequency based on traffic conditions to increase overall
application throughput. When traffic is light, the adapter driver interrupts the host for each received packet,
minimizing latency. When traffic is heavy, the adapter issues one host interrupt for multiple, back-to-back
incoming packets, preserving host CPU cycles.
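On Linux, this interrupt-coalescing behavior can be inspected and tuned with ethtool. A hedged sketch follows; eth0 is a placeholder, and exactly which coalescing parameters the driver honors may vary by driver version:

```shell
# Show current interrupt coalescing settings
ethtool -c eth0

# Let the driver adapt the receive interrupt rate to traffic load
ethtool -C eth0 adaptive-rx on

# Or pin a static interrupt delay (microseconds) for received packets
ethtool -C eth0 rx-usecs 50
```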
ASIC with Embedded RISC Processor
The core control for QLogic 8400/3400 Series adapters resides in a tightly integrated, high-performance
ASIC. The ASIC includes a RISC processor that provides the flexibility to add new features to the card and
adapt to future network requirements through software downloads. In addition, the adapter drivers can exploit
the built-in host offload functions on the adapter as host operating systems are enhanced to take advantage
of these functions.
Adapter management
The following applications are available to manage QLogic 8400/3400 Series Adapters:
•QLogic Control Suite CLI
•QLogic QConvergeConsole Graphical User Interface
•QLogic QConvergeConsole vCenter Plug-In
•QLogic FastLinQ ESXCLI VMware Plug-In
QLogic Control Suite CLI
The QCS CLI is a console application that you can run from a Windows command prompt or a Linux terminal
console. Use the QCS CLI to manage QLogic 8400/3400 Series Adapters on both local and remote computer
systems. For information about installing and using the QCS CLI, see the User's Guide: QLogic Control Suite CLI (part number BC0054511-00).
QLogic QConvergeConsole Graphical User Interface
The QCC GUI is a Web-based management tool for configuring and managing Fibre Channel adapters and
Intelligent Ethernet adapters. You can use the QCC GUI on Windows and Linux platforms to manage QLogic
8400/3400 Series Adapters on both local and remote computer systems. For information about installing the
QCC GUI, see the QConvergeConsole GUI Installation Guide. For information about using the QCC GUI, see
the online help.
QLogic QConvergeConsole vCenter Plug-In
The QCC vCenter Plug-In is a web-based management tool that is integrated into the VMware vCenter
Server for configuring and managing Fibre Channel adapters and Intelligent Ethernet adapters in a virtual
environment. You can use the vCenter Plug-in with VMware vSphere clients to manage QLogic 8400/3400 Series
Intelligent Ethernet Adapters. For information about installing and using the vCenter Plug-in, see the
QConvergeConsole Plug-ins for vSphere User’s Guide (part number SN0054677-00).
QLogic FastLinQ ESXCLI VMware Plug-In
The FastLinQ ESXCLI VMware plug-in extends the capabilities of the ESX® command-line interface to
manage QLogic 8400/3400 Series Adapters installed in VMware ESX/ESXi hosts. For information about using
the ESXCLI Plug-In, see the QLogic FastLinQ ESXCLI VMware Plug-in User Guide (BC0151101-00).
QLogic QConvergeConsole PowerKit
The QLogic QCC PowerKit allows you to manage your QLogic adapters locally and remotely through the
PowerShell interface on Windows and Linux. For information about installing and using the QCC PowerKit,
see the PowerShell User's Guide (part number BC0054518-00).
QLogic Comprehensive Configuration Management
The QLogic Comprehensive Configuration Management (CCM) is a preboot utility that is used to manage
preboot settings on the QLogic 3400/8400 Series Adapters on the local computer systems. This utility is
accessible during system boot-up. For more information about using CCM, see the Comprehensive Configuration Management User's Guide (part number BC0054512-00).
Supported Operating Environments
The QLogic 8400/3400 Series adapters support several operating systems including Windows, Linux (RHEL®,
SUSE®, Ubuntu®, CentOS)1, VMware ESXi Server®, and Citrix® XenServer. For a complete list of
supported operating systems and versions, see the Product QuickSpecs (http://www.hpe.com/info/qs).
Supported adapter list
HPE Converged Network Adapters with equivalent features to QLogic 8400 Series:
•HPE FlexFabric 10Gb 4-port 536FLR-T Adapter
•HPE FlexFabric 10Gb 2-port 533FLR-T Adapter
1 Ubuntu and CentOS operating systems are supported only on 3400 Series adapters.
•HPE FlexFabric 10Gb 2-port 534FLR-SFP+ Adapter
•HPE StoreFabric CN1100R Dual Port Converged Network Adapter
•HPE StoreFabric CN1100R 10GBase-T Dual Port Converged Network Adapter
HPE Ethernet Adapters with equivalent features to QLogic 3400 Series:
•HPE Ethernet 10Gb 2-port 530T Adapter
•HPE Ethernet 10Gb 2-port 530SFP+ Adapter
Physical Characteristics
The QLogic 8400/3400 Series Adapters and HPE stand-up adapters are implemented as low profile PCIe
cards. The adapters ship with a full-height bracket for use in a standard PCIe slot or an optional spare low
profile bracket for use in a low profile PCIe slot. Low profile slots are typically found in compact servers.
HPE adapters are also available in ALOM, BLOM, and Mezzanine formats.
Standards specifications
The QLogic 8400/3400 Series Adapters support the following standards specifications:
•IEEE 802.3ae (10Gb Ethernet)
•IEEE 802.1q (VLAN)
•IEEE 802.3ad (Link Aggregation)
•IEEE 802.3x (Flow Control)
•IPv4 (RFC 791)
•IPv6 (RFC 2460)
•IEEE 802.1Qbb (Priority-based Flow Control)
•IEEE 802.1Qaz (Data Center Bridging Exchange [DCBX] and Enhanced Transmission Selection ([ETS])
•IEEE 802.3an 10GBASE-T2
•IEEE 802.3ab 1000BASE-T2
•IEEE 802.3u 100BASE-TX
•IEEE 802.3az EEE
2 3400 Series Adapters only
Setting Up Multiboot agent (MBA) driver software
•Overview
•Setting up MBA in a client environment
•Setting up MBA in a server environment
Multiboot agent (MBA) driver software overview
QLogic 8400/3400 Series adapters support Preboot Execution Environment (PXE), Remote Program Load
(RPL), iSCSI, and Bootstrap Protocol (BootP). Multi-Boot Agent (MBA) is a software module that allows your
network computer to boot with the images provided by remote servers across the network. The MBA driver
complies with the PXE 2.1 specification and is released with split binary images. This provides flexibility to
users in different environments where the motherboard may or may not have built-in base code.
The MBA module operates in a client/server environment. A network consists of one or more boot servers
that provide boot images to multiple computers through the network. The implementation of the MBA module
has been tested successfully in the following environments:
•Linux Red Hat® PXE Server. PXE clients are able to remotely boot and use network resources (NFS
mount, and so forth) and to perform Linux installations. In the case of a remote boot, the Linux universal
driver binds seamlessly with the Universal Network Driver Interface (UNDI) and provides a network
interface in the Linux remotely booted client environment.
•Intel® APITEST. The PXE driver passes all API compliance test suites.
•MS-DOS UNDI. The MS-DOS UNDI seamlessly binds with the UNDI to provide a network adapter driver
interface specification (NDIS2) interface to the upper layer protocol stack. This allows computers to
connect to network resources in an MS-DOS environment.
NOTE: The HPE FlexFabric 10Gb 4-port 536FLR-T adapter supports remote boot capability at the preboot level (Legacy and UEFI) only on ports 1 and 2. Ports 3 and 4 support network functionality only.
Setting up MBA in a client environment
Setting up MBA in a client environment involves:
Procedure
1. Enabling the MBA driver.
2. Configuring the MBA driver.
3. Controlling EEE.
4. Setting up the BIOS for the boot order.
Enabling the MBA driver
To enable or disable the MBA driver:
Procedure
1. Insert an MS-DOS 6.22 or a Real Mode Kernel bootable disk containing the uxdiag.exe file (for
10/100/1000Mbps network adapters) or uediag.exe (for 10Gbps network adapters) in the removable
disk drive and power up your system.
NOTE: The uxdiag.exe (or uediag.exe) file is on the installation CD or in the DOS Utilities package
available from driverdownloads.qlogic.com/.
2. Enter the following command:
uxdiag -mba [ 0-disable | 1-enable ] -c devnum
(or uediag -mba [ 0-disable | 1-enable ] -c devnum)
where devnum is the number of the specific device (0, 1, 2, …) to be programmed.
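As a concrete (hypothetical) invocation, the following would enable MBA on the first 10Gbps adapter, device number 0; it must be run from the bootable MS-DOS environment described above:

```shell
uediag -mba 1 -c 0
```

To disable MBA again on the same device, pass 0 in place of 1.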
Configuring the MBA driver
This section describes the configuration of the MBA driver on add-in NIC models of the network adapter using
the Comprehensive Configuration Management (CCM) utility. To configure the MBA driver on LOM models of
the network adapter, check your system documentation. Both the MBA driver and the CCM utility reside on
the adapter Flash memory.
You can use the CCM utility to configure the MBA driver one adapter at a time as described in this section. To
simultaneously configure the MBA driver for multiple adapters, use the MS-DOS-based user diagnostics
application. For more information about the CCM utility, see the Comprehensive Configuration Management User's Guide.
Procedure
1. Restart your system.
2. Press Ctrl + S within four seconds after you are prompted to do so. A list of adapters displays.
a. Select the adapter to configure, and then press the Enter key. The Main Menu displays.
b. Select MBA Configuration to display the MBA Configuration Menu.
Figure 1: MBA Configuration Menu
3. Use the Up arrow and Down arrow keys to move to the Boot Protocol menu item. Then use the Right
arrow or Left arrow key to select the boot protocol of choice if other boot protocols besides PXE are
available. If available, other boot protocols include Remote Program Load (RPL), iSCSI, and BOOTP.
NOTE:
•For iSCSI boot-capable LOMs, the boot protocol is set through the BIOS. See your system
documentation for more information.
•If you have multiple adapters in your system and you are unsure which adapter you are configuring,
press Ctrl+F6, which causes the port LEDs on the adapter to start blinking.
4. Use the Up arrow, Down arrow, Left arrow, and Right arrow keys to move to and change the
values for other menu items, as desired.
5. Press F4 to save your settings.
6. Press Esc when you are finished.
Controlling EEE
In CCM, EEE on the QLogic 8400/3400 Series can only be enabled or disabled. You cannot control the actual power-saving behavior (how long the link waits after activity before entering power-saving mode).
The QLogic 8400/3400 Series does not use the Broadcom AutoGrEEEn® mode.
For more information about EEE, see http://www.cavium.com/Resources/Documents/WhitePapers/
Setting up the BIOS
Procedure
1. To boot from the network with the MBA, make the MBA-enabled adapter the first bootable device in the BIOS. This procedure depends on the system BIOS implementation.
2. See the user manual for the system for instructions.
Setting up MBA in a server environment
•Red Hat Linux PXE Server
•MS-DOS UNDI/Intel APITEST
Red Hat Linux PXE Server
The Red Hat Enterprise Linux distribution has PXE server support. It allows users to remotely perform a complete Linux installation over the network. The distribution comes with the boot images boot kernel (vmlinuz) and initial ram disk (initrd), which are on Red Hat disk #1.
Refer to the RedHat documentation for instructions on how to install PXE Server on Linux.
The initrd.img file distributed with Red Hat Enterprise Linux, however, does not have a Linux network driver for
the QLogic 8400/3400 Series adapters. This version requires a driver disk for drivers that are not part of the
standard distribution. You can create a driver disk for the QLogic 8400/3400 Series adapters from the image
distributed with the installation CD. Refer to the Linux Readme.txt file for more information.
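At a minimum, a PXE boot server needs DHCP options that point clients at a TFTP server and a boot loader. The following dhcpd.conf fragment is a hedged sketch only; all addresses and the boot-loader filename are illustrative values, not taken from this guide:

```
# /etc/dhcp/dhcpd.conf fragment (illustrative values)
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
  next-server 192.168.10.1;   # TFTP server that serves the boot image
  filename "pxelinux.0";      # PXE boot loader requested by clients
}
```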
MS-DOS UNDI/Intel APITEST
Procedure
1. To boot in MS-DOS mode and connect to a network in the MS-DOS environment, download the Intel PXE PDK from the Intel website at https://downloadcenter.intel.com/search?keyword=Intel+%C2%AE+Boot+Agent.
The PXE PDK comes with a TFTP/ProxyDHCP/Boot server.
Windows driver software
•Installing the driver software
•Removing the device drivers
•Installing management applications
•Viewing or changing the adapter properties
•Setting power management options
NOTE: The QConvergeConsole GUI is supported as the only GUI management tool across all QLogic
adapters. The QLogic Control Suite (QCS) GUI is no longer supported for the QLogic 8400/3400 Series
Adapters and has been replaced by the QCC GUI management tool. The QCC GUI provides single-pane-of-glass GUI management for all QLogic adapters.
In Windows environments, when you run the QCS CLI and the Management Agents Installer, it will
uninstall the QCS GUI (if installed on the system) and any related components from your system.
To obtain the new GUI, download QCC GUI for your adapter from the QLogic Downloads Web page: driverdownloads.qlogic.com.
Windows drivers
Table 1: QLogic 8400/3400 Series Windows Drivers

evbd: This system driver manages all PCI device resources (registers, host interface queues) on the QLogic 8400/3400 Series. The driver provides the slow path interface between upper-layer protocol drivers and the controller. The evbd driver works with the bxnd network and bxfcoe/bxois offload storage drivers.

bxnd: This driver acts as the layer-2 low-level network driver for the adapter. This driver has a fast path and slow path to the firmware and is responsible for sending and receiving Ethernet packets on behalf of the host networking stacks.

bxois: This iSCSI HBA driver provides a translation layer between the Windows SCSI stack and the iSCSI firmware and handles all iSCSI-related activities. The bxois driver has both a fast path and slow path to the firmware.

bxfcoe: This FCoE HBA driver provides a translation layer between the Windows SCSI stack and the FCoE firmware and handles all FCoE-related activities. The bxfcoe driver has both a fast path and slow path to the firmware.
Installing the driver software
Prerequisites
•Using the Installer
•Using Silent Installation
•Manually Extracting the Device Drivers
NOTE: These instructions assume that your QLogic 8400/3400 Series adapter was not factory installed. If your controller was installed at the factory, the driver software has been installed for you.
Procedure
•When Windows first starts after a hardware device has been installed (such as an QLogic 8400/3400
Series adapter), or after the existing device driver has been removed, the operating system automatically
detects the hardware and prompts you to install the driver software for that device.
•You can use either the graphical interactive installation mode (see "Using the Installer") or the command-line silent mode for unattended installation (see "Using Silent Installation").
NOTE:
◦
Before installing the driver software, verify that the Windows operating system has been upgraded to
the latest version with the latest service pack applied.
◦A network device driver must be physically installed before the QLogic 8400/3400 Series Adapters can
be used with your Windows operating system. You can find drivers on the HPE Support Center.
◦Up to 256 NPIV WWIDs are supported per FCoE offload 8400 port and configured using QCC GUI,
QCS CLI or QCC PowerKit.
◦Up to 255 Live Migration-able virtual Fibre Channel (vFC) Virtual Machine (VM) instances are
supported per FCoE offload 8400 port. To enable Windows Hyper-V vFC, follow the steps at https://technet.microsoft.com/en-us/library/dn551169.aspx, or use the PowerShell VMFibreChannelHba cmdlets. You do not need to configure NPIV to use vFCs in a VM. A maximum
of 4 vFCs (from one or more FCoE ports) can be used per VM.
Using the installer
If supported and if you will use the iSCSI Crash Dump utility, it is important to follow the installation sequence:
Procedure
1. Run the installer.
2. Install the Microsoft iSCSI Software Initiator along with the patch.
Installing the QLogic 8400/3400 series drivers
Procedure
1. When the Found New Hardware Wizard appears, click Cancel.
2. Insert the installation CD into the CD or DVD drive.
3. On the installation CD, open the folder for your operating system, open the DrvInst folder, and then double-click Setup.exe to open the InstallShield Wizard.
4. Click Next to continue.
5. After you review the license agreement, click I accept the terms in the license agreement, and then click
Next to continue.
6. Click Install.
7. Click Finish to close the wizard.
8. The installer will determine if a system restart is necessary. Follow the on-screen instructions.
Installing the Microsoft iSCSI Software Initiator for iSCSI Crash Dump
If supported and if you will use the iSCSI Crash Dump utility, it is important to follow the installation sequence:
Procedure
1. Run the installer.
2. Install Microsoft iSCSI Software Initiator along with the patch (MS KB939875).
NOTE: If performing an upgrade of the device drivers from the installer, re-enable iSCSI Crash Dump from
the Advanced section of the QCC Configuration tab.
If not included in your OS, install the Microsoft iSCSI Software Initiator (version 2.06 or later). To download
the iSCSI Software Initiator from Microsoft, go to: http://www.microsoft.com/downloads/en/
details.aspx?familyid=
After running the installer to install the device drivers
Procedure
1. Install Microsoft iSCSI Software Initiator (version 2.06 or later) if not included in your OS. To determine
when to install the Microsoft iSCSI Software Initiator, see the following table.
2. Install Microsoft patch for iSCSI crash dump file generation (Microsoft KB939875). See the following table
to determine if you must install the Microsoft patch.
Table 2: Windows Operating Systems and iSCSI Crash Dump

Operating System: MS iSCSI Software Initiator Required / Microsoft Patch (MS KB939875) Required

NDIS
Windows Server 2008: Yes (included in OS) / No
Windows Server 2008 R2: Yes (included in OS) / No

OIS
Windows Server 2008: No / No
Windows Server 2008 R2: No / No
Using silent installation
NOTE:
•All commands are case sensitive.
•See the silent.txt file in the folder for detailed instructions and information about unattended installs.
To perform a silent install from within the installer source folder, enter the following:
setup /s /v/qn
To perform a silent upgrade from within the installer source folder, enter the following:
setup /s /v/qn
To perform a silent reinstall of the same installer, enter the following:
setup /s /v"/qn REINSTALL=ALL"
NOTE: The REINSTALL switch should only be used if the same installer is already installed on the system. If
upgrading an earlier version of the installer, use setup /s /v/qn as listed above.
To perform a silent install to force a downgrade (default is NO), enter the following:
setup /s /v" /qn DOWNGRADE=Y"
Manually extracting the device drivers
Procedure
Enter the following command:
setup /a
Entering the above command runs the setup utility, extracts the drivers, and places them in the designated
location.
Removing the device drivers
IMPORTANT: Uninstall the QLogic 8400/3400 Series device drivers from your system only through the
InstallShield wizard. Uninstalling the device drivers with Device Manager or any other method might not
provide a clean uninstall and might cause the system to become unstable.
Windows Server 2008 and Windows Server 2008 R2 provide the Device Driver Rollback feature to
replace a device driver with one that was previously installed. However, the complex software
architecture of the QLogic 8400/3400 Series device might present problems if the rollback feature is
used on one of the individual components. Therefore, QLogic recommends that changes to driver
versions be made only through the use of a driver installer.
Procedure
1. Open Control Panel.
2. Double-click Add or Remove Programs.
Installing QLogic Management Applications
Procedure
1. To open the Management Programs installation wizard, run the setup file (setup.exe).
2. Accept the terms of the license agreement, and then click Next.
3. In the Custom Setup dialog box, review the components to be installed, make any necessary changes, and then click Next.
4. In the Ready to Install the Program dialog box, click Install to proceed with the installation.
Viewing or changing the adapter properties
Procedure
1. Open the QCC GUI.
2. Click the Advanced section of the Configurations tab.
Setting power management options
Procedure
•If the device is busy doing something (such as servicing a call), the operating system does not shut down
the device.
The operating system attempts to shut down every possible device only when the computer attempts to go
into hibernation. To have the controller stay on at all times, do not select the Allow the computer to turn off the device to save power check box.
Figure 2: Power Management
NOTE:
◦The Power Management tab is available only for servers that support power management.
◦If you select Only allow management stations to bring the computer out of standby, the computer
can be brought out of standby only by Magic Packet.
CAUTION: Do not select Allow the computer to turn off the device to save power for any adapter
that is a member of a team.
Windows Server 2016
This chapter provides VxLAN information for Windows Server 2016.
Configuring VXLAN
This section provides procedures for enabling the virtual extensible LAN (VXLAN) offload and deploying a
software-defined network.
Enabling VxLAN Offload on the Adapter
Procedure
1. Open the QLogic adapter properties.
2. Click the Advanced tab.
3. On the Advanced page under Property, select VxLAN Encapsulated Task Offload.
4. In the Value box, select Enabled.
5. Click OK.
The following figure shows the QLogic adapter properties on the Advanced page.
To take advantage of VxLAN Encapsulated Task Offload on virtual machines, you must deploy a software-defined network (SDN) that uses a Microsoft Network Controller.
See Microsoft TechNet on Software Defined Networking for more information.
Linux driver software
•Introduction
•Limitations
•Packaging
•Installing Linux driver software
•Unloading/Removing the Linux driver
•Patching PCI files (optional)
•Network installations
•Setting values for optional properties
•Driver defaults
•Driver messages
•Teaming with channel bonding
•Statistics
Introduction
This section contains information about the Linux drivers for the QLogic 8400/3400 Series network adapters.
The following table lists the QLogic 8400/3400 Series Linux drivers.

Table 3: QLogic 8400/3400 Series Linux Drivers

bnx2x: Linux driver for the QLogic 8400/3400 Series 10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack. This driver also receives and processes device interrupts, both on behalf of itself (for L2 networking) and on behalf of the bnx2fc (FCoE) and cnic drivers.

cnic: The cnic driver provides the interface between QLogic's upper layer protocol (storage) drivers and QLogic's 8400/3400 Series 10Gb network adapters. The CNIC module works with the bnx2 and bnx2x network drivers in the downstream and the bnx2fc (FCoE) and bnx2i (iSCSI) drivers in the upstream.

bnx2i: Linux iSCSI HBA driver to enable iSCSI offload on the QLogic 8400/3400 Series 10Gb network adapters.

bnx2fc: Linux FCoE kernel mode driver used to provide a translation layer between the Linux SCSI stack and the QLogic FCoE firmware/hardware. In addition, the driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of open-fcoe's libfc/libfcoe for FIP/device discovery.

Limitations

•bnx2x driver
•bnx2i driver
•bnx2fc driver

bnx2x driver
The current version of the driver has been tested on 2.6.x kernels starting from 2.6.9. The driver might not
compile on kernels older than 2.6.9. Testing is concentrated on i386 and x86_64 architectures. Only limited
testing has been done on some other architectures. Minor changes to some source files and Makefile may be
needed on some kernels.
bnx2i driver
The current version of the driver has been tested on 2.6.x kernels, starting from the 2.6.18 kernel. The driver may not compile on older kernels. Testing is concentrated on i386 and x86_64 architectures, RHEL 6, SLES 11, and SLES 12.
bnx2fc driver
The current version of the driver has been tested on 2.6.x kernels, starting from the 2.6.32 kernel, which is included in the RHEL 6.1 distribution. This driver may not compile on older kernels. Testing was limited to i386 and x86_64 architectures, RHEL 6, RHEL 7.0, SLES 11, and SLES 12 and later distributions.
Packaging
The Linux drivers are released in the following packaging formats:

•DKMS Packages
•KMP Packages
◦SLES
– netxtreme2-kmp-[kernel]-version.i586.rpm
– netxtreme2-kmp-[kernel]-version.x86_64.rpm
◦Red Hat
– kmod-kmp-netxtreme2-[kernel]-version.i686.rpm
– kmod-kmp-netxtreme2-[kernel]-version.x86_64.rpm
The QCS CLI management utility is also distributed as an RPM package (QCS-{version}.{arch}.rpm). For
information about installing the Linux QCS CLI, see the QLogic Control Suite CLI User’s Guide.
•Source Packages:
Identical source files to build the driver are included in both RPM and TAR source packages. The
supplemental .tar file contains additional utilities, such as patches and driver diskette images for network
installation, including the following:
◦netxtreme2-<version>.src.rpm: RPM package with QLogic 8400/3400 Series bnx2/bnx2x/cnic/bnx2fc/bnx2i/libfc/libfcoe driver source.
◦netxtreme2-<version>.tar.gz: TAR compressed package with 8400/3400 Series bnx2/bnx2x/
cnic/bnx2fc/bnx2i/libfc/libfcoe driver source.
◦iscsiuio-<version>.tar.gz: iSCSI user space management tool binary.
The Linux driver has a dependency on open-fcoe userspace management tools as the front end to control FCoE interfaces. The package name of the open-fcoe tool is fcoe-utils for RHEL 6.4 and open-fcoe for SLES 11 SP2 and legacy versions.
Installing Linux driver software
•Installing the source RPM package
•Installing the KMP package
•Building the driver from the source TAR file
NOTE: If a bnx2x, bnx2i, or bnx2fc driver is loaded and the Linux kernel is updated, you must recompile the driver module if it was installed using the source RPM or the TAR package.
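The rebuild after a kernel update can be sketched as follows. This is a hedged example, not the vendor's prescribed procedure: it only identifies the newly running kernel, and the build commands (which assume the RHEL rpmbuild layout used later in this section; SLES uses /usr/src/packages) are left commented so the sketch is safe to read on any host.

```shell
# Sketch: rebuild the netxtreme2 module after a kernel update (hedged
# example; paths assume the RHEL rpmbuild layout shown in this section).
KVER="$(uname -r)"                      # the newly booted kernel
echo "rebuilding netxtreme2 for kernel ${KVER}"
# On a real build host, uncomment:
# cd ~/rpmbuild
# rpmbuild -bb SPECS/netxtreme2.spec
# rpm -ivh --force RPMS/$(uname -m)/netxtreme2-*.rpm
```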
Installing the source RPM package
The following are guidelines for installing the driver source RPM package.
Prerequisites
•Linux kernel source
•C compiler
Procedure
1.Install the source RPM package:
rpm -ivh netxtreme2-<version>.src.rpm
2.Change the directory to the RPM path and build the binary RPM for your kernel:
For RHEL:
cd ~/rpmbuild
rpmbuild -bb SPECS/netxtreme2.spec
For SLES:
cd /usr/src/packages
rpmbuild -bb SPECS/netxtreme2.spec
For RHEL 6.4 and SLES 11 SP2 and legacy versions, the version of fcoe-utils/open-fcoe included in your distribution is sufficient, and no out-of-box upgrades are provided.
Where available, installation with yum automatically resolves dependencies. Otherwise, you can locate
required dependencies on your O/S installation media.
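The dependency handling described above can be sketched as follows. This is a hedged example that only prints the package to install (the package names are those given in the Packaging section); on a real host you would run the install command directly.

```shell
# Sketch: resolve the open-fcoe userspace tools dependency.
# Prints the package name rather than installing it.
if command -v yum >/dev/null 2>&1; then
    PKG="fcoe-utils"      # RHEL package name
else
    PKG="open-fcoe"       # SLES package name
fi
echo "install package: ${PKG}"
```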
5.For SLES, turn on the fcoe and lldpad services for FCoE offload, and just lldpad for iSCSI-offload-TLV.
For SLES 11 SP1:
chkconfig lldpad on
chkconfig fcoe on
For SLES 11 SP2:
chkconfig boot.lldpad on
chkconfig boot.fcoe on
6.Inbox drivers are included with all of the supported operating systems. The simplest means to ensure that the newly installed drivers are loaded is to reboot.
7.For FCoE offload, after rebooting, create configuration files for all FCoE ethX interfaces:
cd /etc/fcoe
cp cfg-ethx cfg-<ethX FCoE interface name>
NOTE: Your distribution might have a different naming scheme for Ethernet devices (pXpX or emX
instead of ethX).
8.For FCoE offload or iSCSI-offload-TLV, modify /etc/fcoe/cfg-<interface> by changing DCB_REQUIRED=yes to DCB_REQUIRED=no.
9.Turn on all ethX interfaces:
ifconfig <ethX> up
10. For SLES, use YaST to configure your Ethernet interfaces to automatically start at boot by setting a static
IP address or enabling DHCP on the interface.
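As a hedged alternative to YaST, a static configuration can also be written directly to the interface's ifcfg file under /etc/sysconfig/network; the interface name and address below are placeholders, and YaST writes an equivalent file.

```shell
# /etc/sysconfig/network/ifcfg-eth2 -- hypothetical example for a static
# address; use BOOTPROTO='dhcp' (and omit IPADDR) to enable DHCP instead.
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.10.15/24'
```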
11. For FCoE offload and iSCSI-offload-TLV, disable lldpad on QLogic converged network adapter interfaces. This is required because QLogic uses an offloaded DCBX client.
lldptool set-lldp -i <ethX> adminStatus=disabled
12. For FCoE offload and iSCSI-offload-TLV, make sure that /var/lib/lldpad/lldpad.conf is created and that each <ethX> block does not specify "adminStatus", or, if specified, that it is set to 0 ("adminStatus=0") as below.
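A minimal sketch of such an lldpad.conf block, assuming lldpad's libconfig-style file format (the interface name eth2 is a placeholder; check the file generated on your own host):

```
lldp :
{
    eth2 :
    {
        adminStatus = 0;
    };
};
```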