HEWLETT PACKARD ENTERPRISE HP 533FLR-T User guide

HPE Ethernet 53x, FlexFabric 53x/63x, and Synergy Converged Network Adapters

QLogic® FastLinQ™ 3400 and 8400 Series User's Guide
Abstract
This guide is intended for personnel responsible for installing and maintaining computer networking equipment.
Part Number: P09228-001 Published: September 2018 Edition: 1
© Copyright 2018, Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.
Acknowledgments
Intel®, Itanium®, Pentium®, Xeon®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the U.S. and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.

Contents

Product overview.........................................................................................8
Functional description..........................................................................................................................8
Features.............................................................................................................................................. 8
iSCSI.......................................................................................................................................10
FCoE.......................................................................................................................................11
Power management................................................................................................................11
Adaptive Interrupt Frequency..................................................................................................11
ASIC with Embedded RISC Processor................................................................................... 11
Adapter management........................................................................................................................ 11
QLogic Control Suite CLI........................................................................................................12
QLogic QConvergeConsole Graphical User Interface............................................................12
QLogic QConvergeConsole vCenter Plug-In..................................................................12
QLogic FastLinQ ESXCLI VMware Plug-In.................................................................... 12
QLogic QConvergeConsole PowerKit.................................................................................... 12
QLogic Comprehensive Configuration Management..............................................................12
Supported Operating Environments.................................................................................................. 12
Supported adapter list....................................................................................................................... 12
Physical Characteristics.................................................................................................................... 13
Standards specifications....................................................................................................................13
Setting Up Multiboot agent (MBA) driver software.................................14
Multiboot agent (MBA) driver software overview............................................................................... 14
Setting up MBA in a client environment.............................................................................................14
Enabling the MBA driver.........................................................................................................14
Configuring the MBA driver.....................................................................................................15
Controlling EEE...................................................................................................................... 16
Setting up the BIOS................................................................................................................17
Setting up MBA in a server environment...........................................................................................17
RedHat Linux PXE Server...................................................................................................... 17
MS-DOS UNDI/Intel APITEST................................................................................................17
Windows driver software.......................................................................... 18
Windows drivers................................................................................................................................ 18
Installing the driver software..............................................................................................................18
Using the installer................................................................................................................... 19
Using silent installation........................................................................................................... 21
Manually extracting the device drivers....................................................................................21
Removing the device drivers............................................................................................................. 21
Installing QLogic Management Applications......................................................................................22
Viewing or changing the adapter properties......................................................................................22
Setting power management options..................................................................................................22
Windows Server 2016................................................................................24
Configuring VXLAN........................................................................................................................... 24
Enabling VxLAN Offload on the Adapter................................................................................ 24
Deploying a Software Defined Network.................................................................................. 25
Linux driver software................................................................................ 26
Introduction........................................................................................................................................26
Limitations......................................................................................................................................... 27
bnx2x driver ......................................................................................................................... 27
bnx2i driver .......................................................................................................................... 27
bnx2fc driver........................................................................................................................... 27
Packaging..........................................................................................................................................27
Installing Linux Driver Software.........................................................................................................28
Installing the source RPM package........................................................................................ 28
Installing the KMP package.................................................................................................... 31
Building the driver from the source TAR file........................................................................... 31
Loading and running necessary iSCSI components..........................................................................32
Unloading/Removing the Linux driver................................................................................................32
Unloading/Removing the driver from an RPM installation...................................................... 33
Removing the driver from a TAR installation.......................................................................... 33
Uninstalling the QCC GUI.......................................................................................................33
Patching PCI files (optional).............................................................................................................. 33
Network Installations......................................................................................................................... 34
Setting values for optional properties................................................................................................ 34
bnx2x driver............................................................................................................................ 34
bnx2i Driver......................................................................................................................36
bnx2fc Driver....................................................................................................................38
Driver defaults................................................................................................................................... 38
bnx2 driver..............................................................................................................................39
bnx2x driver............................................................................................................................ 39
Driver messages................................................................................................................................40
bnx2x Driver......................................................................................................................40
bnx2i Driver......................................................................................................................41
bnx2fc Driver....................................................................................................45
Teaming with channel bonding.......................................................................................................... 47
Statistics............................................................................................................................................ 47
VMware driver software............................................................................ 48
VMware Drivers.................................................................................................................................48
Downloading, installing, and updating drivers................................................................................... 48
Networking support............................................................................................................................51
Driver parameters...................................................................................................................51
Driver defaults.........................................................................................................................53
Unloading the driver................................................................................................................53
Driver messages.....................................................................................................................53
FCoE Support....................................................................................................................................55
Enabling FCoE........................................................................................................................55
Verifying the correct installation of the driver..........................................................................56
Limitations...............................................................................................................................56
Supported distributions...........................................................................................................56
Upgrading the Firmware........................................................................... 57
Configuring iSCSI Protocol...................................................................... 58
iSCSI boot......................................................................................................................................... 58
Supported operating systems for iSCSI boot..........................................................................58
Setting up iSCSI boot............................................................................................................. 58
Configuring VLANs for iSCSI boot..........................................................................................94
Other iSCSI Boot considerations............................................................................................96
iSCSI crash dump..............................................................................................................................99
iSCSI offload in Windows Server.......................................................................................................99
Configuring iSCSI offload..................................................................................................... 100
iSCSI offload FAQs...............................................................................................................108
Event log messages............................................................................................................. 108
iSCSI offload in Linux server............................................................................................................114
Open iSCSI user applications............................................................................................... 114
User application - qlgc_iscsiuio.............................................................................................114
Bind iSCSI Target to iSCSI Transport Name.........................................................................114
VLAN Configuration for iSCSI Offload (Linux)...................................................................... 115
Making Connections to iSCSI Targets.................................................................................. 116
Maximum offload iSCSI connections.................................................................................... 117
Linux iSCSI offload FAQ....................................................................................................... 117
iSCSI Offload on VMware server.....................................................................................................117
Configuring the VLAN using the vSphere client (GUI).......................................................... 118
Configuring Fibre Channel Over Ethernet.............................................120
Overview..........................................................................................................................................120
FCoE Boot from SAN...................................................................................................................... 120
Preparing System BIOS for FCoE Build and Boot................................................................121
Prepare Multiple Boot Agent for FCoE Boot......................................................................... 121
UEFI Boot LUN Scanning.....................................................................................................123
Provisioning storage access in the SAN...............................................................................125
One-time disabled.................................................................................................................126
Windows Server 2008 R2 and Windows Server 2008 SP2 FCoE Boot Installation............. 127
Windows Server 2012/2012 R2 FCoE Boot Installation....................................................... 129
Linux FCoE Boot Installation................................................................................................ 130
VMware ESXi FCoE Boot Installation...................................................................................145
Configuring FCoE............................................................................................................................149
Configuring NIC partitioning and managing bandwidth...................... 150
Using Virtual LANs in Windows............................................................. 151
VLAN overview................................................................................................................................151
Adding VLANs to Teams..................................................................................................................153
Enabling SR-IOV...................................................................................... 154
Overview..........................................................................................................................................154
Enabling SR-IOV............................................................................................................................. 154
Verifying that the SR-IOV is operational............................................................................... 155
SR-IOV and Storage.............................................................................................................155
SR-IOV and Jumbo Packets.................................................................................................155
Using Microsoft Virtualization with Hyper-V......................................... 156
Supported features..........................................................................................................................156
Configuring a single network adapter..............................................................................................157
Windows Server 2008...........................................................................................................157
Windows Server 2008 R2 and 2012.....................................................................................157
Teamed Network Adapters.............................................................................................................. 157
Windows 2008...................................................................................................................... 160
Windows Server 2008 R2.....................................................................................................160
Configuring VMQ with SLB Teaming.................................................................................... 161
Upgrading Windows Operating Systems.........................................................................................161
Data Center Bridging (DCB)....................................................................162
Overview..........................................................................................................................................162
Enhanced Transmission Selection........................................................................................162
Priority-based Flow Control.................................................................................................. 162
Data Center Bridging Exchange........................................................................................... 163
Configuring DCB..............................................................................................................................163
DCB conditions................................................................................................................................163
Data Center Bridging in Windows Server 2012............................................................................... 163
Enabling the QoS Windows feature......................................................................................164
Using QLogic Teaming Services............................................................ 165
Executive summary......................................................................................................................... 165
Glossary of Teaming Terms.................................................................................................. 165
Software components...........................................................................................................171
Hardware requirements........................................................................................................ 171
Teaming Support by Processor.............................................................................................172
Configuring Teaming.............................................................................................................172
Supported features by team type..........................................................................................172
Selecting a Team Type......................................................................................................... 173
Teaming Mechanisms........................................................................................................... 174
Teaming and Other Advanced Networking Properties..........................................................181
General Network Considerations..........................................................................................183
Application Considerations................................................................................................... 190
Troubleshooting Teaming Problems..................................................................................... 197
Frequently Asked Questions.................................................................................................199
Event Log Messages............................................................................................................ 201
Configuring Teaming in Windows Server..............................................209
QSLAP Overview.............................................................................................................................209
Load Balancing and Fault Tolerance............................................................................................... 209
Types of Teams.....................................................................................................................209
Smart Load Balancing and Failover......................................................................................210
Link Aggregation (802.3ad).................................................................................................. 210
Generic Trunking (FEC/GEC)/802.3ad-Draft Static..............................................................210
SLB (Auto-Fallback Disable).................................................................................................211
Limitations of Smart Load Balancing and Failover/SLB (Auto-Fallback Disable) Types of Teams...................211
Teaming and Large Send Offload/Checksum Offload Support.............................................212
Troubleshooting.......................................................................................213
Hardware Diagnostics..................................................................................................................... 213
QCC GUI Diagnostic Tests Failures..................................................................................... 213
Troubleshooting checklist................................................................................................................214
Checking if Current Drivers are Loaded.......................................................................................... 214
Windows............................................................................................................................... 214
Linux..................................................................................................................................... 215
Possible Problems and Solutions....................................................................................................216
Multi-boot agent....................................................................................................................216
QSLAP..................................................................................................................................216
Linux..................................................................................................................................... 218
Miscellaneous.......................................................................................................................219
Websites................................................................................................... 222
Support and other resources................................................................. 223
Accessing Hewlett Packard Enterprise Support.............................................................................. 223
Accessing updates.......................................................................................................................... 223
Customer self repair........................................................................................................................ 224
Remote support...............................................................................................................................224
Warranty information....................................................................................................................... 224
Regulatory information.................................................................................................................... 224
Documentation feedback.................................................................................................................225
Glossary................................................................................................... 226

Product overview


Functional description

Features
Supported operating environments
Supported adapters
Physical characteristics
Standards specifications
Functional description
The QLogic 8400/3400 Series adapters are based on a new class of Gigabit Ethernet (GbE) and 10GbE converged network interface controller (C-NIC) that can simultaneously perform accelerated data networking and storage networking on a standard Ethernet network. The C-NIC offers acceleration for popular protocols used in the data center, such as:
Internet Small Computer Systems Interface (iSCSI) offload for accelerating network storage access featuring centralized boot (iSCSI boot)
Fibre Channel over Ethernet (FCoE) offload and acceleration for Fibre Channel block storage
Enterprise networks that use multiple protocols and multiple network fabrics benefit from the ability of the network adapter to combine data communications, storage, and clustering over a single Ethernet fabric, improving server CPU efficiency and memory use while alleviating I/O bottlenecks.
The QLogic 8400/3400 Series adapters include a 100/1000Mbps or 10Gbps Ethernet MAC with both half-duplex and full-duplex capability and a 100/1000Mbps or 10Gbps physical layer (PHY). The transceiver is fully compatible with the IEEE 802.3 standard for auto-negotiation of speed.
Using the teaming software, you can split your network into virtual LANs (VLANs) and group multiple network adapters together into teams to provide network load balancing and fault tolerance.
See Using QLogic Teaming Services and Configuring Teaming in Windows Server for detailed information about teaming.
See Using Virtual LANs in Windows for a description of VLANs.
See Configuring Teaming for instructions about configuring teaming and creating VLANs on Windows operating systems.

Features

The following is a list of the QLogic 8400/3400 Series adapter features. Some features might not be available on all adapters.
iSCSI offload
FCoE offload
NIC partitioning (NPAR)
Data center bridging (DCB)
Enhanced transmission selection (ETS; IEEE 802.1Qaz)
Priority-based flow control (PFC; IEEE 802.1Qbb)
Data center bridging capability exchange protocol (DCBX; CEE version 1.01)
Single-chip solution (excluding QLE3442-RJ)
100M/1G/10G triple-speed MAC (QLE3442-RJ)
IEEE 802.3az Energy Efficient Ethernet (EEE) (QLE3442-RJ)
1G/10G speed supported on SFP (optics/DAC) interface
SerDes interface for optical transceiver connection
PCIe® Gen3x8 (10GE)
Zero copy capable hardware
Other offload performance features:
TCP, IP, user datagram protocol (UDP) checksum
TCP segmentation
Adaptive interrupts
Receive side scaling (RSS)
Transmit side scaling (TSS)
Hardware transparent packet aggregation (TPA)
Manageability
QCC GUI for Windows and Linux. For information, see the Installation Guide: QConvergeConsole GUI (part number SN0051105-00) and the QConvergeConsole GUI Help system.
QLogic Control Suite CLI for Windows and Linux. For information, see the User’s Guide: QLogic Control Suite CLI (part number BC0054511-00).
QCC GUI Plug-ins for vSphere® through VMware vCenter™ Server software. For information, see the User’s Guide: QConvergeConsole Plug-ins for vSphere (part number SN0054677-00).
QCC ESXCLI Plug-in for VMware. For information, see the User’s Guide: FastLinQ ESXCLI VMware Plug- in (part number BC0151101-00).
QCC PowerKit for Windows and Linux. For information, see the User’s Guide: PowerShell (part number BC0054518-00).
QLogic Comprehensive Configuration Management Preboot Utility. For more information, see the User’s Guide: Comprehensive Configuration Management, QLogic 3400 and 8400 Series Adapters (part number BC0054512-00).
Supports the preboot execution environment (PXE) 1.0 and 2.0 specifications
Universal management port (UMP)
System management bus (SMBus) controller
Advanced configuration and power interface (ACPI) 1.1a compliant (multiple power modes)
Intelligent platform management interface (IPMI) support
Advanced network features
Jumbo frames (up to 9,600 bytes). The OS and the link partner must support jumbo frames.
Virtual LANs
IEEE Std 802.3ad Teaming
Smart Load Balancing™ (SLB) teaming supported by the QLogic QLASP NIC teaming driver on 32-bit/64-bit Windows Server 2008 and 64-bit Windows Server 2008 R2/2012/2012 R2 operating systems
Flow control (IEEE Std 802.3x)
LiveLink™ (supported by the QLogic QLASP NIC teaming driver on 32-bit/64-bit Windows Server 2008 and 64-bit Windows Server 2008 R2/2012/2012 R2)
Logical link control (IEEE Std 802.2)
High-speed on-chip reduced instruction set computer (RISC) processor
Integrated 96KB frame buffer memory
Quality of service (QoS)
Serial gigabit media independent interface (SGMII)/gigabit media independent interface (GMII)/media independent interface (MII)
256 unique MAC unicast addresses

Support for multicast addresses through the 128-bit hashing hardware function
Serial flash NVRAM memory
JTAG support
PCI power management interface (v1.1)
64-bit base address register (BAR) support
EM64T processor support
iSCSI and FCoE boot support
Virtualization: Microsoft®, VMware®, Linux, XenServer®
Single root I/O virtualization (SR-IOV)

iSCSI

The Internet Engineering Task Force (IETF) has standardized iSCSI. SCSI is a popular protocol that enables systems to communicate with storage devices using block-level transfers (that is, addressing data stored on a storage device in blocks rather than as whole files). iSCSI maps the SCSI request/response application protocol and its standardized command set over TCP/IP networks.
As iSCSI uses TCP as its sole transport protocol, it greatly benefits from hardware acceleration of the TCP processing. However, iSCSI as a layer 5 protocol has additional mechanisms beyond the TCP layer. iSCSI processing can also be offloaded, thereby reducing CPU use even further.
The QLogic 8400/3400 Series adapters target best system performance, maintain flexibility to accommodate system changes, and support current and future OS convergence and integration. Therefore, the adapter's iSCSI offload architecture is unique because of the split between hardware and host processing.

FCoE

FCoE allows Fibre Channel protocol to be transferred over Ethernet. FCoE preserves existing Fibre Channel infrastructure and capital investments. The following FCoE features are supported:
Full stateful hardware FCoE offload
Receiver classification of FCoE and Fibre Channel initialization protocol (FIP) frames. FIP is the FCoE initialization protocol used to establish and maintain connections.
Receiver CRC offload
Transmitter CRC offload
Dedicated queue set for Fibre Channel traffic
DCB provides lossless behavior with PFC
DCB allocates a share of link bandwidth to FCoE traffic with ETS

Power management

Wake on LAN (WOL) is supported on ALOMs, BLOMs, and Synergy Mezzanine cards.

Adaptive Interrupt Frequency

The adapter driver intelligently adjusts host interrupt frequency based on traffic conditions to increase overall application throughput. When traffic is light, the adapter driver interrupts the host for each received packet, minimizing latency. When traffic is heavy, the adapter issues one host interrupt for multiple, back-to-back incoming packets, preserving host CPU cycles.
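On Linux hosts, you can usually inspect, and in some cases tune, the interrupt coalescing behavior with ethtool. The following is a minimal sketch only; eth0 is a placeholder interface name, the rx-usecs value is illustrative, and which parameters are adjustable depends on the driver:
ethtool -c eth0
ethtool -C eth0 rx-usecs 25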

ASIC with Embedded RISC Processor

The core control for QLogic 8400/3400 Series adapters resides in a tightly integrated, high-performance ASIC. The ASIC includes a RISC processor that provides the flexibility to add new features to the card and adapt to future network requirements through software downloads. In addition, the adapter drivers can exploit the built-in host offload functions on the adapter as host operating systems are enhanced to take advantage of these functions.

Adapter management

The following applications are available to manage QLogic 8400/3400 Series Adapters:
QLogic Control Suite CLI
QLogic QConvergeConsole Graphical User Interface
QLogic QConvergeConsole vCenter Plug-In
QLogic FastLinQ ESXCLI VMware Plug-In
QLogic QConvergeConsole PowerKit
QLogic Comprehensive Configuration Management

QLogic Control Suite CLI

The QCS CLI is a console application that you can run from a Windows command prompt or a Linux terminal console. Use the QCS CLI to manage QLogic 8400/3400 Series Adapters on both local and remote computer systems. For information about installing and using the QCS CLI, see the User’s Guide: QLogic Control Suite CLI (part number BC0054511-00).

QLogic QConvergeConsole Graphical User Interface

The QCC GUI is a Web-based management tool for configuring and managing Fibre Channel adapters and Intelligent Ethernet adapters. You can use the QCC GUI on Windows and Linux platforms to manage QLogic 8400/3400 Series Adapters on both local and remote computer systems. For information about installing the QCC GUI, see the QConvergeConsole GUI Installation Guide. For information about using the QCC GUI, see the online help.

QLogic QConvergeConsole vCenter Plug-In

The QCC vCenter Plug-In is a web-based management tool that is integrated into the VMware vCenter Server for configuring and managing Fibre Channel adapters and Intelligent Ethernet adapters in a virtual environment. You can use the vCenter Plug-In from VMware vSphere clients to manage QLogic 8400/3400 Series Intelligent Ethernet Adapters. For information about installing and using the vCenter Plug-In, see the QConvergeConsole Plug-ins for vSphere User’s Guide (part number SN0054677-00).

QLogic FastLinQ ESXCLI VMware Plug-In

The FastLinQ ESXCLI VMware plug-in extends the capabilities of the ESX® command-line interface to manage QLogic 8400/3400 Series Adapters installed in VMware ESX/ESXi hosts. For information about using the ESXCLI Plug-In, see the QLogic FastLinQ ESXCLI VMware Plug-in User Guide (BC0151101-00).

QLogic QConvergeConsole PowerKit

The QLogic QCC PowerKit allows you to manage your QLogic adapters locally and remotely through the PowerShell interface on Windows and Linux. For information about installing and using the QCC PowerKit, see the PowerShell User's Guide (part number BC0054518-00).

QLogic Comprehensive Configuration Management

The QLogic Comprehensive Configuration Management (CCM) is a preboot utility that is used to manage preboot settings on the QLogic 3400/8400 Series Adapters on the local computer systems. This utility is accessible during system boot-up. For more information about using CCM, see the Comprehensive Configuration Management User's Guide (part number BC0054512-00).

Supported Operating Environments

The QLogic 8400/3400 Series adapters support several operating systems including Windows, Linux (RHEL®, SUSE®, Ubuntu®, CentOS)1, VMware ESXi Server®, and Citrix® XenServer. For a complete list of supported operating systems and versions, see the Product QuickSpecs (http://www.hpe.com/info/qs).

Supported adapter list

HPE Converged Network Adapters with equivalent features to QLogic 8400 Series:
HPE FlexFabric 10Gb 4-port 536FLR-T Adapter
HPE FlexFabric 10Gb 2-port 533FLR-T Adapter
1 Ubuntu and CentOS operating systems are supported only on 3400 Series adapters.
HPE FlexFabric 10Gb 2-port 534FLR-SFP+ Adapter
HPE StoreFabric CN1100R Dual Port Converged Network Adapter
HPE StoreFabric CN1100R 10GBase-T Dual Port Converged Network Adapter
HPE FlexFabric 10Gb 2-port 536FLB Adapter
HPE FlexFabric 10Gb 2-port 534M Adapter
HPE FlexFabric 20Gb 2-port 630FLB Adapter
HPE FlexFabric 20Gb 2-port 630M Adapter
HPE Synergy 3820C 10/20Gb Converged Network Adapter
HPE Synergy 2820C 10Gb Converged Network Adapter
HPE Ethernet Adapters with equivalent features to QLogic 3400 Series:
HPE Ethernet 10Gb 2-port 530T Adapter
HPE Ethernet 10Gb 2-port 530SFP+ Adapter

Physical Characteristics

The QLogic 8400/3400 Series Adapters and HPE stand-up adapters are implemented as low profile PCIe cards. The adapters ship with a full-height bracket for use in a standard PCIe slot or an optional spare low profile bracket for use in a low profile PCIe slot. Low profile slots are typically found in compact servers.
HPE adapters are also available in ALOM, BLOM, and Mezzanine formats.

Standards specifications

The QLogic 8400/3400 Series Adapters support the following standards specifications:
IEEE 802.3ae (10Gb Ethernet)
IEEE 802.1q (VLAN)
IEEE 802.3ad (Link Aggregation)
IEEE 802.3x (Flow Control)
IPv4 (RFC 791)
IPv6 (RFC 2460)
IEEE 802.1Qbb (Priority-based Flow Control)
IEEE 802.1Qaz (Data Center Bridging Exchange [DCBX] and Enhanced Transmission Selection [ETS])
IEEE 802.3an 10GBASE-T
IEEE 802.3ab 1000BASE-T
IEEE 802.3u 100BASE-TX
IEEE 802.3az EEE
NOTE: Some of the preceding specifications apply to 3400 Series Adapters only.

Setting Up Multiboot agent (MBA) driver software

Overview
Setting up MBA in a client environment
Setting up MBA in a server environment

Multiboot agent (MBA) driver software overview

QLogic 8400/3400 Series adapters support Preboot Execution Environment (PXE), Remote Program Load (RPL), iSCSI, and Bootstrap Protocol (BootP). Multi-Boot Agent (MBA) is a software module that allows your network computer to boot with the images provided by remote servers across the network. The MBA driver complies with the PXE 2.1 specification and is released with split binary images. This provides flexibility to users in different environments where the motherboard may or may not have built-in base code.
The MBA module operates in a client/server environment. A network consists of one or more boot servers that provide boot images to multiple computers through the network. The implementation of the MBA module has been tested successfully in the following environments:
Linux Red Hat® PXE Server. PXE clients are able to remotely boot and use network resources (NFS mount, and so forth) and to perform Linux installations. In the case of a remote boot, the Linux universal driver binds seamlessly with the Universal Network Driver Interface (UNDI) and provides a network interface in the Linux remotely booted client environment.
Intel® APITEST. The PXE driver passes all API compliance test suites.
MS-DOS UNDI. The MS-DOS UNDI seamlessly binds with the UNDI to provide a network adapter driver interface specification (NDIS2) interface to the upper layer protocol stack. This allows computers to connect to network resources in an MS-DOS environment.
NOTE: The HPE FlexFabric 10Gb 4-port 536FLR-T adapter only supports remote boot capability at the preboot level (legacy and UEFI) on ports 1 and 2. Ports 3 and 4 only support network functionality.

Setting up MBA in a client environment

Setting up MBA in a client environment involves:
Procedure
1. Enabling the MBA driver.
2. Configuring the MBA driver.
3. Controlling EEE.
4. Setting up the BIOS for the boot order.

Enabling the MBA driver

To enable or disable the MBA driver:
Procedure
1. Insert an MS-DOS 6.22 or a Real Mode Kernel bootable disk containing the uxdiag.exe file (for 10/100/1000Mbps network adapters) or the uediag.exe file (for 10Gbps network adapters) in the removable disk drive and power up your system.
NOTE: The uxdiag.exe (or uediag.exe) file is on the installation CD or in the DOS Utilities package available from driverdownloads.qlogic.com/.
2. Enter the following command:
uxdiag -mba [ 0-disable | 1-enable ] -c devnum (or uediag -mba [ 0-disable | 1-enable ] -c devnum), where devnum is the number of the specific device (0, 1, 2, …) to be programmed.
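For example, to enable MBA on the first 10Gbps device listed by the utility (device number 0 is an assumption; substitute the number reported for your adapter):
uediag -mba 1 -c 0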

Configuring the MBA driver

This section describes the configuration of the MBA driver on add-in NIC models of the network adapter using the Comprehensive Configuration Management (CCM) utility. To configure the MBA driver on LOM models of the network adapter, check your system documentation. Both the MBA driver and the CCM utility reside on the adapter Flash memory.
You can use the CCM utility to configure the MBA driver one adapter at a time as described in this section. To simultaneously configure the MBA driver for multiple adapters, use the MS-DOS-based user diagnostics application. For more information about the CCM utility, see the Comprehensive Configuration Management User’s Guide.
Procedure
1. Restart your system.
2. Press Ctrl + S within four seconds after you are prompted to do so. A list of adapters displays.
a. Select the adapter to configure, and then press the Enter key. The Main Menu displays.
b. Select MBA Configuration to display the MBA Configuration Menu.
Figure 1: MBA Configuration Menu
3. Use the Up arrow and Down arrow keys to move to the Boot Protocol menu item. Then use the Right arrow or Left arrow key to select the boot protocol of choice if other boot protocols besides PXE are available. If available, other boot protocols include Remote Program Load (RPL), iSCSI, and BOOTP.
NOTE:
For iSCSI boot-capable LOMs, the boot protocol is set through the BIOS. See your system documentation for more information.
If you have multiple adapters in your system and you are unsure which adapter you are configuring, press Ctrl+F6, which causes the port LEDs on the adapter to start blinking.
4. Use the Up arrow, Down arrow, Left arrow, and Right arrow keys to move to and change the values for other menu items, as desired.
5. Press F4 to save your settings.
6. Press Esc when you are finished.

Controlling EEE

Using CCM, EEE on the QLogic 8400/3400 Series adapters can only be enabled or disabled. You cannot control the actual power-saving mode (how long the port waits after activity stops before entering power-saving mode).
The QLogic 8400/3400 Series does not use the Broadcom AutoGrEEEn® mode.
For more information about EEE, see http://www.cavium.com/Resources/Documents/WhitePapers/Adapters/QLogic_Solutions_Deliver_Energy_Efficient_Ethernet_for_10GBASE.pdf
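From a booted Linux host, you can typically confirm whether EEE was negotiated on a port with ethtool. This is a sketch only; eth0 is an assumed interface name and support for the query depends on the driver:
ethtool --show-eee eth0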

Setting up the BIOS

Procedure
1. To boot from the network with the MBA, make the MBA-enabled adapter the first bootable device in the BIOS. This procedure depends on the system BIOS implementation.
2. See the user manual for the system for instructions.

Setting up MBA in a server environment

Red Hat Linux PXE Server
MS-DOS UNDI/Intel APITEST

RedHat Linux PXE Server

The RedHat Enterprise Linux distribution has PXE Server support. It allows users to remotely perform a complete Linux installation over the network. The distribution comes with the boot images: the boot kernel (vmlinuz) and the initial RAM disk (initrd.img), which are located on RedHat disk #1:
/images/pxeboot/vmlinuz
/images/pxeboot/initrd.img
Refer to the RedHat documentation for instructions on how to install PXE Server on Linux.
The Initrd.img file distributed with RedHat Enterprise Linux, however, does not have a Linux network driver for the QLogic 8400/3400 Series adapters. This version requires a driver disk for drivers that are not part of the standard distribution. You can create a driver disk for the QLogic 8400/3400 Series adapters from the image distributed with the installation CD. Refer to the Linux Readme.txt file for more information.
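If the PXE server uses pxelinux, a boot entry along the following lines is typical. This is only an illustrative sketch; it assumes the two images above have been copied under the TFTP root and that the entry is placed in the pxelinux.cfg/default file:
default linux
label linux
  kernel images/pxeboot/vmlinuz
  append initrd=images/pxeboot/initrd.img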

MS-DOS UNDI/Intel APITEST

Procedure
1. Download the Intel PXE PDK from the Intel website to boot in MS-DOS mode and connect to a network for the MS-DOS environment. This PXE PDK comes with a TFTP/ProxyDHCP/Boot server.
2. Download the PXE PDK from Intel at https://downloadcenter.intel.com/search?keyword=Intel+%C2%AE+Boot+Agent

Windows driver software

Installing the driver software
Removing the device drivers
Installing management applications
Viewing or changing the adapter properties
Setting power management options
NOTE: The QConvergeConsole GUI is supported as the only GUI management tool across all QLogic adapters. The QLogic Control Suite (QCS) GUI is no longer supported for the QLogic 8400/3400 Series Adapters and has been replaced by the QCC GUI management tool. The QCC GUI provides single-pane-of-glass GUI management for all QLogic adapters.
In Windows environments, when you run the QCS CLI and the Management Agents Installer, it uninstalls the QCS GUI (if installed on the system) and any related components from your system. To obtain the new GUI, download the QCC GUI for your adapter from the QLogic Downloads web page: driverdownloads.qlogic.com.

Windows drivers

Table 1: QLogic 8400/3400 Series Windows Drivers
evbd: This system driver manages all PCI device resources (registers, host interface queues) on the QLogic 8400/3400 Series. The driver provides the slow path interface between upper-layer protocol drivers and the controller. The evbd driver works with the bxnd network and bxfcoe/bxois offload storage drivers.
bxnd: This driver acts as the layer-2 low-level network driver for the adapter. This driver has a fast path and slow path to the firmware and is responsible for sending and receiving Ethernet packets on behalf of the host networking stacks.
bxois: This iSCSI HBA driver provides a translation layer between the Windows SCSI stack and the iSCSI firmware and handles all iSCSI-related activities. The bxois driver has both a fast path and slow path to the firmware.
bxfcoe: This FCoE HBA driver provides a translation layer between the Windows SCSI stack and the FCoE firmware and handles all FCoE-related activities. The bxfcoe driver has both a fast path and slow path to the firmware.
Installing the driver software
Prerequisites
Using the Installer
Using Silent Installation
Manually Extracting the Device Drivers
NOTE: These instructions assume that your QLogic 8400/3400 Series adapter was not factory installed. If your controller was installed at the factory, the driver software has been installed for you.
When Windows first starts after a hardware device has been installed (such as an QLogic 8400/3400 Series adapter), or after the existing device driver has been removed, the operating system automatically detects the hardware and prompts you to install the driver software for that device.
You can use either the graphical interactive installation mode (see Using the installer) or the command-line silent mode for unattended installation (see Using silent installation).
NOTE:
Before installing the driver software, verify that the Windows operating system has been upgraded to the latest version with the latest service pack applied.
A network device driver must be physically installed before the QLogic 8400/3400 Series Adapters can be used with your Windows operating system. You can find drivers on the HPE Support Center.
Up to 256 NPIV WWIDs are supported per FCoE offload 8400 port and are configured using the QCC GUI, QCS CLI, or QCC PowerKit.
Up to 255 Live Migration-able virtual Fibre Channel (vFC) Virtual Machine (VM) instances are supported per FCoE offload 8400 port. To enable Windows Hyper-V vFC, follow the steps at https://technet.microsoft.com/en-us/library/dn551169.aspx. Otherwise, use the PowerShell VMFibreChannelHba commands (see the example after this note). You do not need to configure NPIV to use vFCs in a VM. A maximum of 4 vFCs (from one or more FCoE ports) can be used per VM.
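The following PowerShell sketch illustrates the VMFibreChannelHba approach; the VM name VM01 and virtual SAN name FcoeSan01 are placeholders, and it assumes the virtual SAN has already been created on the FCoE offload port:
Add-VMFibreChannelHba -VMName "VM01" -SanName "FcoeSan01"
Get-VMFibreChannelHba -VMName "VM01"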

Using the installer

If supported and if you will use the iSCSI Crash Dump utility, it is important to follow the installation sequence:
Procedure
1. Run the installer.
2. Install the Microsoft iSCSI Software Initiator along with the patch.
Installing the QLogic 8400/3400 series drivers
Procedure
1. When the Found New Hardware Wizard appears, click Cancel.
2. Insert the installation CD into the CD or DVD drive.
3. On the installation CD, open the folder for your operating system, open the DrvInst folder, and then double-click Setup.exe to open the InstallShield Wizard.
4. Click Next to continue.
5. After you review the license agreement, click I accept the terms in the license agreement, and then click Next to continue.
6. Click Install.
7. Click Finish to close the wizard.
8. The installer will determine if a system restart is necessary. Follow the on-screen instructions.
Installing the Microsoft iSCSI Software Initiator for iSCSI Crash Dump
If supported and if you will use the iSCSI Crash Dump utility, it is important to follow the installation sequence:
Procedure
1. Run the installer.
2. Install Microsoft iSCSI Software Initiator along with the patch (MS KB939875).
NOTE: If performing an upgrade of the device drivers from the installer, re-enable iSCSI Crash Dump from the Advanced section of the QCC Configuration tab.
If not included in your OS, install the Microsoft iSCSI Software Initiator (version 2.06 or later). To download the iSCSI Software Initiator from Microsoft, go to: http://www.microsoft.com/downloads/en/details.aspx?familyid=
After running the installer to install the device drivers
Procedure
1. Install the Microsoft iSCSI Software Initiator (version 2.06 or later) if not included in your OS. To determine when to install the Microsoft iSCSI Software Initiator, see the following table.
2. Install Microsoft patch for iSCSI crash dump file generation (Microsoft KB939875). See the following table to determine if you must install the Microsoft patch.
Table 2: Windows Operating Systems and iSCSI Crash Dump
Operating System: MS iSCSI Software Initiator Required / Microsoft Patch (MS KB939875) Required
NDIS
Windows Server 2008: Yes (included in OS) / No
Windows Server 2008 R2: Yes (included in OS) / No
OIS
Windows Server 2008: No / No
Windows Server 2008 R2: No / No

Using silent installation

NOTE:
All commands are case sensitive.
See the silent.txt file in the folder for detailed instructions and information about unattended installs.
To perform a silent install from within the installer source folder, enter the following:
setup /s /v/qn
To perform a silent upgrade from within the installer source folder, enter the following:
setup /s /v/qn
To perform a silent reinstall of the same installer, enter the following:
setup /s /v"/qn REINSTALL=ALL"
NOTE: The REINSTALL switch should only be used if the same installer is already installed on the system. If upgrading an earlier version of the installer, use setup /s /v/qn as listed above.
To perform a silent install to force a downgrade (default is NO), enter the following:
setup /s /v" /qn DOWNGRADE=Y"

Manually extracting the device drivers

Procedure
Enter the following command:
setup /a
Entering the above command runs the setup utility, extracts the drivers, and places them in the designated location.

Removing the device drivers

IMPORTANT: Uninstall the QLogic 8400/3400 Series device drivers from your system only through the InstallShield wizard. Uninstalling the device drivers with Device Manager or any other method might not provide a clean uninstall and might cause the system to become unstable.
Windows Server 2008 and Windows Server 2008 R2 provide the Device Driver Rollback feature to replace a device driver with one that was previously installed. However, the complex software architecture of the QLogic 8400/3400 Series device might present problems if the rollback feature is used on one of the individual components. Therefore, QLogic recommends that changes to driver versions be made only through the use of a driver installer.
Procedure
1. Open Control Panel.
2. Double-click Add or Remove Programs.

Installing QLogic Management Applications

Procedure
1. To open the Management Programs installation wizard, run the setup file (setup.exe).
2. Accept the terms of the license agreement, and then click Next.
3. In the Custom Setup dialog box, review the components to be installed, make any necessary changes, and then click Next.
4. In the Ready to Install the Program dialog box, click Install to proceed with the installation.

Viewing or changing the adapter properties

Procedure
1. Open the QCC GUI.
2. Click the Advanced section of the Configurations tab.

Setting power management options

Procedure
If the device is busy doing something (such as servicing a call), the operating system does not shut down the device. The operating system attempts to shut down every possible device only when the computer attempts to go into hibernation. To have the controller stay on at all times, do not select the Allow the computer to turn off the device to save power check box.
Figure 2: Power Management
NOTE:
The Power Management tab is available only for servers that support power management.
If you select Only allow management stations to bring the computer out of standby, the computer can be brought out of standby only by Magic Packet.
CAUTION: Do not select Allow the computer to turn off the device to save power for any adapter that is a member of a team.
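On Windows Server 2012 and later, you can also review an adapter's power management settings from PowerShell. This is only a sketch; the adapter name is a placeholder:
Get-NetAdapterPowerManagement -Name "Ethernet 2"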

Windows Server 2016

This chapter provides VxLAN information for Windows Server 2016.

Configuring VXLAN

This section provides procedures for enabling the virtual extensible LAN (VXLAN) offload and deploying a software-defined network.

Enabling VxLAN Offload on the Adapter

Procedure
1. Open the QLogic adapter properties.
2. Click the Advanced tab.
3. On the Advanced page under Property, select VxLAN Encapsulated Task Offload.
4. In the Value box, select Enabled.
5. Click OK.
The following figure shows the QLogic adapter properties on the Advanced page.
Figure 3: Enabling VxLAN Encapsulated Task Offload

Deploying a Software Defined Network

To take advantage of VXLAN Encapsulation Task Offload on virtual machines, you must deploy a software defined network (SDN) that uses a Microsoft Network Controller.
See Microsoft TechNet on Software Defined Networking for more information.

Linux driver software

Procedure

Introduction

Limitations
Packaging
Installing Linux driver software
Unloading/Removing the Linux driver
Patching PCI files (optional)
Network installations
Setting values for optional properties
Driver defaults
Driver messages
Teaming with channel bonding
Statistics
Introduction
This section contains information about the Linux drivers for the QLogic 8400/3400 Series network adapters. The following table lists the QLogic 8400/3400 Series Linux drivers.
Table 4: Configuration options
Linux driver Description
bnx2x Linux driver for the QLogic 8400/3400 Series 10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack. This driver also receives and processes device interrupts, both on behalf of itself (for L2 networking) and on behalf of the bnx2fc (FCoE) and cnic drivers.
cnic The cnic driver provides the interface between QLogic's upper layer protocol (storage) drivers and QLogic's 8400/3400 Series 10Gb network adapters. The CNIC module works with the bnx2 and bnx2x network drivers in the downstream and the bnx2fc (FCoE) and bnx2i (iSCSI) drivers in the upstream.
bnx2i Linux iSCSI HBA driver to enable iSCSI offload on the QLogic 8400/3400 Series 10Gb network adapters.
bnx2fc Linux FCoE kernel mode driver used to provide a translation layer between the Linux SCSI stack and the QLogic FCoE firmware/hardware. In addition, the driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of open-fcoe's libfc/libfcoe for FIP/device discovery.

Limitations

Procedure
bnx2x driver
bnx2i driver
bnx2fc driver

bnx2x driver

The current version of the driver has been tested on 2.6.x kernels starting from 2.6.9. The driver might not compile on kernels older than 2.6.9. Testing is concentrated on i386 and x86_64 architectures. Only limited testing has been done on some other architectures. Minor changes to some source files and Makefile may be needed on some kernels.

bnx2i driver

The current version of the driver has been tested on 2.6.x kernels, starting from 2.6.18 kernel. The driver may not compile on older kernels. Testing is concentrated on i386 and x86_64 architectures, RHEL 6, SLES 11, and SLES 12.

bnx2fc driver

The current version of the driver has been tested on 2.6.x kernels, starting from the 2.6.32 kernel, which is included in the RHEL 6.1 distribution. This driver may not compile on older kernels. Testing was limited to i386 and x86_64 architectures on RHEL 6, RHEL 7.0, SLES 11, and SLES 12 and later distributions.

Packaging

The Linux drivers are released in the following packaging formats:
DKMS Packages
KMP Packages
SLES
– netxtreme2-kmp-[kernel]-version.i586.rpm
– netxtreme2-kmp-[kernel]-version.x86_64.rpm
Red Hat
– kmod-kmp-netxtreme2-[kernel]-version.i686.rpm
– kmod-kmp-netxtreme2-[kernel]-version.x86_64.rpm
The QCS CLI management utility is also distributed as an RPM package (QCS-{version}.{arch}.rpm). For information about installing the Linux QCS CLI, see the QLogic Control Suite CLI User’s Guide.
Source Packages:
Identical source files to build the driver are included in both RPM and TAR source packages. The supplemental .tar file contains additional utilities, such as patches and driver diskette images for network installation, including the following:
netxtreme2-<version>.src.rpm: RPM package with QLogic 8400/3400 Series bnx2/bnx2x/cnic/bnx2fc/bnx2i/libfc/libfcoe driver source.
netxtreme2-<version>.tar.gz: TAR compressed package with 8400/3400 Series bnx2/bnx2x/
cnic/bnx2fc/bnx2i/libfc/libfcoe driver source.
iscsiuio-<version>.tar.gz: iSCSI user space management tool binary.
open-fcoe-*.qlgc.<subver>.<arch>.rpm: open-fcoe userspace management tool binary RPM for SLES 11 SP2 and legacy versions.
fcoe-utils-*.qlgc.<subver>.<arch>.rpm: open-fcoe userspace management tool binary
RPM for RHEL 6.4 and legacy versions.
The Linux driver has a dependency on the open-fcoe userspace management tools as the front end to control FCoE interfaces. The package name of the open-fcoe tool is fcoe-utils for RHEL 6.4 and open-fcoe for SLES 11 SP2 and legacy versions.

Installing Linux Driver Software

Installing the Source RPM Package
Installing the KMP Package
Building the Driver from the Source TAR File
NOTE: If a bnx2x, bnx2i, or bnx2fc driver is loaded and the Linux kernel is updated, you must recompile the driver module if the driver module was installed using the source RPM or the TAR package.

Installing the source RPM package

The following are guidelines for installing the driver source RPM Package.
Prerequisites
Linux kernel source
C compiler
Procedure
1. Install the source RPM package:
rpm -ivh netxtreme2-<version>.src.rpm
2. Change the directory to the RPM path and build the binary RPM for your kernel:
For RHEL:
cd ~/rpmbuild
rpmbuild -bb SPECS/netxtreme2.spec
For SLES:
cd /usr/src/packages
rpmbuild -bb SPECS/netxtreme2.spec
3. Install the newly compiled RPM:
rpm -ivh RPMS/<arch>/netxtreme2-<version>.<arch>.rpm
Note that the --force option may be needed on some Linux distributions if conflicts are reported.
4. For FCoE offload, install the open-fcoe utility.
For RHEL 6.4 and legacy versions, install either of the following:
yum install fcoe-utils-<version>.rhel.64.qlgc.<subver>.<arch>.rpm
or
rpm -ivh fcoe-utils-<version>.rhel.64.qlgc.<subver>.<arch>.rpm
For SLES11 SP2:
rpm -ivh open-fcoe-<version>.sles.sp2.qlgc.<subver>.<arch>.rpm
For RHEL 6.4 and SLES11 SP2 and legacy versions, the version of fcoe-utils/open-fcoe included in your distribution is sufficient and no out of box upgrades are provided.
Where available, installation with yum automatically resolves dependencies. Otherwise, you can locate required dependencies on your O/S installation media.
5. For SLES, turn on the fcoe and lldpad services for FCoE offload, and just lldpad for iSCSI-offload-TLV.
For SLES11 SP1:
chkconfig lldpad on
chkconfig fcoe on
For SLES11 SP2:
chkconfig boot.lldpad on
chkconfig boot.fcoe on
6. Inbox drivers are included with all of the supported operating systems. The simplest way to ensure that the newly installed drivers are loaded is to reboot.
7. For FCoE offload, after rebooting, create configuration files for all FCoE ethX interfaces:
cd /etc/fcoe
cp cfg-ethx cfg-<ethX FCoE interface name>
NOTE: Your distribution might have a different naming scheme for Ethernet devices (pXpX or emX instead of ethX).
8. For FCoE offload or iSCSI-offload-TLV, modify /etc/fcoe/cfg-<interface> by changing DCB_REQUIRED=yes to DCB_REQUIRED=no.
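For example, a one-line edit such as the following can make this change (a sketch only; substitute your interface name in the file name):
sed -i 's/DCB_REQUIRED=yes/DCB_REQUIRED=no/' /etc/fcoe/cfg-<ethX FCoE interface name>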
9. Turn on all ethX interfaces.
ifconfig <ethX> up
10. For SLES, use YaST to configure your Ethernet interfaces to automatically start at boot by setting a static
IP address or enabling DHCP on the interface.
11. For FCoE offload and iSCSI-offload-TLV, disable lldpad on the QLogic converged network adapter interfaces. This is required because QLogic uses an offloaded DCBX client.
lldptool set-lldp -i <ethX> adminStatus=disabled
12. For FCoE offload and iSCSI-offload-TLV, ensure that /var/lib/lldpad/lldpad.conf is created and that each <ethX> block does not specify "adminStatus", or, if it is specified, that it is set to 0 ("adminStatus=0") as shown below.
lldp :
{
    eth5 :
    {
        tlvid00000001 :
        {
            info = "04BC305B017B73";
        };
        tlvid00000002 :
        {
            info = "03BC305B017B73";
        };
    };
};
13. For FCoE offload and iSCSI-offload-TLV, restart lldpad service to apply new settings.
For SLES11 SP1, RHEL 6.4 and legacy versions:
service lldpad restart
For SLES11 SP2:
rclldpad restart
For SLES12:
systemctl restart lldpad
14. For FCoE offload, restart the fcoe service to apply the new settings. For SLES11 SP1, RHEL 6.4, and legacy versions:
service fcoe restart
For SLES11 SP2:
rcfcoe restart
For SLES12:
systemctl restart fcoe

Installing the KMP package

NOTE: The examples in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.
Procedure
1. To install the KMP package, issue the following commands:
rpm -ivh <file>
rmmod bnx2x
2. To load the driver, issue the following command:
modprobe bnx2x
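To confirm that the driver loaded, standard Linux utilities such as the following can be used as a quick check:
lsmod | grep bnx2x
modinfo bnx2x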

Building the driver from the source TAR file

NOTE: The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2i and bnx2fc
drivers.
Procedure
1. Create a directory and extract the TAR files to the directory:
tar xvzf netxtreme2-<version>.tar.gz
2. Build the driver bnx2x.ko (or bnx2x.o) as a loadable module for the running kernel:
cd netxtreme2-<version>
make
3. Test the driver by loading it (first unload the existing driver, if necessary):
rmmod bnx2x (or bnx2fc, or bnx2i)
insmod bnx2x/src/bnx2x.ko (or bnx2fc/src/bnx2fc.ko, or bnx2i/src/bnx2i.ko)
4. For iSCSI offload and FCoE offload, load the cnic driver (if applicable):
insmod cnic.ko
5. Install the driver and man page:
make install
NOTE: See the preceding RPM instructions for the location of the installed driver.
6. Install the user daemon (qlgc_iscsiuio).
See "Load and Run Necessary iSCSI Software Components" for instructions about loading the software components that are required to use the iSCSI offload feature.
To configure the network protocol and address after building the driver, see the manuals supplied with your operating system.

Loading and running necessary iSCSI components

The QLogic iSCSI Offload software suite comprises three kernel modules and a user daemon. Required software components can be loaded either manually or through system services.
Procedure
1. Unload the existing driver, if necessary:
Manual:
rmmod bnx2i
or
modprobe -r bnx2i
2. Load the iSCSI driver:
Manual:
insmod bnx2i.ko
-or-
modprobe bnx2i

Unloading/Removing the Linux driver

Procedure
Unloading/Removing the Driver from an RPM Installation
Removing the Driver from a TAR Installation
Uninstalling the QCC GUI

Unloading/Removing the driver from an RPM installation

NOTE:
The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i
drivers.
On 2.6 kernels, it is not necessary to bring down the eth# interfaces before unloading the driver module.
If the cnic driver is loaded, unload the cnic driver before unloading the bnx2x driver.
Prior to unloading the bnx2i driver, disconnect all active iSCSI sessions to targets.
Procedure
1. Use ifconfig to bring down all of the eth# interfaces that the driver opened.
2. Enter the rmmod bnx2x command.
NOTE: The rmmod bnx2x command also removes the CNIC module.
3. Enter the rpm -e netxtreme2 command to remove the driver if it was installed using RPM.
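If you are not sure of the exact package name installed on your system, a standard RPM query such as the following can be used to find it first:
rpm -qa | grep netxtreme2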

Removing the driver from a TAR installation

NOTE: The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i
drivers.
Procedure
1. If the driver was installed using make install from the tar file, the bnx2x.ko driver file must be manually
deleted from the operating system.
2. See "Installing the Source RPM package" for the location of the installed driver.

Uninstalling the QCC GUI

See the QConvergeConsole GUI Installation Guide (part number SN0051105-00) for information about removing the QCC GUI.

Patching PCI files (optional)

NOTE: The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i
drivers.
For hardware detection utilities, such as Red Hat kudzu, to properly identify bnx2x supported devices, a number of files containing PCI vendor and device information may need to be updated.
Procedure
1. Apply the updates by running the scripts provided in the supplemental tar file. For example, on Red Hat
Enterprise Linux, apply the updates by entering the following:
./patch_pcitbl.sh /usr/share/hwdata/pcitable pci.updates /usr/share/hwdata/pcitable.new bnx2
./patch_pciids.sh /usr/share/hwdata/pci.ids pci.updates /usr/share/hwdata/pci.ids.new
2. Back up the old files and rename the new ones for use.
cp /usr/share/hwdata/pci.ids /usr/share/hwdata/old.pci.ids
cp /usr/share/hwdata/pci.ids.new /usr/share/hwdata/pci.ids
cp /usr/share/hwdata/pcitable /usr/share/hwdata/old.pcitable
cp /usr/share/hwdata/pcitable.new /usr/share/hwdata/pcitable

Network Installations

For network installations through NFS, FTP, or HTTP (using a network boot disk or PXE), you might need a driver disk that contains the bnx2x driver. The driver disk includes images for the most recent RedHat and SUSE versions. Boot drivers for other Linux versions can be compiled by modifying the makefile and the make environment. Additional information is available from the RedHat website.

Setting values for optional properties

Optional properties exist for these drivers:
bnx2x driver
bnx2i driver
bnx2fc driver

bnx2x driver

Parameters
disable_tpa
The disable_tpa parameter can be supplied as a command-line argument to disable the Transparent Packet Aggregation (TPA) feature. By default, the driver aggregates TCP packets. Use disable_tpa to disable the advanced TPA feature.
Set the disable_tpa parameter to 1 to disable the TPA feature on all QLogic 8400/3400 Series network adapters in the system. The parameter can also be set in modprobe.conf. See the man page for more information.
insmod bnx2x.ko disable_tpa=1
-or-
modprobe bnx2x disable_tpa=1
int_mode
The int_mode parameter is used to force using an interrupt mode.
Set the int_mode parameter to 1 to force using the legacy INTx mode on all QLogic 8400/3400 Series adapters in the system.
insmod bnx2x.ko int_mode=1
-or-
modprobe bnx2x int_mode=1
Set the int_mode parameter to 2 to force using MSI mode on all QLogic 8400/3400 Series adapters in the system.
insmod bnx2x.ko int_mode=2
-or-
modprobe bnx2x int_mode=2
Set the int_mode parameter to 3 to force using MSI-X mode on all QLogic 8400/3400 Series adapters in the system.
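For example, following the same pattern as the other interrupt modes:
insmod bnx2x.ko int_mode=3
-or-
modprobe bnx2x int_mode=3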
dropless_fc
The dropless_fc parameter can be used to enable a complementary flow control mechanism on QLogic 8400/3400 Series adapters. The default flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a certain level of occupancy. This is a performance-targeted flow control mechanism. On QLogic 8400/3400 Series adapters, you can enable another flow control mechanism to send pause frames when one of the host buffers (when in RSS mode) is exhausted.
This is a zero packet drop targeted flow control mechanism.
Set the dropless_fc parameter to 1 to enable the dropless flow control mechanism feature on all QLogic 8400/3400 Series adapters in the system.
insmod bnx2x.ko dropless_fc=1
-or-
modprobe bnx2x dropless_fc=1
disable_iscsi_ooo
The disable_iscsi_ooo parameter disables the allocation of the iSCSI TCP Out-of-Order (OOO) reception resources, specifically for VMware for low-memory systems.
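Assuming this switch follows the same 0/1 convention as the other boolean module parameters (an assumption; check the driver's modinfo output to confirm), it would be set as follows:
insmod bnx2x.ko disable_iscsi_ooo=1
-or-
modprobe bnx2x disable_iscsi_ooo=1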
multi_mode
The optional parameter multi_mode is for use on systems that support multiqueue networking. Multiqueue networking on the receive side depends only on the MSI-X capability of the system; multiqueue networking on the transmit side is supported only on kernels starting from 2.6.27. By default, the multi_mode parameter is set to 1. Thus, on kernels up to 2.6.26, the driver allocates one queue per CPU on the receive side and only one queue on the transmit side. On kernels starting from 2.6.27, the driver allocates one queue per CPU on both the receive and transmit sides. In any case, the number of allocated queues is limited by the number of queues supported by the hardware.
The multi_mode optional parameter can also be used to enable SAFC (Service Aware Flow Control) by differentiating the traffic into up to 3 CoS (Class of Service) in the hardware according to the VLAN PRI value or according to the IP DSCP value (least significant 3 bits).
num_queues
The optional parameter num_queues may be used to set the number of queues when multi_mode is set to 1 and interrupt mode is MSI-X. If interrupt mode is different than MSI-X (see int_mode), the number of queues will be set to 1, discarding the value of this parameter.
pri_map
The optional parameter pri_map is used to map the VLAN PRI value or the IP DSCP value to a different or the same CoS in the hardware. This 32-bit parameter is evaluated by the driver as 8 values of 4 bits each. Each nibble sets the desired hardware queue number for that priority. For example, set pri_map to 0x11110000 to map priority 0 to 3 to CoS 0 and map priority 4 to 7 to CoS 1.
qs_per_cos
The optional parameter qs_per_cos is used to specify how many queues will share the same CoS. This parameter is evaluated by the driver as up to 3 values of 8 bits each. Each byte sets the desired number of queues for that CoS. The total number of queues is limited by the hardware limit. For example, set qs_per_cos to 0x10101 to create a total of three queues, one per CoS. In another example, set qs_per_cos to 0x404 to create a total of 8 queues, divided into 2 CoS, 4 queues in each CoS.
cos_min_rate
The optional parameter cos_min_rate is used to determine the weight of each CoS for round-robin scheduling in transmission. This parameter is evaluated by the driver as up to 3 values of 8 bits each. Each byte sets the desired weight for that CoS. The weight ranges from 0 to 100. For example, set cos_min_rate to 0x101 for a fair transmission rate between 2 CoS. In another example, set cos_min_rate to 0x30201 to give the higher CoS the higher rate of transmission. To avoid using the fairness algorithm, omit setting cos_min_rate or set it to 0.
Set the multi_mode parameter to 2 as shown in the following code to differentiate the traffic according to the VLAN PRI value.
insmod bnx2x.ko multi_mode=2 pri_map=0x11110000 qs_per_cos=0x404
-or-
modprobe bnx2x multi_mode=2 pri_map=0x11110000 qs_per_cos=0x404
Set the multi_mode parameter to 4, as shown in the following code, to differentiate the traffic according to the IP DSCP value.
insmod bnx2x.ko multi_mode=4 pri_map=0x22221100 qs_per_cos=0x10101 cos_min_rate=0x30201
-or-
modprobe bnx2x multi_mode=4 pri_map=0x22221100 qs_per_cos=0x10101 cos_min_rate=0x30201

bnx2i Driver

Description
Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be supplied as command-line arguments to the insmod or modprobe command for bnx2i.
Parameters
error_mask1 and error_mask2
"Config FW iSCSI Error Mask #", use to configure certain iSCSI protocol violation to be treated either as a warning or a fatal error. All fatal iSCSI protocol violations will result in session recovery (ERL 0). These are bitmasks.
Defaults: All violations will be treated as errors.
CAUTION: Do not use error_mask if you are not sure about the consequences. These values are to be discussed with the development team on a case-by-case basis. This is just a mechanism to work around iSCSI implementation issues on the target side. Without proper knowledge of iSCSI protocol details, users are advised not to experiment with these parameters.
en_tcp_dack
"Enable TCP Delayed ACK", enables/disables TCP delayed ACK feature on offloaded iSCSI connections. Defaults: TCP delayed ACK is ENABLED. For example:
insmod bnx2i.ko en_tcp_dack=0
-or-
modprobe bnx2i en_tcp_dack=0
time_stamps
"Enable TCP TimeStamps", enables/disables TCP time stamp feature on offloaded iSCSI connections.
Defaults: TCP time stamp option is DISABLED.
For example:
insmod bnx2i.ko time_stamps=1
-or-
modprobe bnx2i time_stamps=1
sq_size
"Configure SQ size", used to choose send queue size for offloaded connections and SQ size determines the maximum SCSI commands that can be queued. SQ size also has a bearing on the number of connections that can be offloaded; as QP size increases, the number of connections supported will decrease. With the default values, the adapter can offload 28 connections.
Defaults: 128
Range: 32 to 128
Note that validation is limited to a power of 2; for example, 32, 64, 128.
rq_size
"Configure RQ size", used to choose the size of asynchronous buffer queue size per offloaded connections. RQ size is not required greater than 16 as it is used to place iSCSI ASYNC/NOP/REJECT messages and SCSI sense data.
Defaults: 16
Range: 16 to 32
Note that validation is limited to a power of 2; for example, 16, 32.
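For example, to reduce the send queue size while keeping the default receive queue size (the values below are illustrative only; stay within the documented ranges and powers of 2):
insmod bnx2i.ko sq_size=64 rq_size=16
-or-
modprobe bnx2i sq_size=64 rq_size=16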
event_coal_div
"Event Coalescing Divide Factor", performance tuning parameter used to moderate the rate of interrupt generation by the iSCSI firmware.
Defaults: 2
Valid values: 1, 2, 4, 8
last_active_tcp_port
"Last active TCP port used", status parameter used to indicate the last TCP port number used in the iSCSI offload connection.
Defaults: N/A Valid values: N/A
Note: This is a read-only parameter.
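Because this is a read-only status parameter, it is typically inspected rather than set. Assuming the driver exposes it through the standard module parameter path in sysfs (an assumption; the exact path may vary), it can be read as follows:
cat /sys/module/bnx2i/parameters/last_active_tcp_port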
ooo_enable
"Enable TCP out-of-order feature", enables/disables TCP out-of-order rx handling feature on offloaded iSCSI connections.
Defaults: TCP out-of-order feature is ENABLED.
For example:
insmod bnx2i.ko ooo_enable=1
-or-
modprobe bnx2i ooo_enable=1

bnx2fc Driver

Description
The optional parameter debug_logging can be supplied as a command-line argument to the insmod or modprobe command for bnx2fc.
Parameters
debug_logging
"Bit mask to enable debug logging", enables/disables driver debug logging. Defaults: None.
For example:
insmod bnx2fc.ko debug_logging=0xff
-or-
modprobe bnx2fc debug_logging=0xff
IO level debugging = 0x1
Session level debugging = 0x2
HBA level debugging = 0x4
ELS debugging = 0x8
Misc debugging = 0x10
Max debugging = 0xff
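Because debug_logging is a bitmask, individual levels can be combined by ORing their hex values; for example, 0x1 + 0x2 enables IO-level and session-level debugging together:
modprobe bnx2fc debug_logging=0x3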

Driver defaults

Procedure
bnx2 Driver
bnx2x Driver

bnx2 driver

Speed: Autonegotiation with all speeds advertised
Flow Control: Autonegotiation with RX and TX advertised
MTU: 1500 (range is 46–9000)
RX Ring Size: 255 (range is 0–4080)
RX Jumbo Ring Size: 0 (range is 0–16320), adjusted by the driver based on MTU and RX Ring Size
TX Ring Size: 255 (range is (MAX_SKB_FRAGS+1)–255)
Coalesce RX Microseconds: 18 (range is 0–1023)
Coalesce RX Microseconds IRQ: 18 (range is 0–1023)
Coalesce RX Frames: 6 (range is 0–255)
Coalesce RX Frames IRQ: 6 (range is 0–255)
Coalesce TX Microseconds: 80 (range is 0–1023)
Coalesce TX Microseconds IRQ: 80 (range is 0–1023)
Coalesce TX Frames: 20 (range is 0–255)
Coalesce TX Frames IRQ: 20 (range is 0–255)
Coalesce Statistics Microseconds: 999936 (approximately 1 second) (range is 0–16776960 in increments of 256)
MSI: Enabled (if supported by the 2.6 kernel and the interrupt test passes)
TSO: Enabled (on 2.6 kernels)
NOTE: MAX_SKB_FRAGS varies on different kernels and different architectures. On a 2.6 kernel for x86, MAX_SKB_FRAGS is 18.

bnx2x driver

Speed: Autonegotiation with all speeds advertised
Flow Control: Autonegotiation with RX and TX advertised
MTU: 1500 (range is 46–9000)
RX Ring Size: 4078 (range is 0–4078)
TX Ring Size: 4078 (range is (MAX_SKB_FRAGS+4)–4078)
Coalesce RX Microseconds: 25 (range is 0–3000)
Coalesce TX Microseconds: 50 (range is 0–12288)
Coalesce Statistics Microseconds: 999936 (approximately 1 second) (range is 0–16776960 in increments of 256)
MSI-X: Enabled (if supported by the 2.6 kernel and the interrupt test passes)
TSO: Enabled
NOTE: MAX_SKB_FRAGS varies on different kernels and different architectures. On a 2.6 kernel for x86, MAX_SKB_FRAGS is 18.

Driver messages

Description
Use dmesg -n <level> to control the level at which messages will appear on the console. Most systems are set to level 6 by default. To see all messages, set the level higher. The following are the most common sample messages that might be logged in the /var/log/messages file:
bnx2x Driver
bnx2i Driver
bnx2fc Driver

bnx2x Driver

Driver Messages
Driver Sign On
QLogic 8400/3400 Series 10 Gigabit Ethernet Driver bnx2x v1.6.3c (July 23, 20xx)
CNIC Driver Sign On (bnx2 only)
QLogic 8400/3400 Series cnic v1.1.19 (Sep 25, 20xx)
NIC Detected
eth#: QLogic 8400/3400 Series xGb (B1) PCI-E x8 found at mem f6000000, IRQ 16, node addr 0010180476ae cnic: Added CNIC device: eth0
Link Up and Speed Indication
bnx2x: eth# NIC Link is Up, 10000 Mbps full duplex
Link Down Indication
bnx2x: eth# NIC Link is Down
MSI-X Enabled Successfully
bnx2x: eth0: using MSI-X

bnx2i Driver

Driver Messages
BNX2I Driver Signon
QLogic 8400/3400 Series iSCSI Driver bnx2i v2.1.1D (May 12, 20xx)
Network Port to iSCSI Transport Name Binding
bnx2i: netif=eth2, iscsi=bcm570x-050000
bnx2i: netif=eth1, iscsi=bcm570x-030c00
Driver Completes handshake with iSCSI Offload-enabled CNIC Device
bnx2i [05:00.00]: ISCSI_INIT passed
NOTE: This message is displayed only when the user attempts to make an iSCSI connection.
Driver Detects iSCSI Offload Is Not Enabled on the CNIC Device
bnx2i: iSCSI not supported, dev=eth3
bnx2i: LOM is not enabled to offload iSCSI connections, dev=eth0
bnx2i: dev eth0 does not support iSCSI
Exceeds Maximum Allowed iSCSI Connection Offload Limit
bnx2i: alloc_ep: unable to allocate iscsi cid
bnx2i: unable to allocate iSCSI context resources
Network Route to Target Node and Transport Name Binding Are Two Different Devices
bnx2i: conn bind, ep=0x... ($ROUTE_HBA) does not belong to hba $USER_CHOSEN_HBA
where:
ROUTE_HBA is the net device on which connection was offloaded based on route information.
USER_CHOSEN_HBA is the adapter to which target node is bound (using iSCSI transport name).
Target Cannot Be Reached on Any of the CNIC Devices
bnx2i: check route, cannot connect using cnic
Network Route Is Assigned to Network Interface, Which Is Down
bnx2i: check route, hba not found
SCSI-ML Initiated Host Reset (Session Recovery)
bnx2i: attempting to reset host, #3
CNIC Detects iSCSI Protocol Violation - Fatal Errors
bnx2i: iscsi_error - wrong StatSN rcvd
bnx2i: iscsi_error - hdr digest err
bnx2i: iscsi_error - data digest err
bnx2i: iscsi_error - wrong opcode rcvd
bnx2i: iscsi_error - AHS len > 0 rcvd
bnx2i: iscsi_error - invalid ITT rcvd
bnx2i: iscsi_error - wrong StatSN rcvd
bnx2i: iscsi_error - wrong DataSN rcvd
bnx2i: iscsi_error - pend R2T violation
bnx2i: iscsi_error - ERL0, UO
bnx2i: iscsi_error - ERL0, U1
bnx2i: iscsi_error - ERL0, U2
bnx2i: iscsi_error - ERL0, U3
bnx2i: iscsi_error - ERL0, U4
bnx2i: iscsi_error - ERL0, U5
bnx2i: iscsi_error - ERL0, U
bnx2i: iscsi_error - invalid resi len
bnx2i: iscsi_error - MRDSL violation
bnx2i: iscsi_error - F-bit not set
bnx2i: iscsi_error - invalid TTT
bnx2i: iscsi_error - invalid DataSN
bnx2i: iscsi_error - burst len violation
bnx2i: iscsi_error - buf offset violation
bnx2i: iscsi_error - invalid LUN field
bnx2i: iscsi_error - invalid R2TSN field
bnx2i: iscsi_error - invalid cmd len1
bnx2i: iscsi_error - invalid cmd len2
bnx2i: iscsi_error - pend r2t exceeds MaxOutstandingR2T value
bnx2i: iscsi_error - TTT is rsvd
bnx2i: iscsi_error - MBL violation
bnx2i: iscsi_error - data seg len != 0
bnx2i: iscsi_error - reject pdu len error
bnx2i: iscsi_error - async pdu len error
bnx2i: iscsi_error - nopin pdu len error
bnx2i: iscsi_error - pend r2t in cleanup
bnx2i: iscsi_error - IP fragments rcvd
bnx2i: iscsi_error - IP options error
bnx2i: iscsi_error - urgent flag error
CNIC Detects iSCSI Protocol Violation - Non-FATAL, Warning
bnx2i: iscsi_warning - invalid TTT
bnx2i: iscsi_warning - invalid DataSN
bnx2i: iscsi_warning - invalid LUN field
NOTE: The driver must be configured to treat certain violations as warnings rather than as critical errors.
Driver Puts a Session Through Recovery
conn_err - hostno 3 conn 03fbcd00, iscsi_cid 2 cid a1800
Reject iSCSI PDU Received from the Target
bnx2i - printing rejected PDU contents
[0]: 1 ffffffa1 0 0 0 0 20 0
[8]: 0 7 0 0 0 0 0 0
[10]: 0 0 40 24 0 0 ffffff80 0
[18]: 0 0 3 ffffff88 0 0 3 4b
[20]: 2a 0 0 2 ffffffc8 14 0 0
[28]: 40 0 0 0 0 0 0 0
Open-iSCSI Daemon Handing Over Session to Driver
bnx2i: conn update - MBL 0x800 FBL 0x800 MRDSL_I 0x800 MRDSL_T 0x2000

bnx2fc Driver

Driver Messages
BNX2FC Driver Signon
QLogic NetXtreme II FCoE Driver bnx2fc v0.8.7 (Mar 25, 2011)
Driver Completes Handshake with FCoE Offload Enabled CNIC Device
bnx2fc [04:00.00]: FCOE_INIT passed
Driver Fails Handshake with FCoE Offload Enabled CNIC Device
bnx2fc: init_failure due to invalid opcode
bnx2fc: init_failure due to context allocation failure
bnx2fc: init_failure due to NIC error
bnx2fc: init_failure due to completion status error
bnx2fc: init_failure due to HSI mismatch
No Valid License to Start FCoE
bnx2fc: FCoE function not enabled <ethX>
bnx2fc: FCoE not supported on <ethX>
Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
bnx2fc: Failed to allocate conn id for port_id <remote port id>
bnx2fc: exceeded max sessions..logoff this tgt
bnx2fc: Failed to allocate resources
Session Offload Failures
bnx2fc: bnx2fc_offload_session - Offload error
<rport> not FCP type. not offloading
<rport> not FCP_TARGET. not offloading
Session Upload Failures
bnx2fc: ERROR!! destroy timed out
bnx2fc: Disable request timed out. Destroy not set to FW
bnx2fc: Disable failed with completion status <status>
bnx2fc: Destroy failed with completion status <status>
Unable to Issue ABTS
bnx2fc: initiate_abts: tgt not offloaded
bnx2fc: initiate_abts: rport not ready
bnx2fc: initiate_abts: link is not ready
bnx2fc: abort failed, xid = <xid>
Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
bnx2fc: Relogin to the target
Unable to Issue IO Request Due to Session Not Ready
bnx2fc: Unable to post io_req
Drop Incorrect L2 Receive Frames
bnx2fc: FPMA mismatch... drop packet
bnx2fc: dropping frame with CRC error
HBA/lport Allocation Failures
bnx2fc: Unable to allocate hba
bnx2fc: Unable to allocate scsi host
NPIV Port Creation
bnx2fc: Setting vport names, <WWNN>, <WWPN>

Teaming with channel bonding

With the Linux drivers, you can team adapters together using the bonding kernel module and a channel bonding interface. For more information, see the Channel Bonding information in your operating system documentation.
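As a minimal illustration (adapted from the standard Linux bonding documentation; the interface names, bonding mode, and IP address are placeholders to adjust for your environment), two adapter ports could be bonded as follows:
modprobe bonding mode=active-backup miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1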

Statistics

Detailed statistics and configuration information can be viewed using the ethtool utility. See the ethtool man page for more information.
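For example, the following standard ethtool commands display the driver and firmware versions, the full statistics counters, and the current link settings for a given interface:
ethtool -i <ethX>
ethtool -S <ethX>
ethtool <ethX>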

VMware driver software

Procedure
Packaging
Downloading, Installing, and Updating drivers
Networking support
FCoE support

VMware Drivers

VMware Driver Description
bnx2x VMware driver for the QLogic 8400/3400 Series 10Gb network adapters. This driver manages all PCI device resources (registers, host interface queues) and also acts as the layer 2 VMware low-level network driver for the QLogic 8400/3400 Series 10Gb adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the VMware host networking stack. The bnx2x driver also receives and processes device interrupts, both on behalf of itself (for L2 networking) and on behalf of the bnx2fc (FCoE protocol) and C-NIC drivers.
cnic This driver provides the interface between QLogic's upper layer protocol (storage) drivers and QLogic's 8400/3400 Series 10Gb network adapters. The converged network interface controller (C-NIC) module works with the bnx2 and bnx2x network drivers in the downstream, and the bnx2fc (FCoE) and bnx2i (iSCSI) drivers in the upstream.
bnx2i This VMware iSCSI HBA driver enables iSCSI offload on the QLogic 8400/3400 Series
10Gb network adapters.
bnx2fc This QLogic VMware FCoE driver is a kernel mode driver that provides a translation layer
between the VMware SCSI stack and the QLogic FCoE firmware and hardware. In addition, the bnx2fc driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of the Open-FCoE libfc/libfcoe for FIP and device discovery.

Downloading, installing, and updating drivers

Prerequisites
Go to the VMware website.
Procedure
1. Enter the adapter name in quotes (for example, 630M) into the Keyword field, and then click Update and View Results.
Figure 4: Selecting an Adapter
The following figure shows the available 630M driver versions.
Figure 5: 630M Driver Versions
2. Mouse over the 630M link in the results section to show the PCI identifiers.
Figure 6: PCI Identifiers
3. Click the model link to show a listing of all the driver packages as shown in the following figure. Click the
desired ESXi version, and then click the link to go to the VMware driver download webpage.
Figure 7: List of Driver Packages
4. Log in to the VMware driver download page, and then click Download to download the desired driver
package as shown in the following figure.
Figure 8: Download Driver Package
5. This package is double compressed. Unzip the package once before copying the offline bundle zip file to
the ESXi host.
6. Issue the following command to install the driver package:
esxcli software vib install -d <path>/<offline bundle name.zip> --maintenance-mode
-or-
esxcli software vib install --depot=/<path>/<offline bundle name.zip> --maintenance-mode
NOTE:
If you do not unzip the outer zip file, the installation reports that it cannot find the drivers.
Use double dashes (--) before the depot and maintenance-mode parameters.
Do not use the -v method of installing individual driver vSphere installation bundles (VIBs).
A reboot is required after all driver installations.

Networking support

This section describes the bnx2x VMware ESXi driver for the QLogic 8400/3400 Series PCIe 10 GbE network adapters.

Driver parameters

Description
Several optional parameters can be supplied as a command-line argument to the vmkload_mod command. These parameters can also be set with the esxcfg-module command. See the manpage for more information.
Driver Parameters
int_mode
The optional parameter int_mode is used to force using an interrupt mode other than MSI-X. By default, the driver will try to enable MSI-X if it is supported by the kernel. If MSI-X is not attainable, then the driver will try to enable MSI if it is supported by the kernel. If MSI is not attainable, then the driver will use the legacy INTx mode.
Set the int_mode parameter to 1 as shown below to force using the legacy INTx mode on all QLogic 8400/3400 Series network adapters in the system.
vmkload_mod bnx2x int_mode=1
Set the int_mode parameter to 2 as shown below to force using MSI mode on all QLogic 8400/3400 Series network adapters in the system.
vmkload_mod bnx2x int_mode=2
disable_tpa
The optional parameter disable_tpa can be used to disable the Transparent Packet Aggregation (TPA) feature. By default, the driver aggregates TCP packets, but you can disable this advanced feature if needed.
Set the disable_tpa parameter to 1 as shown below to disable the TPA feature on all QLogic 8400/3400 Series network adapters in the system.
vmkload_mod bnx2x.ko disable_tpa=1
Use ethtool to disable TPA (LRO) for a specific network adapter.
num_rx_queues
The optional parameter num_rx_queues may be used to set the number of Rx queues on kernels starting from 2.6.24 when multi_mode is set to 1 and the interrupt mode is MSI-X. The number of Rx queues must be equal to or greater than the number of Tx queues (see the num_tx_queues parameter). If the interrupt mode is different than MSI-X (see the int_mode parameter), then the number of Rx queues will be set to 1, discarding the value of this parameter.
num_tx_queues
The optional parameter num_tx_queues may be used to set the number of Tx queues on kernels starting from 2.6.27 when multi_mode is set to 1 and the interrupt mode is MSI-X. The number of Rx queues must be equal to or greater than the number of Tx queues (see the num_rx_queues parameter). If the interrupt mode is different than MSI-X (see the int_mode parameter), then the number of Tx queues will be set to 1, discarding the value of this parameter.
pri_map
The optional parameter pri_map is used to map the VLAN PRI value or the IP DSCP value to a different or the same CoS in the hardware. This 32-bit parameter is evaluated by the driver as 8 values of 4 bits each. Each nibble sets the desired hardware queue number for that priority.
For example, set the pri_map parameter to 0x22221100 to map priority 0 and 1 to CoS 0, map priority 2 and 3 to CoS 1, and map priority 4 to 7 to CoS 2. In another example, set the pri_map parameter to 0x11110000 to map priority 0 to 3 to CoS 0, and map priority 4 to 7 to CoS 1.
qs_per_cos
The optional parameter qs_per_cos is used to specify the number of queues that will share the same CoS. This parameter is evaluated by the driver up to 3 values of 8 bits each. Each byte sets the desired number of queues for that CoS. The total number of queues is limited by the hardware limit.
For example, set the qs_per_cos parameter to 0x10101 to create a total of three queues, one per CoS. In another example, set the qs_per_cos parameter to 0x404 to create a total of 8 queues, divided into only 2 CoS, 4 queues in each CoS.
cos_min_rate
The optional parameter cos_min_rate is used to determine the weight of each CoS for round-robin scheduling in transmission. This parameter is evaluated by the driver up to three values of eight bits each. Each byte sets the desired weight for that CoS. The weight ranges from 0 to 100.
For example, set the cos_min_rate parameter to 0x101 for a fair transmission rate between two CoS. In another example, set the cos_min_rate parameter to 0x30201 to give the higher CoS the higher rate of transmission. To avoid using the fairness algorithm, omit setting the optional parameter cos_min_rate or set it to 0.
dropless_fc
The optional parameter dropless_fc can be used to enable a complementary flow control mechanism on QLogic network adapters. The default flow control mechanism is to send pause frames when the BRB is reaching a certain level of occupancy. This is a performance targeted flow control mechanism. On QLogic network adapters, you can enable another flow control mechanism to send pause frames if one of the host buffers (when in RSS mode) is exhausted. This is a zero packet drop targeted flow control mechanism.
Set the dropless_fc parameter to 1 as shown below to enable the dropless flow control mechanism feature on all QLogic network adapters in the system.
vmkload_mod bnx2x dropless_fc=1
RSS
The optional parameter RSS can be used to specify the number of receive side scaling queues. For VMware ESXi (5.1, 5.5, 6.0), values for RSS can be from 2 to 4; RSS=1 disables RSS queues.
max_vfs
The optional parameter max_vfs can be used to enable a specific number of virtual functions. Values for max_vfs can be 1 to 64, or set max_vfs=0 (default) to disable all virtual functions.
enable_vxlan_offld
The optional parameter enable_vxlan_ofld can be used to enable or disable VMware ESXi (5.5, 6.0) VXLAN task offloads with TX TSO and TX CSO. For VMware ESXi (5.5, 6.0), enable_vxlan_ofld=1 (default) enables VXLAN task offloads; enable_vxlan_ofld=0 disables VXLAN task offloads.
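To make any of these module parameters persist across reboots, the esxcfg-module -s form used elsewhere in this chapter can be applied; for example (the values below are illustrative only):
esxcfg-module -s "RSS=4 max_vfs=8" bnx2x
A reboot is required for the new settings to take effect.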

Driver defaults

Speed Autonegotiation with all speeds advertised
Flow Control Autonegotiation with RX and TX advertised
MTU 1500 (range is 46–9000)
RX Ring Size 4078 (range is 0–4078)
TX Ring Size 4078 (range is (MAX_SKB_FRAGS+4)–4078). MAX_SKB_FRAGS varies on different kernels and different architectures; on a 2.6 kernel for x86, MAX_SKB_FRAGS is 18.
Coalesce RX Microseconds 25 (range is 0-3000)
Coalesce TX Microseconds 50 (range is 0-12288)
MSI-X Enabled (if supported by the 2.6 kernel)
TSO Enabled

Unloading the driver

Procedure
Enter the vmkload_mod -u bnx2x command to unload the driver.
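To confirm whether the module is still loaded before or after unloading, a standard module listing such as the following can be used:
vmkload_mod -l | grep bnx2x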

Driver messages

Description
The following are the most common sample messages that might be logged in the /var/log/messages file. Use dmesg -n <level> to control the level at which messages will appear on the console. Most systems are set to level 6 by default. To see all messages, set the level higher.
Driver messages
Driver Sign On
QLogic 8400/3400 Series 10 Gigabit Ethernet Driver bnx2x 0.40.15 ($DateTime: 2007/11/22 05:32:40 $)
NIC Detected
eth0: QLogic 8400/3400 Series XGb (A1) PCI-E x8 2.5GHz found at mem e8800000, IRQ 16, node addr 001018360012
MSI-X Enabled Successfully
bnx2x: eth0: using MSI-X
Link Up and Speed Indication
bnx2x: eth0 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
Link Down Indication
bnx2x: eth0 NIC Link is Down
Memory Limitation
If you see messages in the log file that look like the following, then the ESXi host is severely strained. To relieve this, disable NetQueue.
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING: Heap: 1435: Heap bnx2x already at its maximumSize. Cannot expand.
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING: Heap: 1645: Heap_Align(bnx2x, 4096/4096 bytes, 4096 align) failed. caller: 0x41800187d654
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING: vmklinux26: alloc_pages: Out of memory
Disable NetQueue by manually loading the bnx2x vmkernel module with the following command:
vmkload_mod bnx2x multi_mode=0
Or, to persist the setting across reboots, use the following command:
esxcfg-module -s multi_mode=0 bnx2x
Reboot the machine for the settings to take effect.
MultiQueue/NetQueue
The optional parameter num_queues may be used to set the number of Rx and Tx queues when multi_mode is set to 1 and interrupt mode is MSI-X. If interrupt mode is different than MSI-X (see int_mode parameter), the number of Rx and Tx queues will be set to 1, discarding the value of this parameter.
To use more than one queue, force the number of NetQueues to use with the following command:
esxcfg-module -s "multi_mode=1 num_queues=<num of queues>" bnx2x
Otherwise, allow the bnx2x driver to select the number of NetQueues to use with the following command:
esxcfg-module -s "multi_mode=1 num_queues=0" bnx2x
For optimal performance, match the number of NetQueues to the number of CPUs on the machine.
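To check which option string is currently configured for the module, a standard esxcfg-module query such as the following can be used:
esxcfg-module -g bnx2x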

FCoE Support

This section describes the contents and procedures associated with installation of the VMware software package for supporting FCoE C-NICs.

Enabling FCoE

To verify the correct installation of the driver and to be sure that the switch can see the host port, use the following procedure.
Procedure
1. Use the # esxcli fcoe nic list command to determine which ports are FCoE-capable.
Output example:
vmnic4
User Priority: 3
Source MAC: FF:FF:FF:FF:FF:FF
Active: false
Priority Settable: false
Source MAC Settable: false
VLAN Range Settable: false
2. Use the # esxcli fcoe nic discover -n vmnicX command where X is the interface number gained from esxcli fcoe nic list to enable the FCoE interface.
3. Use the # esxcli fcoe adapter list command to verify that the interface is working. Output example:
vmhba34
Source MAC: bc:30:5b:01:82:39
FCF MAC: 00:05:73:cf:2c:ea
VNPort MAC: 0e:fc:00:47:04:04
Physical NIC: vmnic7
User Priority: 3
VLAN id: 2008
The output of the # esxcli fcoe adapter list command should show a valid FCF MAC, VNPort MAC, Priority, and VLAN id for the Fabric that is connected to the C-NIC.
You can also use the # esxcfg-scsidevs -a command to verify that the interface is working properly.
Output example:
vmhba34 bnx2fc link-up fcoe.1000<mac address>:2000<mac address> () Software FCoE
vmhba35 bnx2fc link-up fcoe.1000<mac address>:2000<mac address> () Software FCoE
NOTE: The label Software FCoE is a VMware term used to describe initiators that depend on the inbox FCoE libraries and utilities. The QLogic FCoE solution is a fully stateful, connection-based hardware offload solution designed to significantly reduce the CPU burden imposed by a nonoffload software initiator.

Verifying the correct installation of the driver

Procedure
1. Verify the host port shows up in the switch FLOGI database using the show flogi database command
for the case of a Cisco® FCF and fcoe -loginshow command for the case of a Brocade® FCF.
2. If the Host WWPN does not appear in the FLOGI database, then provide driver log messages for review.

Limitations

NPIV is not currently supported with this release on ESXi, due to lack of supporting inbox components.
Non-offload FCoE is not supported with offload-capable devices. Only the full hardware offload path is supported.

Supported distributions

The FCoE/DCB feature set is supported on VMware ESXi 5.0 and later.

Upgrading the Firmware

Hewlett Packard Enterprise ProLiant Server Adapter Online Firmware Upgrade Utilities are provided for Windows, Linux, and VMware. The utilities automatically check for driver, hardware, and operating system dependencies, and then install only the correct adapter firmware upgrades required by each target server.
Firmware is available from the HPE Support page for each adapter. Refer to the installation instructions provided on the download link.
https://support.hpe.com/hpsc/swd/public/detail?sp4ts.oid=5404527&swItemId=MTX_b54a35ef27e245d8aa556e7754&swEnvOid=4184#tab3
For Windows, see the "HPE ProLiant Server Adapters Online Firmware Upgrade Utilities for Windows OS Help" document (pdf).
For VMware, installation instructions are included as a text file in the component package.
For Linux, detailed instructions are included in the Installation Instruction tab. To see an example, go to the following link:
https://support.hpe.com/hpsc/swd/public/detail?sp4ts.oid=5404527&swItemId=MTX_6d3614df81ae4cbc85ca3e16da&swEnvOid=4184#tab3

Configuring iSCSI Protocol

iSCSI Boot
iSCSI Crash Dump
iSCSI Offload in Windows Server
iSCSI Offload in Linux Server
iSCSI Offload in VMware Server

iSCSI boot

QLogic 8400/3400 Series Gigabit Ethernet (GbE) adapters support iSCSI boot to enable network boot of operating systems to diskless systems. iSCSI boot allows a Windows, Linux, or VMware operating system to boot from an iSCSI target machine located remotely over a standard IP network.
For both Windows and Linux operating systems, iSCSI boot can be configured to boot with two distinctive paths: nonoffload (also known as Microsoft/Open-iSCSI initiator) and offload (QLogic’s offload iSCSI driver or HBA). Configure the path in the iSCSI Configuration utility, General Parameters window, by setting the HBA Boot Mode option. For more information on all General Parameters window configuration options, see the following table entitled, "Configuration Options".

Supported operating systems for iSCSI boot

The QLogic 8400/3400 Series Gigabit Ethernet adapters support iSCSI boot on the following operating systems:
Windows Server 2008 and later 32-bit and 64-bit (supports offload and nonoffload paths)
RHEL 5.5 and later, SLES 11.1 and later (supports offload and nonoffload paths)
SLES 10.x and SLES 11 (only supports nonoffload path)
VMware ESXi 5.0 and later (only supports nonoffload path)

Setting up iSCSI boot

The iSCSI boot setup includes:
Procedure
Configuring the iSCSI target
Configuring iSCSI boot parameters
Configuring iSCSI boot parameters on VMware
MBA boot protocol configuration
iSCSI boot configuration
Enabling CHAP authentication
Configuring the DHCP server to support iSCSI boot
1. DHCP iSCSI boot configurations for IPv4
2. DHCP iSCSI boot configurations for IPv6
Configuring the DHCP server
Preparing the iSCSI boot image
Booting
Configuring the iSCSI target
Configuring the iSCSI target varies by target vendor. For information on configuring the iSCSI target, refer to the documentation provided by the vendor. The general steps include:
Procedure
1. Create an iSCSI target (for targets such as SANBlaze or IET) or a vdisk/volume (for targets such as
EqualLogic or EMC).
2. Create a virtual disk.
3. Map the virtual disk to the iSCSI target created in step 1.
4. Associate an iSCSI initiator with the iSCSI target.
5. Record the iSCSI target name, TCP port number, iSCSI Logical Unit Number (LUN), initiator iSCSI Qualified Name (IQN), and CHAP authentication details.
6. After configuring the iSCSI target, obtain the following:
Target IQN
Target IP address
Target TCP port number
Target LUN
Initiator IQN
CHAP ID and secret
Configuring iSCSI boot parameters
Configure the QLogic iSCSI boot software for either static or dynamic configuration. See the following table for configuration options available from the General Parameters screen.
The following table lists parameters for both IPv4 and IPv6. Parameters specific to either IPv4 or IPv6 are noted.
NOTE: Availability of IPv6 iSCSI boot is platform/device dependent.
Table 4: Configuration options
Option Description
TCP/IP parameters through DHCP This option is specific to IPv4. Controls whether the iSCSI boot host software acquires the IP address information using DHCP (Enabled) or uses a static IP configuration (Disabled).
IP Autoconfiguration This option is specific to IPv6. Controls whether the iSCSI boot host software configures a stateless link-local address and/or a stateful address if DHCPv6 is present and used (Enabled), or uses a static IP configuration (Disabled). Router Solicit packets are sent out up to three times, with 4-second intervals between each retry.
iSCSI parameters through DHCP Controls whether the iSCSI boot host software
acquires its iSCSI target parameters using DHCP (Enabled) or through a static configuration (Disabled). The static information is entered through the iSCSI Initiator Parameters Configuration screen.
CHAP Authentication Controls whether the iSCSI boot host software uses
CHAP authentication when connecting to the iSCSI target. If CHAP Authentication is enabled, the CHAP ID and CHAP Secret are entered through the iSCSI Initiator Parameters Configuration screen.
DHCP Vendor ID Controls how the iSCSI boot host software interprets
the Vendor Class ID field used during DHCP. If the Vendor Class ID field in the DHCP Offer packet matches the value in the field, the iSCSI boot host software looks into the DHCP Option 43 fields for the required iSCSI boot extensions. If DHCP is disabled, this value does not need to be set.
Link Up Delay Time Controls how long the iSCSI boot host software
waits, in seconds, after an Ethernet link is established before sending any data over the network. The valid values are 0 to 255. As an example, a user may need to set a value for this option if a network protocol, such as Spanning Tree, is enabled on the switch interface to the client system.
Use TCP Timestamp Controls if the TCP Timestamp option is enabled or
disabled.
Target as First HDD Allows specifying that the iSCSI target drive will
appear as the first hard drive in the system.
LUN Busy Retry Count Controls the number of connection retries the iSCSI
Boot initiator will attempt if the iSCSI target LUN is busy.
IP Version This option is specific to IPv6. Toggles between the IPv4 and IPv6 protocols. All IP settings are lost when switching from one protocol version to another.
HBA Boot Mode Set to Disabled when the host OS is configured for software initiator mode; set to Enabled for HBA mode. This option is available only on 8400 Series adapters. This parameter cannot be changed when the adapter is in Multi-Function mode.
Configuring iSCSI boot parameters on VMware
VMware configuration of iSCSI boot parameters is similar to that of Windows and
Linux.
Procedure
1. Configure the adapter (using the preboot CCM or the preboot UEFI HII BIOS Device pages) to use the iSCSI boot protocol.
2. Set the initiator parameters.
a. During initial installation, leave the MBA parameter Boot to iSCSI Target set to Disabled.
NOTE: Another option is to use the One Time Disable option. If it is not possible to install the VMware OS on the remote LUN, reselect One Time Disable (see “Booting from iSCSI LUN on VMware” for more information).
b. After the VMware OS is installed on the remote LUN, you must change this setting to Enabled so the
system boots from that remote LUN.
c. If not using DHCP, configure as needed the initiator parameters: static IP address, subnet mask, default
gateway, primary DNS, and secondary DNS parameters.
d. If authentication is required, configure the CHAP ID and CHAP secret parameters.
3. Set the target parameters.
a. Configure the target system's port IP address, target name, and login information.
b. If authentication is required, configure the CHAP ID and CHAP secret parameters.
4. On the storage array, configure the Boot LUN ID (the LUN on the target that is used for the vSphere host
installation and subsequent boots).
NOTE: Because iSCSI HBA Boot Mode is not supported on VMware, ensure that this option is not selected on the iSCSI General Configuration page.
5. Exit and save this configuration.
Configuring the MBA boot configuration
Procedure
1. Restart your system.
2. Press CTRL+S on the QLogic 577xx/578xx Ethernet Boot Agent banner.
Figure 9: QLogic 577xx/578xx Ethernet Boot Agent
3. In the CCM device list, use the up or down arrow keys to select a device, and then press ENTER.
Figure 10: CCM Device List
4. Select MBA Configuration from the Main Menu, and then press ENTER.
Figure 11: Selecting MBA Configuration
5. Use the up or down arrow keys to select Boot Protocol from the MBA Configuration menu.
Figure 12: Selecting the iSCSI Boot Protocol
6. Use the left or right arrow keys to change the boot protocol option to iSCSI, and then press the Enter key.
Figure 13: Selecting the iSCSI Boot Protocol
NOTE: If iSCSI boot firmware is not programmed in the 8400/3400 Series network adapter, the iSCSI Boot
Configuration option will not be available. The iSCSI boot parameters can also be configured using the Unified Extensible Firmware Interface (UEFI) Human Interface Infrastructure (HII) BIOS pages on servers that support it in their BIOS.
7. Proceed to “Static iSCSI Boot Configuration” or “Dynamic iSCSI Boot Configuration”.
Configuring iSCSI boot
Procedure
Static iSCSI boot configuration
Dynamic iSCSI boot configuration
Static iSCSI boot configuration
In a static configuration, you must enter data for the system’s IP address, the system’s initiator IQN, and the target parameters obtained in “Configuring the iSCSI Target”. For information about configuration options, see the Configuration options table.
Configuring the iSCSI boot parameters using static configuration
Procedure
1. On the Main Menu, select iSCSI Boot Configuration (Figure 14), and then press ENTER.
Figure 14: Selecting iSCSI Boot Configuration
2. On the iSCSI Boot Main Menu, select General Parameters (Figure 15), and then press ENTER.
Figure 15: Selecting General Parameters
3. On the General Parameters Menu, press the UP ARROW or DOWN ARROW keys to select a parameter,
and then press the RIGHT ARROW or LEFT ARROW keys to set the following values:
TCP/IP Parameters through DHCP: Disabled (IPv4)
IP Autoconfiguration: Disabled (IPv6)
iSCSI Parameters through DHCP: Disabled
CHAP Authentication: As required
Boot to iSCSI Target: As required
DHCP Vendor ID: As required
Link Up Delay Time: As required
Use TCP Timestamp: As required
Target as First HDD: As required
LUN Busy Retry Count: As required
IP Version: As required (IPv6, nonoffload)
HBA Boot Mode: As required (HBA Boot Mode cannot be changed when the adapter is in Multi-Function mode, and is not supported by VMware)
NOTE: For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot to iSCSI Target to One Time Disabled. This setting causes the system not to boot from the configured iSCSI target after establishing a successful login and connection. This setting will revert to Enabled after the next system reboot.
Enabled means to connect to an iSCSI target and attempt to boot from it.
Disabled means to connect to an iSCSI target and not boot from that device, but instead hand off the boot vector to the next bootable device in the boot sequence.
4. To return to the iSCSI Boot Main menu, press the ESC key, select Initiator Parameters, and then press ENTER.
5. On the Initiator Parameters menu, select the following parameters, and then type a value for each:
IP Address
Subnet Mask
Default Gateway
Primary DNS
Secondary DNS
iSCSI Name (corresponds to the iSCSI initiator name to be used by the client system)
CHAP ID
CHAP Secret
NOTE: Carefully enter the IP address. No error checking is performed against the IP address for duplicates or an incorrect segment or network assignment.
6. To return to the iSCSI Boot Main Menu, press ESC, select 1st Target Parameters, and then press ENTER.
7. On the 1st Target Parameters Menu, enable Connect to connect to the iSCSI target. Type values for the following parameters for the iSCSI target, and then press Enter:
IP Address
TCP Port
Boot LUN
iSCSI Name
CHAP ID
CHAP Secret
8. To return to the iSCSI Boot Main Menu, press ESC.
9. If you want to configure a second iSCSI target device, select 2nd Target Parameters, and enter parameter values as you did in Step 7. Otherwise, proceed to Step 10.
10. Press ESC one time to return to the main menu, and a second time to exit and save the configuration.
11. Select Exit and Save Configurations to save the iSCSI boot configuration (Figure 16). Otherwise, select Exit and Discard Configuration. Press Enter.
Figure 16: Saving the iSCSI Boot Configuration
12. After all changes have been made, press CTRL+ALT+DEL to exit CCM and to apply the changes to the
adapter’s running configuration.
NOTE: In NPAR mode, be sure that the iSCSI function is configured on the first physical function (PF) for successful boot from SAN configuration.
Dynamic iSCSI boot configuration
In a dynamic configuration, you only need to specify that the system’s IP address and target/initiator information are provided by a DHCP server (see IPv4 and IPv6 configurations in “Configuring the DHCP Server to Support iSCSI Boot”). For IPv4, with the exception of the initiator iSCSI name, any settings on the Initiator Parameters, 1st Target Parameters, or 2nd Target Parameters screens are ignored and do not need
to be cleared. For IPv6, with the exception of the CHAP ID and Secret, any settings on the Initiator Parameters, 1st Target Parameters, or 2nd Target Parameters screens are ignored and do not need to be cleared. For information on configuration options, see the Configuration options table.
NOTE: When using a DHCP server, the DNS server entries are overwritten by the values provided by the DHCP server. This occurs even if the locally provided values are valid and the DHCP server provides no DNS server information. When the DHCP server provides no DNS server information, both the primary and secondary DNS server values are set to 0.0.0.0. When the Windows OS takes over, the Microsoft iSCSI initiator retrieves the iSCSI Initiator parameters and configures the appropriate registries statically. It will overwrite whatever is configured. Since the DHCP daemon runs in the Windows environment as a user process, all TCP/IP parameters have to be statically configured before the stack comes up in the iSCSI Boot environment.
If DHCP Option 17 is used, the target information is provided by the DHCP server, and the initiator iSCSI name is retrieved from the value programmed from the Initiator Parameters screen. If no value was selected, then the controller defaults to the name:
iqn.1995-05.com.qlogic.<11.22.33.44.55.66>.iscsiboot
where the string 11.22.33.44.55.66 corresponds to the controller's MAC address.
If DHCP option 43 (IPv4 only) is used, then any settings on the Initiator Parameters, 1st Target
Parameters, or 2nd Target Parameters screens are ignored and do not need to be cleared.
Configuring iSCSI boot parameters using dynamic configuration
Procedure
1. From the General Parameters menu, set the following:
TCP/IP Parameters through DHCP: Enabled (IPv4)
IP Autoconfiguration: Enabled (IPv6)
iSCSI Parameters through DHCP: Enabled
CHAP Authentication: As required
Boot to iSCSI Target: As required
DHCP Vendor ID: As required
Link Up Delay Time: As required
Use TCP Timestamp: As required
Target as First HDD: As required
LUN Busy Retry Count: As required
IP Version: As required
HBA Boot Mode: As required (HBA Boot Mode cannot be changed when the adapter is in Multi-Function mode, and is not supported by VMware)
NOTE: For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted
bootable OS installation image, set Boot to iSCSI Target to One Time Disabled. This setting causes the system not to boot from the configured iSCSI target after establishing a successful login and connection. This setting reverts to Enabled after the next system reboot.
Enabled means to connect to an iSCSI target and attempt to boot from it
Disabled means to connect to an iSCSI target and not boot from that device, but instead hand off
the boot vector to the next bootable device in the boot sequence.
2. Press ESC once to return to the main menu, and a second time to exit and save the configuration.
3. Select Exit and Save Configurations to save the iSCSI boot configuration. Otherwise, select Exit and
Discard Configuration. Press Enter.
4. After all changes have been made, press CTRL+ALT+DEL to exit CCM and to apply the changes to the
adapter's running configuration.
NOTE: Information on the Initiator Parameters and 1st Target Parameters windows is ignored and does not need to be cleared.
Enabling CHAP authentication
Prerequisites
Be sure that CHAP authentication is enabled on the target.
Procedure
1. From the General Parameters screen, set CHAP Authentication to Enabled.
2. From the Initiator Parameters screen, type values for the following:
CHAP ID (up to 128 bytes)
CHAP Secret (if authentication is required, and must be 12 characters in length or longer)
3. Press the Esc key to return to the Main menu.
4. From the Main menu, select 1st Target Parameters.
5. From the 1st Target Parameters screen, enter values for the following using the values used when
configuring the iSCSI target:
CHAP ID (optional if two-way CHAP)
CHAP Secret (optional if two-way CHAP, and must be 12 characters in length or longer)
6. Press the Esc key to return to the Main menu.
7. Press the Esc key and select Exit and Save Configuration.
Configuring the DHCP server to support iSCSI boot
The DHCP server is an optional component and it is only necessary if you will be doing a dynamic iSCSI Boot configuration setup (see “Dynamic iSCSI Boot Configuration”).
Configuring the DHCP server to support iSCSI boot is different for IPv4 and IPv6.
Procedure
DHCP iSCSI Boot Configurations for IPv4
DHCP iSCSI Boot Configuration for IPv6
DHCP iSCSI boot configurations for IPv4
The DHCP protocol includes several options that provide configuration information to the DHCP client. For iSCSI boot, QLogic adapters support the following DHCP configurations:
DHCP Option 17, Root Path
DHCP Option 43, Vendor-Specific Information
DHCP Option 17, Root Path
Option 17 is used to pass the iSCSI target information to the iSCSI client. The format of the root path as defined in IETF RFC 4173 is:
"iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
The root path parameters are as follows:

servername: The name or IP address of the iSCSI target.
protocol: The IP transport protocol; the default is 6 (TCP).
port: The port number of the iSCSI target; the default is 3260.
LUN: The LUN ID to use on the iSCSI target; the default is 0.
targetname: The iSCSI target name (IQN) that uniquely identifies the target.
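As an illustration, on an ISC DHCP server (dhcpd) the root path can be supplied with the standard root-path option. The address, port, LUN, and target IQN shown here are placeholders only; substitute the values for your own target:

option root-path "iscsi:192.168.1.20:6:3260:0:iqn.2002-03.com.example:target1";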
DHCP Option 43, Vendor-Specific Information
DHCP option 43 (vendor-specific information) provides more configuration options to the iSCSI client than DHCP option 17. In this configuration, three additional suboptions are provided that assign the initiator IQN to the iSCSI boot client along with two iSCSI target IQNs that can be used for booting. The format for the iSCSI target IQN is the same as that of DHCP option 17, while the iSCSI initiator IQN is simply the initiator's IQN.
NOTE: DHCP Option 43 is supported on IPv4 only.
Configuring the DHCP server
Configure the DHCP server to support option 17 or option 43.
NOTE: If using Option 43, you also need to configure Option 60. The value of Option 60 should match the DHCP Vendor ID value. The DHCP Vendor ID value is QLGC ISAN, as shown in General Parameters of the
iSCSI Boot Configuration menu.
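The exact configuration syntax depends on the DHCP server in use. The following sketch shows one possible way to express Option 60 and the Option 43 suboptions on an ISC DHCP server (dhcpd). The option space name, class name, addresses, and IQNs are placeholders, and the suboption codes 201 (first target), 202 (second target), and 203 (initiator IQN) are assumed to match the suboption definitions listed later in this chapter:

option space QLGC;
option QLGC.first-target code 201 = text;
option QLGC.second-target code 202 = text;
option QLGC.initiator-iqn code 203 = text;

class "qlogic-iscsi-boot" {
  # Match and echo the DHCP Vendor ID (Option 60) expected by the adapter
  match if option vendor-class-identifier = "QLGC ISAN";
  option vendor-class-identifier "QLGC ISAN";
  # Send the suboptions above encapsulated in Option 43
  vendor-option-space QLGC;
  option QLGC.first-target "iscsi:192.168.1.20:6:3260:0:iqn.2002-03.com.example:target1";
  option QLGC.initiator-iqn "iqn.1995-05.com.qlogic:initiator1";
}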
DHCP iSCSI Boot Configuration for IPv6
The DHCPv6 server can provide several options, including stateless or stateful IP configuration, as well as information to the DHCPv6 client. For iSCSI boot, QLogic adapters support the following DHCP configurations:
DHCPv6 Option 16, Vendor Class Option
DHCPv6 Option 17, Vendor-specific Information
NOTE: The DHCPv6 standard Root Path option is not yet available. QLogic suggests using Option 16 or Option 17 for dynamic iSCSI Boot IPv6 support.
DHCPv6 Option 16, Vendor Class Option
DHCPv6 Option 16 (vendor class option) must be present and must contain a string that matches your configured DHCP Vendor ID parameter. The DHCP Vendor ID value is QLGC ISAN, as shown in General
Parameters of the iSCSI Boot Configuration Menu.
The content of Option 16 should be <2-byte length> <DHCP Vendor ID>.
DHCPv6 Option 17, Vendor-Specific Information
DHCPv6 Option 17 (vendor-specific information) provides more configuration options to the iSCSI client. In this configuration, three additional suboptions are provided that assign the initiator IQN to the iSCSI boot client along with two iSCSI target IQNs that can be used for booting.
The following table lists the DHCP Option 17 suboptions.

DHCP Option 17 Suboption Definitions

Suboption 201
First iSCSI target information in the standard root path format:
"iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>":"<targetname>"

Suboption 202
Second iSCSI target information in the standard root path format:
"iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>":"<targetname>"

Suboption 203
iSCSI initiator IQN

NOTE: In the above parameters, the brackets [ ] are required for the IPv6 addresses.
The content of option 17 should be <2-byte Option Number 201|202|203> <2-byte length> <data>.
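For example, using a placeholder initiator name, suboption 203 carrying the initiator IQN iqn.1995-05.com.qlogic:initiator would be encoded as:

00 CB  00 20  69 71 6E 2E ...

where 00 CB is the 2-byte suboption number (203), 00 20 is the 2-byte length (32 bytes), and the remaining bytes are the ASCII characters of the IQN.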
Configuring the DHCP Server
Configure the DHCP server to support Option 16 and Option 17.
NOTE: The format of DHCPv6 Option 16 and Option 17 are fully defined in RFC 3315.
Preparing the iSCSI Boot Image
Procedure
Setting up Windows Server 2008 R2 and SP2 iSCSI Boot
Setting up Windows Server 2012/2012 R2 iSCSI Boot
Setting up Linux iSCSI Boot
Injecting (Slipstreaming) Adapter Drivers into Windows Image Files
Linux iSCSI Boot Setup
SUSE 11.1 Remote DVD Installation Workaround
VMware iSCSI Boot from SAN
Installing a VMware Host on the Remote iSCSI LUN
Setting up Windows Server 2008 R2 and SP2 iSCSI Boot
Windows Server 2008 R2 and Windows Server 2008 SP2 support booting and installing in either the offload or nonoffload path.
The following procedure prepares the image for installation and booting in either the offload or nonoffload path. The procedure references Windows Server 2008 R2 but is common to both Windows Server 2008 R2 and SP2.
Required CD/ISO image:
Windows Server 2008 R2 x64 with the QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files”. Also refer to the Microsoft knowledge base topic KB974072 at
support.microsoft.com.
NOTE:
The Microsoft procedure injects only the eVBD and NDIS drivers. QLogic recommends that you inject all drivers (eVBD, VBD, BXND, OIS, FCoE, and NDIS).
For the specific driver installer application instructions on how to extract the individual Windows 8400/3400 Series drivers, refer to the SILENT.TXT file.
Other software required:
Bindview.exe (Windows Server 2008 R2 only; see KB976042)
Setting up Windows Server 2008 iSCSI boot
Procedure
1. Remove any local hard drives on the system to be booted (the “remote system”).
2. Load the latest QLogic MBA and iSCSI boot images onto NVRAM of the adapter.
3. Configure the BIOS on the remote system to have the QLogic MBA as the first bootable device, and the
CD as the second device.
4. Configure the iSCSI target to allow a connection from the remote device. Ensure that the target has
sufficient disk space to hold the new OS installation.
5. Boot up the remote system. When the PXE banner appears, press CTRL+S to enter the PXE menu.
6. At the PXE menu, set Boot Protocol to iSCSI.
7. Enter the iSCSI target parameters.
8. Set HBA Boot Mode to Enabled or Disabled. (Note: This parameter cannot be changed when the
adapter is in Multi-Function mode.)
9. Save the settings and reboot the system.
The remote system connects to the iSCSI target and then boots from the DVD-ROM device.
10. Boot to DVD and begin installation.
11. Answer all the installation questions appropriately (specify the operating system you want to install,
accept the license terms, and so on).
When the Where do you want to install Windows? dialog window appears, the target drive is visible. This target is a drive connected through the iSCSI boot protocol and located in the remote iSCSI target.
12. To proceed with Windows Server 2008 R2 installation, click Next.
A few minutes after the Windows Server 2008 R2 DVD installation process starts, a system reboot will follow. After the reboot, the Windows Server 2008 R2 installation routine resumes and completes the installation.
13. Following another system restart, check and verify that the remote system is able to boot to the desktop.
14. After Windows Server 2008 R2 is booted up, load all drivers and run Bindview.exe.
a. Select All Services.
b. Under WFP Lightweight Filter, there are Binding paths for the AUT. Right-click and disable them.
When done, close the application.
15. Verify that the OS and system are functional and can pass traffic by pinging a remote system’s IP.
Setting up Windows Server 2012/2012 R2 iSCSI Boot
Windows Server 2012 and 2012 R2 support booting and installing in either the offload or non-offload path. QLogic requires the use of a "slipstream" DVD with the latest QLogic drivers injected. See "Injecting (Slipstreaming) Adapter Drivers into Windows Image Files". Also refer to the Microsoft knowledge base topic KB974072 at support.microsoft.com.
NOTE: The Microsoft procedure injects only the eVBD and NDIS drivers. QLogic recommends that you inject all drivers (eVBD, VBD, BXND, OIS, FCoE, and NDIS).
Setting up Windows Server 2012 iSCSI Boot
Procedure
1. Remove any local hard drives on the system to be booted (the “remote system”).
2. Load the latest QLogic MBA and iSCSI boot images into the NVRAM of the adapter.
3. Configure the BIOS on the remote system to have the QLogic MBA as the first bootable device and the
CD as the second device.
4. Configure the iSCSI target to allow a connection from the remote device. Ensure that the target has sufficient disk space to hold the new OS installation.
5. Boot up the remote system. When the Preboot Execution Environment (PXE) banner appears, press CTRL+S to enter the PXE menu.
6. At the PXE menu, set Boot Protocol to iSCSI.
7. Enter the iSCSI target parameters.
8. Set HBA Boot Mode to Enabled or Disabled. (Note: This parameter cannot be changed when the
adapter is in Multi-Function mode.)
9. Save the settings and reboot the system.
The remote system connects to the iSCSI target and then boots from the DVD-ROM device.
10. Boot to DVD and begin installation.
11. Answer all the installation questions appropriately (specify the operating system you want to install,
accept the license terms, and so on).
When the Where do you want to install Windows? window appears, the target drive is visible. This target is a drive connected through the iSCSI boot protocol and located in the remote iSCSI target.
12. To proceed with Windows Server 2012 installation, click Next.
A few minutes after the Windows Server 2012 DVD installation process starts, a system reboot follows. After the reboot, the Windows Server 2012 installation routine resumes and completes the installation.
13. Following another system restart, check and verify that the remote system is able to boot to the desktop.
14. After Windows Server 2012 boots to the OS, QLogic recommends running the driver installer to complete
the QLogic drivers and application installation.
Injecting (Slipstreaming) adapter drivers into Windows image files
Procedure
1. Obtain the latest driver package for the applicable Windows Server version (2012 or 2012 R2).
2. Extract the driver package to a working directory:
a. Open a command-line session and navigate to the folder that contains the driver package.
b. Type the following command to start the driver installer:
setup.exe /a
c. In the Network location: field, enter the path of the folder to which to extract the driver package. For
example, type c:\temp.
d. Follow the driver installer instructions to install the drivers in the specified folder. In this example, the
driver files are installed in c:\temp\Program Files 64\QLogic Corporation\QDrivers.
3. Download the Windows Assessment and Deployment Kit (ADK) version 8.1 from https:// docs.microsoft.com/en-us/windows-hardware/get-started/adk-install.
4. Open a command-line session (with administrator privilege) and navigate to the Tools\Slipstream folder on the release CD.
5. Locate the slipstream.bat script file, and then enter the following command:
slipstream.bat <path>
where <path> is the drive and subdirectory that you specified in Step 2. For example:
slipstream.bat "c:\temp\Program Files 64\QLogic Corporation\QDrivers
NOTE:
Operating system installation media is expected to be a local drive. Network paths for operating system
installation media are not supported.
The slipstream.bat script injects the driver components in all the SKUs that are supported by the
operating system installation media.
6. Burn a DVD containing the resulting driver ISO image file located in the working directory.
7. Install the Windows Server operating system using the new DVD.
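For reference, the command sequence for Step 2 through Step 5 typically resembles the following. The drive letters and extraction path are examples only; substitute your own locations:

cd /d C:\downloads\driver-package
setup.exe /a
rem When prompted for the network location, enter c:\temp
cd /d D:\Tools\Slipstream
slipstream.bat "c:\temp\Program Files 64\QLogic Corporation\QDrivers"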
Setting up the Linux iSCSI boot
Linux iSCSI boot is supported on Red Hat Enterprise Linux 5.5 and later and SUSE Linux Enterprise Server 11 SP1 and later in both the offload and non-offload paths.
NOTE: SLES 10.x and SLES 11 have support only for the non-offload path.
Procedure
1. For driver update, obtain the latest QLogic Linux driver CD.
2. Configure the iSCSI Boot Parameters for DVD direct install to target by disabling the Boot from target
option on the network adapter.
3. Configure to install through the non-offload path by setting HBA Boot Mode to Disabled in the NVRAM Configuration. (This parameter cannot be changed when the adapter is in Multi-Function mode.) Note that for RHEL 6.2, SLES 11 SP2, and newer, installation through the offload path is supported; in this case, set HBA Boot Mode to Enabled in the NVRAM Configuration.
4. Change the boot order as follows:
a. Boot from the network adapter.
b. Boot from the CD/DVD drive.
5. Reboot the system.
6. The system connects to the iSCSI target, and then boots from the CD/DVD drive.
7. Follow the corresponding OS instructions.
a. RHEL 5.5: Type linux dd at the boot: prompt, and then press ENTER.
b. SUSE 11.x: Choose Installation and type withiscsi=1 netsetup=1 at the boot options prompt.
This is intended as a starting set of kernel parameters. Please consult SLES documentation for a full list of available options.
If driver update is desired, add “DUD=1” or choose YES for the F6 driver option.
In some network configurations, if additional time is required for the network adapters to become active (for example, when using netsetup=dhcp,all), add netwait=8. This gives the network adapters additional time to complete the driver load and reinitialization of all interfaces.
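Combining these options, a typical SLES boot options line for a DHCP-based install with a driver update disk might look like the following; adjust the values to your environment:

withiscsi=1 netsetup=dhcp,all DUD=1 netwait=8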
8. At the "networking device" prompt, choose the desired network adapter port and press OK.
9. At the "configure TCP/IP" prompt, configure how the system acquires its IP address, and then press OK.
10. If a static IP was chosen, enter the IP information for the iSCSI initiator.
11. (RHEL) Choose to "skip" media testing.
12. Continue the installation as desired. A drive is available at this point. After file copying is done, remove the CD/DVD and reboot the system.
13. When the system reboots, enable "boot from target" in the iSCSI Boot Parameters and continue with the installation until it is done.
Creating a new customized initrd for any new components
Procedure
1. Update iSCSI initiator if desired. You will first need to remove the existing initiator using rpm -e.
2. Make sure all run levels of network service are on:
chkconfig network on
3. Make sure run levels 2, 3, and 5 of the iSCSI service are on:
chkconfig --level 235 iscsi on
4. For Red Hat 6.0, make sure Network Manager service is stopped and disabled.
5. Install iscsiuio if desired (not required for SuSE 10).
6. Install linux-nx2 package if desired.
7. Install bibt package.
8. Remove ifcfg-eth*.
9. Reboot.
10. For SUSE 11.1, follow the remote DVD installation workaround shown below.
11. After the system reboots, log in, change to the /opt/bcm/bibt folder, and run iscsi_setup.sh script to create
the offload and/or the non-offload initrd image. Copy the initrd image(s), offload and/or non-offload, to the /boot folder.
12. Change the grub menu to point to the new initrd image.
13. To enable CHAP, you need to modify iscsid.conf (Red Hat only).
14. Reboot and change CHAP parameters if desired.
15. Continue booting into the iSCSI Boot image and select one of the images you created (non-offload or
offload). Your choice should correspond with your choice in the iSCSI Boot parameters section. If HBA Boot Mode was enabled in the iSCSI Boot Parameters section, you have to boot the offload image. SLES 10.x and SLES 11 do not support offload.
16. For IPv6, you can now change the IP address for both the initiator and the target to the desired IPv6 address in the NVRAM configuration.
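The shell portions of the preceding steps might look like the following on a Red Hat system. The ifcfg file location, the iscsi_setup.sh options, and the generated initrd file names vary by distribution and driver release, so treat this only as a sketch:

chkconfig network on
chkconfig --level 235 iscsi on
rm -f /etc/sysconfig/network-scripts/ifcfg-eth*   # SUSE keeps these files under /etc/sysconfig/network/
cd /opt/bcm/bibt
./iscsi_setup.sh                                  # builds the offload and/or non-offload initrd image(s)
cp initrd-*.img /boot/                            # copy the generated image(s) to /boot
vi /boot/grub/menu.lst                            # point the grub entry to the new initrd image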
SUSE 11.1 Remote DVD installation workaround
Procedure
1. Create a new file called boot.open-iscsi with the content shown below.
2. Copy the file you just created to /etc/init.d/ folder and overwrite the existing one.
Content of the new boot.open-iscsi file:
#!/bin/bash
#
# /etc/init.d/iscsi
#
### BEGIN INIT INFO
# Provides:          iscsiboot
# Required-Start:
# Should-Start:      boot.multipath
# Required-Stop:
# Should-Stop:       $null
# Default-Start:     B
# Default-Stop:
# Short-Description: iSCSI initiator daemon root-fs support
# Description:       Starts the iSCSI initiator daemon if the
#                    root-filesystem is on an iSCSI device
#
### END INIT INFO

ISCSIADM=/sbin/iscsiadm
ISCSIUIO=/sbin/iscsiuio
CONFIG_FILE=/etc/iscsid.conf
DAEMON=/sbin/iscsid
ARGS="-c $CONFIG_FILE"

# Source LSB init functions
. /etc/rc.status

#
# This service is run right after booting. So all targets activated
# during mkinitrd run should not be removed when the open-iscsi
# service is stopped.
#
iscsi_load_iscsiuio()
{
    TRANSPORT=`$ISCSIADM -m session 2> /dev/null | grep "bnx2i"`
    if [ "$TRANSPORT" ] ; then
        echo -n "Launch iscsiuio "
        startproc $ISCSIUIO
    fi
}

iscsi_mark_root_nodes()
{
    $ISCSIADM -m session 2> /dev/null | while read t num i target ; do
        ip=${i%%:*}
        STARTUP=`$ISCSIADM -m node -p $ip -T $target 2> /dev/null | grep "node.conn\[0\].startup" | cut -d' ' -f3`
        if [ "$STARTUP" -a "$STARTUP" != "onboot" ] ; then
            $ISCSIADM -m node -p $ip -T $target -o update -n node.conn[0].startup -v onboot
        fi
    done
}

# Reset status of this service
rc_reset

# We only need to start this for root on iSCSI
if ! grep -q iscsi_tcp /proc/modules ; then
    if ! grep -q bnx2i /proc/modules ; then
        rc_failed 6
        rc_exit
    fi
fi

case "$1" in
    start)
        echo -n "Starting iSCSI initiator for the root device: "
        iscsi_load_iscsiuio
        startproc $DAEMON $ARGS
        rc_status -v
        iscsi_mark_root_nodes
        ;;
    stop|restart|reload)
        rc_failed 0
        ;;
    status)
        echo -n "Checking for iSCSI initiator service: "
        if checkproc $DAEMON ; then
            rc_status -v
        else
            rc_failed 3
            rc_status -v
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart|reload}"
        exit 1
        ;;
esac
rc_exit
VMware iSCSI boot from SAN
The 8400/3400 Series adapters are VMware-dependent hardware iSCSI-Offload adapters. The iSCSI-Offload functionality partially depends on the VMware Open-iSCSI library and networking stack for iSCSI configuration and the management interfaces provided by VMware. The 8400/3400 Series adapters present a standard networking instance and iSCSI offload instance on the same port. The iSCSI-Offload functionality depends on the host network configuration to obtain the IP and MAC addresses, as well as other parameters used for iSCSI sessions.
Installing a VMware Host on the Remote iSCSI LUN
After the iSCSI boot parameters are configured in the QLogic adapter, install the VMware host. The applicable installation media is in the local CD or is available by some other method in the BIOS on the host (for example, virtual media).
Installing a VMware host on the remote iSCSI LUN
Procedure
1. Ensure that the boot controller or device order is set correctly in the BIOS: the network adapter must appear before the applicable installation device in the boot order settings.
2. To simplify booting from iSCSI, be sure that you initially map the boot LUN on one path only to the vSphere
host. Verify that you use the correct LUN ID.
When the host is powered on, the system BIOS loads the firmware code of the adapter and starts executing it. The firmware contains boot and iSCSI initiator code. The iSCSI initiator firmware establishes an iSCSI session with the target. On boot, a successful login to the target appears before installation starts.
3. If you get a failure at this point, you must revisit the preceding configuration steps.
Installation begins, in which the following occurs:
a. As part of the installation process, a memory-only stateless VMkernel is loaded.
b. The VMkernel discovers suitable LUNs for installation, one of which is the remote iSCSI LUN.
c. For the VMkernel iSCSI driver to communicate with the target, the TCP/IP protocol must be set up (as
part of the startup init script).
d. The NIC firmware hands off the initiator and target configuration data to the VMkernel using the iBFT.
e. After the required networking is set up, an iSCSI session is established to the target configured in the
iBFT.
f. LUNs beneath the targets are discovered and registered with VMkernel SCSI stack (PSA).
If everything is successful during the initial installation, the iSCSI LUN is offered as a destination for the vSphere host image.
4. Complete the vSphere host installation as usual to that remote iSCSI LUN.
Booting
Booting on Windows and Linux
After preparing the system for an iSCSI boot and verifying that the operating system is present on the iSCSI target, perform the actual boot. The system boots to Windows or Linux over the network and operates like a local disk drive.
Procedure
1. Reboot the server.
2. Press the CTRL+S keys.
3. To boot through an offload path, set the HBA Boot Mode to Enabled. To boot through a nonoffload path,
set the HBA Boot Mode to Disabled. This parameter cannot be changed when the adapter is in Multi-Function mode.
If CHAP authentication is needed, enable CHAP authentication after determining that booting is successful (see Enabling CHAP Authentication).
Booting from iSCSI LUN on VMware
After installing the boot image onto the remote LUN, you may need to change the iSCSI configuration. If the One Time Disable option was not used, then the Boot to iSCSI Target setting must be changed from Disabled to Enabled. To change this setting, reboot to CCM or UEFI HII. When the host is rebooted, the vSphere host boots from the iSCSI LUN through the software iSCSI initiator pathway.
NOTE: To apply any changes made in CCM, you must press the CTRL+ALT+DEL keys to reboot. Do not simply press the ESC key to exit CCM and continue with the boot up.
ESXi iSCSI Boot from SAN for Synergy
Use the following procedure to install ESXi 6.5 onto an iSCSI LUN using an HPE Synergy 12000 blade with an HPE Synergy 3820 CNA.
Procedure
1. Download the HPE ESXi 6.5 U1 customized ISO which contains the latest HPE drivers.
2. Insert the Synergy blade with the 3820 adapter into Chassis Bay 5 of the Synergy 12000 frame.
3. Connect the iSCSI target to the switch.
4. From iSCSI management, create an iSCSI LUN for BFS installation, map it to initiator_name and set
Target LUN to 1.
5. Log in to the Synergy 12000 frame to configure the uplink and create a server profile with iSCSI BFS.
a. Set up and configure the end-to-end connections.
I. Run a cable connection from the ICM2 Uplink Q6 to the switch.
II. Connect an iSCSI target to the switch.
III. From HPE OneView, Networks: Create Network: “icm3q3-l2”, type: Ethernet, VLAN: Untagged.
IV. Update LIG and create uplink set: From HPE OneView Logical Interconnect Group: edit ligA,
click Add uplink set “icm3q3-l2”, Networks: select icm3q3-l2, Uplink Ports, add uplink port Q3 for Bay3, click OK to complete
V. From the Logical Interconnect Group, click Action and then select update group. Updating the
group takes a few minutes to complete.
b. Create a server profile:
I. From HPE OneView, server Profiles, click Create: Profile1.
II. Select Bay5 for ServerHardware.
III. Click Add connection:
i. Name: icm3q3-l2
ii. Type: Ethernet
iii. Network: icm3q3-l2
iv. Boot: iSCSI Primary
v. Boot From: Select Specify boot target.
vi. ISCSI Initiator: Select User-Specified, Type initiator name, and match what is
specified on Target.
vii. IPV4 address allocation: Select DHCP if present or enter a static IP address.
viii. Boot Target: Enter Target name, Target LUN: 1
ix. Enter Target IP address.
x. Click ok.
xi. Manage Boot mode: UEFI
xii. Manage Boot order: Hard Disk
xiii. Click OK to create a profile, it will take a few minutes to complete.
6. Install ESXi in iSCSI BFS using iLO:
a. Launch iLO.
b. Click Virtual Drives, select Image File CD-ROM/DVD, and then select the ESXi 6.5 ISO that contains the latest drivers.
c. From iLO, Power On the server.
d. Observe the POST initialization, and then press F11.
e. Verify that iSCSI BFS LUN is visible.
f. Scroll down, select iLO Virtual USB 3 as the boot device, and start the ESXi installation.
g. Press Enter to continue to the Welcome screen.
h. Press F11 to accept the EULA and continue.
i. Scroll down, select iSCSI disk as the storage device to which you want to install, and then press
Enter.
j. Select US default as the language.
k. Enter the Root password, and then press Enter.
l. Press F11 to start the installation.
m. Press Enter to reboot when installation is complete.
n. During Post Initialization, press F11 to access the Boot Menu. Select iSCSI disk, and then press
Enter.
o. Observe the loading progress.
p. The boot is complete.
q. To enable the ESXi Shell, press F2, select Troubleshooting Options, enable the ESXi Shell, enable SSH, and then press Esc until you reach the main screen.
r. Open a shell, log in, and then run the esxcli network nic list command.

Configuring VLANs for iSCSI boot

iSCSI traffic on the network may be isolated in a Layer-2 VLAN to segregate it from general traffic. When this is the case, make the iSCSI interface on the adapter a member of that VLAN.
Procedure
1. During a boot of the Initiator system, press CTRL+S to open the QLogic CCM preboot utility.
Figure 17: Comprehensive Configuration Management
2. In the CCM device list, use the up or down arrow keys to select a device, and then press ENTER.
Figure 18: Configuring VLANs-CCM Device List
3. In the Main menu, select MBA Configuration, and then press ENTER.
Figure 19: Configuring VLANs—Multiboot Agent Configuration
4. In the MBA Configuration menu (Figure 19), use the up or down arrow keys to select each of the following parameters:
VLAN Mode: Press ENTER to change the value to Enabled.
VLAN ID: Press ENTER to open the VLAN ID dialog, type the target VLAN ID (1–4096), and then press ENTER.
Figure 20: Configuring iSCSI Boot VLAN
5. Press ESC once to return to the Main menu, and a second time to exit and save the configuration.
6. Select Exit and Save Configurations to save the VLAN for iSCSI boot configuration (Figure 21).
Otherwise, select Exit and Discard Configuration. Press ENTER.
Figure 21: Saving the iSCSI Boot VLAN Configuration
7. After all changes have been made, press CTRL+ALT+DEL to exit CCM and to apply the changes to the
adapter's running configuration.

Other iSCSI Boot considerations

There are several other factors that should be considered when configuring a system for iSCSI boot.
Changing the speed and duplex settings in Windows environments
Changing the Speed & Duplex settings on the boot port using Windows Device Manager when performing iSCSI boot through the offload path is not supported. Booting through the NDIS path is supported. The Speed & Duplex settings can be changed using the QCC GUI for iSCSI boot through the offload and NDIS paths.
Virtual LANs
Virtual LAN (VLAN) tagging is not supported for iSCSI boot with the Microsoft iSCSI Software Initiator.
Creating an iSCSI Boot Image with the dd Method
If direct installation to a remote iSCSI target is not an option, an alternate way to create such an image is to use the dd method. With this method, you install the image directly to a local hard drive and then create an iSCSI boot image for the subsequent boot.
Procedure
1. Install Linux OS on your local hard drive and ensure that the Open-iSCSI initiator is up to date.
2. Ensure that all Run levels of network service are on.
3. Ensure that the 2, 3, and 5 Run levels of iSCSI service are on.
4. Update iscsiuio. You can get the iscsiuio package from the CD. This step is not needed for SuSE 10.
5. Install the linux-nx2 package on your Linux system. You can get this package from the CD.
6. Install the bibt package on your Linux system. You can get this package from the CD.
7. Delete all ifcfg-eth* files.
8. Configure one port of the network adapter to connect to iSCSI Target (for instructions, see “Configuring the iSCSI Target”).
9. Connect to the iSCSI Target.
10. Use the dd command to copy from the local hard drive to the iSCSI target (see the example after this procedure).
11. When dd is done, execute the sync command a couple of times, log out, and then log in to the iSCSI target again.
12. Run the fsck command on all partitions created on the iSCSI Target.
13. Change to the /opt/bcm/bibt folder and run the iscsi_setup.sh script to create the initrd images. Option 0 creates a non-offload image and option 1 creates an offload image. The script creates only the non-offload image on SuSE 10 because offload is not supported on SuSE 10.
14. Mount the /boot partition on the iSCSI Target.
15. Copy the initrd images you created in step 13 from your local hard drive to the partition mounted in step
14.
16. On the partition mounted in step 14, edit the grub menu to point to the new initrd images.
17. Unmount the /boot partition on the iSCSI Target.
18. (Red Hat Only) To enable CHAP, you need to modify the CHAP section of the iscsid.conf file on the
iSCSI Target. Edit the iscsid.conf file with one-way or two-way CHAP information as desired.
19. Shut down the system and disconnect the local hard drive. You are now ready to iSCSI boot from the iSCSI target.
20. Configure iSCSI Boot Parameters, including CHAP parameters if desired (see “Configuring the iSCSI Target”).
21. Continue booting into the iSCSI Boot image and choose one of the images you created (non-offload or
offload). Your choice should correspond with your choice in the iSCSI Boot parameters section. If HBA Boot Mode was enabled in the iSCSI Boot Parameters section, you have to boot the offload image. SuSE 10.x and SLES 11 do not support offload.
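As a sketch of Step 10 through Step 12, assuming the local disk is /dev/sda and the iSCSI LUN is exposed as /dev/sdb (device names and partition numbers will differ on your system):

dd if=/dev/sda of=/dev/sdb bs=1M   # copy the local installation to the iSCSI LUN
sync; sync                         # flush outstanding writes, then log out of and back in to the target
fsck /dev/sdb1                     # repeat for each partition created on the target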
Troubleshooting iSCSI Boot
Symptom
A system blue screen occurs when iSCSI boots Windows Server 2008 R2 through the adapter’s NDIS path with the initiator configured using a link-local IPv6 address and the target configured using a router-configured IPv6 address.
Cause
This is a known Windows TCP/IP stack issue.
Symptom
The iSCSI Crash Dump utility will not work properly to capture a memory dump when the link speed for iSCSI boot is configured for 10Mbps or 100Mbps.
Action
The iSCSI Crash Dump utility is supported when the link speed for iSCSI boot is configured for 1Gbps or 10Gbps. 10Mbps or 100Mbps is not supported.
Symptom
An iSCSI target is not recognized as an installation target when you try to install Windows Server 2008 by using an IPv6 connection.
Action
This is a known third-party issue. See Microsoft Knowledge Base KB 971443, http://support.microsoft.com/kb/971443.
Symptom
When switching iSCSI boot from the Microsoft standard path to iSCSI offload, the booting fails to complete.
Action
Prior to switching the iSCSI boot path, install or upgrade the Virtual Bus Device (VBD) and OIS drivers to the latest versions.
Symptom
The iSCSI configuration utility will not run.
Action
Ensure that the iSCSI Boot firmware is installed in the NVRAM.
Symptom
A system blue screen occurs when installing the drivers through Windows Plug-and-Play (PnP).
Action
Install the drivers through the Setup installer.
Symptom
With a static IP configuration, switching from Layer 2 iSCSI boot to iSCSI HBA results in an IP address conflict.
Action
Change the IP address of the network property in the OS.
Symptom
After configuring the iSCSI boot LUN to 255, a system blue screen appears when performing iSCSI boot.
Action
Although the iSCSI solution supports a LUN range from 0 to 255, the Microsoft iSCSI software initiator does not support a LUN of 255. Configure a LUN value from 0 to 254.
Symptom
NDIS miniports show a Code 31 yellow-bang error after an L2 iSCSI boot installation.
Action
Run the latest version of the driver installer.
Symptom
Unable to update the inbox driver if a non-inbox hardware ID is present.
Action
Create a custom slipstream DVD image with supported drivers present on the install media.
Symptom
In Windows Server 2012, toggling between iSCSI HBA offload mode and iSCSI software initiator boot can leave the machine in a state where the HBA offload miniport bxois will not load.
Action
Manually edit [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\bxois\StartOverride] from 3 to 0. Modify the registry key before toggling back from NDIS to HBA path in CCM.
NOTE: Microsoft recommends against this method. Toggling the boot path from NDIS to HBA or vice versa after installation is completed is not recommended.
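For example, the override can be cleared from an elevated command prompt. The value name under StartOverride (shown here as 0) can vary, so confirm it in Registry Editor before making the change:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\bxois\StartOverride" /v 0 /t REG_DWORD /d 0 /f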
Symptom
Installing Windows onto an iSCSI target through iSCSI boot fails when connecting to a 1Gbps switch port.
Action
This is a limitation relating to adapters that use SFP+ as the physical connection. SFP+ defaults to 10Gbps operation and does not support autonegotiation.

iSCSI crash dump

If you use the iSCSI Crash Dump utility, it is important to follow the installation procedure to install the iSCSI Crash Dump driver. See “Using the Installer” for more information.

iSCSI offload in Windows Server

iSCSI offload is a technology that offloads iSCSI protocol processing overhead from host processors to the iSCSI host bus adapter to increase network performance and throughput while helping to optimize server processor use. This section covers the Windows iSCSI offload feature for the 8400 Series family of network adapters.
With the proper iSCSI offload licensing, you can configure your iSCSI-capable 8400 Series network adapter to offload iSCSI processing from the host processor. The following procedures enable your system to take advantage of QLogic’s iSCSI offload feature:
Installing QLogic Drivers
Enabling and Disabling iSCSI-Offload
Installing the Microsoft iSCSI Initiator
Configuring Microsoft Initiator to Use QLogic's iSCSI Offload

Configuring iSCSI offload

With the proper iSCSI offload licensing, you can configure your iSCSI-capable 8400 Series network adapter to offload iSCSI processing from the host processor. The following process enables your system to take advantage of the iSCSI offload feature.
Procedure
Installing Drivers
Installing the Microsoft iSCSI Initiator
Configure Microsoft Initiator to Use the iSCSI Offload
Installing drivers
Procedure
Install the Windows drivers as described in "Windows Driver Software".
Enabling and disabling iSCSI-offload
For Windows operating systems, use QLogic’s CCM pre-boot utility or the server pre-boot UEFI HII device configuration page to configure the DCB parameters for lossless iSCSI-TLV over DCB mode.
Use the QCC GUI, QCS CLI, or the QCC PowerKit to enable or disable the iSCSI-Offload instance per port on Windows in single function mode. To configure iSCSI-offload in NPAR mode, use the NPAR configuration page in any of the following applications:
QCC GUI
QCS CLI
QCC PowerKit
Pre-boot server UEFI HII
Pre-boot server CCM
Enabling and disabling iSCSI-offload
Procedure
1. Open QCC GUI.
2. In the tree pane on the left, under the port node, select the port’s virtual bus device instance.
3. In the configuration pane on the right, click the Resource Config tab.
4. Complete the Resource Config page for each selected port (see the following figure) as follows: