User's Guide
Converged Network Adapters and Intelligent Ethernet Adapters
QLogic FastLinQ 3400, 8400 Series
83840-546-00 E
This document is provided for informational purposes only and may contain errors. QLogic reserves the right, without
notice, to make changes to this document or in product design or specifications. QLogic disclaims any warranty of any
kind, expressed or implied, and does not guarantee that any results or performance described in the document will be
achieved by you. All statements regarding QLogic's future direction and intent are subject to change or withdrawal
without notice and represent goals and objectives only.
Document Revision History
Revision A, October 20, 2014
Revision B, November 14, 2014
Revision C, April 3, 2015
Revision D, September 16, 2015
Revision E, May 10, 2016

Changes (with sections affected):

Updated to replace the QLogic Control Suite GUI with the QConvergeConsole GUI. (Throughout)
In the Note, removed "Separate licenses are required for all offloading technologies." ("Functional Description" on page 1)
Added QConvergeConsole Plug-ins for vSphere to the list of manageability features. ("Features" on page 2)
Added the Adapter Management section. ("Adapter Management" on page 6)
In step 1, removed the reference to the table of 10GbE optics. ("Connecting the Network Cables" on page 11)
Added the Manually Extracting the Device Drivers section. ("Manually Extracting the Device Drivers" on page 20)
In the KMP Packages bullet, added a reference to the QLogic Control Suite CLI User's Guide. ("Packaging" on page 26)
Removed the chapters Installing Management Applications, Using QLogic Control Suite, and Manageability.
Updated the first sentence to read, "The optional parameter enable_vxlan_ofld can be used to enable or disable . . ." ("enable_vxlan_offld" on page 54)
In the Note in step 5, added the option of configuring iSCSI boot parameters with UEFI HII BIOS pages. ("MBA Boot Protocol Configuration" on page 71)
Added a procedure to inject adapter drivers into the Windows image files. ("Injecting (Slipstreaming) Adapter Drivers into Windows Image Files" on page 91)
In the fourth question, replaced "configurations" with "IP addresses." ("Event Log Messages" on page 105)
Removed NX2 from the heading. ("Bind iSCSI Target to QLogic iSCSI Transport Name" on page 111)
At the end of the first paragraph, added "and the other PFs on that port can support SR-IOV VF connections." ("SR-IOV and Storage" on page 161)
Updated all instances of Advanced Server Program and ASP to QLogic Advanced Server Program and QLASP, respectively. (Throughout)
Added a paragraph at the end of the section describing two traffic classes that can be used by the Windows QoS service. ("Data Center Bridging in Windows Server 2012" on page 172)
Removed the table describing the QLogic Teaming Software Component. ("Software Components" on page 183)
In Table 15-4, updated the subheads to read Generic (Static) Trunking and Dynamic LACP. ("Teaming Mechanisms" on page 188)
In the first bullet, added "with Auto-Fallback Enabled (SLB)." ("QLASP Overview" on page 233)
In the second paragraph, last sentence, updated bnx2 to read bnx2/bnx2x. ("Linux" on page 251)
Removed sections Running a Cable Length Test and Testing Network Connectivity. (Chapter 18, Troubleshooting)
Additional section affected: "Types of Teams" on page 234.
Table of Contents
Preface
  Intended Audience . . . xxi
  What Is in This Guide . . . xxi
  Related Materials . . . xxii
  Documentation Conventions . . . xxiii
  License Agreements . . . xxiv
  Technical Support . . . xxv
  Downloading Updates . . . xxv
  Training . . . xxvi
  Contact Information . . . xxvi
  Knowledge Database . . . xxvi
  Legal Notices . . . xxvii
  Warranty . . . xxvii
  Laser Safety . . . xxvii
  FDA Notice . . . xxvii
  Agency Certification . . . xxvii
  EMI and EMC Requirements . . . xxvii
  Product Safety Compliance . . . xxviii
1 Product Overview
  Functional Description . . . 1
  Features . . . 2
  iSCSI . . . 4
  FCoE . . . 5
  Power Management . . . 5
  Adaptive Interrupt Frequency . . . 5
  ASIC with Embedded RISC Processor . . . 5
  Adapter Management . . . 6
  QLogic Control Suite CLI . . . 6
  QLogic QConvergeConsole Graphical User Interface . . . 6
  QLogic QConvergeConsole vCenter Plug-In . . . 6
  QLogic FastLinQ ESXCLI VMware Plug-In . . . 6
  Supported Operating Environments . . . 7
  Adapter Specifications . . . 7
  Physical Characteristics . . . 7
  Standards Specifications . . . 7
2 Installing the Hardware
  System Requirements . . . 8
  Hardware Requirements . . . 8
  Operating System Requirements . . . 8
  Safety Precautions . . . 9
  Preinstallation Checklist . . . 9
  Installation of the Network Adapter . . . 10
  Connecting the Network Cables . . . 11
3 Multi-boot Agent (MBA) Driver Software
  Overview . . . 12
  Setting Up MBA in a Client Environment . . . 13
  Enabling the MBA Driver . . . 13
  Configuring the MBA Driver . . . 13
  Setting Up the BIOS . . . 15
  Setting Up MBA in a Server Environment . . . 15
  Red Hat Linux PXE Server . . . 15
  MS-DOS UNDI/Intel APITEST . . . 16
4 Windows Driver Software
  Installing the Driver Software . . . 17
  Using the Installer . . . 18
  Using Silent Installation . . . 19
  Manually Extracting the Device Drivers . . . 20
  Removing the Device Drivers . . . 21
  Installing QLogic Management Applications . . . 21
  Viewing or Changing the Adapter Properties . . . 21
  Setting Power Management Options . . . 22
5 Linux Driver Software
  Introduction . . . 25
  Limitations . . . 25
  bnx2x Driver . . . 26
  bnx2i Driver . . . 26
  bnx2fc Driver . . . 26
  Packaging . . . 26
  Installing Linux Driver Software . . . 27
  Installing the Source RPM Package . . . 28
  Installing the KMP Package . . . 31
  Building the Driver from the Source TAR File . . . 31
  Load and Run Necessary iSCSI Software Components . . . 32
  Unloading/Removing the Linux Driver . . . 33
  Unloading/Removing the Driver from an RPM Installation . . . 33
  Removing the Driver from a TAR Installation . . . 33
  Uninstalling the QCC GUI . . . 34
  Patching PCI Files (Optional) . . . 34
  Network Installations . . . 34
  Setting Values for Optional Properties . . . 35
  bnx2x Driver . . . 35
  disable_tpa . . . 35
  int_mode . . . 35
  dropless_fc . . . 36
  disable_iscsi_ooo . . . 36
  multi_mode . . . 36
  num_queues . . . 36
  pri_map . . . 37
  qs_per_cos . . . 37
  cos_min_rate . . . 37
  bnx2i Driver . . . 38
  error_mask1 and error_mask2 . . . 38
  en_tcp_dack . . . 38
  time_stamps . . . 38
  sq_size . . . 39
  rq_size . . . 39
  event_coal_div . . . 39
  last_active_tcp_port . . . 39
  ooo_enable . . . 39
  bnx2fc Driver . . . 40
  debug_logging . . . 40
  Driver Defaults . . . 40
  bnx2 Driver . . . 40
  bnx2x Driver . . . 41
  Driver Messages . . . 42
  bnx2x Driver . . . 42
  Driver Sign On . . . 42
  CNIC Driver Sign On (bnx2 only) . . . 42
  NIC Detected . . . 42
  Link Up and Speed Indication . . . 42
  Link Down Indication . . . 42
  MSI-X Enabled Successfully . . . 42
  bnx2i Driver . . . 42
  BNX2I Driver Signon . . . 42
  Network Port to iSCSI Transport Name Binding . . . 43
  Driver Completes Handshake with iSCSI Offload-enabled CNIC Device . . . 43
  Driver Detects iSCSI Offload Is Not Enabled on the CNIC Device . . . 43
  Exceeds Maximum Allowed iSCSI Connection Offload Limit . . . 43
  Network Route to Target Node and Transport Name Binding Are Two Different Devices . . . 43
  Target Cannot Be Reached on Any of the CNIC Devices . . . 43
  Network Route Is Assigned to Network Interface, Which Is Down . . . 43
  SCSI-ML Initiated Host Reset (Session Recovery) . . . 43
  CNIC Detects iSCSI Protocol Violation - Fatal Errors . . . 44
  CNIC Detects iSCSI Protocol Violation - Non-FATAL, Warning . . . 45
  Driver Puts a Session Through Recovery . . . 45
  Reject iSCSI PDU Received from the Target . . . 45
  Open-iSCSI Daemon Handing Over Session to Driver . . . 45
  bnx2fc Driver . . . 45
  BNX2FC Driver Signon . . . 45
  Driver Completes Handshake with FCoE Offload Enabled CNIC Device . . . 45
  Driver Fails Handshake with FCoE Offload Enabled CNIC Device . . . 45
  No Valid License to Start FCoE . . . 46
  Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits . . . 46
  Session Offload Failures . . . 46
  Session Upload Failures . . . 46
  Unable to Issue ABTS . . . 46
  Unable to Recover the IO Using ABTS (Due to ABTS Timeout) . . . 46
  Unable to Issue IO Request Due to Session Not Ready . . . 46
  Drop Incorrect L2 Receive Frames . . . 46
  HBA/lport Allocation Failures . . . 46
  NPIV Port Creation . . . 46
  Teaming with Channel Bonding . . . 47
  Statistics . . . 47
6 VMware Driver Software
  Packaging . . . 48
  Download, Install, and Update Drivers . . . 49
  Networking Support . . . 52
  Driver Parameters . . . 52
  int_mode . . . 52
  disable_tpa . . . 52
  num_rx_queues . . . 52
  num_tx_queues . . . 53
  pri_map . . . 53
  qs_per_cos . . . 53
  cos_min_rate . . . 53
  dropless_fc . . . 54
  RSS . . . 54
  max_vfs . . . 54
  enable_vxlan_offld . . . 54
  Driver Defaults . . . 54
  Unloading and Removing Driver . . . 55
  Driver Messages . . . 55
  Driver Sign On . . . 55
  NIC Detected . . . 55
  MSI-X Enabled Successfully . . . 55
  Link Up and Speed Indication . . . 55
  Link Down Indication . . . 55
  Memory Limitation . . . 55
  MultiQueue/NetQueue . . . 56
  FCoE Support . . . 56
  Enabling FCoE . . . 56
  Installation Check . . . 58
  Limitations . . . 58
  Drivers . . . 58
  Supported Distributions . . . 58
7 Firmware Upgrade
  Upgrading Firmware for Windows . . . 59
  Upgrading Firmware for Linux . . . 62
8 iSCSI Protocol
  iSCSI Boot . . . 68
  Supported Operating Systems for iSCSI Boot . . . 68
  iSCSI Boot Setup . . . 69
  Configuring the iSCSI Target . . . 69
  Configuring iSCSI Boot Parameters . . . 69
  MBA Boot Protocol Configuration . . . 71
  iSCSI Boot Configuration . . . 73
  Enabling CHAP Authentication . . . 79
  Configuring the DHCP Server to Support iSCSI Boot . . . 79
  DHCP iSCSI Boot Configurations for IPv4 . . . 79
  DHCP iSCSI Boot Configuration for IPv6 . . . 81
  Configuring the DHCP Server . . . 82
  Preparing the iSCSI Boot Image . . . 83
  Booting . . . 92
  Configuring VLANs for iSCSI Boot . . . 93
  Other iSCSI Boot Considerations . . . 94
  Changing the Speed and Duplex Settings in Windows Environments . . . 94
  Virtual LANs . . . 95
  The 'dd' Method of Creating an iSCSI Boot Image . . . 95
  Troubleshooting iSCSI Boot . . . 96
  iSCSI Crash Dump . . . 97
  iSCSI Offload in Windows Server . . . 98
  iSCSI Offload Limitations . . . 98
  Configuring iSCSI Offload . . . 98
  Installing QLogic Drivers . . . 99
  Installing the Microsoft iSCSI Initiator . . . 99
  Configure Microsoft Initiator to Use QLogic's iSCSI Offload . . . 99
  iSCSI Offload FAQs . . . 105
  Event Log Messages . . . 105
  iSCSI Offload in Linux Server . . . 110
  Open iSCSI User Applications . . . 110
  User Application - qlgc_iscsiuio . . . 111
  Bind iSCSI Target to QLogic iSCSI Transport Name . . . 111
  VLAN Configuration for iSCSI Offload (Linux) . . . 112
  Modifying the iSCSI iface File . . . 112
  Setting the VLAN ID on the Ethernet Interface . . . 112
  Making Connections to iSCSI Targets . . . 113
  Add Static Entry . . . 113
  iSCSI Target Discovery Using 'SendTargets' . . . 113
  Login to Target Using 'iscsiadm' Command . . . 113
  List All Sessions . . . 114
  List All Drives Active in the System . . . 114
  Maximum Offload iSCSI Connections . . . 114
  Linux iSCSI Offload FAQ . . . 114
  iSCSI Offload on VMware Server . . . 114
9 Fibre Channel Over Ethernet
  Overview . . . 117
  FCoE Boot from SAN . . . 118
  Preparing System BIOS for FCoE Build and Boot . . . 118
  Modify System Boot Order . . . 118
  Specify BIOS Boot Protocol (if required) . . . 118
  Prepare QLogic Multiple Boot Agent for FCoE Boot . . . 119
  UEFI Boot LUN Scanning . . . 123
  Provisioning Storage Access in the SAN . . . 125
  Pre-Provisioning . . . 125
  CTRL+R Method . . . 125
  One-Time Disabled . . . 126
  Windows Server 2008 R2 and Windows Server 2008 SP2 FCoE Boot Installation . . . 128
  Windows Server 2012/2012 R2 FCoE Boot Installation . . . 130
  Linux FCoE Boot Installation . . . 131
  SLES11 SP2 Installation . . . 131
  RHEL6 Installation . . . 137
  Linux: Adding Additional Boot Paths . . . 142
  VMware ESXi FCoE Boot Installation . . . 144
  Configuring FCoE Boot from SAN on VMware . . . 148
  Booting from SAN After Installation . . . 149
  Driver Upgrade on Linux Boot from SAN Systems . . . 149
  Errors During Windows FCoE Boot from SAN Installation . . . 150
  Configuring FCoE . . . 151
10 NIC Partitioning and Bandwidth Management
  Overview . . . 152
  Supported Operating Systems for NIC Partitioning . . . 152
  Configuring for NIC Partitioning . . . 153
  Configuration Parameters . . . 153
  Number of Partitions . . . 153
  Network MAC Address . . . 153
  iSCSI MAC Address . . . 153
  Flow Control . . . 154
  Physical Link Speed . . . 154
  Relative Bandwidth Weight (%) . . . 154
  Maximum Bandwidth (%) . . . 154
11 Virtual LANs in Windows
  VLAN Overview . . . 156
  Adding VLANs to Teams . . . 158
12 SR-IOV
  Overview . . . 159
  Enabling SR-IOV . . . 159
  SR-IOV and Storage . . . 161
  SR-IOV and Jumbo Packets . . . 161
13 Microsoft Virtualization with Hyper-V
  Supported Features . . . 163
  Single Network Adapter . . . 164
  Windows Server 2008 . . . 164
  Windows Server 2008 R2 and 2012 . . . 164
  Teamed Network Adapters . . . 165
  Windows Server 2008 . . . 166
  Windows Server 2008 R2 . . . 167
  Configuring VMQ with SLB Teaming . . . 167
  Upgrading Windows Operating Systems . . . 168
14 Data Center Bridging (DCB)
  Overview . . . 169
  DCB Capabilities . . . 170
  Enhanced Transmission Selection (ETS) . . . 170
  Priority Flow Control (PFC) . . . 170
  Data Center Bridging eXchange (DCBX) . . . 170
  Configuring DCB . . . 171
  DCB Conditions . . . 171
  Data Center Bridging in Windows Server 2012 . . . 172
15 QLogic Teaming Services
  Executive Summary . . . 174
  Glossary . . . 175
  Teaming Concepts . . . 176
  Network Addressing . . . 177
  Teaming and Network Addresses . . . 177
  Description of Teaming Types . . . 178
  Software Components . . . 183
  Hardware Requirements . . . 183
  Repeater Hub . . . 184
  Switching Hub . . . 184
  Router . . . 184
  Teaming Support by Processor . . . 184
  Configuring Teaming . . . 185
  Supported Features by Team Type . . . 185
  Selecting a Team Type . . . 187
  Teaming Mechanisms . . . 188
  Architecture . . . 189
  Outbound Traffic Flow . . . 190
  Inbound Traffic Flow (SLB Only) . . . 190
  Protocol Support . . . 191
  Performance . . . 192
  Types of Teams . . . 192
  Switch-Independent . . . 192
  Switch-Dependent . . . 193
  LiveLink . . . 196
  Attributes of the Features Associated with Each Type of Team . . . 196
  Speeds Supported for Each Type of Team . . . 199
  Teaming and Other Advanced Networking Properties . . . 199
  Checksum Offload . . . 200
  IEEE 802.1p QoS Tagging . . . 200
  Large Send Offload . . . 201
  Jumbo Frames . . . 201
  IEEE 802.1Q VLANs . . . 201
  Preboot Execution Environment . . . 202
  General Network Considerations . . . 202
  Teaming with Microsoft Virtual Server 2005 . . . 202
  Teaming Across Switches . . . 203
  Switch-Link Fault Tolerance . . . 203
  Spanning Tree Algorithm . . . 207
  Topology Change Notice (TCN) . . . 208
  Port Fast/Edge Port . . . 208
  Layer 3 Routing/Switching . . . 208
  Teaming with Hubs (for troubleshooting purposes only) . . . 209
  Hub Usage in Teaming Network Configurations . . . 209
  SLB Teams . . . 209
  SLB Team Connected to a Single Hub . . . 210
  Generic and Dynamic Trunking (FEC/GEC/IEEE 802.3ad) . . . 210
  Teaming with Microsoft NLB . . . 211
  Application Considerations . . . 211
  Teaming and Clustering . . . 211
  Microsoft Cluster Software . . . 211
  High-Performance Computing Cluster . . . 213
  Oracle . . . 214
  Teaming and Network Backup . . . 215
  Load Balancing and Failover . . . 216
  Fault Tolerance . . . 217
  Troubleshooting Teaming Problems . . . 219
  Teaming Configuration Tips . . . 219
  Troubleshooting Guidelines . . . 220
  Frequently Asked Questions . . . 221
  Event Log Messages . . . 224
  Windows System Event Log Messages . . . 224
  Base Driver (Physical Adapter/Miniport) . . . 224
  Intermediate Driver (Virtual Adapter/Team) . . . 228
  Virtual Bus Driver . . . 230
16 Configuring Teaming in Windows Server
  QLASP Overview . . . 233
  Load Balancing and Fault Tolerance . . . 234
  Types of Teams . . . 234
  Smart Load Balancing and Failover . . . 235
  Link Aggregation (802.3ad) . . . 235
  Generic Trunking (FEC/GEC)/802.3ad-Draft Static . . . 236
  SLB (Auto-Fallback Disable) . . . 236
  Limitations of Smart Load Balancing and Failover/SLB (Auto-Fallback Disable) Types of Teams . . . 237
  Teaming and Large Send Offload/Checksum Offload Support . . . 238
17 User Diagnostics in DOS
  Introduction . . . 239
  System Requirements . . . 239
  Performing Diagnostics . . . 240
  Diagnostic Test Descriptions . . . 242
18 Troubleshooting
  Hardware Diagnostics . . . 248
  QCC GUI Diagnostic Tests Failures . . . 248
  QCC Network Test Failures . . . 249
  Checking Port LEDs . . . 249
  Troubleshooting Checklist . . . 250
  Checking if Current Drivers are Loaded . . . 250
  Windows . . . 250
  Linux . . . 251
  Possible Problems and Solutions . . . 252
  Multi-boot Agent . . . 252
  QLASP . . . 252
  Linux . . . 253
  NPAR . . . 255
  Miscellaneous . . . 255
A Adapter LEDS
List of Figures
3-1  MBA Configuration Menu . . . 14
4-1  Power Management . . . 22
6-1  Selecting an Adapter . . . 49
6-2  QLE3442 Driver Versions . . . 49
6-3  PCI Identifiers . . . 50
6-4  List of Driver Packages . . . 50
6-5  Download Driver Package . . . 51
8-1  QLogic 577xx/578xx Ethernet Boot Agent . . . 71
8-2  CCM Device List . . . 72
8-3  Selecting MBA Configuration . . . 72
8-4  Selecting the iSCSI Boot Protocol . . . 73
8-5  Selecting iSCSI Boot Configuration . . . 74
8-6  Selecting General Parameters . . . 74
8-7  Saving the iSCSI Boot Configuration . . . 76
8-8  Comprehensive Configuration Management . . . 93
8-9  Configuring VLANs—CCM Device List . . . 93
8-10  Configuring VLANs—Multiboot Agent Configuration . . . 93
8-11  Configuring iSCSI Boot VLAN . . . 94
8-12  Saving the iSCSI Boot VLAN Configuration . . . 94
8-13  iSCSI Initiator Properties . . . 99
8-14  iSCSI Initiator Node Name Change . . . 100
8-15  iSCSI Initiator—Add a Target Portal . . . 100
8-16  Target Portal IP Address . . . 101
8-17  Selecting the Local Adapter . . . 101
8-18  Selecting the Initiator IP Address . . . 102
8-19  Adding the Target Portal . . . 103
8-20  Logging on to the iSCSI Target . . . 104
8-21  Log On to Target Dialog Box . . . 104
8-22  Assigning a VLAN Number . . . 115
8-23  Configuring the VLAN on VMKernel . . . 116
9-1  FCoE Boot—CCM Device List . . . 119
9-2  FCoE Boot—Enable DCB/DCBX . . . 120
9-3  FCoE Boot—Select FCoE Boot Protocol . . . 120
9-4  FCoE Boot—Target Information . . . 121
9-5  FCoE Boot—Specify Target WWPN and Boot LUN . . . 122
9-6  FCoE Boot Target Information . . . 122
9-7  FCoE Boot Configuration Menu . . . 123
9-8  FCoE Target Parameters Window . . . 124
9-9  Selecting an FCoE WWPN . . . 124
9-10  One-time Disabled . . . 127
9-11  Load EVBD Driver . . . 128
9-12  Load bxfcoe Driver . . . 129
9-13  Selecting the FCoE Boot LUN . . . 129
9-14  SLES Boot Options Window . . . 131
9-15  Choosing Driver Update Medium . . . 132
9-16  FCoE Reboot . . . 149
11-1  Example of Servers Supporting Multiple VLANs with Tagging . . . 156
15-1  Process for Selecting a Team Type . . . 187
15-2  Intermediate Driver . . . 189
15-3  Teaming Across Switches Without an Interswitch Link . . . 204
15-4  Teaming Across Switches With Interconnect . . . 205
15-5  Failover Event . . . 206
15-6  Team Connected to a Single Hub . . . 210
15-7  Clustering With Teaming Across One Switch . . . 212
15-8  Clustering With Teaming Across Two Switches . . . 214
15-9  Network Backup without Teaming . . . 215
15-10  Network Backup With SLB Teaming Across Two Switches . . . 218
List of Tables
2-1  100/1000BASE-T and 10GBASE-T Cable Specifications . . . 11
4-1  Windows Operating Systems and iSCSI Crash Dump . . . 19
5-1  QLogic 8400/3400 Series Linux Drivers . . . 25
6-1  VMware Driver Packaging . . . 48
6-2  QLogic 8400/3400 Series FCoE Drivers . . . 58
8-1  Configuration Options . . . 70
8-2  DHCP Option 17 Parameter Definition . . . 80
8-3  DHCP Option 43 Suboption Definition . . . 81
8-4  DHCP Option 17 Suboption Definition . . . 82
8-5  Offload iSCSI (OIS) Driver Event Log Messages . . . 105
11-1  Example VLAN Network Topology . . . 157
13-1  Configurable Network Adapter Hyper-V Features . . . 163
13-2  Configurable Teamed Network Adapter Hyper-V Features . . . 165
15-1  Glossary . . . 175
15-2  Available Teaming Types . . . 178
15-3  Comparison of Team Types . . . 185
15-4  Attributes . . . 196
15-5  Link Speeds in Teaming . . . 199
15-6  Advanced Adapter Properties and Teaming Support . . . 199
15-7  Base Driver Event Log Messages . . . 224
15-8  Intermediate Driver Event Log Messages . . . 228
15-9  VBD Event Log Messages . . . 230
16-1  Smart Load Balancing . . . 237
17-1  uediag Command Options . . . 240
17-2  Diagnostic Tests . . . 242
A-1  Network Link and Activity Indicated by the RJ-45 Port LEDs . . . 257
A-2  Network Link and Activity Indicated by the Port LED . . . 257
Preface
NOTE
QLogic now supports QConvergeConsole® (QCC) GUI as the only GUI
management tool across all QLogic adapters. The QLogic Control Suite
(QCS) GUI is no longer supported for the 8400/3400 Series Adapters and
adapters based on 57xx/57xxx controllers, and has been replaced by the
QCC GUI management tool. The QCC GUI provides single-pane-of-glass
GUI management for all QLogic adapters.
In Windows environments, running the QCS CLI and the Management Agents
Installer uninstalls the QCS GUI (if installed on the system) and any related
components from your system. To obtain the new GUI, download
QCC GUI for your adapter from the QLogic Downloads Web page:
driverdownloads.qlogic.com
Intended Audience
This guide is intended for personnel responsible for installing and maintaining
computer networking equipment.
What Is in This Guide
This guide describes the features, installation, and configuration of the QLogic®
FastLinQ™ 8400/3400 Series Converged Network Adapters and Intelligent
Ethernet Adapters. The guide is organized as follows:

• Chapter 1, Product Overview provides a product functional description, a list of features, a list of supported operating systems, and the adapter specifications.

• Chapter 2, Installing the Hardware describes how to install the adapter, including the list of system requirements and a preinstallation checklist.

• Chapter 3, Multi-boot Agent (MBA) Driver Software describes the software module that allows your network computer to boot with the images provided by remote servers across the network.

• Chapter 4, Windows Driver Software describes Windows® driver installation and removal, QLogic management application installation, adapter properties management, and power management options.

• Chapter 5, Linux Driver Software describes the Linux® drivers.

• Chapter 6, VMware Driver Software describes the VMware® drivers.

• Chapter 7, Firmware Upgrade describes the installation and use of the firmware upgrade utility.

• Chapter 8, iSCSI Protocol describes iSCSI boot, iSCSI crash dump, and iSCSI offload for Windows, Linux, and VMware.

• Chapter 9, Fibre Channel Over Ethernet describes FCoE boot from SAN and booting from SAN after installation.

• Chapter 10, NIC Partitioning and Bandwidth Management describes the NPAR operating system requirements and the NPAR configuration parameters.

• Chapter 11, Virtual LANs in Windows describes the use of VLANs to divide the physical LAN into functional segments.

• Chapter 12, SR-IOV describes the use of Single-Root I/O Virtualization (SR-IOV) to virtualize network controllers and how to enable SR-IOV.

• Chapter 13, Microsoft Virtualization with Hyper-V describes the use of Microsoft® Hyper-V® for Windows Server 2008 and 2012.

• Chapter 14, Data Center Bridging (DCB) describes the DCB capabilities, configuration, and requirements.

• Chapter 15, QLogic Teaming Services describes the use of teaming to group multiple physical devices to provide fault tolerance and load balancing.

• Chapter 16, Configuring Teaming in Windows Server describes the teaming configuration for Windows Server® operating systems.

• Chapter 17, User Diagnostics in DOS describes the MS-DOS based application that runs diagnostic tests, updates device firmware, and manages adapter properties.

• Chapter 18, Troubleshooting describes a variety of troubleshooting methods and resources.

• Appendix A, Adapter LEDS describes the adapter LEDs and their significance.
Related Materials
For information about downloading documentation from the QLogic Web site, see
“Downloading Updates” on page xxv.
Documentation Conventions
This guide uses the following documentation conventions:

• NOTE provides additional information.

• CAUTION without an alert symbol indicates the presence of a hazard that could cause damage to equipment or loss of data.

• CAUTION with an alert symbol indicates the presence of a hazard that could cause minor or moderate injury.

• WARNING indicates the presence of a hazard that could cause serious injury or death.

• Text in blue font indicates a hyperlink (jump) to a figure, table, or section in this guide, and links to Web sites are shown in underlined blue. For example:
  - Table 9-2 lists problems related to the user interface and remote agent.
  - See "Installation Checklist" on page 6.
  - For more information, visit www.qlogic.com.

• Text in bold font indicates user interface elements such as menu items, buttons, check boxes, or column headings. For example:
  - Click the Start button, point to Programs, point to Accessories, and then click Command Prompt.
  - Under Notification Options, select the Warning Alarms check box.

• Text in Courier font indicates a file name, directory path, or command line text. For example:
  - To return to the root directory from anywhere in the file structure, type cd /root and press ENTER.
  - Enter the following command: sh ./install.bin

• Key names and key strokes are indicated with UPPERCASE:
  - Press CTRL+P.
  - Press the UP ARROW key.

• Text in italics indicates terms, emphasis, variables, or document titles. For example:
  - For a complete listing of license agreements, refer to the QLogic Software End User License Agreement.
  - What are shortcut keys?
  - To enter the date type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year).

• Topic titles between quotation marks identify related topics either within this manual or in the online help, which is also referred to as the help system throughout this document.
License Agreements
Refer to the QLogic Software End User License Agreement for a complete listing
of all license agreements affecting this product.
Technical Support
Customers should contact their authorized maintenance provider for technical
support of their QLogic products. QLogic-direct customers may contact QLogic
Technical Support; others will be redirected to their authorized maintenance
provider. Visit the QLogic support Web site listed in Contact Information for the
latest firmware and software updates.
For details about available service plans, or for information about renewing and
extending your service, visit the Service Program Web page at
http://www.qlogic.com/Support/Pages/ServicePrograms.aspx.
Downloading Updates
The QLogic Web site provides periodic updates to product firmware, software,
and documentation.
To download firmware, software, and documentation:
1. Go to the QLogic Downloads and Documentation page: driverdownloads.qlogic.com.
2. Type the QLogic model name in the search box.
3. In the search results list, locate and select the firmware, software, or documentation for your product.
4. View the product details Web page to ensure that you have the correct firmware, software, or documentation. For additional information, click Read Me and Release Notes under Support Files.
5. Click Download Now.
6. Save the file to your computer.
7. If you have downloaded firmware, software, drivers, or boot code, follow the installation instructions in the Readme file.
Instead of typing a model name in the search box, you can perform a guided
search as follows:
1. Click the product type tab: Adapters, Switches, Routers, or ASICs.
2. Click the corresponding button to search by model or operating system.
3. Click an item in each selection column to define the search, and then click Go.
4. Locate the firmware, software, or document you need, and then click the item’s name or icon to download or open the item.
Training
QLogic Global Training maintains a Web site at www.qlogictraining.com offering
online and instructor-led training for all QLogic products. In addition, sales and
technical professionals may obtain Associate and Specialist-level certifications to
qualify for additional benefits from QLogic.
Contact Information
QLogic Technical Support for products under warranty is available during local
standard working hours excluding QLogic Observed Holidays. For customers with
extended service, consult your plan for available hours. For Support phone
numbers, see the Contact Support link at support.qlogic.com.
Support Headquarters
QLogic Corporation
12701 Whitewater Drive
Minnetonka, MN 55343 USA
QLogic Web Site
www.qlogic.com
Technical Support Web Site
support.qlogic.com
Technical Support E-mail
support@qlogic.com
Technical Training E-mail
training@qlogic.com
Knowledge Database
The QLogic knowledge database is an extensive collection of QLogic product
information that you can search for specific solutions. QLogic is constantly adding
to the collection of information in the database to provide answers to your most
urgent questions. Access the database from the QLogic Support Center:
support.qlogic.com.
Legal Notices
Warranty
For warranty details, please check the QLogic Web site:
http://www.qlogic.com/Support/Pages/Warranty.aspx
Laser Safety
FDA Notice
This product complies with DHHS Rules 21 CFR Chapter I, Subchapter J. This
product has been designed and manufactured according to IEC 60825-1, as
indicated on the laser product safety label.
CLASS I LASER
Class 1 Laser Product
Appareil laser de classe 1
Produkt der Laser Klasse 1
Luokan 1 Laserlaite
Caution—Class 1 laser radiation when open. Do
not view directly with optical instruments
Attention—Radiation laser de classe 1. Ne pas
regarder directement avec des instruments
optiques.
Vorsicht—Laserstrahlung der Klasse 1 bei
geöffneter Abdeckung. Direktes Ansehen mit
optischen Instrumenten vermeiden.
Varoitus—Luokan 1 lasersäteilyä, kun laite on
auki. Älä katso suoraan laitteeseen käyttämällä
optisia instrumenttej.
Agency Certification
The following sections contain a summary of EMC and EMI test specifications
performed on the QLogic adapters to comply with emission and product safety
standards.
EMI and EMC Requirements
FCC Rules, CFR Title 47, Part 15, Subpart B:2013 Class A
This device complies with Part 15 of the FCC Rules. Operation is subject to the
following two conditions: (1) this device may not cause harmful interference, and
(2) this device must accept any interference received, including interference that
may cause undesired operation.
Industry Canada, ICES-003:2012 Class A
This Class A digital apparatus complies with Canadian ICES-003. Cet appareil
numérique de la classe A est conforme à la norme NMB-003 du Canada.
CE Mark 2004/108/EC EMC Directive Compliance
EN55022:2010 Class A1:2007/CISPR22:2009+A1:2010 Class A
EN55024:2010
EN61000-3-2:2006 A1 +A2:2009: Harmonic Current Emission
EN61000-3-3:2008: Voltage Fluctuation and Flicker
VCCI
VCCI:2012-04; Class A
AS/NZS CISPR22
AS/NZS; CISPR 22:2009+A1:2010 Class A
KCC
KC-RRA KN22 KN24(2013) Class A
Product Safety Compliance
UL, cUL product safety:
UL60950-1 (2nd Edition), 2007
UL CSA C22.2 60950-1-07 (2nd Edition) 2007
Use only with listed ITE or equivalent.
Complies with 21 CFR 1040.10 and 1040.11.
2006/95/EC low voltage directive:
TUV EN60950-1:2006+A11+A1+A12 2nd edition
TUV IEC60950-1:2006 2nd Edition Am 1:2009 CB
1
Product Overview

Functional Description

Features

Supported Operating Environments

Adapter Management

Adapter Specifications
Functional Description
The QLogic 8400/3400 Series adapters are based on a new class of Gigabit
Ethernet (GbE) and 10GbE converged network interface controller (C-NIC) that
can simultaneously perform accelerated data networking and storage networking
on a standard Ethernet network. The C-NIC offers acceleration for popular
protocols used in the data center, such as:

Internet Small Computer Systems Interface (iSCSI) offload for accelerating
network storage access featuring centralized boot (iSCSI boot)

Fibre Channel over Ethernet (FCoE) offload and acceleration for Fibre
Channel block storage
NOTE
Not all adapters support each listed protocol. For information about
supported protocols, refer to the product data sheet at www.qlogic.com
under Resources.
Enterprise networks that use multiple protocols and multiple network fabrics
benefit from the network adapter’s ability to combine data communications,
storage, and clustering over a single Ethernet fabric, improving server CPU
processing efficiency and memory utilization while alleviating I/O bottlenecks.
The QLogic 8400/3400 Series adapters include a 100/1000Mbps or 10Gbps
Ethernet MAC with both half-duplex and full-duplex capability and a
100/1000Mbps or 10Gbps PHY. The transceiver is fully compatible with the IEEE
802.3 standard for auto-negotiation of speed.
Using the QLogic teaming software, you can split your network into virtual LANs
(VLANs) and group multiple network adapters together into teams to provide
network load balancing and fault tolerance. See Chapter 15, QLogic Teaming
Services and Chapter 16, Configuring Teaming in Windows Server for detailed
information about teaming. See Chapter 11, Virtual LANs in Windows for a
description of VLANs. See “Configuring Teaming” on page 185 for instructions on
configuring teaming and creating VLANs on Windows operating systems.
Features
The following is a list of QLogic 8400/3400 Series adapter features. Some
features may not be available on all adapters.

iSCSI offload

FCoE offload

NIC partitioning (NPAR)

Data center bridging (DCB)

Enhanced transmission selection (ETS; IEEE 802.1Qaz)

Priority-based flow control (PFC; IEEE 802.1Qbb)

Data center bridging capability exchange protocol (DCBX; CEE
version 1.01)

Single-chip solution (excluding QLE3442-RJ)

10/100/1000G triple-speed MAC (QLE3442-RJ)

1G/10G triple-speed MAC

SerDes interface for optical transceiver connection

PCIe® Gen3 x8 (10GE)

Zero copy capable hardware

Other offload performance features

TCP, IP, user datagram protocol (UDP) checksum

TCP segmentation

Adaptive interrupts

Receive side scaling (RSS)


Manageability

QConvergeConsole GUI. See the QConvergeConsole GUI Installation
Guide, QConvergeConsole GUI online help and the QLogic Control
Suite Command Line Interface User’s Guide for more information.

QConvergeConsole Plug-ins for vSphere® through VMware vCenter™
Server software. For more information, see the QConvergeConsole
Plug-ins for vSphere User's Guide.

Supports the preboot execution environment (PXE) 1.0 and 2.0
specifications

Universal management port (UMP)

System management bus (SMBus) controller

Advanced configuration and power interface (ACPI) 1.1a compliant
(multiple power modes)

Intelligent platform management interface (IPMI) support
Advanced network features

Jumbo frames (up to 9,600 bytes). The OS and the link partner must
support jumbo frames.

Virtual LANs

IEEE Std 802.3ad Teaming

Smart Load Balancing™ (SLB) teaming

Flow control (IEEE Std 802.3x)

LiveLink™ (supported in both the 32-bit and 64-bit Windows operating
systems)

Logical link control (IEEE Std 802.2)

High-speed on-chip reduced instruction set computer (RISC) processor

Integrated 96KB frame buffer memory

Quality of service (QoS)

Serial gigabit media independent interface (SGMII)/
Gigabit media independent interface (GMII)/
Media independent interface (MII)

256 unique MAC unicast addresses

Support for multicast addresses through the 128-bit hashing hardware
function

Serial flash NVRAM memory

JTAG support

PCI power management interface (v1.1)

64-bit base address register (BAR) support

EM64T processor support

iSCSI and FCoE boot support

Virtualization


Microsoft®

VMware

Linux®

XenServer®
Single root I/O virtualization (SR-IOV)
iSCSI
The Internet engineering task force (IETF) has standardized iSCSI. SCSI is a
popular protocol that enables systems to communicate with storage devices,
using block-level transfer (address data stored on a storage device that is not a
whole file). iSCSI maps the SCSI request/response application protocols and its
standardized command set over TCP/IP networks.
As iSCSI uses TCP as its sole transport protocol, it greatly benefits from hardware
acceleration of the TCP processing. However, iSCSI as a layer 5 protocol has
additional mechanisms beyond the TCP layer. iSCSI processing can also be
offloaded, thereby reducing CPU use even further.
The QLogic 8400/3400 Series adapters target best-in-class system performance,
maintain system flexibility to accommodate changes, and support current and
future OS convergence and integration. Therefore, the adapter's iSCSI offload
architecture is unique because of the split between hardware and host processing.
FCoE
FCoE allows Fibre Channel protocol to be transferred over Ethernet. FCoE
preserves existing Fibre Channel infrastructure and capital investments. The
following FCoE features are supported:

Full stateful hardware FCoE offload

Receiver classification of FCoE and Fibre Channel initialization protocol
(FIP) frames. FIP is the FCoE initialization protocol used to establish and
maintain connections.

Receiver CRC offload

Transmitter CRC offload

Dedicated queue set for Fibre Channel traffic

DCB provides lossless behavior with PFC

DCB allocates a share of link bandwidth to FCoE traffic with ETS
Power Management
Wake on LAN (WOL) is not supported.
Adaptive Interrupt Frequency
The adapter driver intelligently adjusts host interrupt frequency based on traffic
conditions to increase overall application throughput. When traffic is light, the
adapter driver interrupts the host for each received packet, minimizing latency.
When traffic is heavy, the adapter issues one host interrupt for multiple,
back-to-back incoming packets, preserving host CPU cycles.
ASIC with Embedded RISC Processor
The core control for QLogic 8400/3400 Series adapters resides in a tightly
integrated, high-performance ASIC. The ASIC includes a RISC processor that
provides the flexibility to add new features to the card and adapt to future network
requirements through software downloads. In addition, the adapter drivers can
exploit the built-in host offload functions on the adapter as host operating systems
are enhanced to take advantage of these functions.
Adapter Management
The following applications are available to manage 8400/3400 Series Adapters:

QLogic Control Suite CLI

QLogic QConvergeConsole Graphical User Interface

QLogic QConvergeConsole vCenter Plug-In

QLogic FastLinQ ESXCLI VMware Plug-In
QLogic Control Suite CLI
The QCS CLI is a console application that you can run from a Windows command
prompt or a Linux terminal console. Use the QCS CLI to manage QLogic
8400/3400 Series Adapters or any QLogic adapter based on 57xx/57xxx
controllers on both local and remote computer systems. For information about
installing and using the QCS CLI, see the QLogic Control Suite CLI User’s Guide.
QLogic QConvergeConsole Graphical User Interface
The QCC GUI is a Web-based management tool for configuring and managing
QLogic Fibre Channel adapters and Intelligent Ethernet adapters. You can use the
QCC GUI on Windows and Linux platforms to manage QLogic 8400/3400 Series
Adapters on both local and remote computer systems. For information about
installing the QCC GUI, see the QConvergeConsole GUI Installation Guide. For
information about using the QCC GUI, see the online help.
QLogic QConvergeConsole vCenter Plug-In
The QCC vCenter Plug-In is a Web-based management tool that is integrated into
the VMware vCenter Server for configuring and managing QLogic Fibre Channel
adapters and Intelligent Ethernet adapters in a virtual environment. You can use
the vCenter Plug-In with VMware vSphere clients to manage QLogic 8400/3400 Series
Intelligent Ethernet Adapters. For information about installing and using the
vCenter Plug-in, see the QConvergeConsole Plug-ins for vSphere User’s Guide.
QLogic FastLinQ ESXCLI VMware Plug-In
The QLogic FastLinQ ESXCLI VMware plug-in extends the capabilities of the
ESX® command line interface to manage QLogic 3400, 8400, and 45000 Series
Adapters installed in VMware ESX/ESXi hosts. For information about using the
ESXCLI Plug-In, see the QLogic FastLinQ ESXCLI VMware Plug-in User’s Guide.
Supported Operating Environments
The QLogic 8400/3400 Series adapters support several operating systems
including Windows, Linux (RHEL®, SUSE®, Ubuntu®, CentOSSM)1, VMware ESXi
Server®, and Citrix® XenServer. For a complete list of supported operating
systems and versions, go to driverdownloads.qlogic.com and search for your
adapter type, model, or operating system.
Adapter Specifications
Physical Characteristics
The QLogic 8400/3400 Series Adapters are implemented as low-profile PCIe
cards. The adapters ship with a full-height bracket for use in a standard PCIe slot
or an optional spare low-profile bracket for use in a low-profile PCIe slot.
Low-profile slots are typically found in compact servers.
Standards Specifications

IEEE 802.3ae (10Gb Ethernet)

IEEE 802.1q (VLAN)

IEEE 802.3ad (Link Aggregation)

IEEE 802.3x (Flow Control)

IPv4 (RFC 791)

IPv6 (RFC 2460)

IEEE 802.1Qbb (Priority-based Flow Control)

IEEE 802.1Qaz (data center bridging exchange (DCBX) and enhanced
transmission selection [ETS])

IEEE 802.3an 10GBASE-T2

IEEE 802.3ab 1000BASE-T2

IEEE 802.3u 100BASE-TX2
1. Ubuntu and CentOS operating systems are supported only on 3400 Series adapters.
2. 3400 Series Adapters only.
2
Installing the Hardware

System Requirements

Safety Precautions

Preinstallation Checklist

Installation of the Network Adapter

Connecting the Network Cables
System Requirements
Before you install a QLogic 8400/3400 Series adapter, verify that your system
meets the following hardware and operating system requirements:
Hardware Requirements

IA32- or EM64T-based computer that meets operating system requirements

One open PCI Express slot. Depending on the PCI Express support on your
adapter, the slot may be of type

PCI Express 1.0a x1

PCI Express 1.0a x4

PCI Express Gen2 x8

PCI Express Gen3 x8
Full dual-port 10Gbps bandwidth is supported on PCI Express Gen2 x8 or
faster slots.

128MB RAM (minimum)
Operating System Requirements
For a complete list of supported operating systems and versions, go to
driverdownloads.qlogic.com and search for your adapter type, model, or operating
system.
Safety Precautions
WARNING
The adapter is being installed in a system that operates with voltages that
can be lethal. Before you open the case of your system, observe the
following precautions to protect yourself and to prevent damage to the
system components.
 Remove any metallic objects or jewelry from your hands and wrists.
 Make sure to use only insulated or nonconducting tools.
 Verify that the system is powered OFF and is unplugged before you
touch internal components.
 Install or remove adapters in a static-free environment. The use of a
properly grounded wrist strap or other personal antistatic devices and an
antistatic mat is strongly recommended.
Preinstallation Checklist
1. Verify that your system meets the hardware and software requirements listed under System Requirements.
2. Verify that your system is using the latest BIOS.
NOTE
If you acquired the adapter software on a disk or from the QLogic Web
site (driverdownloads.qlogic.com), verify the path to the adapter driver
files.
3. If your system is active, shut it down.
4. When system shutdown is complete, turn off the power and unplug the power cord.
5. Remove the adapter from its shipping package and place it on an antistatic surface.
6. Check the adapter for visible signs of damage, particularly on the edge connector. Never attempt to install a damaged adapter.
Installation of the Network Adapter
The following instructions apply to installing the QLogic 8400/3400 Series
adapters in most systems. Refer to the manuals that were supplied with your
system for details about performing these tasks on your particular system.
1. Review “Safety Precautions” on page 9 and “Preinstallation Checklist” on page 9. Before you install the adapter, ensure that the system power is OFF, the power cord is unplugged from the power outlet, and that you are following proper electrical grounding procedures.
2. Open the system case and select the slot based on the adapter, which may be of type PCIe 1.0a x1, PCIe 1.0a x4, PCIe Gen2 x8, PCIe Gen3 x8, or other appropriate slot. A lesser-width adapter can be seated into a greater-width slot (x8 in a x16), but a greater-width adapter cannot be seated into a lesser-width slot (x8 in a x4). If you do not know how to identify a PCI Express slot, refer to your system documentation.
3. Remove the blank cover-plate from the slot that you selected.
4. Align the adapter connector edge with the PCI Express connector slot in the system.
5. Applying even pressure at both corners of the card, push the adapter card into the slot until it is firmly seated. When the adapter is properly seated, the adapter port connectors are aligned with the slot opening, and the adapter faceplate is flush against the system chassis.
CAUTION
Do not use excessive force when seating the card, as this may damage the system or the adapter. If you have difficulty seating the adapter, remove it, realign it, and try again.
6. Secure the adapter with the adapter clip or screw.
7. Close the system case and disconnect any personal antistatic devices.
Connecting the Network Cables
The QLogic 8400/3400 Series adapters have either an RJ-45 connector used for
attaching the system to an Ethernet copper-wire segment, or a fiber optic
connector for attaching the system to an Ethernet fiber optic segment.
NOTE
The QLogic 3442-RJ adapter supports Automatic MDI Crossover (MDIX),
which eliminates the need for crossover cables when connecting machines
back-to-back. A straight-through Category 5/5e/6/6A/7 cable allows the
machines to communicate when connected directly together.
1. Select an appropriate cable. Table 2-1 lists the copper cable requirements for connecting to 100/1000BASE-T and 10GBASE-T ports.

Table 2-1. 100/1000BASE-T and 10GBASE-T Cable Specifications

Port Type           Connector   Media                 Maximum Distance
100/1000BASE-T a    RJ-45       Category 5 b UTP      100m (328 ft)
10GBASE-T           RJ-45       Category 6 c UTP      40m (131 ft)
                                Category 6A/7 c UTP   100m (328 ft)

a. 1000BASE-T signaling requires four twisted pairs of Category 5 balanced cabling, as specified in ISO/IEC 11801:2002 and ANSI/EIA/TIA-568-B.
b. Category 5 is the minimum requirement. Categories 5e, 6, 6a, and 7 are fully supported.
c. 10GBASE-T signaling requires four twisted pairs of Category 6 or Category 6A (augmented Category 6) balanced cabling, as specified in ISO/IEC 11801:2002 and ANSI/TIA/EIA-568-B.

2. Connect one end of the cable to the RJ-45 connector on the adapter.
3. Connect the other end of the cable to an RJ-45 Ethernet network port.
The 8400/3400 Series Adapters also support direct attach copper cables.
3
Multi-boot Agent (MBA) Driver Software

Overview

Setting Up MBA in a Client Environment

Setting Up MBA in a Server Environment
Overview
QLogic 8400/3400 Series adapters support Preboot Execution Environment
(PXE), Remote Program Load (RPL), iSCSI, and Bootstrap Protocol (BootP).
Multi-Boot Agent (MBA) is a software module that allows your network computer
to boot with the images provided by remote servers across the network. The
QLogic MBA driver complies with the PXE 2.1 specification and is released with
split binary images. This provides flexibility to users in different environments
where the motherboard may or may not have built-in base code.
The MBA module operates in a client/server environment. A network consists of
one or more boot servers that provide boot images to multiple computers through
the network. The QLogic implementation of the MBA module has been tested
successfully in the following environments:

Linux Red Hat® PXE Server. QLogic PXE clients are able to remotely boot
and use network resources (NFS mount, and so forth) and to perform Linux
installations. In the case of a remote boot, the Linux universal driver binds
seamlessly with the QLogic Universal Network Driver Interface (UNDI) and
provides a network interface in the Linux remotely-booted client
environment.

Intel® APITEST. The QLogic PXE driver passes all API compliance test
suites.

MS-DOS UNDI. The MS-DOS UNDI seamlessly binds with the QLogic UNDI
to provide a network driver interface specification (NDIS2) interface
to the upper layer protocol stack. This allows computers to connect to
network resources in an MS-DOS environment.
Setting Up MBA in a Client Environment
Setting up MBA in a client environment involves the following steps:
1. Enabling the MBA driver.
2. Configuring the MBA driver.
3. Setting up the BIOS for the boot order.
Enabling the MBA Driver
To enable or disable the MBA driver:
1. Insert an MS-DOS 6.22 or a Real Mode Kernel bootable disk containing the uxdiag.exe file (for 10/100/1000Mbps network adapters) or uediag.exe (for 10Gbps network adapters) in the removable disk drive and power up your system.
NOTE
The uxdiag.exe (or uediag.exe) file is on the installation CD or in the DOS Utilities package available from driverdownloads.qlogic.com.
2. Type:
uxdiag -mba [ 0-disable | 1-enable ] -c devnum
(or uediag -mba [ 0-disable | 1-enable ] -c devnum)
where devnum is the specific device(s) number (0, 1, 2, …) to be programmed.
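For example, the following command (shown only as an illustration; substitute the device number reported for your adapter) enables MBA on the first device:
uxdiag -mba 1 -c 0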
Configuring the MBA Driver
This section describes the configuration of the MBA driver on add-in NIC models
of the QLogic network adapter using the Comprehensive Configuration
Management (CCM) utility. To configure the MBA driver on LOM models of the
QLogic network adapter, check your system documentation. Both the MBA driver
and the CCM utility reside on the adapter Flash memory.
You can use the CCM utility to configure the MBA driver one adapter at a time as
described in this section. To simultaneously configure the MBA driver for multiple
adapters, use the MS-DOS-based user diagnostics application described in
“Performing Diagnostics” on page 240. For more information about the CCM
utility, see the Comprehensive Configuration Management User’s Guide.
1. Restart your system.
2. Press CTRL+S within four seconds after you are prompted to do so. A list of adapters displays.
a. Select the adapter to configure and press ENTER. The Main Menu displays.
b. Select MBA Configuration to display the MBA Configuration menu (Figure 3-1).
Figure 3-1. MBA Configuration Menu
3. Use the UP ARROW and DOWN ARROW keys to move to the Boot Protocol menu item. Then use the RIGHT ARROW or LEFT ARROW key to select the boot protocol of choice if other boot protocols besides PXE are available. If available, other boot protocols include Remote Program Load (RPL), iSCSI, and BOOTP.
NOTE
 For iSCSI boot-capable LOMs, the boot protocol is set through the BIOS. See your system documentation for more information.
 If you have multiple adapters in your system and you are unsure which adapter you are configuring, press CTRL+F6, which causes the port LEDs on the adapter to start blinking.
4. Use the UP ARROW, DOWN ARROW, LEFT ARROW, and RIGHT ARROW keys to move to and change the values for other menu items, as desired.
5. Press F4 to save your settings.
6. Press ESC when you are finished.
Setting Up the BIOS
To boot from the network with the MBA, make the MBA enabled adapter the first
bootable device under the BIOS. This procedure depends on the system BIOS
implementation. Refer to the user manual for the system for instructions.
Setting Up MBA in a Server Environment
Red Hat Linux PXE Server
The Red Hat Enterprise Linux distribution has PXE Server support. It allows users
to remotely perform a complete Linux installation over the network. The
distribution comes with the boot images boot kernel (vmlinuz) and initial ram disk
(initrd), which are located on the Red Hat disk#1:
/images/pxeboot/vmlinuz
/images/pxeboot/initrd.img
Refer to the Red Hat documentation for instructions on how to install PXE Server
on Linux.
The Initrd.img file distributed with Red Hat Enterprise Linux, however, does
not have a Linux network driver for the QLogic 8400/3400 Series adapters. This
version requires a driver disk for drivers that are not part of the standard
distribution. You can create a driver disk for the QLogic 8400/3400 Series
adapters from the image distributed with the installation CD. Refer to the Linux
Readme.txt file for more information.
MS-DOS UNDI/Intel APITEST
To boot in MS-DOS mode and connect to a network for the MS-DOS environment,
download the Intel PXE PDK from the Intel website. This PXE PDK comes with a
TFTP/ProxyDHCP/Boot server. The PXE PDK can be downloaded from Intel at
http://downloadcenter.intel.com/SearchResult.aspx?lang=eng&ProductFamily=Network+Connectivity&ProductLine=Boot+Agent+Software&ProductProduct=Intel%c2%ae+Boot+Agent.
4
Windows Driver Software

Installing the Driver Software

Removing the Device Drivers

Installing QLogic Management Applications

Viewing or Changing the Adapter Properties

Setting Power Management Options
NOTE
QLogic now supports QConvergeConsole GUI as the only GUI management
tool across all QLogic adapters. The QLogic Control Suite (QCS) GUI is no
longer supported for the 8400/3400 Series Adapters and adapters based on
57xx/57xxx controllers, and has been replaced by the QCC GUI
management tool. The QCC GUI provides single-pane-of-glass GUI
management for all QLogic adapters.
In Windows environments, when you run the QCS CLI and the Management
Agents Installer, it will uninstall the QCS GUI (if installed on the system) and
any related components from your system. To obtain the new GUI, download
QCC GUI for your adapter from the QLogic Downloads Web page:
driverdownloads.qlogic.com
Installing the Driver Software
NOTE
These instructions assume that your QLogic 8400/3400 Series adapter was
not factory installed. If your controller was installed at the factory, the driver
software has been installed for you.
When Windows first starts after a hardware device has been installed (such as a
QLogic 8400/3400 Series adapter), or after the existing device driver has been
removed, the operating system automatically detects the hardware and prompts
you to install the driver software for that device.
Both a graphical interactive installation mode (see “Using the Installer” on
page 18) and a command-line silent mode for unattended installation (see “Using
Silent Installation” on page 19) are available.
NOTE
 Before installing the driver software, verify that the Windows operating
system has been upgraded to the latest version with the latest service
pack applied.
 A network device driver must be physically installed before the QLogic
8400/3400 Series adapter can be used with your Windows operating
system. Drivers are located on the installation CD.
Using the Installer
If supported and if you will use the QLogic iSCSI Crash Dump utility, it is important
to follow the installation sequence:

Run the installer

Install the Microsoft iSCSI Software Initiator along with the patch
To install the QLogic 8400/3400 Series drivers
1. When the Found New Hardware Wizard appears, click Cancel.
2. Insert the installation CD into the CD or DVD drive.
3. On the installation CD, open the folder for your operating system, open the DrvInst folder, and then double-click Setup.exe to open the InstallShield Wizard.
4. Click Next to continue.
5. After you review the license agreement, click I accept the terms in the license agreement and then click Next to continue.
6. Click Install.
7. Click Finish to close the wizard.
8. The installer will determine if a system restart is necessary. Follow the on-screen instructions.
To install the Microsoft iSCSI Software Initiator for iSCSI Crash Dump
If supported and if you will use the QLogic iSCSI Crash Dump utility, it is important
to follow the installation sequence:

Run the installer

Install Microsoft iSCSI Software Initiator along with the patch (MS
KB939875)
NOTE
If performing an upgrade of the device drivers from the installer, re-enable
iSCSI Crash Dump from the Advanced section of the QCC Configuration
tab.
Perform this procedure after running the installer to install the device drivers.
1. Install Microsoft iSCSI Software Initiator (version 2.06 or later) if not included in your OS. To determine when you need to install the Microsoft iSCSI Software Initiator, see Table 4-1. To download the iSCSI Software Initiator from Microsoft, go to http://www.microsoft.com/downloads/en/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en.
2. Install the Microsoft patch for iSCSI crash dump file generation (Microsoft KB939875) from http://support.microsoft.com/kb/939875. To determine if you need to install the Microsoft patch, see Table 4-1.
Table 4-1. Windows Operating Systems and iSCSI Crash Dump

        Operating System          MS iSCSI Software       Microsoft Patch (MS
                                  Initiator Required      KB939875) Required
NDIS    Windows Server 2008       Yes (included in OS)    No
        Windows Server 2008 R2    Yes (included in OS)    No
OIS     Windows Server 2008       No                      No
        Windows Server 2008 R2    No                      No
Using Silent Installation
NOTE
 All commands are case sensitive.
 For detailed instructions and information about unattended installs, refer
to the silent.txt file in the folder.
To perform a silent install from within the installer source folder
Type the following:
setup /s /v/qn
To perform a silent upgrade from within the installer source folder
Type the following:
setup /s /v/qn
To perform a silent reinstall of the same installer
Type the following:
setup /s /v"/qn REINSTALL=ALL"
NOTE
The REINSTALL switch should only be used if the same installer is already
installed on the system. If upgrading an earlier version of the installer, use
setup /s /v/qn as listed above.
To perform a silent install to force a downgrade (default is NO)
Type the following:
setup /s /v"/qn DOWNGRADE=Y"
Manually Extracting the Device Drivers
To manually extract the Windows device drivers, type the following
command:
setup /a
This will run the setup utility, extract the drivers, and place them in the designated
location.
Removing the Device Drivers
Uninstall the QLogic 8400/3400 Series device drivers from your system only
through the InstallShield wizard. Uninstalling the device drivers with Device
Manager or any other means may not provide a clean uninstall and may cause the
system to become unstable.
NOTE
Windows Server 2008 and Windows Server 2008 R2 provide the Device
Driver Rollback feature to replace a device driver with one that was
previously installed. However, the complex software architecture of the
8400/3400 Series device may present problems if the rollback feature is
used on one of the individual components. Therefore, we recommend that
changes to driver versions be made only through the use of a driver installer.
To remove the device drivers, in Control Panel, double-click Add or Remove
Programs.
Installing QLogic Management Applications
1. Execute the setup file (setup.exe) to open the QLogic Management Programs installation wizard.
2. Accept the terms of the license agreement, and then click Next.
3. In the Custom Setup dialog box, review the components to be installed, make any necessary changes, and then click Next.
4. In the Ready to Install the Program dialog box, click Install to proceed with the installation.
Viewing or Changing the Adapter Properties
To view or change the properties of the QLogic network adapter
1. Open the QCC GUI.
2. Click the Advanced section of the Configurations tab.
Setting Power Management Options
You can set power management options to allow the operating system to turn off
the controller to save power. If the device is busy (servicing a call, for example),
however, the operating system will not shut down the device. The operating
system attempts to shut down every possible device only when the computer
attempts to go into hibernation. To have the controller stay on at all times, do not
select the Allow the computer to turn off the device to save power check box
(Figure 4-1).
Figure 4-1. Power Management
NOTE
 The Power Management tab is available only for servers that support
power management.
 If you select Only allow management stations to bring the computer
out of standby, the computer can be brought out of standby only by
Magic Packet.
CAUTION
Do not select Allow the computer to turn off the device to save power for
any adapter that is a member of a team.
5
Linux Driver Software

Introduction

Limitations

Packaging

Installing Linux Driver Software

Unloading/Removing the Linux Driver

Patching PCI Files (Optional)

Network Installations

Setting Values for Optional Properties

Driver Defaults

Driver Messages

Teaming with Channel Bonding

Statistics
Introduction
This section discusses the Linux drivers for the QLogic 8400/3400 Series network
adapters. Table 5-1 lists the 8400/3400 Series Linux drivers. For information
about iSCSI offload in Linux server, see “iSCSI Offload in Linux Server” on
page 110.
Table 5-1. QLogic 8400/3400 Series Linux Drivers

bnx2x: Linux driver for the 8400/3400 Series 10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack. This driver also receives and processes device interrupts, both on behalf of itself (for L2 networking) and on behalf of the bnx2fc (FCoE) and cnic drivers.

cnic: The cnic driver provides the interface between QLogic’s upper layer protocol (storage) drivers and QLogic’s 8400/3400 Series 10Gb network adapters. The CNIC module works with the bnx2 and bnx2x network drivers in the downstream and the bnx2fc (FCoE) and bnx2i (iSCSI) drivers in the upstream.

bnx2i: Linux iSCSI HBA driver to enable iSCSI offload on the 8400/3400 Series 10Gb network adapters.

bnx2fc: Linux FCoE kernel mode driver used to provide a translation layer between the Linux SCSI stack and the QLogic FCoE firmware/hardware. In addition, the driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of open-fcoe’s libfc/libfcoe for FIP/device discovery.
Limitations

bnx2x Driver

bnx2i Driver

bnx2fc Driver
bnx2x Driver
The current version of the driver has been tested on 2.6.x kernels starting from
2.6.9. The driver may not compile on kernels older than 2.6.9. Testing is
concentrated on i386 and x86_64 architectures. Only limited testing has been
done on some other architectures. Minor changes to some source files and
Makefile may be needed on some kernels.
bnx2i Driver
The current version of the driver has been tested on 2.6.x kernels, starting from
2.6.18 kernel. The driver may not compile on older kernels. Testing is
concentrated on i386 and x86_64 architectures, RHEL 5, RHEL 6, RHEL 7, and
SUSE 11 SP1 and later distributions.
bnx2fc Driver
The current version of the driver has been tested on 2.6.x kernels, starting from
2.6.32 kernel, which is included in RHEL 6.1 distribution. This driver may not
compile on older kernels. Testing was limited to i386 and x86_64 architectures,
RHEL 6.1, RHEL 7.0, and SLES® 11 SP1 and later distributions.
Packaging
The Linux drivers are released in the following packaging formats:


DKMS Packages

netxtreme2-version.dkms.noarch.rpm

netxtreme2-version.dkms.src.rpm
KMP Packages


SLES

netxtreme2-kmp-[kernel]-version.i586.rpm

netxtreme2-kmp-[kernel]-version.x86_64.rpm
Red Hat

kmod-kmp-netxtreme2-[kernel]-version.i686.rpm

kmod-kmp-netxtreme2-[kernel]-version.x86_64.rpm
The QCS CLI management utility is also distributed as an RPM package
(QCS-{version}.{arch}.rpm). For information about installing the Linux QCS
CLI, see the QLogic Control Suite CLI User’s Guide.

Source Packages
Identical source files to build the driver are included in both RPM and TAR
source packages. The supplemental tar file contains additional utilities such
as patches and driver diskette images for network installation.
The following is a list of included files:

netxtreme2-version.src.rpm: RPM package with 8400/3400
Series bnx2/bnx2x/cnic/bnx2fc/bnx2i/libfc/libfcoe driver source.

netxtreme2-version.tar.gz: tar zipped package with 8400/3400
Series bnx2/bnx2x/cnic/bnx2fc/bnx2i/libfc/libfcoe driver source.

iscsiuio-version.tar.gz: iSCSI user space management tool
binary.

open-fcoe-*.qlgc.<subver>.<arch>.rpm: open-fcoe
userspace management tool binary RPM for SLES11 SP2 and legacy
versions.

fcoe-utils-*.qlgc.<subver>.<arch>.rpm: open-fcoe
userspace management tool binary RPM for RHEL 6.4 and legacy
versions.
The Linux driver has a dependency on open-fcoe userspace management
tools as the front-end to control FCoE interfaces. The package name of the
open-fcoe tool is fcoe-utils for RHEL 6.4 and open-fcoe for SLES11 SP2 and
legacy versions.
Installing Linux Driver Software

Installing the Source RPM Package

Building the Driver from the Source TAR File
NOTE
If a bnx2x, bnx2i, or bnx2fc driver is loaded and the Linux kernel is updated,
the driver module must be recompiled if the driver module was installed
using the source RPM or the TAR package.
Installing the Source RPM Package
The following are guidelines for installing the driver source RPM Package.
Prerequisites:

Linux kernel source

C compiler
Procedure:
1. Install the source RPM package:
rpm -ivh netxtreme2-<version>.src.rpm
2. Change the directory to the RPM path and build the binary RPM for your kernel:
For RHEL:
cd ~/rpmbuild
rpmbuild -bb SPECS/netxtreme2.spec
For SLES:
cd /usr/src/packages
rpmbuild -bb SPECS/netxtreme2.spec
3. Install the newly compiled RPM:
rpm -ivh RPMS/<arch>/netxtreme2-<version>.<arch>.rpm
Note that the --force option may be needed on some Linux distributions if conflicts are reported.
4. For FCoE offload, install the open-fcoe utility.
For RHEL 6.4 and legacy versions, use either of the following:
yum install fcoe-utils-<version>.rhel.64.qlgc.<subver>.<arch>.rpm
-or-
rpm -ivh fcoe-utils-<version>.rhel.64.qlgc.<subver>.<arch>.rpm
For SLES11 SP2:
rpm -ivh open-fcoe-<version>.sles.sp2.qlgc.<subver>.<arch>.rpm
For RHEL 6.4 and SLES11 SP2 and legacy versions, the version of
fcoe-utils/open-fcoe included in your distribution is sufficient and no out of
box upgrades are provided.
Where available, installation with yum will automatically resolve
dependencies. Otherwise, required dependencies can be located on your
O/S installation media.
5. For SLES, turn on the fcoe and lldpad services for FCoE offload, and just lldpad for iSCSI-offload-TLV.
For SLES11 SP1:
chkconfig lldpad on
chkconfig fcoe on
For SLES11 SP2:
chkconfig boot.lldpad on
chkconfig boot.fcoe on
6. Inbox drivers are included with all of the supported operating systems. The simplest means to ensure that the newly installed drivers are loaded is to reboot.
7. For FCoE offload, after rebooting, create configuration files for all FCoE ethX interfaces:
cd /etc/fcoe
cp cfg-ethx cfg-<ethX FCoE interface name>
NOTE
Note that your distribution might have a different naming scheme for Ethernet devices (pXpX or emX instead of ethX).
8. For FCoE offload or iSCSI-offload-TLV, modify /etc/fcoe/cfg-<interface> by changing DCB_REQUIRED=yes to DCB_REQUIRED=no.
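The configuration file can be edited with any text editor. As a quick check (shown only as an illustration; the exact quoting of the value can vary by distribution), the resulting setting can be confirmed with:
grep DCB_REQUIRED /etc/fcoe/cfg-<ethX>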
9. Turn on all ethX interfaces:
ifconfig <ethX> up
10. For SLES, use YaST to configure your Ethernet interfaces to automatically start at boot by setting a static IP address or enabling DHCP on the interface.
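As an illustration only (the file location and keywords can vary by SLES release), a DHCP-enabled interface definition written by YaST typically resembles the following in /etc/sysconfig/network/ifcfg-<ethX>:
BOOTPROTO='dhcp'
STARTMODE='auto'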
11. For FCoE offload and iSCSI-offload-TLV, disable lldpad on QLogic converged network adapter interfaces. This is required because QLogic uses an offloaded DCBX client.
lldptool set-lldp -i <ethX> adminStatus=disabled
12. For FCoE offload and iSCSI-offload-TLV, make sure that /var/lib/lldpad/lldpad.conf is created and that each <ethX> block either does not specify “adminStatus” or, if specified, sets it to 0 (“adminStatus=0”), as in the following example:
lldp :
{
    eth5 :
    {
        tlvid00000001 :
        {
            info = "04BC305B017B73";
        };
        tlvid00000002 :
        {
            info = "03BC305B017B73";
        };
    };
};
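If desired, the current setting can also be queried per interface (an optional check; this assumes the query form of lldptool provided with common lldpad packages):
lldptool get-lldp -i <ethX> adminStatus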
13. For FCoE offload and iSCSI-offload-TLV, restart the lldpad service to apply the new settings.
For SLES11 SP1, RHEL 6.4, and legacy versions:
service lldpad restart
For SLES11 SP2:
rclldpad restart
For SLES12:
systemctl restart lldpad
14. For FCoE offload, restart the fcoe service to apply the new settings.
For SLES11 SP1, RHEL 6.4, and legacy versions:
service fcoe restart
For SLES11 SP2:
rcfcoe restart
For SLES12:
systemctl restart fcoe
Installing the KMP Package
NOTE
The examples in this procedure refer to the bnx2x driver, but also apply to
the bnx2fc and bnx2i drivers.
1. Install the KMP package:
rpm -ivh <file>
rmmod bnx2x
2. Load the driver:
modprobe bnx2x
Building the Driver from the Source TAR File
NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply
to the bnx2i and bnx2fc drivers.
1. Create a directory and extract the TAR files to the directory:
tar xvzf netxtreme2-<version>.tar.gz
2. Build the driver bnx2x.ko (or bnx2x.o) as a loadable module for the running kernel:
cd netxtreme2-<version>
make
3. Test the driver by loading it (first unload the existing driver, if necessary):
rmmod bnx2x (or bnx2fc or bnx2i)
insmod bnx2x/src/bnx2x.ko (or bnx2fc/src/bnx2fc.ko, or bnx2i/src/bnx2i.ko)
4. For iSCSI offload and FCoE offload, load the cnic driver (if applicable):
insmod cnic.ko
5. Install the driver and man page:
make install
NOTE
See the RPM instructions above for the location of the installed driver.
6. Install the user daemon (qlgc_iscsiuio).
Refer to “Load and Run Necessary iSCSI Software Components” on page 32 for
instructions on loading the software components required to use the QLogic iSCSI
offload feature.
To configure the network protocol and address after building the driver, refer to the
manuals supplied with your operating system.
Load and Run Necessary iSCSI Software Components
The QLogic iSCSI Offload software suite consists of three kernel modules and a user daemon. Required software components can be loaded either manually or through system services.
1. Unload the existing driver, if necessary:
Manual:
rmmod bnx2i
or
modprobe -r bnx2i
2. Load the iSCSI driver:
Manual:
insmod bnx2i.ko or modprobe bnx2i
Unloading/Removing the Linux Driver

Unloading/Removing the Driver from an RPM Installation

Removing the Driver from a TAR Installation
Unloading/Removing the Driver from an RPM Installation
NOTE
 The examples used in this procedure refer to the bnx2x driver, but also
apply to the bnx2fc and bnx2i drivers.
 On 2.6 kernels, it is not necessary to bring down the eth# interfaces
before unloading the driver module.
 If the cnic driver is loaded, unload the cnic driver before unloading the
bnx2x driver.
 Prior to unloading the bnx2i driver, disconnect all active iSCSI sessions
to targets.
To unload the driver, use ifconfig to bring down all eth# interfaces opened by the
driver, and then type the following:
rmmod bnx2x
NOTE
The above command will also remove the cnic module.
If the driver was installed using RPM, do the following to remove it:
rpm -e netxtreme2
Removing the Driver from a TAR Installation
NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply
to the bnx2fc and bnx2i drivers.
If the driver was installed using make install from the tar file, the bnx2x.ko driver
file has to be manually deleted from the operating system. See “Installing the
Source RPM Package” on page 28 for the location of the installed driver.
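For example (a sketch only; the installed path varies by distribution and kernel version), the module file can be located with the following command, and the file that is reported can then be deleted:
find /lib/modules/$(uname -r) -name bnx2x.ko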
Uninstalling the QCC GUI
For information about removing the QCC GUI, see the QConvergeConsole GUI
Installation Guide.
Patching PCI Files (Optional)
NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply
to the bnx2fc and bnx2i drivers.
For hardware detection utilities, such as Red Hat kudzu, to properly identify bnx2x
supported devices, a number of files containing PCI vendor and device
information may need to be updated.
Apply the updates by running the scripts provided in the supplemental tar file. For
example, on Red Hat Enterprise Linux, apply the updates by doing the following:
./patch_pcitbl.sh /usr/share/hwdata/pcitable pci.updates /usr/share/hwdata/pcitable.new bnx2
./patch_pciids.sh /usr/share/hwdata/pci.ids pci.updates /usr/share/hwdata/pci.ids.new
Next, the old files can be backed up and the new files can be renamed for use.
cp /usr/share/hwdata/pci.ids /usr/share/hwdata/old.pci.ids
cp /usr/share/hwdata/pci.ids.new /usr/share/hwdata/pci.ids
cp /usr/share/hwdata/pcitable /usr/share/hwdata/old.pcitable
cp /usr/share/hwdata/pcitable.new /usr/share/hwdata/pcitable
Network Installations
For network installations through NFS, FTP, or HTTP (using a network boot disk
or PXE), a driver disk that contains the bnx2x driver may be needed. The driver
disk images for the most recent Red Hat and SuSE versions are included. Boot
drivers for other Linux versions can be compiled by modifying the Makefile and the
make environment. Further information is available from the Red Hat website,
http://www.redhat.com.
Setting Values for Optional Properties
Optional properties exist for the different drivers:

bnx2x Driver

bnx2i Driver

bnx2fc Driver
bnx2x Driver
disable_tpa
The disable_tpa parameter can be supplied as a command line argument to
disable the Transparent Packet Aggregation (TPA) feature. By default, the driver
will aggregate TCP packets. Use disable_tpa to disable the advanced TPA
feature.
Set the disable_tpa parameter to 1 as shown below to disable the TPA feature on
all 8400/3400 Series network adapters in the system. The parameter can also be
set in modprobe.conf. See the man page for more information.
insmod bnx2x.ko disable_tpa=1
or
modprobe bnx2x disable_tpa=1
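For example, to make the setting persistent across reboots, a line such as the following can be added to the modprobe configuration (the file name /etc/modprobe.d/bnx2x.conf is only an illustration; use the configuration file appropriate for your distribution):
options bnx2x disable_tpa=1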
int_mode
The int_mode parameter is used to force using an interrupt mode.
Set the int_mode parameter to 1 to force using the legacy INTx mode on all
8400/3400 Series adapters in the system.
insmod bnx2x.ko int_mode=1
or
modprobe bnx2x int_mode=1
Set the int_mode parameter to 2 to force using MSI mode on all 8400/3400
Series adapters in the system.
insmod bnx2x.ko int_mode=2
or
modprobe bnx2x int_mode=2
Set the int_mode parameter to 3 to force using MSI-X mode on all 8400/3400
Series adapters in the system.
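For example, following the same pattern as the other interrupt modes:
insmod bnx2x.ko int_mode=3
or
modprobe bnx2x int_mode=3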
dropless_fc
The dropless_fc parameter can be used to enable a complementary flow control
mechanism on 8400/3400 Series adapters. The default flow control mechanism is
to send pause frames when the on-chip buffer (BRB) reaches a certain level of
occupancy; this is a performance-targeted flow control mechanism. On
8400/3400 Series adapters, you can enable another flow control mechanism that
sends pause frames when one of the host buffers (used in RSS mode) is
exhausted; this is a zero-packet-drop-targeted flow control mechanism.
Set the dropless_fc parameter to 1 to enable the dropless flow control
mechanism feature on all 8400/3400 Series adapters in the system.
insmod bnx2x.ko dropless_fc=1
or
modprobe bnx2x dropless_fc=1
disable_iscsi_ooo
The disable_iscsi_ooo parameter disables the allocation of the iSCSI TCP
Out-of-Order (OOO) reception resources, specifically for VMware on low-memory
systems.
multi_mode
The optional parameter multi_mode is for use on systems that support
multi-queue networking. Multi-queue networking on the receive side depends only
on the MSI-X capability of the system; multi-queue networking on the transmit side
is supported only on kernels starting from 2.6.27. By default, the multi_mode
parameter is set to 1. Thus, on kernels up to 2.6.26, the driver allocates one queue
per CPU on the receive side and only one queue on the transmit side. On kernels
starting from 2.6.27, the driver allocates one queue per CPU on both the receive
and transmit sides. In any case, the number of allocated queues is limited by the
number of queues supported by the hardware.
The multi_mode optional parameter can also be used to enable SAFC (Service
Aware Flow Control) by differentiating the traffic into up to 3 CoS (Class of Service)
classes in the hardware according to the VLAN PRI value or according to the IP
DSCP value (least significant 3 bits).
num_queues
The optional parameter num_queues may be used to set the number of queues
when multi_mode is set to 1 and the interrupt mode is MSI-X. If the interrupt mode
is different from MSI-X (see int_mode), the number of queues is set to 1,
discarding the value of this parameter.
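For example (an illustration only; choose a queue count suited to your system and within the hardware limit):
insmod bnx2x.ko num_queues=4
or
modprobe bnx2x num_queues=4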
pri_map
The optional parameter pri_map is used to map the VLAN PRI value or the IP
DSCP value to the same or a different CoS in the hardware. This 32-bit parameter
is evaluated by the driver as 8 values of 4 bits each. Each nibble sets the desired
hardware queue number for that priority. For example, set pri_map to 0x11110000
to map priorities 0 to 3 to CoS 0 and priorities 4 to 7 to CoS 1.
qs_per_cos
The optional parameter qs_per_cos is used to specify how many queues share
the same CoS. This parameter is evaluated by the driver as up to 3 values of 8
bits each. Each byte sets the desired number of queues for that CoS. The total
number of queues is limited by the hardware limit. For example, set qs_per_cos
to 0x10101 to create a total of three queues, one per CoS. In another example,
set qs_per_cos to 0x404 to create a total of 8 queues, divided into 2 CoS, with 4
queues in each CoS.
cos_min_rate
The optional parameter cos_min_rate is used to determine the weight of each
CoS for round-robin scheduling in transmission. This parameter is evaluated by
the driver as up to 3 values of 8 bits each. Each byte sets the desired weight for
that CoS. The weight ranges from 0 to 100. For example, set cos_min_rate to
0x101 for a fair transmission rate between 2 CoS. In another example, set
cos_min_rate to 0x30201 to give the higher CoS a higher rate of transmission. To
avoid using the fairness algorithm, omit setting cos_min_rate or set it to 0.
Set the multi_mode parameter to 2 as shown below to differentiate the traffic
according to the VLAN PRI value.
insmod bnx2x.ko multi_mode=2 pri_map=0x11110000 qs_per_cos=0x404
or
modprobe bnx2x multi_mode=2 pri_map=0x11110000 qs_per_cos=0x404
Set the multi_mode parameter to 4, as shown below, to differentiate the traffic
according to the IP DSCP value.
insmod bnx2x.ko multi_mode=4 pri_map=0x22221100 qs_per_cos=0x10101 cos_min_rate=0x30201
or
modprobe bnx2x multi_mode=4 pri_map=0x22221100 qs_per_cos=0x10101 cos_min_rate=0x30201
bnx2i Driver
Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be
supplied as command line arguments to the insmod or modprobe command for
bnx2i.
error_mask1 and error_mask2
"Config FW iSCSI Error Mask #", use to configure certain iSCSI protocol violation
to be treated either as a warning or a fatal error. All fatal iSCSI protocol violations
will result in session recovery (ERL 0). These are bit masks.
Defaults: All violations will be treated as errors.
CAUTION
Do not use error_mask if you are not sure about the consequences. These
values are to be discussed with QLogic development team on a
case-by-case basis. This is just a mechanism to work around iSCSI
implementation issues on the target side. Without proper knowledge of
iSCSI protocol details, users are advised not to experiment with these
parameters.
en_tcp_dack
"Enable TCP Delayed ACK", enables/disables TCP delayed ACK feature on
offloaded iSCSI connections.
Defaults: TCP delayed ACK is ENABLED. For example:
insmod bnx2i.ko en_tcp_dack=0
or
modprobe bnx2i en_tcp_dack=0
time_stamps
“Enable TCP TimeStamps”, enables/disables TCP time stamp feature on
offloaded iSCSI connections.
Defaults: TCP time stamp option is DISABLED. For example:
insmod bnx2i.ko time_stamps=1
or
modprobe bnx2i time_stamps=1
sq_size
"Configure SQ size", used to choose send queue size for offloaded connections
and SQ size determines the maximum SCSI commands that can be queued. SQ
size also has a bearing on the number of connections that can be offloaded; as
QP size increases, the number of connections supported will decrease. With the
default values, the adapter can offload 28 connections.
Defaults: 128
Range: 32 to 128
Note that QLogic validation is limited to a power of 2; for example, 32, 64, 128.
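For example (an illustration only; use a power-of-2 value within the documented range):
insmod bnx2i.ko sq_size=64
or
modprobe bnx2i sq_size=64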
rq_size
"Configure RQ size", used to choose the size of the asynchronous buffer queue
per offloaded connection. An RQ size greater than 16 is not required, because the
RQ is used to place iSCSI ASYNC/NOP/REJECT messages and SCSI sense data.
Defaults: 16
Range: 16 to 32
Note that QLogic validation is limited to a power of 2; for example, 16, 32.
event_coal_div
"Event Coalescing Divide Factor", performance tuning parameter used to
moderate the rate of interrupt generation by the iSCSI firmware.
Defaults: 2
Valid values: 1, 2, 4, 8
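For example, to set the divide factor to 4 (an illustrative choice from the list of valid values):
insmod bnx2i.ko event_coal_div=4
or
modprobe bnx2i event_coal_div=4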
last_active_tcp_port
“Last active TCP port used”, status parameter used to indicate the last TCP port
number used in the iSCSI offload connection.
Defaults: N/A
Valid values: N/A
Note: This is a read-only parameter.
ooo_enable
“Enable TCP out-of-order feature”, enables/disables TCP out-of-order rx handling
feature on offloaded iSCSI connections.
Defaults: TCP out-of-order feature is ENABLED. For example:
insmod bnx2i.ko ooo_enable=1
or
modprobe bnx2i ooo_enable=1
bnx2fc Driver
The optional parameter debug_logging can be supplied as a command line
argument to the insmod or modprobe command for bnx2fc.
debug_logging
"Bit mask to enable debug logging", enables/disables driver debug logging.
Defaults: None. For example:
insmod bnx2fc.ko debug_logging=0xff
or
modprobe bnx2fc debug_logging=0xff
IO level debugging = 0x1
Session level debugging = 0x2
HBA level debugging = 0x4
ELS debugging = 0x8
Misc debugging = 0x10
Max debugging = 0xff
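For example, to enable only I/O level and session level debugging (0x1 + 0x2, an illustrative combination of the bit values above):
modprobe bnx2fc debug_logging=0x3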
Driver Defaults
• bnx2 Driver
• bnx2x Driver
bnx2 Driver
Speed: Autonegotiation with all speeds advertised
Flow Control: Autonegotiation with RX and TX advertised
MTU: 1500 (range is 46–9000)
RX Ring Size: 255 (range is 0–4080)
RX Jumbo Ring Size: 0 (range 0–16320) adjusted by the driver based on MTU
and RX Ring Size
TX Ring Size: 255 (range is (MAX_SKB_FRAGS+1)–255). MAX_SKB_FRAGS
varies on different kernels and different architectures. On a 2.6 kernel for x86,
MAX_SKB_FRAGS is 18.
Coalesce RX Microseconds: 18 (range is 0–1023)
Coalesce RX Microseconds IRQ: 18 (range is 0–1023)
Coalesce RX Frames: 6 (range is 0–255)
Coalesce RX Frames IRQ: 6 (range is 0–255)
Coalesce TX Microseconds: 80 (range is 0–1023)
Coalesce TX Microseconds IRQ: 80 (range is 0–1023)
Coalesce TX Frames: 20 (range is 0–255)
Coalesce TX Frames IRQ: 20 (range is 0–255)
Coalesce Statistics Microseconds: 999936 (approximately 1 second) (range is
0–16776960 in increments of 256)
MSI: Enabled (if supported by the 2.6 kernel and the interrupt test passes)
TSO: Enabled (on 2.6 kernels)
bnx2x Driver
Speed: Autonegotiation with all speeds advertised
Flow control: Autonegotiation with RX and TX advertised
MTU: 1500 (range is 46–9600)
RX Ring Size: 4078 (range is 0–4078)
TX Ring Size: 4078 (range is (MAX_SKB_FRAGS+4)–4078). MAX_SKB_FRAGS
varies on different kernels and different architectures. On a 2.6 kernel for x86,
MAX_SKB_FRAGS is 18.
Coalesce RX Microseconds: 25 (range is 0–3000)
Coalesce TX Microseconds: 50 (range is 0–12288)
Coalesce Statistics Microseconds: 999936 (approximately 1 second) (range is
0–16776960 in increments of 256)
MSI-X: Enabled (if supported by the 2.6 kernel and the interrupt test passes)
TSO: Enabled
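Many of these defaults can be viewed or adjusted at run time with the ethtool utility. The following commands are a general illustration only (eth0 and the values shown are placeholders; the options supported depend on the driver and kernel version):
ethtool -g eth0 (show the current RX and TX ring sizes)
ethtool -G eth0 rx 1024 (change the RX ring size)
ethtool -c eth0 (show the interrupt coalescing settings)
ethtool -C eth0 rx-usecs 25 rx-frames 6 (change coalescing values)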
Driver Messages
The following are the most common sample messages that may be logged in the
/var/log/messages file. Use dmesg -n <level> to control the level at which
messages appear on the console. Most systems are set to level 6 by default. To
see all messages, set the level higher.
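For example, to allow all messages to appear on the console, set the console logging level to its maximum:
dmesg -n 8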
• bnx2x Driver
• bnx2i Driver
• bnx2fc Driver
bnx2x Driver
Driver Sign On
QLogic 8400/3400 Series 10 Gigabit Ethernet Driver
bnx2x v1.6.3c (July 23, 20xx)
CNIC Driver Sign On (bnx2 only)
QLogic 8400/3400 Series cnic v1.1.19 (Sep 25, 20xx)
NIC Detected
eth#: QLogic 8400/3400 Series xGb (B1)
PCI-E x8 found at mem f6000000, IRQ 16, node addr 0010180476ae
cnic: Added CNIC device: eth0
Link Up and Speed Indication
bnx2x: eth# NIC Link is Up, 10000 Mbps full duplex
Link Down Indication
bnx2x: eth# NIC Link is Down
MSI-X Enabled Successfully
bnx2x: eth0: using MSI-X
bnx2i Driver
BNX2I Driver Signon
QLogic 8400/3400 Series iSCSI Driver bnx2i v2.1.1D (May 12, 20xx)
Network Port to iSCSI Transport Name Binding
bnx2i: netif=eth2, iscsi=bcm570x-050000
bnx2i: netif=eth1, iscsi=bcm570x-030c00
Driver Completes Handshake with iSCSI Offload-Enabled CNIC Device
bnx2i [05:00.00]: ISCSI_INIT passed
NOTE
This message is displayed only when the user attempts to make an iSCSI
connection.
Driver Detects iSCSI Offload Is Not Enabled on the CNIC Device
bnx2i: iSCSI not supported, dev=eth3
bnx2i: bnx2i: LOM is not enabled to offload iSCSI connections,
dev=eth0
bnx2i: dev eth0 does not support iSCSI
Exceeds Maximum Allowed iSCSI Connection Offload Limit
bnx2i: alloc_ep: unable to allocate iscsi cid
bnx2i: unable to allocate iSCSI context resources
Network Route to Target Node and Transport Name Binding Are Two Different Devices
bnx2i: conn bind, ep=0x... ($ROUTE_HBA) does not belong to hba
$USER_CHOSEN_HBA
where:
• ROUTE_HBA is the net device on which the connection was offloaded based on route information.
• USER_CHOSEN_HBA is the adapter to which the target node is bound (using the iSCSI transport name).
Target Cannot Be Reached on Any of the CNIC Devices
bnx2i: check route, cannot connect using cnic
Network Route Is Assigned to Network Interface, Which Is Down
bnx2i: check route, hba not found
SCSI-ML Initiated Host Reset (Session Recovery)
bnx2i: attempting to reset host, #3
CNIC Detects iSCSI Protocol Violation - Fatal Errors
bnx2i: iscsi_error - wrong StatSN rcvd
bnx2i: iscsi_error - hdr digest err
bnx2i: iscsi_error - data digest err
bnx2i: iscsi_error - wrong opcode rcvd
bnx2i: iscsi_error - AHS len > 0 rcvd
bnx2i: iscsi_error - invalid ITT rcvd
bnx2i: iscsi_error - wrong StatSN rcvd
bnx2i: iscsi_error - wrong DataSN rcvd
bnx2i: iscsi_error - pend R2T violation
bnx2i: iscsi_error - ERL0, UO
bnx2i: iscsi_error - ERL0, U1
bnx2i: iscsi_error - ERL0, U2
bnx2i: iscsi_error - ERL0, U3
bnx2i: iscsi_error - ERL0, U4
bnx2i: iscsi_error - ERL0, U5
bnx2i: iscsi_error - ERL0, U
bnx2i: iscsi_error - invalid resi len
bnx2i: iscsi_error - MRDSL violation
bnx2i: iscsi_error - F-bit not set
bnx2i: iscsi_error - invalid TTT
bnx2i: iscsi_error - invalid DataSN
bnx2i: iscsi_error - burst len violation
bnx2i: iscsi_error - buf offset violation
bnx2i: iscsi_error - invalid LUN field
bnx2i: iscsi_error - invalid R2TSN field
bnx2i: iscsi_error - invalid cmd len1
bnx2i: iscsi_error - invalid cmd len2
bnx2i: iscsi_error - pend r2t exceeds MaxOutstandingR2T value
bnx2i: iscsi_error - TTT is rsvd
bnx2i: iscsi_error - MBL violation
bnx2i: iscsi_error - data seg len != 0
bnx2i: iscsi_error - reject pdu len error
bnx2i: iscsi_error - async pdu len error
bnx2i: iscsi_error - nopin pdu len error
bnx2i: iscsi_error - pend r2t in cleanup
bnx2i: iscsi_error - IP fragments rcvd
bnx2i: iscsi_error - IP options error
bnx2i: iscsi_error - urgent flag error
CNIC Detects iSCSI Protocol Violation - Non-FATAL, Warning
bnx2i: iscsi_warning - invalid TTT
bnx2i: iscsi_warning - invalid DataSN
bnx2i: iscsi_warning - invalid LUN field
NOTE
The driver must be configured (using the error_mask parameters) for certain
violations to be treated as warnings rather than as critical errors.
Driver Puts a Session Through Recovery
conn_err - hostno 3 conn 03fbcd00, iscsi_cid 2 cid a1800
Reject iSCSI PDU Received from the Target
bnx2i - printing rejected PDU contents
[0]: 1 ffffffa1 0 0 0 0 20 0
[8]: 0 7 0 0 0 0 0 0
[10]: 0 0 40 24 0 0 ffffff80 0
[18]: 0 0 3 ffffff88 0 0 3 4b
[20]: 2a 0 0 2 ffffffc8 14 0 0
[28]: 40 0 0 0 0 0 0 0
Open-iSCSI Daemon Handing Over Session to Driver
bnx2i: conn update - MBL 0x800 FBL 0x800MRDSL_I 0x800 MRDSL_T
0x2000
bnx2fc Driver
BNX2FC Driver Signon:
QLogic NetXtreme II FCoE Driver bnx2fc v0.8.7 (Mar 25, 2011)
Driver Completes Handshake with FCoE Offload-Enabled CNIC Device
bnx2fc [04:00.00]: FCOE_INIT passed
Driver Fails Handshake with FCoE Offload Enabled CNIC Device
bnx2fc: init_failure due to invalid opcode
bnx2fc: init_failure due to context allocation failure
bnx2fc: init_failure due to NIC error
bnx2fc: init_failure due to completion status error
bnx2fc: init_failure due to HSI mismatch
No Valid License to Start FCoE
bnx2fc: FCoE function not enabled <ethX>
bnx2fC: FCoE not supported on <ethX>
Session Failures Due to Exceeding Maximum Allowed FCoE
Offload Connection Limit or Memory Limits
bnx2fc: Failed to allocate conn id for port_id <remote port id>
bnx2fc: exceeded max sessions..logoff this tgt
bnx2fc: Failed to allocate resources
Session Offload Failures
bnx2fc: bnx2fc_offload_session - Offload error
<rport> not FCP type. not offloading
<rport> not FCP_TARGET. not offloading
Session Upload Failures
bnx2fc: ERROR!! destroy timed out
bnx2fc: Disable request timed out.
destroy not set to FW
bnx2fc: Disable failed with completion status <status>
bnx2fc: Destroy failed with completion status <status>
Unable to Issue ABTS
bnx2fc: initiate_abts: tgt not offloaded
bnx2fc: initiate_abts: rport not ready
bnx2fc: initiate_abts: link is not ready
bnx2fc: abort failed, xid = <xid>
Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
bnx2fc: Relogin to the target
Unable to Issue IO Request Due to Session Not Ready
bnx2fc: Unable to post io_req
Drop Incorrect L2 Receive Frames
bnx2fc: FPMA mismatch... drop packet
bnx2fc: dropping frame with CRC error
HBA/lport Allocation Failures
bnx2fc: Unable to allocate hba
bnx2fc: Unable to allocate scsi host
NPIV Port Creation
bnx2fc: Setting vport names, <WWNN>, <WWPN>
Teaming with Channel Bonding
With the Linux drivers, you can team adapters together using the bonding kernel
module and a channel bonding interface. For more information, see the Channel
Bonding information in your operating system documentation.
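As a general illustration only (the interface names, IP address, and bonding mode below are placeholders; use the configuration method documented for your distribution), a bond can be created with commands similar to the following:
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1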
Statistics
Detailed statistics and configuration information can be viewed using the ethtool
utility. See the ethtool man page for more information.
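For example (eth0 is a placeholder interface name):
ethtool -S eth0 (detailed per-port and per-queue statistics)
ethtool -i eth0 (driver name and version information)
ethtool eth0 (link settings such as speed, duplex, and autonegotiation)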
6 VMware Driver Software
• Packaging
• Download, Install, and Update Drivers
• Networking Support
• FCoE Support
Packaging
The VMware driver is released in the packaging formats shown in Table 6-1. For
information about iSCSI offload in VMware server, see “iSCSI Offload on VMware
Server” on page 114.
Table 6-1. VMware Driver Packaging

Format: Compressed zip
Drivers Package: QLG-NetXtremeII-2.0-version.zip
Download, Install, and Update Drivers
To download, install, or update the VMware ESXi driver for 8400/3400 Series
10 GbE network adapters, go to
http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io
and do the following:
1. Type the adapter name (in quotes) in the Keyword window (such as "QLE3442"), and then click Update and View Results (Figure 6-1).
Figure 6-1. Selecting an Adapter
Figure 6-2 shows the available QLE3442 driver versions.
Figure 6-2. QLE3442 Driver Versions
2. Mouse over the QLE3442 link in the results section to show the PCI identifiers (Figure 6-3).
Figure 6-3. PCI Identifiers
3. Click the model link to show a listing of all of the driver packages (Figure 6-4). Click the desired ESXi version, and then click the link to go to the VMware driver download web page.
Figure 6-4. List of Driver Packages
4. Log in to the VMware driver download page and click Download to download the desired driver package (Figure 6-5).
Figure 6-5. Download Driver Package
5. This package is double zipped. Unzip the package once before copying the offline bundle zip file to the ESXi host.
6. To install the driver package, issue the following command:
esxcli software vib install -d <path>/<offline bundle name.zip> --maintenance-mode
or
esxcli software vib install --depot=/<path>/<offline bundle name.zip> --maintenance-mode
NOTE
• If you do not unzip the outer zip file, the installation will report that it cannot find the drivers.
• Use double dashes (--) before the depot and maintenance-mode parameters.
• Do not use the -v method of installing individual driver vSphere installation bundles (VIBs).
• A reboot is required after all driver installations.
Networking Support
This section describes the bnx2x VMware ESXi driver for the QLogic 8400/3400
Series PCIe 10 GbE network adapters.
Driver Parameters
Several optional parameters can be supplied as a command line argument to the
vmkload_mod command. These parameters can also be set with the
esxcfg-module command. See the man page for more information.
int_mode
The optional parameter int_mode is used to force using an interrupt mode other
than MSI-X. By default, the driver will try to enable MSI-X if it is supported by the
kernel. If MSI-X is not attainable, then the driver will try to enable MSI if it is
supported by the kernel. If MSI is not attainable, then the driver will use the legacy
INTx mode.
Set the int_mode parameter to 1 as shown below to force using the legacy INTx
mode on all 8400/3400 Series network adapters in the system.
vmkload_mod bnx2x int_mode=1
Set the int_mode parameter to 2 as shown below to force using MSI mode on all
8400/3400 Series network adapters in the system.
vmkload_mod bnx2x int_mode=2
disable_tpa
The optional parameter disable_tpa can be used to disable the Transparent
Packet Aggregation (TPA) feature. By default, the driver aggregates TCP
packets; use this parameter if you want to disable this feature.
Set the disable_tpa parameter to 1 as shown below to disable the TPA feature on
all 8400/3400 Series network adapters in the system.
vmkload_mod bnx2x disable_tpa=1
Use ethtool to disable TPA (LRO) for a specific network adapter.
num_rx_queues
The optional parameter num_rx_queues may be used to set the number of Rx
queues on kernels starting from 2.6.24 when multi_mode is set to 1 and interrupt
mode is MSI-X. The number of Rx queues must be equal to or greater than the
number of Tx queues (see the num_tx_queues parameter). If the interrupt mode is
different than MSI-X (see the int_mode parameter), then the number of Rx
queues will be set to 1, discarding the value of this parameter.
num_tx_queues
The optional parameter num_tx_queues may be used to set the number of Tx
queues on kernels starting from 2.6.27 when multi_mode is set to 1 and interrupt
mode is MSI-X. The number of Rx queues must be equal to or greater than the
number of Tx queues (see num_rx_queues parameter). If the interrupt mode is
different than MSI-X (see int_mode parameter), then the number of Tx queues
will be set to 1, discarding the value of this parameter.
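For example, the following command sets four Rx and four Tx queues (illustrative values; the requirements described above still apply):
vmkload_mod bnx2x multi_mode=1 num_rx_queues=4 num_tx_queues=4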
pri_map
The optional parameter pri_map is used to map the VLAN PRI value or the IP
DSCP value to a different or the same CoS in the hardware. This 32-bit parameter
is evaluated by the driver as 8 values of 4 bits each. Each nibble sets the desired
hardware queue number for that priority.
For example, set the pri_map parameter to 0x22221100 to map priority 0 and 1 to
CoS 0, map priority 2 and 3 to CoS 1, and map priority 4 to 7 to CoS 2. In another
example, set the pri_map parameter to 0x11110000 to map priority 0 to 3 to CoS
0, and map priority 4 to 7 to CoS 1.
qs_per_cos
The optional parameter qs_per_cos is used to specify the number of queues that
will share the same CoS. This parameter is evaluated by the driver as up to 3 values
of 8 bits each. Each byte sets the desired number of queues for that CoS. The
total number of queues is limited by the hardware limit.
For example, set the qs_per_cos parameter to 0x10101 to create a total of three
queues, one per CoS. In another example, set the qs_per_cos parameter to
0x404 to create a total of 8 queues, divided into only 2 CoS, 4 queues in each
CoS.
cos_min_rate
The optional parameter cos_min_rate is used to determine the weight of each
CoS for round-robin scheduling in transmission. This parameter is evaluated by
the driver as up to three values of eight bits each. Each byte sets the desired weight
for that CoS. The weight ranges from 0 to 100.
For example, set the cos_min_rate parameter to 0x101 for fair transmission rate
between two CoS. In another example, set the cos_min_rate parameter to
0x30201 to give the higher CoS the higher rate of transmission. To avoid using the
fairness algorithm, omit setting the optional parameter cos_min_rate or set it to 0.
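For example, paralleling the Linux examples earlier in this guide (the values are illustrative), the following command maps the priorities to three CoS, creates one queue per CoS, and weights the higher CoS more heavily:
vmkload_mod bnx2x pri_map=0x22221100 qs_per_cos=0x10101 cos_min_rate=0x30201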
dropless_fc
The optional parameter dropless_fc can be used to enable a complementary flow
control mechanism on QLogic network adapters. The default flow control
mechanism is to send pause frames when the BRB is reaching a certain level of
occupancy. This is a performance targeted flow control mechanism. On QLogic
network adapters, you can enable another flow control mechanism to send pause
frames if one of the host buffers (when in RSS mode) is exhausted. This is a zero
packet drop targeted flow control mechanism.
Set the dropless_fc parameter to 1 as shown below to enable the dropless flow
control mechanism feature on all QLogic network adapters in the system.
vmkload_mod bnx2x dropless_fc=1
RSS
The optional parameter RSS can be used to specify the number of receive side
scaling queues. For VMware ESXi (5.1, 5.5, 6.0), values for RSS can be from 2 to
4; RSS=1 disables RSS queues.
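For example, to request four RSS queues (an illustrative value) and make the setting persistent, the esxcfg-module command could be used:
esxcfg-module -s "RSS=4" bnx2x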
max_vfs
The optional parameter max_vfs can be used to enable a specific number of
virtual functions. Values for max_vfs can be 1 to 64, or set max_vfs=0 (default) to
disable all virtual functions.
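For example, to enable eight virtual functions (an illustrative value within the documented range):
esxcfg-module -s "max_vfs=8" bnx2x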
enable_vxlan_offld
The optional parameter enable_vxlan_ofld can be used to enable or disable
VMware ESXi (5.5, 6.0) VxLAN task offloads with TX TSO and TX CSO. For
VMware ESXi (5.5, 6.0), enable_vxlan_ofld=1 (default) enables VxLAN task
offloads; enable_vxlan_ofld=0 disables VxLAN task offloads.
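For example, to disable VxLAN task offloads persistently (shown as an illustration only):
esxcfg-module -s "enable_vxlan_ofld=0" bnx2x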
Driver Defaults
Speed: Autonegotiation with all speeds advertised
Flow Control: Autonegotiation with rx and tx advertised
MTU: 1500 (range 46–9600)
Rx Ring Size: 4078 (range 0–4078)
Tx Ring Size: 4078 (range (MAX_SKB_FRAGS+4) - 4078). MAX_SKB_FRAGS
varies on different kernels and different architectures. On a 2.6 kernel for x86,
MAX_SKB_FRAGS is 18.
Coalesce RX Microseconds: 25 (range 0–3000)
Coalesce TX Microseconds: 50 (range 0–12288)
MSI-X: Enabled (if supported by 2.6 kernel)
TSO: Enabled
Unloading and Removing Driver
To unload the driver, type the following:
vmkload_mod -u bnx2x
Driver Messages
The following are the most common sample messages that may be logged in the
file /var/log/messages. Use dmesg -n <level> to control the level at which
messages will appear on the console. Most systems are set to level 6 by default.
To see all messages, set the level higher.
Driver Sign On
QLogic 8400/3400 Series 10Gigabit Ethernet Driver
bnx2x 0.40.15 ($DateTime: 2007/11/22 05:32:40 $)
NIC Detected
eth0: QLogic 8400/3400 Series XGb (A1)
PCI-E x8 2.5GHz found at mem e8800000, IRQ 16, node addr
001018360012
MSI-X Enabled Successfully
bnx2x: eth0: using MSI-X
Link Up and Speed Indication
bnx2x: eth0 NIC Link is Up, 10000 Mbps full duplex, receive &
transmit flow control ON
Link Down Indication
bnx2x: eth0 NIC Link is Down
Memory Limitation
If you see messages in the log file that look like the following, then the ESXi host
is severely strained. To relieve this, disable NetQueue.
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING:
Heap: 1435: Heap bnx2x already at its maximumSize. Cannot expand.
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING:
Heap: 1645: Heap_Align(bnx2x, 4096/4096 bytes, 4096 align) failed.
caller: 0x41800187d654
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING:
vmklinux26: alloc_pages: Out of memory
Disable NetQueue by manually loading the bnx2x vmkernel module with the
following command:
vmkload_mod bnx2x multi_mode=0
Or, to persist the setting across reboots, use the following command:
esxcfg-module -s multi_mode=0 bnx2x
Reboot the machine for the settings to take effect.
MultiQueue/NetQueue
The optional parameter num_queues may be used to set the number of Rx and
Tx queues when multi_mode is set to 1 and interrupt mode is MSI-X. If interrupt
mode is different than MSI-X (see int_mode parameter), the number of Rx and Tx
queues will be set to 1, discarding the value of this parameter.
If you want to use more than one queue, force the number of NetQueues to
use with the following command:
esxcfg-module -s "multi_mode=1 num_queues=<num of queues>" bnx2x
Otherwise, allow the bnx2x driver to select the number of NetQueues to use with
the following command:
esxcfg-module -s "multi_mode=1 num_queues=0" bnx2x
Optimally, the number of NetQueues matches the number of CPUs on the machine.
FCoE Support
This section describes the contents and procedures associated with installation of
the VMware software package for supporting QLogic FCoE C-NICs.
Enabling FCoE
To enable FCoE hardware offload on the C-NIC
1. Determine the ports that are FCoE-capable:
# esxcli fcoe nic list
Output example:
vmnic4
User Priority: 3
Source MAC: FF:FF:FF:FF:FF:FF
Active: false
Priority Settable: false
Source MAC Settable: false
VLAN Range Settable: false
2. Enable the FCoE interface:
# esxcli fcoe nic discover -n vmnicX
Where X is the interface number gained from esxcli fcoe nic list.
3. Verify that the interface is working:
# esxcli fcoe adapter list
Output example:
vmhba34
Source MAC: bc:30:5b:01:82:39
FCF MAC: 00:05:73:cf:2c:ea
VNPort MAC: 0e:fc:00:47:04:04
Physical NIC: vmnic7
User Priority: 3
VLAN id: 2008
The output of this command should show valid: FCF MAC, VNPort MAC, Priority,
and VLAN id for the Fabric that is connected to the C-NIC.
The following command can also be used to verify that the interface is working
properly:
#esxcfg-scsidevs -a
Output example:
vmhba34 bnx2fc link-up fcoe.1000<mac address>:2000<mac address> () Software FCoE
vmhba35 bnx2fc link-up fcoe.1000<mac address>:2000<mac address> () Software FCoE
NOTE
The label Software FCoE is a VMware term used to describe initiators that
depend on the inbox FCoE libraries and utilities. QLogic's FCoE solution is a
fully stateful, connection-based hardware offload solution designed to
significantly reduce the CPU burden imposed by a non-offload software
initiator.
Installation Check
To verify the correct installation of the driver and to ensure that the host port is
seen by the switch, follow the procedure below.
To verify the correct installation of the driver
1. Verify that the host port shows up in the switch FLOGI database, using the show flogi database command for a Cisco® FCF or the fcoe -loginshow command for a Brocade® FCF.
2. If the host WWPN does not appear in the FLOGI database, then provide driver log messages for review.
Limitations
• NPIV is not currently supported with this release on ESXi, due to lack of supporting inbox components.
• Non-offload FCoE is not supported with offload-capable QLogic devices. Only the full hardware offload path is supported.
Drivers
Table 6-2 lists the 8400/3400 Series FCoE drivers.

Table 6-2. QLogic 8400/3400 Series FCoE Drivers

bnx2x: This driver manages all PCI device resources (registers, host interface queues) and also acts as the Layer 2 VMware low-level network driver for QLogic's 8400/3400 Series 10G device. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the VMware host networking stack. The bnx2x driver also receives and processes device interrupts, both on behalf of itself (for L2 networking) and on behalf of the bnx2fc (FCoE protocol) and CNIC drivers.

bnx2fc: The QLogic VMware FCoE driver is a kernel mode driver used to provide a translation layer between the VMware SCSI stack and the QLogic FCoE firmware/hardware. In addition, the driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of open-fcoe's libfc/libfcoe for FIP/device discovery.
Supported Distributions
The FCoE/DCB feature set is supported on VMware ESXi 5.0 and later.
7 Firmware Upgrade
QLogic provides a Windows and Linux utility for upgrading adapter firmware and
bootcode. Each utility executes as a console application that can be run from a
command prompt. Upgrade VMware firmware with the VMware vSphere plug-in.
Upgrading Firmware for Windows
To upgrade firmware for Windows:
1. Go to driverdownloads.qlogic.com and download the Windows firmware upgrade utility for your adapter.
2. Install the firmware upgrade utility.
3. In a DOS command line, type the following command:
C:\WinQlgcUpg.bat
******************************************************************************
QLogic Firmware Upgrade Utility for Windows v2.7.14.0
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61
1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62
Upgrading MFW
Forced upgrading MFW1 image: from ver MFW1 7.10.39 to ver MFW1 7.12.31
Upgrading MFW2 image to version MFW2 7.12.31
Upgrading SWIM1B image: to version SWIM1 7.12.31
Upgrading SWIM2B image: to version SWIM2 7.12.31
Upgrading SWIM3B image: to version SWIM3 7.12.31
Upgrading SWIM4B image: to version SWIM4 7.12.31
Upgrading SWIM5B image: to version SWIM5 7.12.31
Upgrading SWIM6B image: to version SWIM6 7.12.31
Upgrading SWIM7B image: to version SWIM7 7.12.31
Upgrading SWIM8B image: to version SWIM8 7.12.31
Forced upgrading E3_EC_V2 image: from ver N/A to ver N/A
Forced upgrading E3_PCIE_V2 image: from ver N/A to ver N/A
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61
1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62
Upgrading MFW
Forced upgrading MFW1 image: from ver MFW1 7.10.39 to ver MFW1 7.12.31
Upgrading MFW2 image to version MFW2 7.12.31
Upgrading SWIM1B image: to version SWIM1 7.12.31
Upgrading SWIM2B image: to version SWIM2 7.12.31
Upgrading SWIM3B image: to version SWIM3 7.12.31
Upgrading SWIM4B image: to version SWIM4 7.12.31
Upgrading SWIM5B image: to version SWIM5 7.12.31
Upgrading SWIM6B image: to version SWIM6 7.12.31
Upgrading SWIM7B image: to version SWIM7 7.12.31
Upgrading SWIM8B image: to version SWIM8 7.12.31
Forced upgrading E3_EC_V2 image: from ver N/A to ver N/A
Forced upgrading E3_PCIE_V2 image: from ver N/A to ver N/A
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
******************************************************************************
QLogic Firmware Upgrade Utility for Windows v2.7.14.0
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61
1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62
Upgrading MBA
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Forced upgrading MBA image: from ver PCI30 MBA 7.11.3 ;EFI x64 7.10.54 to ver PCI30 7.12.4
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61
1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62
Upgrading MBA
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Forced upgrading MBA image: from ver PCI30 MBA 7.11.3 ;EFI x64 7.10.54 to ver
PCI30 7.12.4
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
******************************************************************************
QLogic Firmware Upgrade Utility for Windows v2.7.14.0
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61
1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62
Upgrading L2T
Forced upgrading L2T image: from ver L2T 7.10.31 to ver L2T 7.10.31
Forced upgrading L2C image: from ver L2C 7.10.31 to ver L2C 7.10.31
Forced upgrading L2X image: from ver L2X 7.10.31 to ver L2X 7.10.31
Forced upgrading L2U image: from ver L2U 7.10.31 to ver L2U 7.10.31
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61
1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62
Upgrading L2T
Forced upgrading L2T image: from ver L2T 7.10.31 to ver L2T 7.10.31
Forced upgrading L2C image: from ver L2C 7.10.31 to ver L2C 7.10.31
Forced upgrading L2X image: from ver L2X 7.10.31 to ver L2X 7.10.31
Forced upgrading L2U image: from ver L2U 7.10.31 to ver L2U 7.10.31
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
Upgrading Firmware for Linux
To upgrade firmware for Linux:
1. Go to driverdownloads.qlogic.com and download the Linux firmware upgrade utility for your adapter.
2. In a Linux command line window, type the following command:
# ./LnxQlgcUpg.sh
******************************************************************************
QLogic Firmware Upgrade Utility for Linux v2.7.13
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading MFW1 image: from ver MFW1 7.12.27 to ver MFW1 7.12.31
Upgrading MFW2 image: to version MFW2 7.12.31
Upgrading SWIM1B image: to version SWIM1 7.12.31
Upgrading SWIM2B image: to version SWIM2 7.12.31
Upgrading SWIM3B image: to version SWIM3 7.12.31
Upgrading SWIM4B image: to version SWIM4 7.12.31
Upgrading SWIM5B image: to version SWIM5 7.12.31
Upgrading SWIM6B image: to version SWIM6 7.12.31
Upgrading SWIM7B image: to version SWIM7 7.12.31
Upgrading SWIM8B image: to version SWIM8 7.12.31
Forced upgrading E3_WC_V2 image: from ver N/A to ver N/A
Forced upgrading E3_PCIE_V2 image: from ver N/A to ver N/A
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading MFW1 image: from ver MFW1 7.12.31 to ver MFW1 7.12.31
Upgrading MFW2 image: to version MFW2 7.12.31
Upgrading SWIM1B image: to version SWIM1 7.12.31
Upgrading SWIM2B image: to version SWIM2 7.12.31
Upgrading SWIM3B image: to version SWIM3 7.12.31
Upgrading SWIM4B image: to version SWIM4 7.12.31
Upgrading SWIM5B image: to version SWIM5 7.12.31
Upgrading SWIM6B image: to version SWIM6 7.12.31
Upgrading SWIM7B image: to version SWIM7 7.12.31
Upgrading SWIM8B image: to version SWIM8 7.12.31
Forced upgrading E3_WC_V2 image: from ver N/A to ver N/A
Forced upgrading E3_PCIE_V2 image: from ver N/A to ver N/A
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
Successfully upgraded mf800v7c.31
******************************************************************************
QLogic Firmware Upgrade Utility for Linux v2.7.13
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Forced upgrading MBA image: from ver PCI30_CLP MBA 7.10.33;EFI x64 7.10.50 to
ver PCI30_CLP MBA 7.12.4
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1
Forced upgrading MBA image: from ver PCI30_CLP MBA 7.12.4 to ver PCI30_CLP MBA
7.12.4
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
Successfully upgraded evpxe.nic
******************************************************************************
QLogic Firmware Upgrade Utility for Linux v2.7.13
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
NIC is not supported.
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
NIC is not supported.
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
NIC is not supported.
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
NIC is not supported.
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading ISCSI_B image: from ver v7.12.1 to ver v7.12.1
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading ISCSI_B image: from ver v7.12.1 to ver v7.12.1
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
Successfully upgraded ibootv712.01
******************************************************************************
QLogic Firmware Upgrade Utility for Linux v2.7.13
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading FCOE_B image: from ver v7.12.4 to ver v7.12.4
skipping FCOE boot config block
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading FCOE_B image: from ver v7.12.4 to ver v7.12.4
skipping FCOE boot config block
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
Successfully upgraded fcbv712.04
******************************************************************************
QLogic Firmware Upgrade Utility for Linux v2.7.13
******************************************************************************
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading L2T image: from ver L2T 7.10.31 to ver L2T 7.10.31
Forced upgrading L2C image: from ver L2C 7.10.31 to ver L2C 7.10.31
Forced upgrading L2X image: from ver L2X 7.10.31 to ver L2X 7.10.31
Forced upgrading L2U image: from ver L2U 7.10.31 to ver L2U 7.10.31
C Brd  MAC          Drv Name
- ---- ------------ --- ------------------------------------------------------
0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2)
2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3)
3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
Forced upgrading L2T image: from ver L2T 7.10.31 to ver L2T 7.10.31
Forced upgrading L2C image: from ver L2C 7.10.31 to ver L2C 7.10.31
Forced upgrading L2X image: from ver L2X 7.10.31 to ver L2X 7.10.31
Forced upgrading L2U image: from ver L2U 7.10.31 to ver L2U 7.10.31
The System Reboot is required in order for the upgrade to take effect.
The System Reboot is required in order for the upgrade to take effect.
Quitting program ...
Program Exit Code: (95)
Successfully upgraded l2fwv710.31
8 iSCSI Protocol
• iSCSI Boot
• iSCSI Crash Dump
• iSCSI Offload in Windows Server
• iSCSI Offload in Linux Server
• iSCSI Offload on VMware Server
iSCSI Boot
QLogic 8400/3400 Series Gigabit Ethernet adapters support iSCSI boot to enable network boot of operating systems on diskless systems. iSCSI boot allows a Windows, Linux, or VMware operating system to boot from an iSCSI target machine located remotely over a standard IP network.
For both Windows and Linux operating systems, iSCSI boot can be configured to
boot with two distinctive paths: non-offload (also known as Microsoft/Open-iSCSI
initiator) and offload (QLogic’s offload iSCSI driver or HBA). Configuration of the
path is set with the HBA Boot Mode option located on the General Parameters
screen of the iSCSI Configuration utility. See Table 8-1 for more information on all
General Parameters screen configuration options.
Supported Operating Systems for iSCSI Boot
The QLogic 8400/3400 Series Gigabit Ethernet adapters support iSCSI boot on the following operating systems:
• Windows Server 2008 and later, 32-bit and 64-bit (supports offload and non-offload paths)
• RHEL 5.5 and later, SLES 11.1 and later (supports offload and non-offload paths)
• SLES 10.x and SLES 11 (supports only the non-offload path)
• VMware ESXi 5.0 and later (supports only the non-offload path)
iSCSI Boot Setup
The iSCSI boot setup consists of:
• Configuring the iSCSI Target
• Configuring iSCSI Boot Parameters
• Preparing the iSCSI Boot Image
• Booting
Configuring the iSCSI Target
Configuring the iSCSI target varies by target vendors. For information on
configuring the iSCSI target, refer to the documentation provided by the vendor.
The general steps include:
1. Create an iSCSI target (for targets such as SANBlaze® or IET®) or a vdisk/volume (for targets such as EqualLogic® or EMC®).
2. Create a virtual disk.
3. Map the virtual disk to the iSCSI target created in step 1.
4. Associate an iSCSI initiator with the iSCSI target.
5. Record the iSCSI target name, TCP port number, iSCSI Logical Unit Number (LUN), initiator Internet Qualified Name (IQN), and CHAP authentication details.
6. After configuring the iSCSI target, obtain the following:
• Target IQN
• Target IP address
• Target TCP port number
• Target LUN
• Initiator IQN
• CHAP ID and secret
Configuring iSCSI Boot Parameters
Configure the QLogic iSCSI boot software for either static or dynamic
configuration. Refer to Table 8-1 for configuration options available from the
General Parameters screen.
Table 8-1 lists parameters for both IPv4 and IPv6. Parameters specific to either
IPv4 or IPv6 are noted.
NOTE
Availability of IPv6 iSCSI boot is platform/device dependent.
Table 8-1. Configuration Options

TCP/IP parameters via DHCP: This option is specific to IPv4. Controls whether the iSCSI boot host software acquires the IP address information using DHCP (Enabled) or uses a static IP configuration (Disabled).

IP Autoconfiguration: This option is specific to IPv6. Controls whether the iSCSI boot host software configures a stateless link-local address and/or stateful address if DHCPv6 is present and used (Enabled). Router Solicit packets are sent out up to three times with 4-second intervals in between each retry. Or use a static IP configuration (Disabled).

iSCSI parameters via DHCP: Controls whether the iSCSI boot host software acquires its iSCSI target parameters using DHCP (Enabled) or through a static configuration (Disabled). The static information is entered through the iSCSI Initiator Parameters Configuration screen.

CHAP Authentication: Controls whether the iSCSI boot host software uses CHAP authentication when connecting to the iSCSI target. If CHAP Authentication is enabled, the CHAP ID and CHAP Secret are entered through the iSCSI Initiator Parameters Configuration screen.

DHCP Vendor ID: Controls how the iSCSI boot host software interprets the Vendor Class ID field used during DHCP. If the Vendor Class ID field in the DHCP Offer packet matches the value in the field, the iSCSI boot host software looks into the DHCP Option 43 fields for the required iSCSI boot extensions. If DHCP is disabled, this value does not need to be set.

Link Up Delay Time: Controls how long the iSCSI boot host software waits, in seconds, after an Ethernet link is established before sending any data over the network. The valid values are 0 to 255. As an example, a user may need to set a value for this option if a network protocol, such as Spanning Tree, is enabled on the switch interface to the client system.

Use TCP Timestamp: Controls whether the TCP Timestamp option is enabled or disabled.

Target as First HDD: Allows specifying that the iSCSI target drive will appear as the first hard drive in the system.

LUN Busy Retry Count: Controls the number of connection retries the iSCSI Boot initiator will attempt if the iSCSI target LUN is busy.

IP Version: This option is specific to IPv6. Toggles between the IPv4 and IPv6 protocols. All IP settings will be lost when switching from one protocol version to another.

HBA Boot Mode: Set to Disabled when the host OS is configured for software initiator mode and to Enabled for HBA mode. This option is available only on 8400 Series adapters. This parameter cannot be changed when the adapter is in Multi-Function mode.
MBA Boot Protocol Configuration
To configure the boot protocol
1. Restart your system.
2. In the QLogic 577xx/578xx Ethernet Boot Agent banner (Figure 8-1), press CTRL+S.
Figure 8-1. QLogic 577xx/578xx Ethernet Boot Agent
3. In the CCM device list (Figure 8-2), use the up or down arrow keys to select a device, and then press ENTER.
Figure 8-2. CCM Device List
4. In the Main menu, select MBA Configuration (Figure 8-3), and then press ENTER.
Figure 8-3. Selecting MBA Configuration
5. In the MBA Configuration menu (Figure 8-4), use the up or down arrow keys to select Boot Protocol. Use the left or right arrow keys to change the boot protocol option to iSCSI. Press ENTER.
Figure 8-4. Selecting the iSCSI Boot Protocol
NOTE
If iSCSI boot firmware is not programmed in the 8400/3400 Series network adapter, the iSCSI Boot Configuration option will not be available. The iSCSI boot parameters can also be configured using the Unified Extensible Firmware Interface (UEFI) Human Interface Infrastructure (HII) BIOS pages on servers that support it in their BIOS.
6. Proceed to “Static iSCSI Boot Configuration” on page 73 or “Dynamic iSCSI Boot Configuration” on page 77.
iSCSI Boot Configuration
• Static iSCSI Boot Configuration
• Dynamic iSCSI Boot Configuration
Static iSCSI Boot Configuration
In a static configuration, you must enter data for the system’s IP address, the
system’s initiator IQN, and the target parameters obtained in “Configuring the
iSCSI Target” on page 69. For information on configuration options, see Table 8-1.
To configure the iSCSI boot parameters using static configuration:
1. In the Main menu, select iSCSI Boot Configuration (Figure 8-5), and then press ENTER.
Figure 8-5. Selecting iSCSI Boot Configuration
2. In the iSCSI Boot Main menu, select General Parameters (Figure 8-6), and then press ENTER.
Figure 8-6. Selecting General Parameters
3. In the General Parameters menu, use the up or down arrow keys to select a parameter, and then use the right or left arrow keys to set the following values:
• TCP/IP Parameters via DHCP: Disabled (IPv4)
• IP Autoconfiguration: Disabled (IPv6)
• iSCSI Parameters via DHCP: Disabled
• CHAP Authentication: As required
• Boot to iSCSI Target: As required
• DHCP Vendor ID: As required
• Link Up Delay Time: As required
• Use TCP Timestamp: As required
• Target as First HDD: As required
• LUN Busy Retry Count: As required
• IP Version: As required (IPv6, non-offload)
• HBA Boot Mode: As required (HBA Boot Mode cannot be changed when the adapter is in Multi-Function mode)
NOTE
For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot to iSCSI Target to One Time Disabled. This causes the system not to boot from the configured iSCSI target after establishing a successful login and connection. This setting will revert to Enabled after the next system reboot. Enabled means to connect to an iSCSI target and attempt to boot from it. Disabled means to connect to an iSCSI target and not boot from that device, but instead hand off the boot vector to the next bootable device in the boot sequence.
4. Press ESC to return to the iSCSI Boot Main menu. Select Initiator Parameters, and then press ENTER.
5. In the Initiator Parameters menu, select the following parameters, and then type a value for each:
• IP Address
• Subnet Mask
• Default Gateway
• Primary DNS
• Secondary DNS
• iSCSI Name (corresponds to the iSCSI initiator name to be used by the client system)
• CHAP ID
• CHAP Secret
NOTE
Carefully enter the IP address. There is no error-checking performed against the IP address to check for duplicates or incorrect segment/network assignment.
6. Press ESC to return to the iSCSI Boot Main menu. Select 1st Target Parameters, and then press ENTER.
7. In the 1st Target Parameters menu, enable Connect to connect to the iSCSI target. Type values for the following parameters for the iSCSI target, and then press ENTER:
• IP Address
• TCP Port
• Boot LUN
• iSCSI Name
• CHAP ID
• CHAP Secret
8. Press ESC to return to the iSCSI Boot Main menu.
9. If you want to configure a second iSCSI target device, select 2nd Target Parameters, and enter parameter values as you did in Step 7. Otherwise, proceed to Step 10.
10. Press ESC once to return to the Main menu, and a second time to exit and save the configuration.
11. Select Exit and Save Configurations to save the iSCSI boot configuration (Figure 8-7). Otherwise, select Exit and Discard Configuration. Press ENTER.
Figure 8-7. Saving the iSCSI Boot Configuration
12. After all changes have been made, press CTRL+ALT+DEL to exit CCM and to apply the changes to the adapter's running configuration.
NOTE
In NPAR mode, ensure that the iSCSI function is configured on the first Physical Function (PF) for successful boot from SAN configuration.
Dynamic iSCSI Boot Configuration
In a dynamic configuration, you only need to specify that the system’s IP address
and target/initiator information are provided by a DHCP server (see IPv4 and IPv6
configurations in “Configuring the DHCP Server to Support iSCSI Boot” on
page 79). For IPv4, with the exception of the initiator iSCSI name, any settings on
the Initiator Parameters, 1st Target Parameters, or 2nd Target Parameters
screens are ignored and do not need to be cleared. For IPv6, with the exception of
the CHAP ID and Secret, any settings on the Initiator Parameters, 1st Target
Parameters, or 2nd Target Parameters screens are ignored and do not need to be
cleared. For information on configuration options, see Table 8-1.
NOTE
When using a DHCP server, the DNS server entries are overwritten by the
values provided by the DHCP server. This occurs even if the locally provided
values are valid and the DHCP server provides no DNS server information.
When the DHCP server provides no DNS server information, both the
primary and secondary DNS server values are set to 0.0.0.0. When the
Windows OS takes over, the Microsoft iSCSI initiator retrieves the iSCSI
Initiator parameters and configures the appropriate registries statically. It will
overwrite whatever is configured. Since the DHCP daemon runs in the
Windows environment as a user process, all TCP/IP parameters have to be
statically configured before the stack comes up in the iSCSI Boot
environment.
If DHCP Option 17 is used, the target information is provided by the DHCP server,
and the initiator iSCSI name is retrieved from the value programmed from the
Initiator Parameters screen. If no value was selected, then the controller defaults
to the name:
iqn.1995-05.com.qlogic.<11.22.33.44.55.66>.iscsiboot
where the string 11.22.33.44.55.66 corresponds to the controller’s MAC
address.
If DHCP option 43 (IPv4 only) is used, then any settings on the Initiator
Parameters, 1st Target Parameters, or 2nd Target Parameters screens are
ignored and do not need to be cleared.
To configure the iSCSI boot parameters using dynamic configuration
1.
From the General Parameters menu, set the following:




TCP/IP Parameters via DHCP: Enabled (IPv4)
IP Autoconfiguration: Enabled. (IPv6)
iSCSI Parameters via DHCP: Enabled
CHAP Authentication: As Required
77
83840-546-00 E
8–iSCSI Protocol
iSCSI Boot








Boot to iSCSI Target: As Required
DHCP Vendor ID: As Required
Link Up Delay Time: As Required
Use TCP Timestamp: As Required
Target as First HDD: As Required
LUN Busy Retry Count: As Required
IP Version: As Required
HBA Boot Mode: As Required4
NOTE
For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM
or mounted bootable OS installation image, set Boot to Target to One Time
Disabled. This causes the system not to boot from the configured iSCSI
target after establishing a successful login and connection. This setting will
revert to Enabled after the next system reboot. Enabled means to connect
to an iSCSI target and attempt to boot from it. Disabled means to connect to
an iSCSI target and not boot from that device, but instead hand off the boot
vector to the next bootable device in the boot sequence.
2.
Press ESC once to return to the Main menu, and a second time to exit and
save the configuration.
3.
Select Exit and Save Configurations to save the iSCSI boot configuration.
Otherwise, select Exit and Discard Configuration. Press ENTER.
4.
After all changes have been made, press CTRL+ALT+DEL to exit CCM and
to apply the changes to the adapter's running configuration.
NOTE
Settings on the Initiator Parameters and 1st Target Parameters screens are
ignored and do not need to be cleared.
4
HBA Boot Mode cannot be changed when the adapter is in Multi-Function mode.
Enabling CHAP Authentication
Ensure that CHAP authentication is enabled on the target.
To enable CHAP authentication
1.
From the General Parameters screen, set CHAP Authentication to
Enabled.
2.
From the Initiator Parameters screen, type values for the following:

CHAP ID (up to 128 bytes)

CHAP Secret (if authentication is required, and must be 12 characters
in length or longer)
3.
Select ESC to return to the Main menu.
4.
From the Main menu, select 1st Target Parameters.
5.
From the 1st Target Parameters screen, type values for the following using
the values used when configuring the iSCSI target:

CHAP ID (optional if two-way CHAP)

CHAP Secret (optional if two-way CHAP, and must be 12 characters in
length or longer)
6.
Select ESC to return to the Main menu.
7.
Select ESC and select Exit and Save Configuration.
Configuring the DHCP Server to Support iSCSI Boot
The DHCP server is an optional component and it is only necessary if you will be
doing a dynamic iSCSI Boot configuration setup (see “Dynamic iSCSI Boot
Configuration” on page 77).
Configuring the DHCP server to support iSCSI boot is different for IPv4 and IPv6.

DHCP iSCSI Boot Configurations for IPv4

DHCP iSCSI Boot Configuration for IPv6
DHCP iSCSI Boot Configurations for IPv4
The DHCP protocol includes a number of options that provide configuration
information to the DHCP client. For iSCSI boot, QLogic adapters support the
following DHCP configurations:

DHCP Option 17, Root Path

DHCP Option 43, Vendor-Specific Information
DHCP Option 17, Root Path
Option 17 is used to pass the iSCSI target information to the iSCSI client.
The format of the root path as defined in IETF RFC 4173 is:
"iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
Table 8-2 lists the DHCP option 17 parameters.
Table 8-2. DHCP Option 17 Parameter Definition

Parameter      Definition
"iscsi:"       A literal string
<servername>   The IP address or FQDN of the iSCSI target
":"            Separator
<protocol>     The IP protocol used to access the iSCSI target. Currently, only
               TCP is supported, so the protocol is 6.
<port>         The port number associated with the protocol. The standard port
               number for iSCSI is 3260.
<LUN>          The logical unit number to use on the iSCSI target. The value of
               the LUN must be represented in hexadecimal format. A LUN with an
               ID of 64 would have to be configured as 40 within the option 17
               parameter on the DHCP server.
<targetname>   The target name in either IQN or EUI format (refer to RFC 3720
               for details on both IQN and EUI formats). An example IQN name
               would be “iqn.1995-05.com.QLogic:iscsi-target”.
DHCP Option 43, Vendor-Specific Information
DHCP option 43 (vendor-specific information) provides more configuration options
to the iSCSI client than DHCP option 17. In this configuration, three additional
suboptions are provided that assign the initiator IQN to the iSCSI boot client along
with two iSCSI target IQNs that can be used for booting. The format for the iSCSI
target IQN is the same as that of DHCP option 17, while the iSCSI initiator IQN is
simply the initiator's IQN.
NOTE
DHCP Option 43 is supported on IPv4 only.
Table 8-3 lists the DHCP option 43 suboptions.
Table 8-3. DHCP Option 43 Suboption Definition

Suboption   Definition
201         First iSCSI target information in the standard root path format
            "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
202         Second iSCSI target information in the standard root path format
            "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
203         iSCSI initiator IQN
Using DHCP option 43 requires more configuration than DHCP option 17, but it
provides a richer environment with more configuration options. QLogic
recommends that customers use DHCP option 43 when performing dynamic
iSCSI boot configuration.
Configuring the DHCP Server
Configure the DHCP server to support option 17 or option 43.
NOTE
If using Option 43, you also need to configure Option 60. The value of
Option 60 should match the DHCP Vendor ID value. The DHCP Vendor ID
value is QLGC ISAN, as shown in General Parameters of the iSCSI Boot
Configuration menu.
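For reference, the following fragment sketches how these options might be expressed
on an ISC dhcpd server; other DHCP server products use different syntax. All
addresses, IQNs, and the vendor option space name shown here are placeholders and
are not part of the QLogic procedure.

# Option 17 (root path) form
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
  # "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>
  option root-path "iscsi:192.168.10.50:6:3260:0:iqn.1995-05.com.qlogic:iscsi-target";
}

# Option 43 form with suboptions 201/202/203; option 60 (vendor-class-identifier)
# must carry the DHCP Vendor ID QLGC ISAN
option space QLGC;
option QLGC.first-target code 201 = text;
option QLGC.second-target code 202 = text;
option QLGC.initiator-iqn code 203 = text;

class "qlgc-isan" {
  match if option vendor-class-identifier = "QLGC ISAN";
  vendor-option-space QLGC;
  option QLGC.first-target "iscsi:192.168.10.50:6:3260:0:iqn.1995-05.com.qlogic:iscsi-target";
  # Initiator IQN in the controller's default MAC-based format
  option QLGC.initiator-iqn "iqn.1995-05.com.qlogic.11.22.33.44.55.66.iscsiboot";
}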
DHCP iSCSI Boot Configuration for IPv6
The DHCPv6 server can provide a number of options, including stateless or
stateful IP configuration, as well as information to the DHCPv6 client. For iSCSI
boot, QLogic adapters support the following DHCP configurations:

DHCPv6 Option 16, Vendor Class Option

DHCPv6 Option 17, Vendor-Specific Information
NOTE
The DHCPv6 standard Root Path option is not yet available. QLogic
suggests using Option 16 or Option 17 for dynamic iSCSI Boot IPv6 support.
DHCPv6 Option 16, Vendor Class Option
DHCPv6 Option 16 (vendor class option) must be present and must contain a
string that matches your configured DHCP Vendor ID parameter. The DHCP
Vendor ID value is QLGC ISAN, as shown in General Parameters of the iSCSI
Boot Configuration menu.
The content of Option 16 should be <2-byte length> <DHCP Vendor ID>.
DHCPv6 Option 17, Vendor-Specific Information
DHCPv6 Option 17 (vendor-specific information) provides more configuration
options to the iSCSI client. In this configuration, three additional suboptions are
provided that assign the initiator IQN to the iSCSI boot client along with two iSCSI
target IQNs that can be used for booting.
Table 8-4 lists the DHCP option 17 suboptions.
Table 8-4. DHCP Option 17 Suboption Definition

Suboption   Definition
201         First iSCSI target information in the standard root path format
            "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>":"<targetname>"
202         Second iSCSI target information in the standard root path format
            "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>":"<targetname>"
203         iSCSI initiator IQN
NOTE
In Table 8-4, the brackets [ ] are required for the IPv6 addresses.
The content of option 17 should be <2-byte Option Number 201|202|203> <2-byte
length> <data>.
Configuring the DHCP Server
Configure the DHCP server to support Option 16 and Option 17.
NOTE
The formats of DHCPv6 Option 16 and Option 17 are fully defined in RFC 3315.
Preparing the iSCSI Boot Image

Windows Server 2008 R2 and SP2 iSCSI Boot Setup

Windows Server 2012/2012 R2 iSCSI Boot Setup

Linux iSCSI Boot Setup

Injecting (Slipstreaming) Adapter Drivers into Windows Image Files
Windows Server 2008 R2 and SP2 iSCSI Boot Setup
Windows Server 2008 R2 and Windows Server 2008 SP2 support booting and
installing in either the offload or non-offload paths.
The following procedure prepares the image for installation and booting in either
the offload or non-offload path. The procedure references Windows Server 2008 R2
but is common to both Windows Server 2008 R2 and Windows Server 2008 SP2.
Required CD/ISO image:

Windows Server 2008 R2 x64 with the QLogic drivers injected. See
“Injecting (Slipstreaming) Adapter Drivers into Windows Image Files” on
page 91. Also refer to the Microsoft knowledge base topic KB974072 at
support.microsoft.com.
NOTE
 The Microsoft procedure injects only the eVBD and NDIS drivers. QLogic
recommends that all drivers (eVBD, VBD, BXND, OIS, FCoE, and NDIS)
be injected.
 Refer to the silent.txt file for the specific driver installer application
for instructions on how to extract the individual Windows 8400/3400
Series drivers.
Other software required:

Bindview.exe (Windows Server 2008 R2 only; see KB976042)
Procedure:
1.
Remove any local hard drives on the system to be booted (the “remote
system”).
2.
Load the latest QLogic MBA and iSCSI boot images onto NVRAM of the
adapter.
3.
Configure the BIOS on the remote system to have the QLogic MBA as the
first bootable device, and the CDROM as the second device.
4.
Configure the iSCSI target to allow a connection from the remote device.
Ensure that the target has sufficient disk space to hold the new O/S
installation.
5.
Boot up the remote system. When the PXE banner appears, press CTRL+S
to enter the PXE menu.
6.
At the PXE menu, set Boot Protocol to iSCSI.
7.
Enter the iSCSI target parameters.
8.
Set HBA Boot Mode to Enabled or Disabled. (Note: This parameter
cannot be changed when the adapter is in Multi-Function mode.)
9.
Save the settings and reboot the system.
The remote system should connect to the iSCSI target and then boot from
the DVDROM device.
10.
Boot to DVD and begin installation.
11.
Answer all the installation questions appropriately (specify the Operating
System you want to install, accept the license terms, and so on).
When the Where do you want to install Windows? window appears, the
target drive should be visible. This is a drive connected through the iSCSI
boot protocol, located in the remote iSCSI target.
12.
Select Next to proceed with Windows Server 2008 R2 installation.
A few minutes after the Windows Server 2008 R2 DVD installation process
starts, a system reboot will follow. After the reboot, the Windows Server
2008 R2 installation routine should resume and complete the installation.
13.
Following another system restart, check and verify that the remote system is
able to boot to the desktop.
14.
After Windows Server 2008 R2 is booted up, load all drivers and run
Bindview.exe.
a.
Select All Services.
b.
Under WFP Lightweight Filter you should see Binding paths for the
AUT. Right-click and disable them. When done, close out of the
application.
15.
Verify that the OS and system are functional and can pass traffic by pinging
a remote system's IP address.
Windows Server 2012/2012 R2 iSCSI Boot Setup
Windows Server 2012/2012 R2 supports booting and installing in either the
offload or non-offload paths. QLogic requires the use of a “slipstream” DVD with
the latest QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers
into Windows Image Files” on page 91. Also refer to the Microsoft knowledge
base topic KB974072 at support.microsoft.com.
NOTE
The Microsoft procedure injects only the eVBD and NDIS drivers. QLogic
recommends that all drivers (eVBD, VBD, BXND, OIS, FCoE, and NDIS) be
injected.
The following procedure prepares the image for installation and booting in either
the offload or non-offload path:
1.
Remove any local hard drives on the system to be booted (the “remote
system”).
2.
Load the latest QLogic MBA and iSCSI boot images into the NVRAM of the
adapter.
3.
Configure the BIOS on the remote system to have the QLogic MBA as the
first bootable device and the CDROM as the second device.
4.
Configure the iSCSI target to allow a connection from the remote device.
Ensure that the target has sufficient disk space to hold the new O/S
installation.
5.
Boot up the remote system. When the Preboot Execution Environment
(PXE) banner appears, press CTRL+S to enter the PXE menu.
6.
At the PXE menu, set Boot Protocol to iSCSI.
7.
Enter the iSCSI target parameters.
8.
Set HBA Boot Mode to Enabled or Disabled. (Note: This parameter
cannot be changed when the adapter is in Multi-Function mode.)
9.
Save the settings and reboot the system.
The remote system should connect to the iSCSI target and then boot from
the DVDROM device.
10.
Boot from DVD and begin installation.
11.
Answer all the installation questions appropriately (specify the Operating
System you want to install, accept the license terms, and so on).
When the Where do you want to install Windows? window appears, the
target drive should be visible. This is a drive connected through the iSCSI
boot protocol, located in the remote iSCSI target.
12.
Select Next to proceed with Windows Server 2012 installation.
A few minutes after the Windows Server 2012 DVD installation process
starts, a system reboot will occur. After the reboot, the Windows Server 2012
installation routine should resume and complete the installation.
13.
Following another system restart, check and verify that the remote system is
able to boot to the desktop.
14.
After Windows Server 2012 boots to the OS, QLogic recommends running
the driver installer to complete the QLogic drivers and application
installation.
Linux iSCSI Boot Setup
Linux iSCSI boot is supported on Red Hat Enterprise Linux 5.5 and later and
SUSE Linux Enterprise Server 11 SP1 and later in both the offload and
non-offload paths. Note that SLES 10.x and SLES 11 have support only for the
non-offload path.
1.
For driver update, obtain the latest QLogic Linux driver CD.
2.
Configure the iSCSI Boot Parameters for DVD direct install to target by
disabling the Boot from target option on the network adapter.
3.
Configure to install through the non-offload path by setting HBA Boot Mode
to Disabled in the NVRAM Configuration. (Note: This parameter cannot be
changed when the adapter is in Multi-Function mode.). Note that, for
RHEL6.2 and SLES11SP2 and newer, installation through the offload path is
supported. For this case, set the HBA Boot Mode to Enabled in the NVRAM
Configuration.
4.
Change the boot order as follows:
a.
Boot from the network adapter.
b.
Boot from the CD/DVD drive.
5.
Reboot the system.
6.
The system will connect to the iSCSI target and then boot from the CD/DVD drive.
7.
Follow the corresponding OS instructions.
a.
RHEL 5.5: Type linux dd at the “boot:” prompt, and then press ENTER.
b.
SUSE 11.x: Choose installation and type withiscsi=1 netsetup=1 at the
boot option prompt.

This is intended as a starting set of kernel parameters. Please
consult SLES documentation for a full list of available options.

If driver update is desired, add “DUD=1” or choose YES for the
F6 driver option.

In some network configurations, if additional time is required for
the network adapters to become active (for example, with the use
of “netsetup=dhcp,all”), add “netwait=8”. This would allow the
network adapters additional time to complete the driver load and
re-initialization of all interfaces.
8.
At the “networking device” prompt, choose the desired network adapter port
and press OK.
9.
At the “configure TCP/IP” prompt, configure the way the system acquires an IP
address, and then press OK.
10.
If a static IP address was chosen, you need to enter the IP information for the iSCSI initiator.
11.
(RHEL) Choose to “skip” media testing.
12.
Continue installation as desired. A drive will be available at this point. After
file copying is done, remove CD/DVD and reboot the system.
13.
When the system reboots, enable “boot from target” in iSCSI Boot
Parameters and continue with installation until it is done.
At this stage, the initial installation phase is complete. The rest of the procedure
pertains to creating a new customized initrd for any new components update:
1.
Update iSCSI initiator if desired. You will first need to remove the existing
initiator using rpm -e.
2.
Make sure all run levels of network service are on:
chkconfig network on
3.
Make sure run levels 2, 3, and 5 of the iSCSI service are on:
chkconfig --level 235 iscsi on
4.
For Red Hat 6.0, make sure Network Manager service is stopped and
disabled.
5.
Install iscsiuio if desired (not required for SuSE 10).
6.
Install linux-nx2 package if desired.
7.
Install bibt package.
8.
Remove ifcfg-eth*.
9.
Reboot.
10.
For SUSE 11.1, follow the remote DVD installation workaround shown
below.
11.
After the system reboots, log in, change to the /opt/bcm/bibt folder, and run
iscsi_setup.sh script to create the offload and/or the non-offload initrd image.
12.
Copy the initrd image(s), offload and/or non-offload, to the /boot folder.
13.
Change the grub menu to point to the new initrd image.
14.
To enable CHAP, you need to modify iscsid.conf (Red Hat only); see the example following this procedure.
15.
Reboot and change CHAP parameters if desired.
16.
Continue booting into the iSCSI Boot image and select one of the images
you created (non-offload or offload). Your choice should correspond with
your choice in the iSCSI Boot parameters section. If HBA Boot Mode was
enabled in the iSCSI Boot Parameters section, you have to boot the offload
image. SLES 10.x and SLES 11 do not support offload.
17.
For IPv6, you can now change the IP address for both the initiator and the
target to the desired IPv6 address in the NVRAM configuration.
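As referenced in step 14, CHAP settings for the initiator live in /etc/iscsi/iscsid.conf.
The following is a minimal sketch of the relevant open-iscsi settings; the CHAP names
and secrets are placeholders, and secrets must be at least 12 characters long:

# /etc/iscsi/iscsid.conf (one-way CHAP)
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator-chap-id
node.session.auth.password = initiatorSecret12

# Add the following two lines for two-way (mutual) CHAP
node.session.auth.username_in = target-chap-id
node.session.auth.password_in = targetSecret1234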
SUSE 11.1 Remote DVD installation workaround
1.
Create a new file called boot.open-iscsi with the content shown below.
2.
Copy the file you just created to /etc/init.d/ folder and overwrite the existing
one.
Content of the new boot.open-iscsi file:
#!/bin/bash
#
# /etc/init.d/iscsi
#
### BEGIN INIT INFO
# Provides:          iscsiboot
# Required-Start:
# Should-Start:      boot.multipath
# Required-Stop:
# Should-Stop:       $null
# Default-Start:     B
# Default-Stop:
# Short-Description: iSCSI initiator daemon root-fs support
# Description:       Starts the iSCSI initiator daemon if the
#                    root-filesystem is on an iSCSI device
### END INIT INFO
ISCSIADM=/sbin/iscsiadm
ISCSIUIO=/sbin/iscsiuio
CONFIG_FILE=/etc/iscsid.conf
DAEMON=/sbin/iscsid
ARGS="-c $CONFIG_FILE"
# Source LSB init functions
. /etc/rc.status
#
# This service is run right after booting. So all targets activated
# during mkinitrd run should not be removed when the open-iscsi
# service is stopped.
#
iscsi_load_iscsiuio()
{
TRANSPORT=`$ISCSIADM -m session 2> /dev/null | grep "bnx2i"`
if [ "$TRANSPORT" ] ; then
echo -n "Launch iscsiuio "
startproc $ISCSIUIO
fi
}
iscsi_mark_root_nodes()
{
    $ISCSIADM -m session 2> /dev/null | while read t num i target ;
    do
        ip=${i%%:*}
        STARTUP=`$ISCSIADM -m node -p $ip -T $target 2> /dev/null | \
            grep "node.conn\[0\].startup" | cut -d' ' -f3`
        if [ "$STARTUP" -a "$STARTUP" != "onboot" ] ; then
            $ISCSIADM -m node -p $ip -T $target -o update \
                -n node.conn[0].startup -v onboot
        fi
    done
}
# Reset status of this service
rc_reset
# We only need to start this for root on iSCSI
if ! grep -q iscsi_tcp /proc/modules ; then
if ! grep -q bnx2i /proc/modules ; then
rc_failed 6
rc_exit
fi
fi
case "$1" in
start)
echo -n "Starting iSCSI initiator for the root device: "
iscsi_load_iscsiuio
startproc $DAEMON $ARGS
rc_status -v
iscsi_mark_root_nodes
;;
stop|restart|reload)
rc_failed 0
;;
status)
echo -n "Checking for iSCSI initiator service: "
if checkproc $DAEMON ; then
rc_status -v
else
rc_failed 3
rc_status -v
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart|reload}"
exit 1
;;
esac
rc_exit
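After creating the boot.open-iscsi file with the content above, it can be copied into
place and made executable as called for in the workaround steps, for example:

# cp boot.open-iscsi /etc/init.d/boot.open-iscsi
# chmod +x /etc/init.d/boot.open-iscsi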
Injecting (Slipstreaming) Adapter Drivers into Windows Image Files
To inject adapter drivers into the Windows image files:
1.
Obtain the latest driver package for the applicable Windows Server version
(2012 or 2012 R2).
2.
Extract the driver package to a working directory:
a.
Open a command line session and navigate to the folder that contains
the driver package.
b.
Type the following command to start the driver installer:
setup.exe /a
c.
In the Network location: field, type the path of the folder to which to
extract the driver package. For example, type c:\temp.
d.
Follow the driver installer instructions to install the drivers in the
specified folder. In this example, the driver files are installed in
c:\temp\Program Files 64\QLogic Corporation\QDrivers.
3.
Download the Windows Assessment and Deployment Kit (ADK) version 8.1
from http://www.microsoft.com/en-in/download/details.aspx?id=39982.
4.
Open a command line session (with administrator privilege) and navigate
through the release CD to the Tools\Slipstream folder.
5.
Locate the slipstream.bat script file, and then type the following
command:
slipstream.bat <path>
where <path> is the drive and subfolder that you specified in Step 2. For
example:
slipstream.bat “c:\temp\Program Files 64\QLogic Corporation\QDrivers”
NOTE
 Operating system installation media is expected to be a local drive.
Network paths for operating system installation media are not
supported.
 The slipstream.bat script injects the driver components in all
the SKUs that are supported by the operating system installation
media.
6.
Burn a DVD containing the resulting driver ISO image file located in the
working directory.
7.
Install the Windows Server operating system using the new DVD.
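The slipstream.bat script automates the driver injection. For reference only, the
following sketch shows roughly equivalent manual steps using Microsoft's DISM tool;
the image file, index, mount directory, and driver folder are placeholders, and this
is not a substitute for the QLogic-supplied script:

rem Mount the install image, inject the extracted drivers, and commit the change
dism /Mount-Image /ImageFile:C:\temp\sources\install.wim /Index:1 /MountDir:C:\mount
dism /Image:C:\mount /Add-Driver /Driver:"C:\temp\Program Files 64\QLogic Corporation\QDrivers" /Recurse
dism /Unmount-Image /MountDir:C:\mount /Commit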
Booting
After the system has been prepared for an iSCSI boot and the operating
system is present on the iSCSI target, the last step is to perform the actual boot.
The system will boot to Windows or Linux over the network and operate as if it
were a local disk drive.
1.
Reboot the server.
2.
Press CTRL+S.
3.
To boot through an offload path, set the HBA Boot Mode to Enabled. To boot
through a non-offload path, set the HBA Boot Mode to Disabled. This
parameter cannot be changed when the adapter is in Multi-Function mode.
If CHAP authentication is needed, enable CHAP authentication after determining
that booting is successful (see “Enabling CHAP Authentication” on page 79).
Configuring VLANs for iSCSI Boot
iSCSI traffic on the network may be isolated in a Layer-2 VLAN to segregate it
from general traffic. When this is the case, make the iSCSI interface on the
adapter a member of that VLAN.
1.
During a boot of the Initiator system, press CTRL+S to open the QLogic
CCM pre-boot utility (Figure 8-8).
Figure 8-8. Comprehensive Configuration Management
2.
In the CCM device list, use the up or down arrow keys to select a device,
and then press ENTER.
Figure 8-9. Configuring VLANs—CCM Device List
3.
In the Main menu, select MBA Configuration, and then press ENTER.
Figure 8-10. Configuring VLANs—Multiboot Agent Configuration
4.
In the MBA Configuration menu (Figure 8-11), use the up or down arrow
keys to select each of the following parameters.

VLAN Mode: Press ENTER to change the value to Enabled

VLAN ID: Press ENTER to open the VLAN ID dialog, type the target
VLAN ID (1–4096), and then press ENTER.
Figure 8-11. Configuring iSCSI Boot VLAN
5.
Press ESC once to return to the Main menu, and a second time to exit and
save the configuration.
6.
Select Exit and Save Configurations to save the VLAN for iSCSI boot
configuration (Figure 8-12). Otherwise, select Exit and Discard
Configuration. Press ENTER.
Figure 8-12. Saving the iSCSI Boot VLAN Configuration
7.
After all changes have been made, press CTRL+ALT+DEL to exit CCM and
to apply the changes to the adapter's running configuration.
Other iSCSI Boot Considerations
There are several other factors that should be considered when configuring a
system for iSCSI boot.
Changing the Speed and Duplex Settings in
Windows Environments
Changing the Speed & Duplex settings on the boot port using Windows Device
Manager when performing iSCSI boot through the offload path is not supported.
Booting through the NDIS path is supported. The Speed & Duplex settings can be
changed using the QCC GUI for iSCSI boot through the offload and NDIS paths.
Virtual LANs
Virtual LAN (VLAN) tagging is not supported for iSCSI boot with the Microsoft
iSCSI Software Initiator.
The 'dd' Method of Creating an iSCSI Boot Image
In the case when installation directly to a remote iSCSI target is not an option, an
alternate way to create such an image is to use the ‘dd’ method. With this method,
you install the image directly to a local hard drive and then create an iSCSI boot
image for the subsequent boot:
1.
Install Linux OS on your local hard drive and ensure that the Open-iSCSI
initiator is up to date.
2.
Ensure that all Runlevels of network service are on.
3.
Ensure that the 2, 3, and 5 Runlevels of iSCSI service are on.
4.
Update iscsiuio. You can get the iscsiuio package from the QLogic CD. This
step is not needed for SuSE 10.
5.
Install the linux-nx2 package on your Linux system. You can get this
package from QLogic CD.
6.
Install bibt package on you Linux system. You can get this package from
QLogic CD.
7.
Delete all ifcfg-eth* files.
8.
Configure one port of the network adapter to connect to iSCSI Target (for
instructions, see “Configuring the iSCSI Target” on page 69).
9.
Connect to the iSCSI Target.
10.
Use the dd command to copy from the local hard drive to the iSCSI target (see the example following this procedure).
11.
When DD is done, execute the sync command a couple of times, log out,
and then log in to iSCSI Target again.
12.
Run the fsck command on all partitions created on the iSCSI Target.
13.
Change to the /opt/bcm/bibt folder and run the iscsi_setup.sh script to
create the initrd images. Option 0 creates a non-offload image and option
1 creates an offload image. The iscsi_setup.sh script creates only the
non-offload image on SuSE 10, as offload is not supported on SuSE 10.
14.
Mount the /boot partition on the iSCSI Target.
15.
Copy the initrd images you created in step 13 from your local hard drive to
the partition mounted in step 14.
16.
On the partition mounted in step 14, edit the grub menu to point to the new
initrd images.
17.
Unmount the /boot partition on the iSCSI Target.
18.
(Red Hat Only) To enable CHAP, you need to modify the CHAP section of
the iscsid.conf file on the iSCSI Target. Edit the iscsid.conf file with
one-way or two-way CHAP information as desired.
19.
Shut down the system and disconnect the local hard drive. Now you are
ready to iSCSI boot the iSCSI Target.
20.
Configure iSCSI Boot Parameters, including CHAP parameters if desired
(see “Configuring the iSCSI Target” on page 69).
21.
Continue booting into the iSCSI Boot image and choose one of the images
you created (non-offload or offload). Your choice should correspond with
your choice in the iSCSI Boot parameters section. If HBA Boot Mode was
enabled in the iSCSI Boot Parameters section, you have to boot the offload
image. SuSE 10.x and SLES 11 do not support offload.
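As referenced in step 10, the copy from the local hard drive to the iSCSI target can
be done with a dd command along the following lines; the source and destination
device names below are placeholders, so verify which device is the local drive and
which is the iSCSI LUN (for example, with fdisk -l) before running dd:

# dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
# sync; sync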
Troubleshooting iSCSI Boot
The following troubleshooting tips are useful for iSCSI boot.
Problem: A system blue screen occurs when iSCSI boots Windows Server 2008
R2 through the adapter’s NDIS path with the initiator configured using a link-local
IPv6 address and the target configured using a router-configured IPv6 address.
Solution: This is a known Windows TCP/IP stack issue.
Problem: The QLogic iSCSI Crash Dump utility will not work properly to capture a
memory dump when the link speed for iSCSI boot is configured for 10Mbps or
100Mbps.
Solution: The iSCSI Crash Dump utility is supported when the link speed for
iSCSI boot is configured for 1Gbps or 10Gbps. 10Mbps or 100Mbps is not
supported.
Problem: An iSCSI target is not recognized as an installation target when you try
to install Windows Server 2008 by using an IPv6 connection.
Solution: This is a known third-party issue. See Microsoft Knowledge Base KB
971443, http://support.microsoft.com/kb/971443.
Problem: When switching iSCSI boot from the Microsoft standard path to QLogic
iSCSI offload, the booting fails to complete.
Solution: Prior to switching the iSCSI boot path, install or upgrade the QLogic
Virtual Bus Device (VBD) and OIS drivers to the latest versions.
Problem: The iSCSI configuration utility will not run.
Solution: Ensure that the iSCSI Boot firmware is installed in the NVRAM.
Problem: A system blue screen occurs when installing the QLogic drivers through
Windows Plug-and-Play (PnP).
Solution: Install the drivers through the Setup installer.
Problem: With a static IP configuration, switching from Layer 2 iSCSI boot to the
QLogic iSCSI HBA results in an IP address conflict.
Solution: Change the IP address of the network property in the OS.
Problem: After configuring the iSCSI boot LUN to 255, a system blue screen
appears when performing iSCSI boot.
Solution: Although QLogic’s iSCSI solution supports a LUN range from 0 to 255,
the Microsoft iSCSI software initiator does not support a LUN of 255. Configure a
LUN value from 0 to 254.
Problem: NDIS miniports show a Code 31 yellow-bang after an L2 iSCSI boot install.
Solution: Run the latest version of the driver installer.
Problem: Unable to update the inbox driver if a non-inbox hardware ID is present.
Solution: Create a custom slipstream DVD image with supported drivers present
on the install media.
Problem: In Windows Server 2012, toggling between iSCSI HBA offload mode
and iSCSI software initiator boot can leave the machine in a state where the HBA
offload miniport bxois will not load.
Solution: Manually edit
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\bxois\StartOverride]
from 3 to 0.
Modify the registry key before toggling back from NDIS to HBA path in CCM.
NOTE
Microsoft recommends against this method. Toggling the boot path from
NDIS to HBA or vice versa after installation is completed is not
recommended.
Problem: Installing Windows onto an iSCSI target through iSCSI boot fails when
connecting to a 1Gbps switch port.
Solution: This is a limitation relating to adapters that use SFP+ as the physical
connection. SFP+ defaults to 10Gbps operation and does not support
autonegotiation.
iSCSI Crash Dump
If you will use the QLogic iSCSI Crash Dump utility, it is important to follow the
installation procedure to install the iSCSI Crash Dump driver. See “Using the
Installer” on page 18 for more information.
iSCSI Offload in Windows Server
iSCSI offload is a technology that offloads iSCSI
protocol processing overhead from host processors to the iSCSI host bus adapter
to increase network performance and throughput while helping to optimize server
processor use. This section covers the Windows iSCSI offload feature for the 8400
Series family of network adapters.
iSCSI Offload Limitations
The bnx2i driver for iSCSI does not operate on a stand-alone PCI device. It
shares the same PCI device with the networking driver (bnx2 and bnx2x). The
networking driver alone supports layer 2 networking traffic. Offloaded iSCSI
operations require both the networking driver and the bnx2i driver.
iSCSI operations will be interrupted when the networking driver brings down or
resets the device. This scenario requires proper handling by the networking and
bnx2i drivers, and the user space iscsid daemon that keeps track of all iSCSI
sessions. Offloaded iSCSI connections take up system and on-chip resources that
must be freed up before the device can be reset. iscsid running in user space is
generally less predictable, as it can run slowly and take a long time to disconnect
and reconnect iSCSI sessions during network reset, especially when the number
of connections is large. QLogic cannot guarantee that iSCSI sessions will always
recover in every conceivable scenario when the networking device is repeatedly
being reset. QLogic recommends that administrator-administered network device
resets, such as MTU change, ring size change, device shutdown, hot-unplug, and
so forth, be kept at a minimum while there are active offloaded iSCSI sessions
running on that shared device. On the other hand, link-related changes do not
require device reset and are safe to be performed at any time.
To help alleviate some of the above issues, install the latest open-iscsi utilities by
upgrading your Red Hat Network subscription.
Configuring iSCSI Offload
With the proper iSCSI offload licensing, you can configure your iSCSI-capable
8400 Series network adapter to offload iSCSI processing from the host processor.
The following process enables your system to take advantage of QLogic’s iSCSI
offload feature.

Installing QLogic Drivers

Installing the Microsoft iSCSI Initiator

Configure Microsoft Initiator to Use QLogic’s iSCSI Offload
Installing QLogic Drivers
Install the Windows drivers as described in Chapter 4, Windows Driver Software.
Installing the Microsoft iSCSI Initiator
For Windows Server 2008 and later, the iSCSI initiator is included inbox. To
download the iSCSI initiator from Microsoft, go to
http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=18986
and locate the direct link for your system.
Configure Microsoft Initiator to Use QLogic’s iSCSI Offload
Now that the IP address has been configured for the iSCSI adapter, you need to
use Microsoft Initiator to configure and add a connection to the iSCSI target using
the QLogic iSCSI adapter. See Microsoft’s user guide for more details on Microsoft
Initiator.
1.
Open Microsoft Initiator.
2.
Configure the initiator IQN name according to your setup. To change, click
Change.
Figure 8-13. iSCSI Initiator Properties
3.
Type the initiator IQN name, and then click OK.
Figure 8-14. iSCSI Initiator Node Name Change
4.
Select the Discovery tab (Figure 8-15), and click Add to add a target portal.
Figure 8-15. iSCSI Initiator—Add a Target Portal
5.
Enter the IP address of the target and click Advanced (Figure 8-16).
Figure 8-16. Target Portal IP Address
6.
From the General tab, select QLogic 10 Gigabit Ethernet iSCSI Adapter
for the local adapter (Figure 8-17).
Figure 8-17. Selecting the Local Adapter
7.
Select the adapter IP address for the Initiator IP, and then click OK
(Figure 8-18).
Figure 8-18. Selecting the Initiator IP Address
8.
In the iSCSI Initiator Properties dialog box (Figure 8-19), click OK to add the
target portal.
Figure 8-19. Adding the Target Portal
9.
From the Targets tab (Figure 8-20), select the target and click Log On to log
into your iSCSI target using the QLogic iSCSI adapter.
Figure 8-20. Logging on to the iSCSI Target
10.
In the Log On to Target dialog box (Figure 8-21), click Advanced... .
Figure 8-21. Log On to Target Dialog Box
11.
On the General tab, select QLogic 10 Gigabit Ethernet iSCSI Adapter for
the local adapter, and then click OK to close Advanced settings.
12.
Click OK to close the Microsoft Initiator.
13.
To format your iSCSI partition, use Disk Manager.
NOTE
 Teaming does not support iSCSI adapters.
 Teaming does not support NDIS adapters that are in the boot path.
 Teaming supports NDIS adapters that are not in the iSCSI boot path, but
only for the SLB team type.
iSCSI Offload FAQs
Q: How do I assign an IP address for iSCSI offload?
A: Use the Configurations tab in the QCC GUI.
Q: What tools should be used to create the connection to the target?
A: Use Microsoft iSCSI Software Initiator (version 2.08 or later).
Q: How do I know that the connection is offloaded?
A: Use Microsoft iSCSI Software Initiator. From a command line, type iscsicli
sessionlist. From Initiator Name, an iSCSI offloaded connection will display
an entry beginning with “B06BDRV...”. A non-offloaded connection will display an
entry beginning with “Root...”.
Q: What IP addresses should be avoided?
A: The IP address should not be the same as the LAN.
Q: Why does the install fail when attempting to complete an iSCSI offload install
using Windows Server OS for 8400 Series adapters?
A: There is a conflict with the internal inbox driver.
Event Log Messages
Table 8-5 lists the offload iSCSI driver event log messages.
Table 8-5. Offload iSCSI (OIS) Driver Event Log Messages
Message Number
Severity
Message
1
Error
Initiator failed to connect to the target. Target IP address and
TCP Port number are given in dump data.
2
Error
The initiator could not allocate resources for an iSCSI session.
3
Error
Maximum command sequence number is not serially greater
than expected command sequence number in login
response. Dump data contains Expected Command
Sequence number followed by Maximum Command
Sequence number.
4
Error
MaxBurstLength is not serially greater than FirstBurstLength. Dump data contains FirstBurstLength followed by MaxBurstLength.
5
Error
Failed to setup initiator portal. Error status is given in the
dump data.
6
Error
The initiator could not allocate resources for an iSCSI connection
7
Error
The initiator could not send an iSCSI PDU. Error status is
given in the dump data.
8
Error
Target or discovery service did not respond in time for an
iSCSI request sent by the initiator. iSCSI Function code is
given in the dump data. For details about iSCSI Function
code please refer to iSCSI User's Guide.
9
Error
Target did not respond in time for a SCSI request. The CDB
is given in the dump data.
10
Error
Login request failed. The login response packet is given in
the dump data.
11
Error
Target returned an invalid login response packet. The login
response packet is given in the dump data.
12
Error
Target provided invalid data for login redirect. Dump data
contains the data returned by the target.
13
Error
Target offered an unknown AuthMethod. Dump data contains the data returned by the target.
14
Error
Target offered an unknown digest algorithm for CHAP. Dump
data contains the data returned by the target.
15
Error
CHAP challenge given by the target contains invalid characters. Dump data contains the challenge given.
16
Error
An invalid key was received during CHAP negotiation. The
key=value pair is given in the dump data.
17
Error
CHAP Response given by the target did not match the
expected one. Dump data contains the CHAP response.
18
Error
Header Digest is required by the initiator, but target did not
offer it.
19
Error
Data Digest is required by the initiator, but target did not offer
it.
20
Error
Connection to the target was lost. The initiator will attempt to
retry the connection.
21
Error
Data Segment Length given in the header exceeds MaxRecvDataSegmentLength declared by the target.
22
Error
Header digest error was detected for the given PDU. Dump
data contains the header and digest.
23
Error
Target sent an invalid iSCSI PDU. Dump data contains the
entire iSCSI header.
24
Error
Target sent an iSCSI PDU with an invalid opcode. Dump
data contains the entire iSCSI header.
25
Error
Data digest error was detected. Dump data contains the calculated checksum followed by the given checksum.
26
Error
Target trying to send more data than requested by the initiator.
27
Error
Initiator could not find a match for the initiator task tag in the
received PDU. Dump data contains the entire iSCSI header.
28
Error
Initiator received an invalid R2T packet. Dump data contains
the entire iSCSI header.
29
Error
Target rejected an iSCSI PDU sent by the initiator. Dump
data contains the rejected PDU.
30
Error
Initiator could not allocate a work item for processing a
request.
31
Error
Initiator could not allocate resource for processing a request.
32
Information
Initiator received an asynchronous logout message. The Target name is given in the dump data.
33
Error
Challenge size given by the target exceeds the maximum
specified in iSCSI specification.
34
Information
A connection to the target was lost, but Initiator successfully
reconnected to the target. Dump data contains the target
name.
35
Error
Target CHAP secret is smaller than the minimum size (12
bytes) required by the specification.
36
Error
Initiator CHAP secret is smaller than the minimum size (12
bytes) required by the specification. Dump data contains the
given CHAP secret.
37
Error
FIPS service could not be initialized. Persistent logons will
not be processed.
38
Error
Initiator requires CHAP for login authentication, but target
did not offer CHAP.
39
Error
Initiator sent a task management command to reset the target. The target name is given in the dump data.
40
Error
Target requires login authentication through CHAP, but Initiator is not configured to perform CHAP.
41
Error
Target did not send AuthMethod key during security negotiation phase.
42
Error
Target sent an invalid status sequence number for a connection. Dump data contains Expected Status Sequence number followed by the given status sequence number.
43
Error
Target failed to respond in time for a login request.
44
Error
Target failed to respond in time for a logout request.
45
Error
Target failed to respond in time for a login request. This login
request was for adding a new connection to a session.
46
Error
Target failed to respond in time for a SendTargets command.
47
Error
Target failed to respond in time for a SCSI command sent
through a WMI request.
48
Error
Target failed to respond in time to a NOP request.
49
Error
Target failed to respond in time to a Task Management
request.
50
Error
Target failed to respond in time to a Text Command sent to
renegotiate iSCSI parameters.
51
Error
Target failed to respond in time to a logout request sent in
response to an asynchronous message from the target.
52
Error
Initiator Service failed to respond in time to a request to configure IPSec resources for an iSCSI connection.
53
Error
Initiator Service failed to respond in time to a request to
release IPSec resources allocated for an iSCSI connection.
54
Error
Initiator Service failed to respond in time to a request to
encrypt or decrypt data.
55
Error
Initiator failed to allocate resources to send data to target.
56
Error
Initiator could not map an user virtual address to kernel virtual address resulting in I/O failure.
57
Error
Initiator could not allocate required resources for processing
a request resulting in I/O failure.
58
Error
Initiator could not allocate a tag for processing a request
resulting in I/O failure.
59
Error
Target dropped the connection before the initiator could transition to Full Feature Phase.
60
Error
Target sent data in SCSI Response PDU instead of Data_IN
PDU. Only Sense Data can be sent in SCSI Response.
61
Error
Target set DataPduInOrder to NO when initiator requested
YES. Login will be failed.
62
Error
Target set DataSequenceInOrder to NO when initiator
requested YES. Login will be failed.
63
Error
Cannot reset the target or LUN. Will attempt session recovery.
64
Information
Attempt to bootstrap Windows using iSCSI NIC Boot (iBF).
65
Error
Booting from iSCSI, but could not set any NIC in Paging
Path.
66
Error
Attempt to disable the Nagle Algorithm for iSCSI connection
failed.
67
Information
If Digest support selected for iSCSI Session, will use Processor support for Digest computation.
68
Error
After receiving an async logout from the target, attempt to
relogin the session failed. Error status is given in the dump
data.
69
Error
Attempt to recover an unexpected terminated session failed.
Error status is given in the dump data.
70
Error
Error occurred when processing iSCSI logon request. The
request was not retried. Error status is given in the dump
data.
71
Information
Initiator did not start a session recovery upon receiving the
request. Dump data contains the error status.
72
Error
Unexpected target portal IP types. Dump data contains the
expected IP type.
iSCSI Offload in Linux Server

Open iSCSI User Applications

User Application - qlgc_iscsiuio

Bind iSCSI Target to QLogic iSCSI Transport Name

VLAN Configuration for iSCSI Offload (Linux)

Making Connections to iSCSI Targets

Maximum Offload iSCSI Connections

Linux iSCSI Offload FAQ
Open iSCSI User Applications
Install and run the inbox open-iscsi initiator programs from the DVD. Refer to
“Packaging” on page 26 for details.
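Once the open-iscsi packages are installed, the initiator daemon must be running
before offloaded connections can be created. For example, on a RHEL-based system
(service names vary by distribution):

# service iscsid start
# chkconfig iscsid on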
User Application - qlgc_iscsiuio
Install and run the qlgc_iscsiuio daemon before attempting to create iSCSI
connections. The driver will not be able to establish connections to the iSCSI
target without the daemon's assistance.
1.
Install the qlgc_iscsiuio source package
# tar -xvzf iscsiuio-<version>.tar.gz
2.
CD to the directory where iscsiuio is extracted
# cd iscsiuio-<version>
3.
Compile and install
# ./configure
# make
# make install
4.
Check the iscsiuio version matches with the source package
# qlgc_iscsiuio -v
5.
Start the qlgc_iscsiuio daemon
# qlgc_iscsiuio
Bind iSCSI Target to QLogic iSCSI Transport Name
In Linux, each iSCSI port is an interface known as iface. By default, the open-iscsi
daemon connects to discovered targets using a software initiator (transport name
= tcp) with the iface name default. To offload the iSCSI connection to the CNIC
device, explicitly use the ifaces whose names have the prefix bnx2i. The bnx2i
ifaces are created automatically using the iscsiadm CLI utility as follows:
iscsiadm -m iface
for example:
linux-71lr:~ # iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
bnx2i.00:17:a4:77:ec:3b bnx2i,00:17:a4:77:ec:3b,<empty>,<empty>
bnx2i.00:17:a4:77:ec:3a bnx2i,00:17:a4:77:ec:3a,<empty>,<empty>
where the iface file includes the following information for RHEL 5.4, RHEL 5.5,
and SLES 11 SP1:
iface.net_ifacename = ethX
iface.iscsi_ifacename = <iface file>
iface.hwaddress = XX:XX:XX:XX:XX:XX
iface.ipaddress = XX.XX.XX.XX
iface.transport_name = bnx2i
Ensure that the iface.hwaddress is in lower case format.
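If a bnx2i iface record does not already exist for the port you want to use, one
can be created and populated with iscsiadm; the iface name and MAC address below are
taken from the example output above and are placeholders for your adapter's values:

# iscsiadm -m iface -I bnx2i.00:17:a4:77:ec:3b -o new
# iscsiadm -m iface -I bnx2i.00:17:a4:77:ec:3b -o update -n iface.hwaddress -v 00:17:a4:77:ec:3b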
VLAN Configuration for iSCSI Offload (Linux)
iSCSI traffic on the network may be isolated in a VLAN to segregate it from other
traffic. When this is the case, you must make the iSCSI interface on the adapter a
member of that VLAN.
Modifying the iSCSI iface File
To configure the iSCSI VLAN add the VLAN ID in the iface file for iSCSI. In the
following example, the VLAN ID is set to 100.
#Begin Record 6.2.0-873.2.el6
iface.iscsi_ifacename = <>
iface.ipaddress = 0.0.0.0
iface.hwaddress = <>
iface.transport_name = bnx2i
iface.vlan_id = 100
iface.vlan_priority = 0
iface.iface_num = 100
iface.mtu = 0
iface.port = 0
#END Record
NOTE
Although not strictly required, QLogic recommends configuring the same
VLAN ID on the iface.iface_num field for iface file identification purposes.
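On open-iscsi versions that expose the VLAN fields, the same values can be set with
iscsiadm instead of editing the iface file directly; the iface name and VLAN ID below
are placeholders:

# iscsiadm -m iface -I bnx2i.00:17:a4:77:ec:3b -o update -n iface.vlan_id -v 100
# iscsiadm -m iface -I bnx2i.00:17:a4:77:ec:3b -o update -n iface.vlan_priority -v 0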
Setting the VLAN ID on the Ethernet Interface
If using RHEL 5.x versions of Linux, it is recommended that you configure the
iSCSI VLAN on the Ethernet interface. In RHEL 6.3 and SLES 11 SP3, it is not
necessary to set the VLAN on the Ethernet driver.
Execute the following commands to set the VLAN ID:
vconfig add ethX <vlan number>
    Creates an L2 VLAN interface.
ifconfig ethX.<VLANID> <static ip> up
    Assigns an IP address to the VLAN interface.
Use the following command to get detailed information about VLAN interface:
# cat /proc/net/vlan/ethx.<vlanid>
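On newer distributions where the vconfig utility is deprecated, the same VLAN
interface can be created with the iproute2 ip command; eth0, VLAN ID 100, and the
IP address are example values:

# ip link add link eth0 name eth0.100 type vlan id 100
# ip addr add 192.168.100.10/24 dev eth0.100
# ip link set dev eth0.100 up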
Preserve the VLAN configuration across reboots by adding it to configuration files.
Configure the VLAN interface configuration file in /etc/sysconfig/network-scripts. The
configuration file name has a specific format that includes the physical interface, a
dot character, and the VLAN ID.
For example, if the VLAN ID is 100, and the physical interface is eth0, then the
configuration file name should be ifcfg-eth0.100. The following are example
settings in the configuration file.
DEVICE=ethX.100
BOOTPROTO=static
ONBOOT=yes
IPADDR=<>
NETMASK=<>
USERCTL=no
NETWORK=<>
VLAN=yes
Restart the networking service for the changes to take effect, as follows:
service network restart
Making Connections to iSCSI Targets
Refer to open-iscsi documentation for a comprehensive list of iscsiadm
commands. This is a sample list of commands to discover targets and to create
iSCSI connections to a target.
Add Static Entry
iscsiadm -m node -p <ipaddr[:port]> -T
iqn.2007-05.com.qlogic:target1 -o new -I <iface_file_name>
iSCSI Target Discovery Using 'SendTargets'
iscsiadm -m discovery --type sendtargets -p <ipaddr[:port]> -I
<iface_file_name>
Login to Target Using 'iscsiadm' Command
iscsiadm --mode node --targetname <iqn.targetname> --portal
<ipaddr[:port]> --login
List All Sessions
After a login, type the following command to show all sessions:
iscsiadm -m session
The following command shows more detail, such as the device name
(/dev/sdb), which can be verified using the fdisk -l command.
iscsiadm -m session -P2
List All Drives Active in the System
fdisk -l
Maximum Offload iSCSI Connections
With the default driver parameters set, which include 128 outstanding commands,
bnx2i can offload 128 connections on QLogic 8400 Series adapters.
This is not a hard limit, but simply a consequence of on-chip resource allocation.
bnx2i will be able to offload more connections by reducing the shared queue size,
which in turn limits the maximum outstanding tasks on a connection. See “Setting
Values for Optional Properties” on page 35 for information on sq_size and rq_size.
The driver logs the following message to syslog when the maximum allowed
connection offload limit is reached: “bnx2i: unable to allocate iSCSI context
resources”.
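For example, the shared queue size can be reduced through a bnx2i module option so
that more connections can be offloaded; the value below is only illustrative, and the
supported values are described in “Setting Values for Optional Properties” on page 35:

# /etc/modprobe.d/bnx2i.conf
# A smaller per-connection shared queue frees on-chip resources for more offloaded connections
options bnx2i sq_size=64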
Linux iSCSI Offload FAQ

QLogic 3400 Series adapters do not support iSCSI offload.

The iSCSI session will not recover after a hot remove and hot plug.

For MPIO to work properly, iSCSI nopout should be enabled on each iSCSI
session. Refer to open-iscsi documentation for procedures on setting up
noop_out_interval and noop_out_timeout values.

In the scenario where multiple CNIC devices are in the system and the
system is booted with QLogic’s iSCSI boot solution, ensure that the iSCSI
node under /etc/iscsi/nodes for the boot target is bound to the NIC that is
used for booting.
iSCSI Offload on VMware Server
The bnx2i driver is the QLogic VMware iSCSI adapter driver supporting iSCSI
offload and jumbo frames up to 9,000 bytes on VMware ESXi 5.x and 6.0.
Lossless iSCSI-Offload-TLV over DCB is supported on ESXi 5.x and 6.0.
Similar to bnx2fc, bnx2i is a kernel mode driver used to provide a translation layer
between the VMware SCSI stack and the QLogic iSCSI firmware/hardware. Bnx2i
functions under the open-iscsi framework.
iSCSI traffic on the network may be isolated in a VLAN to segregate it from other
traffic. When this is the case, you must make the iSCSI interface on the adapter a
member of that VLAN.
To configure the VLAN using the vSphere client (GUI):
1.
Click the ESXi host.
2.
Click the Configuration tab.
3.
Click the Networking link, and then click Properties.
4.
Click the virtual switch/port groups in the Ports tab, and then click Edit.
5.
Click the General tab.
6.
Assign a VLAN number in VLAN ID (optional).
Figure 8-22. Assigning a VLAN Number
7.
Configure the VLAN on VMKernel (Figure 8-23).
Figure 8-23. Configuring the VLAN on VMKernel
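The same VLAN assignment can also be made from the ESXi command line with esxcli;
the port group name and VLAN ID below are placeholders:

# esxcli network vswitch standard portgroup set --portgroup-name "iSCSI-PortGroup" --vlan-id 100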
9 Fibre Channel Over Ethernet

Overview

FCoE Boot from SAN

Configuring FCoE
Overview
In today’s data center, multiple networks, including network attached storage
(NAS), management, IPC, and storage, are used to achieve the desired
performance and versatility. In addition to iSCSI for storage solutions, FCoE can
now be used with capable QLogic C-NICs. FCoE is a standard that allows Fibre
Channel protocol to be transferred over Ethernet by preserving existing Fibre
Channel infrastructures and capital investments by classifying received FCoE and
FIP frames.
The following FCoE features are supported:

Receiver classification of FCoE and FIP frames. FIP is the FCoE
Initialization Protocol used to establish and maintain connections.

Receiver CRC offload

Transmitter CRC offload

Dedicated queue set for Fibre Channel traffic

DCB provides lossless behavior with PFC

DCB allocates a share of link bandwidth to FCoE traffic with ETS
DCB consolidates storage, management, computing, and communications fabrics
onto a single physical fabric that is simpler to deploy, upgrade, and maintain than
standard Ethernet networks. DCB technology allows the capable QLogic
C-NICs to provide lossless data delivery, lower latency, and standards-based
bandwidth sharing of data center physical links. DCB supports FCoE, iSCSI,
Network-Attached Storage (NAS), management, and IPC traffic flows. For more
information on DCB, see Chapter 14, Data Center Bridging (DCB).
FCoE Boot from SAN
This section describes the install and boot procedures for the Windows, Linux,
and ESXi operating systems.
NOTE
FCoE Boot from SAN is not supported on ESXi 5.0. ESXi Boot from SAN is
supported on ESXi 5.1 and later.
The following section details the BIOS setup and configuration of the boot
environment prior to the OS install.
Preparing System BIOS for FCoE Build and Boot
Modify System Boot Order
The QLogic initiator must be the first entry in the system boot order. The second
entry must be the OS installation media. It is important that the boot order be set
correctly or else the installation will not proceed correctly. Either the desired boot
LUN will not be discovered or it will be discovered but marked offline.
Specify BIOS Boot Protocol (if required)
On some platforms, the boot protocol must be configured through system BIOS
configuration. On all other systems the boot protocol is specified through the
QLogic Comprehensive Configuration Management (CCM), and for those
systems this step is not required.
Prepare QLogic Multiple Boot Agent for FCoE Boot
1.
During POST, press CTRL+S at the QLogic Ethernet Boot Agent banner to
invoke the CCM utility.
2.
Select the device through which boot is to be configured (Figure 9-1).
NOTE
When running in NIC Partitioning (NPAR) mode, FCoE boot is supported
only when the first function on each port is assigned an FCoE personality.
FCoE boot is not supported when the FCoE personality is assigned to any
other function.
Figure 9-1. FCoE Boot—CCM Device List
3.
Ensure DCB/DCBX is enabled on the device (Figure 9-2). FCoE boot is only
supported on DCBX capable configurations. As such, DCB/DCBX must be
enabled, and the directly attached link peer must also be DCBX capable with
parameters that allow for full DCBX synchronization.
Figure 9-2. FCoE Boot—Enable DCB/DCBX
4.
On some platforms, you may need to set the boot protocol through system
BIOS configuration in the integrated devices pane as described above.
For all other devices, set the Boot Protocol field to FCoE in the MBA
Configuration Menu (Figure 9-3) through CCM.
Figure 9-3. FCoE Boot—Select FCoE Boot Protocol
5.
Configure the desired boot target and LUN. From the Target Information
Menu (Figure 9-4), select the first available path.
Figure 9-4. FCoE Boot—Target Information
6.
Enable the Connect field. Enter the target WWPN and Boot LUN
information for the target to be used for boot (Figure 9-5).
Figure 9-5. FCoE Boot—Specify Target WWPN and Boot LUN
Figure 9-6. FCoE Boot Target Information
7.
Press ESC until prompted to exit and save changes. To exit CCM, restart the
system, and apply changes, press CTRL+ALT+Del.
8.
Proceed to OS installation once storage access has been provisioned in the
SAN.
UEFI Boot LUN Scanning
UEFI boot LUN scanning eases the task of configuring FCoE boot from SAN by
allowing you to select a target WWPN from a list of discovered targets instead of
typing the WWPN manually.
To configure FCoE boot from SAN using UEFI boot LUN scanning:
1.
In the Main Menu, select FCoE Boot Configuration, and then press
ENTER.
2.
In the FCoE Boot Configuration menu, select FCoE Target Parameters,
and then press ENTER.
Figure 9-7. FCoE Boot Configuration Menu
3.
In the FCoE Target Parameters window, there are eight target entries in
which you can enable the target (Connect n), select or type a WWPN
(WWPN n), and type a boot LUN number (Boot LUN n) (Figure 9-8).
Figure 9-8. FCoE Target Parameters Window
The first six target entries (1-6) enable you to select a WWPN from a menu.
Use the UP and DOWN arrows to select the WWPN field, and then press
ENTER. In the WWPN list, use the UP and DOWN arrows to select a
WWPN, and then press ENTER.
Figure 9-9. Selecting an FCoE WWPN
The last two target entries (7, 8) enable you to type a WWPN. In the WWPN
field, type the WWPN, and then press ENTER.
4.
Press ESC until prompted to exit and save changes.
Provisioning Storage Access in the SAN
Storage access consists of zone provisioning and storage selective LUN
presentation, each of which is commonly provisioned per initiator WWPN. Two
main paths are available for approaching storage access:

Pre-Provisioning

CTRL+R Method
Pre-Provisioning
With pre-provisioning, note the initiator WWPN and manually modify fabric zoning
and storage selective LUN presentation to allow the appropriate access for the
initiator.
The initiator WWPN can be seen at the bottom of the screen in the FCoE boot
target configuration window.
The initiator WWPN can also be directly inferred from the FIP MAC address
associated with the interface(s) planned for boot. Two MAC addresses are printed
on stickers attached to the SFP+ cage on your adapter. The FIP MAC ends in an
odd digit. The WWPN is 20:00: + <FIP MAC>. For example, if the FIP MAC is
00:10:18:11:22:33, then the WWPN will be 20:00:00:10:18:11:22:33.
NOTE
The default WWPN is 20:00: + <FIP MAC>. The default WWNN is 10:00: +
<FIP MAC>.
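A minimal shell sketch of this derivation, using the placeholder FIP MAC from the
example above:
# FIP_MAC=00:10:18:11:22:33
# echo "WWPN: 20:00:${FIP_MAC}"
WWPN: 20:00:00:10:18:11:22:33
# echo "WWNN: 10:00:${FIP_MAC}"
WWNN: 10:00:00:10:18:11:22:33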
CTRL+R Method
The CTRL+R method allows you to use the boot initiator to bring up the link and
log in to all available fabrics and targets. Using this method, you can ensure that
the initiator is logged in to the fabric and target before making provisioning
changes, and as such, you can provision without manually typing in WWPNs.
1.
Configure at least one boot target through CCM as described above.
2.
Allow the system to attempt to boot through the selected initiator.
3.
Once the initiator boot starts, it will commence with DCBX sync, FIP
Discovery, Fabric Login, Target Login, and LUN readiness checks. As each
of these phases completes, if the initiator is unable to proceed to the next
phase, MBA will present the option to press CTRL+R.
4.
Once CTRL+R has been activated, the boot initiator will maintain a link in
whatever phase has most recently succeeded and allow you time to make
the necessary provisioning corrections to proceed to the next phase.
5.
If the initiator logs into the fabric, but is unable to log into the target, a
CTRL+R will pause the boot process and allow you to configure fabric
zoning.
6.
Once zoning is complete, the initiator will automatically log into all visible
targets. If the initiator is unable to discover the designated LUN on the
designated target as provisioned in step 1, CTRL+R will pause the boot
process and allow you to configure selective LUN presentation.
7.
The boot initiator will periodically poll the LUN for readiness, and once the
user has provisioned access to the LUN, the boot process will automatically
proceed.
NOTE
This does not preclude the need to put the boot initiator into one-time
disabled mode as described in “One-Time Disabled” on page 126.
One-Time Disabled
QLogic's FCoE ROM is implemented as a Boot Entry Vector (BEV). In this
implementation, the Option ROM connects to the target only after the BIOS has
selected it as the boot device. This differs from other implementations, which
connect to the boot device even if another device has been selected by the
system BIOS. For OS installation over the FCoE path, the Option ROM must be
instructed to bypass FCoE and skip to the CD/DVD installation media. As
instructed earlier, the boot order must be configured with the QLogic boot device
first and the installation media second. Furthermore, during OS installation you
must bypass the FCoE boot and pass through to the installation media, and you
must do so by one-time disabling the FCoE boot ROM, not by simply letting the
FCoE ROM attempt to boot and allowing the BIOS to fail through to the
installation media. Finally, the FCoE ROM must successfully discover and test
the readiness of the desired boot LUN for the installation to proceed
successfully. Failure to let the boot ROM discover the LUN and perform a
coordinated bypass will result in a failure to properly install the OS to the LUN.
To effect this coordinated bypass, there are two choices:

Once the FCoE boot ROM discovers a ready target LUN, it will prompt you
to press CTRL+D within four seconds to Stop booting from the target.
Press CTRL+D, and proceed to boot from the installation media.

From CCM, set the Option ROM setting under MBA settings to One Time
Disabled. With this setting, the FCoE ROM will load once and automatically
bypass once the ready LUN is discovered. On the subsequent reboot after
installation, the option ROM will automatically revert to Enabled.
Wait through all option ROM banners. Once FCoE boot is invoked, it will connect
to the target, and provide a four second window to press CTRL+D to invoke the
bypass. Press CTRL+D to proceed to installation.
Figure 9-10. One-time Disabled
Windows Server 2008 R2 and Windows Server 2008 SP2
FCoE Boot Installation
Ensure that no USB flash drive is attached before starting the OS installer. The
EVBD and OFC/BXFOE drivers need to be loaded during installation. Go through
the normal procedures for OS installation. When no disk devices are found,
Windows will prompt you to load additional drivers. At this point, connect a USB
flash drive containing the full contents of the provided EVBD and OFC boot driver
folders. After all appropriate drivers are loaded, the setup shows the target disk(s).
Disconnect the USB flash drive before selecting the disk for installation.
1.
Load the EVBD driver first (Figure 9-11).
Figure 9-11. Load EVBD Driver
2.
Then load the bxfcoe (OFC) driver (Figure 9-12).
Figure 9-12. Load bxfcoe Driver
3.
Select the boot LUN to be installed (Figure 9-13).
Figure 9-13. Selecting the FCoE Boot LUN
4.
Continue with the rest of the installation. After installation is complete and
booted to SAN, execute the provided Windows driver installer and reboot.
Installation is now complete.
NOTE
The boot initiator must be configured to point at the desired installation LUN,
and the boot initiator must have successfully logged in and determined the
readiness of the LUN before installation starts. If these requirements are not
met, the devices will still appear in the drive list above, but read/write errors
will occur when the installation proceeds.
Windows Server 2012/2012 R2 FCoE Boot Installation
For Windows Server 2012/2012 R2 Boot from SAN installation, QLogic requires
the use of a “slipstream” DVD or ISO image with the latest QLogic drivers injected.
See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files” on
page 91 in the iSCSI chapter. Also refer to the Microsoft Knowledge Base topic
KB974072 at support.microsoft.com, which is also helpful for Windows Server 2012
FCoE Boot from SAN. Microsoft's procedure injects only the eVBD and NDIS
drivers. QLogic strongly recommends injecting all of the following drivers:

eVBD

VBD

BXND

OIS

FCoE

NDIS
Once you have a properly slipstreamed ISO, you can use that ISO for normal
Windows Server 2012 installation, without needing USB-provided drivers.
NOTE
Refer to the silent.txt file for the specific driver installer application for
instructions on how to extract the individual Windows 8400 Series drivers.
Linux FCoE Boot Installation
Configure the adapter boot parameters and Target Information (press CTRL+S
and enter the CCM utility) as detailed in “Preparing System BIOS for FCoE Build
and Boot” on page 118. Then, use the guidelines in the following sections for
FCoE boot installation with the appropriate Linux version.

SLES11 SP2 Installation

RHEL6 Installation
SLES11 SP2 Installation
1.
Boot from the SLES11 SP2 installation medium and on the installation
splash screen press F6 for driver update disk. Select Yes. In boot options
(Figure 9-14), type withfcoe=1. Select Installation to proceed.
Figure 9-14. SLES Boot Options Window
2.
Follow the on-screen instructions to choose the Driver Update medium and
load drivers (Figure 9-15).
Figure 9-15. Choosing Driver Update Medium
3.
Once the driver update is complete, select Next to continue with OS
installation.
4.
When requested, click Configure FCoE Interfaces.
5.
Ensure FCoE Enable is set to yes on the 10GbE QLogic initiator ports you
wish to use as the SAN boot path(s).
6.
For each interface to be enabled for FCoE boot, click Change Settings and
ensure FCoE Enable and AUTO_VLAN are set to yes and DCB required is
set to no.
7.
For each interface to be enabled for FCoE boot, click on Create FCoE
VLAN Interface. The VLAN interface creation dialog will launch. Click Yes
to confirm. This will trigger automatic FIP VLAN discovery. If successful, the
VLAN will be displayed under FCoE VLAN Interface. If no VLAN is visible,
check your connectivity and switch configuration.
8.
Once configuration of all interfaces is complete, click OK to proceed.
9.
Click Next to continue installation. YaST2 will prompt to activate multipath.
Answer as appropriate.
10.
Continue installation as usual.
11.
Under the Expert tab on the Installation Settings screen, select Booting.
12.
Select the Boot Loader Installation tab, and then select Boot Loader
Installation Details. Make sure there is only one boot loader entry here, and
delete any redundant entries.
13.
Click OK to proceed and complete installation.
RHEL6 Installation
1.
Boot from the installation medium.
2.
For RHEL6.3, an updated Anaconda image is required for FCoE BFS. That
updated image is provided by Red Hat at the following URL
http://rvykydal.fedorapeople.org/updates.823086-fcoe.img.
3.
For RHEL6.3, on the installation splash screen, press Tab and add the
options dd updates=<URL_TO_ANACONDA_UPDATE_IMAGE> to the boot
command line. Please refer to the Red Hat Installation Guide, Section 28.1.3
(http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Insta
llation_Guide/ap-admin-options.html#sn-boot-options-update) for details
about installing the Anaconda update image. Press ENTER to proceed.
4.
For RHEL6.4 and above, no updated Anaconda image is required. On the
installation splash screen, press Tab and add the option dd to the boot
command line. Press ENTER to proceed.
5.
When prompted Do you have a driver disk, enter Yes.
NOTE
RHEL does not allow driver update media to be loaded over the
network when installing driver updates for network devices. Use local
media.
6.
Once drivers are loaded, proceed with installation.
7.
Select Specialized Storage Devices when prompted.
8.
Click Add Advanced Target.
9.
Select Add FCoE SAN, and then select Add drive.
10.
For each interface intended for FCoE boot, select the interface, deselect
Use DCB, select Use auto vlan, and then click Add FCoE Disk(s).
11.
Repeat steps 8 through 10 for all initiator ports.
12.
Confirm all FCoE visible disks are visible under Multipath Devices and/or
Other SAN Devices.
13.
Click Next to proceed.
14.
Click Next and complete installation as usual.
Upon completion of installation, the system will reboot.
15.
Once booted, ensure all boot path devices are set to start on boot. Set
onboot=yes under each network interface config file in
/etc/sysconfig/network-scripts.
16.
On RHEL 6.4 only, edit /boot/grub/menu.lst.
a.
Delete all fcoe=<INTERFACE>:nodcb parameters from the kernel
/vmlinuz … line. There should be as many fcoe= parameters as
there were FCoE interfaces configured during installation.
b.
Insert fcoe=edd:nodcb into the kernel /vmlinuz … line, as shown in the sketch below.
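As an illustration only, the step 15 and step 16 edits might look like the following;
the kernel version, root device, and interface names are placeholders, not values
from a real system.
In each boot-path interface file under /etc/sysconfig/network-scripts (step 15):
ONBOOT=yes
Kernel line in /boot/grub/menu.lst before the step 16 edit (one fcoe= entry per
FCoE interface configured during installation):
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/mpatha2 fcoe=eth2:nodcb fcoe=eth3:nodcb
The same kernel line after the step 16 edit:
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/mpatha2 fcoe=edd:nodcb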
Linux: Adding Additional Boot Paths
Both RHEL and SLES require updates to the network configuration when adding
a new boot path through an FCoE initiator that was not configured during installation.
The following sections describe this procedure for each supported operating
system.
RHEL6.2 and Above
On RHEL6.2 and above, if the system is configured to boot through an initiator
port that has not previously been configured in the OS, the system automatically
boots successfully, but will encounter problems during shutdown. All new boot
path initiator ports must be configured in the OS before updating pre-boot FCoE
boot parameters.
1.
Identify the network interface names for the newly added interfaces through
ifconfig -a.
2.
Edit /boot/grub/menu.lst.
Add ifname=<INTERFACE>:<MAC_ADDRESS> to the kernel
/vmlinuz … line for each new interface. The MAC address must be all lower
case and colon-separated (for example, ifname=em1:00:00:00:00:00:00).
3.
Create a /etc/fcoe/cfg-<INTERFACE> file for each new FCoE initiator
by duplicating the /etc/fcoe/cfg-<INTERFACE> file that was already
configured during initial installation.
4.
Execute nm-connection-editor.
5.
a.
Open Network Connection and choose each new interface.
b.
Configure each interface as desired, including DHCP settings.
c.
Click Apply to save.
For each new interface, edit
/etc/sysconfig/network-scripts/ifcfg-<INTERFACE> to add the
line NM_CONTROLLED="no". Modifying these files automatically causes the
network service to restart, which may make the system appear to hang
briefly. It is best to ensure that redundant multipath paths are available
before performing this operation. A consolidated command sketch follows
this procedure.
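As an illustration only, steps 1, 3, and the final edit for a hypothetical new
interface em2, cloned from an interface em1 that was configured during
installation, might look like the following (interface names, the MAC address,
and file names are placeholders):
# ifconfig -a
# cp /etc/fcoe/cfg-em1 /etc/fcoe/cfg-em2
# echo 'NM_CONTROLLED="no"' >> /etc/sysconfig/network-scripts/ifcfg-em2
For step 2, the kernel /vmlinuz … line in /boot/grub/menu.lst gains an entry such
as ifname=em2:00:10:18:aa:bb:cc (lowercase, colon-separated).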
SLES 11 SP2 and Above
On SLES11 SP2, if the system boots through an initiator that has not been
configured as an FCoE interface during installation, the system will fail to boot. To
add new boot paths, the system must boot up through the configured FCoE
interface.
1.
Configure a new FCoE interface that will be added as a new path so it can
discover the boot LUN.
a.
Create a /etc/fcoe/cfg-<INTERFACE> file for each new FCoE
initiator by duplicating the /etc/fcoe/cfg-<INTERFACE> file that
was already configured during initial installation.
b.
Bring up the new interfaces:
# ifconfig <INTERFACE> up
c.
Restart FCoE service:
# rcfcoe restart
For SLES 12:
# systemctl restart fcoe
2.
Run multipath -l to make sure the system has a correct number of
multipaths to the boot LUN, including new paths.
3.
Create a /etc/sysconfig/network/ifcfg-<INTERFACE> file for each
new interface by duplicating the
/etc/sysconfig/network/ifcfg-<INTERFACE> file that was already
configured during initial installation.
4.
Create a new ramdisk to update changes:
# mkinitrd
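As an illustration only, adding a hypothetical new path through interface eth3,
cloned from an interface eth2 that was configured during installation, might look
like the following on SLES 11 SPx (interface names are placeholders):
# cp /etc/fcoe/cfg-eth2 /etc/fcoe/cfg-eth3
# ifconfig eth3 up
# rcfcoe restart
# multipath -l
# cp /etc/sysconfig/network/ifcfg-eth2 /etc/sysconfig/network/ifcfg-eth3
# mkinitrd
On SLES 12, substitute systemctl restart fcoe for rcfcoe restart, as noted in step 1.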
VMware ESXi FCoE Boot Installation
FCoE Boot from SAN requires that the latest QLogic 8400 Series asynchronous
drivers be included into the ESXi (5.1, 5.5, 6.0) install image. Refer to
Image_builder_doc.pdf from VMware on how to slipstream drivers.
1.
Boot from the updated ESXi installation image and select ESXi installer
when prompted.
2.
Press ENTER to continue.
3.
Press F11 to accept the agreement and continue.
4.
Select the boot LUN for installation and press ENTER to continue.
5.
Select the desired installation method.
6.
Select the keyboard layout.
7.
Enter a password.
8.
Press F11 to confirm the install.
9.
Press ENTER to reboot after installation.
10.
On 57800 and 57810 boards, the management network is not vmnic0. After
booting, open the GUI console and display the Configure Management
Network > Network Adapters screen to select the NIC to be used as the
management network device.
11.
For BCM57800 and BCM57810 boards, the FCoE boot devices need to
have a separate vSwitch other than vSwitch0. This allows DHCP to assign
the IP address to the management network rather than to the FCoE boot
device. To create a vSwitch for the FCoE boot devices, add the boot device
vmnics in vSphere Client under Networking.
Configuring FCoE Boot from SAN on VMware
Note that each host must have access only to its own boot LUN—not to the boot
LUNs of other hosts. Use storage system software to ensure that the host
accesses only the designated LUNs.
Booting from SAN After Installation
Now that boot configuration and OS installation are complete, you can reboot and
test the installation. On this and all future reboots, no other user interactivity is
required. Ignore the CTRL+D prompt and allow the system to boot through to the
FCoE SAN LUN (Figure 9-16).
At this time, if additional redundant failover paths are desired, you can configure
those paths through CCM, and the MBA will automatically failover to secondary
paths if the first path is not available. Further, the redundant boot paths will yield
redundant paths visible through host MPIO software allowing for a fault tolerant
configuration.
Figure 9-16. FCoE Reboot
Driver Upgrade on Linux Boot from SAN Systems
1.
Remove the existing installed 8400 Series package. Log in as root. Query
for the existing 8400 Series package and remove it using the following
commands:
# rpm -e <8400 Series package name>
For example:
rpm -e netxtreme2
or:
rpm -e netxtreme2-x.y.z-1.x86_64
2.
Install the binary RPM containing the new driver version. Refer to the
linux-nx2 package README for instructions on how to prepare a binary
driver RPM.
3.
Use the following commands to update the ramdisk:

On RHEL 6.x systems, execute: dracut --force

On SLES11 SPx systems, execute: mkinitrd
4.
If you are using a different name for the initrd under /boot, be sure to
overwrite it with the default, as dracut/mkinitrd updates the ramdisk with the
default original name. Also, verify that the boot from SAN entry in
/boot/grub/menu.lst uses the correct (updated) initrd name.
5.
To complete your driver upgrade, reboot the system and select the modified
grub boot entry that contains the updated initrd.
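As an illustration only, a complete upgrade on a hypothetical RHEL 6.x boot from
SAN host might look like the following; the package versions shown are
placeholders, and the new RPM is the binary driver RPM prepared per the
linux-nx2 package README:
# rpm -qa | grep netxtreme2
netxtreme2-x.y.z-1.x86_64
# rpm -e netxtreme2
# rpm -ivh netxtreme2-<new_version>.x86_64.rpm
# dracut --force
# reboot
On SLES11 SPx systems, substitute mkinitrd for the dracut command.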
Errors During Windows FCoE Boot from SAN Installation
If any USB flash drive is connected while Windows setup is loading files for
installation, an error message will appear when you provide the drivers and then
select the SAN disk for the installation. The most common error message that
Windows OS installer reports is “We couldn't create a new partition or locate an
existing one. For more information, see the Setup log files.”
In other cases, the error message may indicate a need to ensure that the disk's
controller is enabled in the computer's BIOS menu.
To avoid any of the above error messages, it is necessary to ensure that there is
no USB flash drive attached until the setup asks for the drivers. Once you load the
drivers and see your SAN disk(s), detach or disconnect the USB flash drive
immediately before selecting the disk for further installation.
Configuring FCoE
By default, DCB is enabled on QLogic 8400 Series FCoE- and DCB-compatible
C-NICs. QLogic 8400 Series FCoE requires a DCB-enabled interface. For
Windows operating systems, use the QCC GUI or QLogic’s Comprehensive
Configuration Management (CCM) utility to configure the DCB parameters.
10
NIC Partitioning and
Bandwidth Management

Overview

Configuring for NIC Partitioning

Configuration Parameters
Overview
NIC partitioning divides a QLogic 8400/3400 Series 10 Gigabit Ethernet NIC into
multiple virtual NICs by having multiple PCI physical functions per port. Each PCI
function is associated with a different virtual NIC. To the OS and the network, each
physical function appears as a separate NIC port. This Switch Independent
Partitioning is also known as vNIC2 or Virtual NIC2 by IBM.
The number of partitions for each port can range from one to four; thus, a
dual-port NIC can have up to eight partitions. Each partition behaves as if it is an
independent NIC port.
Benefits of a partitioned 10G NIC include:

Reduced cabling and ports when used to replace many 1G NICs.

Server segmentation with separate subnets/VLANs.

High server availability with NIC failover and NIC link bandwidth
aggregation.

Server I/O virtualization with virtual OS support.

No change to the OS is required.

SLB type teaming is supported.
Supported Operating Systems for NIC Partitioning
The QLogic 8400/3400 Series 10 Gigabit Ethernet adapters support NIC
partitioning on the following operating systems:

Windows Server 2008 family

Windows Server 2012 family

Linux 64-bit, RHEL 5.5 and later, SLES11 SP1 and later

VMware ESXi 5.0, 5.1, 5.5, and 6.0
NOTE
32-bit Linux operating systems have a limited amount of memory space
available for Kernel data structures. Therefore, it is recommended that only
64-bit Linux be used when configuring NPAR.
Configuring for NIC Partitioning
When NIC partitioning is enabled on an adapter, by default, no offloads are
enabled on any physical function (PF) or virtual NIC (vNIC). The user must
explicitly configure storage offloads on a PF to use FCoE and/or iSCSI offload on
an adapter.
NOTE
In NPAR mode, SR-IOV cannot be enabled on any PF (vNIC) on which
storage offload (FCoE or iSCSI) is configured. This does not apply to
adapters in Single Function (SF) mode.
NOTE
In NPAR mode, users should avoid teaming or bonding partitions on the
same physical media. Such a configuration still results in a single point of
failure, which defeats the purpose of teaming or bonding the ports.
Configuration Parameters
Number of Partitions
The number of partitions for the port. Each port can have from one to four
partitions with each partition behaving as if it is an independent NIC port. The user
can disable a selected partition by disabling all protocols (Ethernet, iSCSI and
FCoE) on that partition.
Network MAC Address
The MAC address of the port.
iSCSI MAC Address
If an iSCSI adapter is loaded onto the system, the iSCSI MAC address will
appear.
Flow Control
The flow control setting of the port.
Physical Link Speed
The physical link speed of the port, either 1G or 10G.
Relative Bandwidth Weight (%)

The relative bandwidth setting represents a weight or importance of a
particular function. There are up to four functions per port. The weight is
used to arbitrate between the functions in the event of congestion.

The sum of all weights for the functions on a single physical port should be
either 0 or 100.

A value of 0 for all functions means that each function transmits at 25% of
the physical link speed, not to exceed the Maximum Bandwidth setting (see the Note below).

A value for a function between 1 and 100 represents a percentage of the
physical link speed and is used by an internal arbitration logic as an input
value (weight). A higher value causes this function to transmit relatively
more data, compared to a function (on the same port) that has defined a
lower value.

The smallest recommended value for a partition is 10.
Maximum Bandwidth (%)

The maximum bandwidth setting defines an upper threshold value, ensuring
that this limit is not exceeded during transmission. The valid range for this
value is between 1 and 100. The maximum bandwidth value is defined as a
percentage of the physical link speed.

It is possible for the sum of all maximum bandwidth values across the four
functions of a single port to exceed the physical link speed value of either
10Gbps or 1Gbps. This case is called oversubscription. In a case where
oversubscription congestion occurs on transmit, the Relative Bandwidth
Weight value comes into effect.

The Maximum Bandwidth setting applies only to transmit (Tx) traffic; it does
not apply to receive (Rx) traffic.
An example configuration:
Four functions (or partitions) are configured with a total of six protocols, as shown
below.
NOTE
A Relative Bandwidth Weight value of “0” for all functions (or partitions) causes the bandwidth
allocation to be divided equally among all enabled offloads. A Relative Bandwidth Weight value
of “25” for all functions causes the bandwidth allocation to be divided equally among all functions.
Refer to the example below for the distinction.
Function 0

Ethernet

FCoE
Function 1

Ethernet
Function 2

Ethernet
Function 3

Ethernet

iSCSI
1.
If Relative Bandwidth Weight is configured as “0” for all four PFs, then all
six offloads will share the bandwidth equally. In this case, each offload will be
assigned roughly 16.67% of the total bandwidth.
2.
If Relative Bandwidth Weight is configured as “25” for all four PFs, then
Ethernet and FCoE offloads on function 0 and Ethernet and iSCSI offloads
on function 3 will be assigned roughly 12.5% of the total bandwidth, whereas
Ethernet offloads on function 1 and function 2 are assigned roughly 25% of
the total bandwidth.
11
Virtual LANs in Windows

VLAN Overview

Adding VLANs to Teams
VLAN Overview
Virtual LANs (VLANs) allow you to split your physical LAN into logical parts, to
create logical segmentation of work groups, and to enforce security policies for
each logical segment. Each defined VLAN behaves as its own separate network
with its traffic and broadcasts isolated from the others, increasing bandwidth
efficiency within each logical group. Up to 64 VLANs (63 tagged and 1 untagged)
can be defined for each QLogic adapter on your server, depending on the amount
of memory available in your system. VLANs can be added to a team to allow
multiple VLANs with different VLAN IDs. A virtual adapter is created for each
VLAN added. Although VLANs are commonly used to create individual broadcast
domains and/or separate IP subnets, it is useful for a server to have a presence
on more than one VLAN simultaneously. QLogic adapters support multiple VLANs
on a per-port or per-team basis, allowing very flexible network configurations.
Figure 11-1. Example of Servers Supporting Multiple VLANs with Tagging
Figure 11-1 shows an example network that uses VLANs. In this example
network, the physical LAN consists of a switch, two servers, and five clients. The
LAN is logically organized into three different VLANs, each representing a
different IP subnet. The features of this network are described in Table 11-1.
Table 11-1. Example VLAN Network Topology

VLAN #1: An IP subnet consisting of the Main Server, PC #3, and PC #5. This
subnet represents an engineering group.

VLAN #2: Includes the Main Server, PCs #1 and #2 through a shared media
segment, and PC #5. This VLAN is a software development group.

VLAN #3: Includes the Main Server, the Accounting Server, and PC #4. This
VLAN is an accounting group.

Main Server: A high-use server that needs to be accessed from all VLANs and IP
subnets. The Main Server has a QLogic adapter installed. All three IP subnets are
accessed through the single physical adapter interface. The server is attached to
one of the switch ports, which is configured for VLANs #1, #2, and #3. Both the
adapter and the connected switch port have tagging turned on. Because of the
tagging VLAN capabilities of both devices, the server is able to communicate on
all three IP subnets in this network, but continues to maintain broadcast
separation between all of them.

Accounting Server: Available to VLAN #3 only. The Accounting Server is isolated
from all traffic on VLANs #1 and #2. The switch port connected to the server has
tagging turned off.

PCs #1 and #2: Attached to a shared media hub that is then connected to the
switch. PCs #1 and #2 belong to VLAN #2 only, and are logically in the same IP
subnet as the Main Server and PC #5. The switch port connected to this segment
has tagging turned off.

PC #3: A member of VLAN #1, PC #3 can communicate only with the Main Server
and PC #5. Tagging is not enabled on the PC #3 switch port.

PC #4: A member of VLAN #3, PC #4 can communicate only with the servers.
Tagging is not enabled on the PC #4 switch port.

PC #5: A member of both VLANs #1 and #2, PC #5 has a QLogic adapter
installed. It is connected to switch port #10. Both the adapter and the switch port
are configured for VLANs #1 and #2 and have tagging enabled.
NOTE
VLAN tagging is only required to be enabled on switch ports that create
trunk links to other switches, or on ports connected to tag-capable
end-stations, such as servers or workstations with QLogic adapters.
For Hyper-V, create VLANs in the vSwitch-to-VM connection instead of in a
team, to allow VM live migrations to occur without having to ensure the
future host system has a matching team VLAN setup.
Adding VLANs to Teams
Each team supports up to 64 VLANs (63 tagged and 1 untagged). Note that only
QLogic adapters and Alteon® AceNIC adapters can be part of a team with VLANs.
With multiple VLANs on an adapter, a server with a single adapter can have a
logical presence on multiple IP subnets. With multiple VLANs in a team, a server
can have a logical presence on multiple IP subnets and benefit from load
balancing and failover.
NOTE
Adapters that are members of a failover team can also be configured to
support VLANs. Because VLANs are not supported for an Intel LOM, if an
Intel LOM is a member of a failover team, VLANs cannot be configured for
that team.
12
SR-IOV

Overview

Enabling SR-IOV
Overview
Virtualization of network controllers allows users to consolidate their networking
hardware resources and run multiple virtual machines concurrently on
consolidated hardware. Virtualization also provides the user a rich set of features
such as I/O sharing, consolidation, isolation and migration, and simplified
management with provisions for teaming and failover.
Virtualization can come at the cost of reduced performance due to hypervisor
overhead. The PCI-SIG introduced the Single Root I/O Virtualization (SR-IOV)
specification to address these performance issues by creating a virtual function
(VF), a lightweight PCIe function that can be directly assigned to a virtual machine
(VM), bypassing the hypervisor layer for the main data movement.
Not all QLogic adapters support SR-IOV; refer to your product documentation for
details.
Enabling SR-IOV
Before attempting to enable SR-IOV, ensure that:

The adapter hardware supports SR-IOV.

SR-IOV is supported and enabled in the system BIOS.
To enable SR-IOV:
1.
Enable the feature on the adapter:
Using the QCC GUI:
a.
Select the network adapter. Select the Configuration tab and select
SR-IOV Global Enable.
b.
In the SR-IOV VFs per PF field, configure the number of SR-IOV Virtual
Functions (VFs) that the adapter can support per physical function,
from 0 to 64 in increments of 8 (default = 16).
c.
In the SR-IOV Max Chains per VF field, configure the maximum
number of transmit and receive queues (such as receive side scaling
(RSS) queues) that can be used for each virtual function. The
maximum is 16.
Using CCM:
a.
Select the SR-IOV-capable adapter from the Device List. On the Main
Menu, select Device Hardware Configuration, then select SR-IOV
Enabled.
b.
To configure the number of VFs that the adapter can support:
If Multi-Function Mode is set to SF (Single Function), then the
“Number of VFs per PF” field appears, in which you can set a value from 0 to
64 in increments of 8 (default = 16).
If Multi-Function Mode is set to NPAR, then display the Main Menu
and select NIC Partition Configuration. Then, select the NPAR
Function to configure and enter the appropriate value in the Number
of VFs per PF field.
2.
In Virtual Switch Manager, create a virtual NIC. Select Allow Management
operating system to share the network adapter if the host will use this
vSwitch to connect to the associated VMs.
3.
In Virtual Switch Manager, select the virtual adapter and select Hardware
Acceleration in the navigation pane. In the Single-root I/O virtualization
section, select Enable SR-IOV. SR-IOV must be enabled at this point; it
cannot be enabled after the vSwitch is created.
4.
Install the QLogic drivers for the adapters detected in the VM. Use the latest
drivers available from your vendor for the host OS (do not use the inbox
drivers). The same driver version must be installed on the host and the VM.
To verify that SR-IOV is operational:
1.
Start the VM.
2.
In Hyper-V Manager, select the adapter and select the VM in the Virtual
Machines list.
3.
Select the Networking tab at the bottom of the window and view the adapter
status.
SR-IOV and Storage
Storage (FCoE or iSCSI) can be enabled on an SR-IOV-enabled adapter.
However, if storage is used on an NPAR-enabled physical function (PF), then the
number of virtual functions for that PF is set to zero; therefore, SR-IOV is disabled
on that PF, and the other PFs on that port can support SR-IOV VF connections.
This limitation applies only when the adapter is configured in NPAR mode. It is not
relevant when the adapter is configured in single-function mode.
SR-IOV and Jumbo Packets
If SR-IOV is enabled on a virtual function (VF) on the adapter, ensure that the
same jumbo packet setting is configured on both the VF and the Microsoft
synthetic adapter. You can configure these values using Windows Device
Manager > Advanced properties.
If there is a mismatch in the values, the SR-IOV function will show as Degraded in
Hyper-V > Networking Status.
13
Microsoft Virtualization
with Hyper-V
Microsoft Virtualization is a hypervisor virtualization system for Windows Server
2008 and 2012. This section is intended for those who are familiar with Hyper-V,
and it addresses issues that affect the configuration of 8400/3400 Series network
adapters and teamed network adapters when Hyper-V is used. For more
information on Hyper-V, see
http://technet.microsoft.com/en-us/windowsserver/dd448604.aspx.
Supported Features
Table 13-1 identifies Hyper-V supported features that are configurable for
8400/3400 Series network adapters. This table is not an all-inclusive list of
Hyper-V features.
Table 13-1. Configurable Network Adapter Hyper-V Features

Feature | Windows Server 2008 | 2008 R2 | 2012 | Comments/Limitation
IPv4 | Yes | Yes | Yes | –
IPv6 | Yes | Yes | Yes | –
IPv4 Large Send Offload (LSO) (parent and child partition) | Yes | Yes | Yes | –
IPv4 Checksum Offload (CO) (parent and child partition) | Yes | Yes | Yes | –
IPv6 LSO (parent and child partition) | No* | Yes | Yes | *When bound to a virtual network, OS limitation.
IPv6 CO (parent and child partition) | No* | Yes | Yes | *When bound to a virtual network, OS limitation.
Jumbo frames | No* | Yes | Yes | *OS limitation.
RSS | No* | No* | Yes | *OS limitation.
RSC | No* | No* | Yes | *OS limitation.
SR-IOV | No* | No* | Yes | *OS limitation.
NOTE
Ensure that Integrated Services, which is a component of Hyper-V, is
installed in the guest operating system (child partition).
Single Network Adapter
Windows Server 2008
When configuring a 8400/3400 Series network adapter on a Hyper-V system, be
aware of the following:

An adapter that is to be bound to a virtual network should not be configured
for VLAN tagging through the driver’s advanced properties. Instead,
Hyper-V should manage VLAN tagging exclusively.

Since Hyper-V does not support jumbo frames, it is recommended that this
feature not be used; otherwise, connectivity issues may occur with the child partition.

The Locally Administered Address (LAA) set by Hyper-V takes precedence
over the address set in the adapter’s advanced properties.

In an IPv6 network, a team that supports CO and/or LSO and is bound to a
Hyper-V virtual network will report CO and LSO as an offload capability in
the QCC GUI; however, CO and LSO will not work. This is a limitation of
Hyper-V. Hyper-V does not support CO and LSO in an IPv6 network.
Windows Server 2008 R2 and 2012
When configuring a 8400/3400 Series network adapter on a Hyper-V system, be
aware of the following:

An adapter that is to be bound to a virtual network should not be configured
for VLAN tagging through the driver’s advanced properties. Instead,
Hyper-V should manage VLAN tagging exclusively.

The Locally Administered Address (LAA) set by Hyper-V takes precedence
over the address set in the adapter’s advanced properties.

The LSO and CO features in the guest OS are independent of the network
adapter properties.

To allow jumbo frames from the guest OS, both the network adapter and the
virtual adapter must have jumbo frames enabled. The Jumbo MTU property
for the network adapter must be set to allow traffic of large MTU from within
the guest OS. The jumbo packet setting of the virtual adapter must be set to
segment the sent and received packets.
Teamed Network Adapters
Table 13-2 identifies Hyper-V supported features that are configurable for
8400/3400 Series teamed network adapters. This table is not an all-inclusive list
of Hyper-V features.
Table 13-2. Configurable Teamed Network Adapter Hyper-V Features

Feature | Windows Server 2008 | 2008 R2 | 2012 | Comments/Limitation
Smart Load Balancing and Failover (SLB) team type | Yes | Yes | Yes | Multi-member SLB team allowed with latest QLogic Advanced Server Program (QLASP) version. Note: VM MAC is not presented to external switches.
Link Aggregation (IEEE 802.3ad LACP) team type | Yes | Yes | Yes | –
Generic Trunking (FEC/GEC) 802.3ad Draft Static team type | Yes | Yes | Yes | –
Failover | Yes | Yes | Yes | –
LiveLink | Yes | Yes | Yes | –
Large Send Offload (LSO) | Limited* | Yes | Yes | *Conforms to the miniport limitations outlined in Table 13-1.
Checksum Offload (CO) | Limited* | Yes | Yes | *Conforms to the miniport limitations outlined in Table 13-1.
Hyper-V VLAN over an adapter | Yes | Yes | Yes | –
Hyper-V VLAN over a teamed adapter | Yes | Yes | Yes | –
Hyper-V VLAN over a VLAN | Limited* | Limited* | Limited* | *Only an untagged VLAN.
Hyper-V virtual switch over an adapter | Yes | Yes | Yes | –
Hyper-V virtual switch over a teamed adapter | Yes | Yes | Yes | –
Hyper-V virtual switch over a VLAN | Yes | Yes | Yes | –
iSCSI boot | No | No* | No* | *Remote boot to SAN is supported.
Virtual Machine Queue (VMQ) | No | Yes | Yes | See “Configuring VMQ with SLB Teaming” on page 167.
RSC | No | No | Yes | –
Windows Server 2008
When configuring a team of 8400/3400 Series network adapters on a Hyper-V
system, be aware of the following:

Create the team prior to binding the team to the Hyper-V virtual network.

Create a team only with an adapter that is not already assigned to a Hyper-V
virtual network.

In an IPv6 network, a team that supports CO and/or LSO and is bound to a
Hyper-V virtual network will report CO and LSO as an offload capability in
the QCC GUI; however, CO and LSO will not work. This is a limitation of
Hyper-V. Hyper-V does not support CO and LSO in an IPv6 network.

To successfully perform VLAN tagging for both the host (parent partition)
and the guest (child partition) with the QLASP teaming software, you must
configure the team for tagging. Unlike VLAN tagging with a single adapter,
tagging cannot be managed by Hyper-V when using QLASP software.

When making changes to a team or removing a team, remove the team’s
binding from all guest OSs that use any of the VNICs in the team, change
the configuration, and then rebind the team’s VNICs to the guest OS. This
can be done in the Hyper-V Manager.
Windows Server 2008 R2
When configuring a team of 8400/3400 Series network adapters on a Hyper-V
system, be aware of the following:

Create the team prior to binding the team to the Hyper-V virtual network.

Create a team only with an adapter that is not already assigned to a Hyper-V
virtual network.

A QLASP virtual adapter configured for VLAN tagging can be bound to a
Hyper-V virtual network, and is a supported configuration. However, the
VLAN tagging capability of QLASP cannot be combined with the VLAN
capability of Hyper-V. To use the VLAN capability of Hyper-V, the QLASP
team must be untagged.

When making changes to a team or removing a team, remove the team’s
binding from all guest OSs that use any of the VNICs in the team, change
the configuration, and then rebind the team’s VNICs to the guest OS. This
can be done in the Hyper-V Manager.
Configuring VMQ with SLB Teaming
When Hyper-V server is installed on a system configured to use Smart Load
Balance and Failover (SLB) type teaming, you can enable Virtual Machine
Queueing (VMQ) to improve overall network performance. VMQ enables
delivering packets from an external virtual network directly to virtual machines
defined in the SLB team, eliminating the need to route these packets and, thereby,
reducing overhead.
To create a VMQ-capable SLB team:
1.
Create an SLB team. If using the Teaming Wizard, when you select the SLB
team type, also select Enable HyperV Mode. If using Expert mode, enable
the property in the Create Team or Edit Team tabs. See “Configuring
Teaming” on page 185 for additional instructions on creating a team.
2.
Follow these instructions to add the required registry entries in Windows:
http://technet.microsoft.com/en-us/library/gg162696%28v=ws.10%29.aspx
3.
For each team member on which you want to enable VMQ, modify the
following registry entry and configure a unique instance number (in the
following example, it is set to 0026):
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\
{4D36E972-E325-11CE-BFC1-08002BE10318}\0026]
"*RssOrVmqPreference"="1"
Upgrading Windows Operating Systems
This section covers Windows upgrades for the following:

From Windows Server 2003 to Windows Server 2008

From Windows Server 2008 to Windows Server 2008 R2

From Windows Server 2008 R2 to Windows Server 2012
Prior to performing an OS upgrade when a QLogic 8400/3400 Series adapter is
installed on your system, QLogic recommends the procedure below.
1.
Save all team and adapter IP information.
2.
Uninstall all QLogic drivers using the installer.
3.
Perform the Windows upgrade.
4.
Reinstall the latest QLogic adapter drivers and the QCC GUI.
14
Data Center Bridging (DCB)

Overview

DCB Capabilities

Configuring DCB

DCB Conditions

Data Center Bridging in Windows Server 2012
Overview
Data Center Bridging (DCB) is a collection of IEEE-specified standard extensions
to Ethernet that provide lossless data delivery, low latency, and standards-based
bandwidth sharing of data center physical links. DCB consolidates storage,
management, computing, and communications fabrics onto a single physical
fabric that is simpler to deploy, upgrade, and maintain than standard Ethernet
networks. DCB has standards-based bandwidth sharing at its core, allowing
multiple fabrics to coexist on the same physical fabric. The various capabilities of
DCB allow LAN traffic (many flows, not latency-sensitive), SAN traffic (large
packet sizes, requiring lossless delivery), and IPC traffic (latency-sensitive
messages) to share bandwidth on the same converged physical connection and
achieve the desired performance for each traffic type.
DCB includes the following capabilities:

Enhanced Transmission Selection (ETS)

Priority-based Flow Control (PFC)

Data Center Bridging Capability eXchange Protocol (DCBX)
DCB Capabilities
Enhanced Transmission Selection (ETS)
Enhanced Transmission Selection (ETS) provides a common management
framework for assignment of bandwidth to traffic classes. Each traffic class or
priority can be grouped in a Priority Group (PG), and it can be considered as a
virtual link or virtual interface queue. The transmission scheduler in the peer is
responsible for maintaining the allocated bandwidth for each PG. For example, a
user can configure FCoE traffic to be in PG 0 and iSCSI traffic in PG 1. The user
can then allocate each group a certain bandwidth. For example, 60% to FCoE and
40% to iSCSI. The transmission scheduler in the peer will ensure that in the event
of congestion, the FCoE traffic will be able to use at least 60% of the link
bandwidth and iSCSI to use 40%. See additional references at
http://www.ieee802.org/1/pages/802.1az.html.
Priority Flow Control (PFC)
Priority Flow Control (PFC) provides a link-level flow control mechanism that can
be controlled independently for each traffic type. The goal of this mechanism is to
ensure zero loss due to congestion in DCB networks. Traditional IEEE 802.3
Ethernet does not guarantee that a packet transmitted on the network will reach
its intended destination. Upper-level protocols are responsible to maintain the
reliability by way of acknowledgment and retransmission. In a network with
multiple traffic classes, it becomes very difficult to maintain the reliability of traffic
in the absence of feedback. This is traditionally tackled with the help of link-level
Flow Control.
When PFC is used in a network with multiple traffic types, each traffic type can be
encoded with a different priority value and a pause frame can refer to this priority
value while instructing the transmitter to stop and restart the traffic. The value
range for the priority field is from 0 to 7, allowing eight distinct types of traffic that
can be individually stopped and started. See additional references at
http://www.ieee802.org/1/pages/802.1bb.html.
Data Center Bridging eXchange (DCBX)
Data Center Bridging eXchange (DCBX) is a discovery and capability exchange
protocol that is used for conveying capabilities and configuration of ETS and PFC
between link partners to ensure consistent configuration across the network
fabric. For two devices to exchange information, one device must be willing to
adopt network configuration from the other device. For example, if a C-NIC is
configured to willingly adopt ETS and PFC configuration information from a
connected switch, and the switch acknowledges the C-NIC’s willingness, then the
switch will send the C-NIC the recommended ETS and PFC parameter settings.
The DCBX protocol uses the Link Level Discovery Protocol (LLDP) to exchange
PFC and ETS configurations between link partners.
Configuring DCB
By default, DCB is enabled on QLogic 8400/3400 Series DCB-compatible C-NICs.
DCB configuration is rarely required, as the default configuration should satisfy
most scenarios. DCB parameters can be configured using the QCC GUI.
NOTE
FCoE operation depends on successful VLAN discovery. All switches that
support FCoE support VLAN discovery, but some switches may require
specific configuration. Refer to the switch configuration guides for
information on how to configure a port for successful VLAN discovery.
DCB Conditions
The following is a list of conditions that allow DCB technology to function on the
network.

If DCB is enabled on the interface, DCBX is automatically enabled and
carried out automatically once a link is established.

If DCBX fails to synchronize with a compatible peer, the adapter will
automatically fall back to default NIC behavior (no priority tagging, no PFC,
no ETS).

By default, the port will advertise itself as willing, and as such, will accept all
DCB settings as advertised by the switch.

If PFC is operational, PFC settings supersede link level flow control settings.
If PFC is not operational, link level flow control settings prevail.

In NIC Partitioning (NPAR) enabled configurations, ETS (if operational) overrides
the Bandwidth Weights assigned to each function. Transmission selection
weights are instead applied per protocol, according to the ETS settings. Maximum
bandwidths per PF are still honored in the presence of ETS.

In the absence of an iSCSI or FCoE application TLV advertised through the
DCBX peer, the adapter will use the settings taken from the local Admin
MIB.
Data Center Bridging in Windows Server 2012
Windows Server 2012 introduces a new way of managing Quality Of Service
(QoS) at the OS level. There are two main aspects of Windows QoS:

A vendor-independent method for managing DCB settings on NICs, both
individually and across an entire domain. The management interface is
provided by Windows PowerShell Cmdlets.

The ability to tag specific types of L2 networking traffic, such as SMB traffic,
so that hardware bandwidth can be managed using ETS.
All QLogic Converged Network Adapters that support DCB are capable of
interoperating with Windows QoS.
To enable the QoS Windows feature, ensure that the QLogic device is
DCB-capable:
1.
Using CCM or the QCC GUI, enable Data Center Bridging.
2.
Using Windows Device Manager or the QCC GUI, select the NDIS driver,
display Advanced properties, and enable the Quality of Service property.
When QoS is enabled, administrative control over DCB-related settings is
relinquished to the operating system (that is, the QCC GUI can no longer be used
for administrative control of the DCB). You can use PowerShell to configure and
manage the QoS feature. Using PowerShell Cmdlets, you can configure various
QoS-related parameters, such as traffic classification, priority flow control, and
traffic class throughput scheduling.
For more information on using PowerShell Cmdlets, see the “DCB Windows
PowerShell User Scripting Guide” in the Microsoft Technet Library.
To revert to standard QCC control over the QLogic DCB feature set, uninstall the
Microsoft QoS feature or disable Quality of Service in the QCC GUI or Device
Manager NDIS Advance Properties page.
NOTE
QLogic recommends that you do not install the DCB feature if SR-IOV will be
used. If you install the DCB feature, be aware that selecting Enable
single-root I/O virtualization (SR-IOV) in Virtual Switch Manager will force
the underlying adapter into a DCB state in which OS DCB configuration will
be ignored, and DCB configuration from the QCC GUI will be in effect with
the exception that a user-configured, non-zero Networking Priority value
will not take effect, even though it appears to be in effect in the QCC GUI.
The 8400/3400 Series Adapters support up to two traffic classes (in addition to the
default traffic class) that can be used by the Windows QoS service. On 8400
Series Adapters, disable iSCSI-offload or FCoE-offload (or both) to free one or
two traffic classes for use by the Windows QoS service. Assigning more traffic
classes than what is available in the Windows QoS service (through PowerShell)
will cause the default traffic class to be used for all traffic.
15
QLogic Teaming Services
This chapter describes teaming for adapters in Windows Server systems. For
more information on similar technologies on other operating systems (for
example, Linux Channel Bonding), refer to your operating system documentation.

Executive Summary

Teaming Mechanisms

Teaming and Other Advanced Networking Properties

General Network Considerations

Application Considerations

Troubleshooting Teaming Problems

Frequently Asked Questions

Event Log Messages
Executive Summary

Glossary

Teaming Concepts

Software Components

Hardware Requirements

Teaming Support by Processor

Configuring Teaming

Supported Features by Team Type

Selecting a Team Type
This section describes the technology and implementation considerations when
working with the network teaming services offered by the QLogic software
shipped with servers and storage products. The goal of QLogic teaming services
is to provide fault tolerance and link aggregation across a team of two or more
adapters. The information in this document is provided to assist IT professionals
during the deployment and troubleshooting of system applications that require
network fault tolerance and load balancing.
Glossary
Table 15-1. Glossary

ARP: Address Resolution Protocol
QCC: QConvergeConsole
QLASP: QLogic Advanced Server Program (intermediate NIC teaming driver)
DNS: Domain Name Service
G-ARP: Gratuitous Address Resolution Protocol
Generic Trunking (FEC/GEC)/802.3ad Draft Static: Switch-dependent load
balancing and failover type of team in which the intermediate driver manages
outgoing traffic and the switch manages incoming traffic.
HSRP: Hot Standby Router Protocol
ICMP: Internet Control Message Protocol
IGMP: Internet Group Management Protocol
IP: Internet Protocol
IPv6: Version 6 of the IP protocol
iSCSI: Internet Small Computer Systems Interface
L2: Layer 2. Used to describe network traffic that is not offloaded, where the
hardware performs only Layer 2 operations on the traffic; Layer 3 (IP) and
Layer 4 (TCP) protocols are processed in software.
L4: Layer 4. Used to describe network traffic that is heavily offloaded to the
hardware, where much of the Layer 3 (IP) and Layer 4 (TCP) processing is
done in hardware to improve performance.
LACP: Link Aggregation Control Protocol
Link Aggregation (802.3ad): Switch-dependent load balancing and failover type
of team with LACP in which the intermediate driver manages outgoing traffic
and the switch manages incoming traffic.
LOM: LAN on Motherboard
MAC: media access control
NDIS: Network Driver Interface Specification
NLB: Network Load Balancing (Microsoft)
PXE: Preboot Execution Environment
RAID: redundant array of inexpensive disks
Smart Load Balancing™ and Failover: Switch-independent failover type of team
in which the primary team members handle all incoming and outgoing traffic
while the standby team member (if present) is idle until a failover event (for
example, loss of link) occurs. The intermediate NIC teaming driver (QLASP)
manages incoming and outgoing traffic.
Smart Load Balancing (SLB): Switch-independent load balancing and failover
type of team in which the intermediate driver manages outgoing and incoming
traffic.
TCP: Transmission Control Protocol
UDP: User Datagram Protocol
WINS: Windows name service
WLBS: Windows Load Balancing Service
Teaming Concepts
- Network Addressing
- Teaming and Network Addresses
- Description of Teaming Types
The concept of grouping multiple physical devices to provide fault tolerance and
load balancing is not new. It has been around for years. Storage devices use
RAID technology to group individual hard drives. Switch ports can be grouped
together using technologies such as Cisco Gigabit EtherChannel, IEEE 802.3ad
Link Aggregation, Bay Network Multilink Trunking, and Extreme Network Load
Sharing. Network interfaces on servers can be grouped together into a team of
physical ports called a virtual adapter.
Network Addressing
To understand how teaming works, it is important to understand how node
communications work in an Ethernet network. This document is based on the
assumption that the reader is familiar with the basics of IP and Ethernet network
communications. The following information provides a high-level overview of the
concepts of network addressing used in an Ethernet network. Every Ethernet
network interface in a host platform, such as a computer system, requires a
globally unique Layer 2 address and at least one globally unique Layer 3 address.
Layer 2 is the Data Link Layer, and Layer 3 is the Network layer as defined in the
OSI model. The Layer 2 address is assigned to the hardware and is often referred
to as the MAC address or physical address. This address is pre-programmed at
the factory and stored in NVRAM on a network interface card or on the system
motherboard for an embedded LAN interface. The Layer 3 addresses are referred
to as the protocol or logical address assigned to the software stack. IP and IPX
are examples of Layer 3 protocols. In addition, Layer 4 (Transport Layer) uses
port numbers for each network upper level protocol such as Telnet or FTP. These
port numbers are used to differentiate traffic flows across applications. Layer 4
protocols such as TCP or UDP are most commonly used in today’s networks. The
combination of the IP address and the TCP port number is called a socket.
Ethernet devices communicate with other Ethernet devices using the MAC
address, not the IP address. However, most applications work with a host name
that is translated to an IP address by a Naming Service such as WINS and DNS.
Therefore, a method of identifying the MAC address assigned to the IP address is
required. The Address Resolution Protocol for an IP network provides this
mechanism. For IPX, the MAC address is part of the network address and ARP is
not required. ARP is implemented using an ARP Request and ARP Reply frame.
ARP Requests are typically sent to a broadcast address while the ARP Reply is
typically sent as unicast traffic. A unicast address corresponds to a single MAC
address or a single IP address. A broadcast address is sent to all devices on a
network.
Teaming and Network Addresses
A team of adapters functions as a single virtual network interface and does not
appear any different to other network devices than a non-teamed adapter. A
virtual network adapter advertises a single Layer 2 address and one or more Layer 3
addresses. When the teaming driver initializes, it selects one MAC address from
one of the physical adapters that make up the team to be the Team MAC address.
This address is typically taken from the first adapter that gets initialized by the
driver. When the system hosting the team receives an ARP request, it selects one
MAC address from among the physical adapters in the team to use as the source
MAC address in the ARP Reply. In Windows operating systems, the IPCONFIG
/all command shows the IP and MAC address of the virtual adapter and not the
individual physical adapters. The protocol IP address is assigned to the virtual
network interface and not to the individual physical adapters.
For switch-independent teaming modes, all physical adapters that make up a
virtual adapter must use the unique MAC address assigned to them when
transmitting data. That is, the frames that are sent by each of the physical
adapters in the team must use a unique MAC address to be IEEE compliant. It is
important to note that ARP cache entries are not learned from received frames,
but only from ARP requests and ARP replies.
Description of Teaming Types
- Smart Load Balancing and Failover
- Generic Trunking
- Link Aggregation (IEEE 802.3ad LACP)
- SLB (Auto-Fallback Disable)
There are three methods for classifying the supported teaming types:
- The first is based on whether the switch port configuration must also match the adapter teaming type.
- The second is based on whether the team supports load balancing and failover, or just failover.
- The third is based on whether the Link Aggregation Control Protocol is used or not.
Table 15-2 shows a summary of the teaming types and their classification.
Table 15-2. Available Teaming Types

Teaming Type | Switch-Dependent (switch must support the specific type of team) | Link Aggregation Control Protocol Required on the Switch | Load Balancing | Failover
Smart Load Balancing and Failover (with two to eight load balance team members) | No | No | Yes | Yes
SLB (Auto-Fallback Disable) | No | No | Yes | Yes
Link Aggregation (802.3ad) | Yes | Yes | Yes | Yes
Generic Trunking (FEC/GEC)/802.3ad-Draft Static | Yes | No | Yes | Yes
Smart Load Balancing and Failover
The Smart Load Balancing and Failover type of team provides both load
balancing and failover when configured for load balancing, and only failover when
configured for fault tolerance. This type of team works with any Ethernet switch
and requires no trunking configuration on the switch. The team advertises multiple
MAC addresses and one or more IP addresses (when using secondary IP
addresses). The team MAC address is selected from the list of load balance
members. When the system receives an ARP request, the software-networking
stack will always send an ARP Reply with the team MAC address. To begin the
load balancing process, the teaming driver will modify this ARP Reply by changing
the source MAC address to match one of the physical adapters.
Smart Load Balancing enables both transmit and receive load balancing based on
the Layer 3/Layer 4 IP address and TCP/UDP port number. In other words, the
load balancing is not done at a byte or frame level but on a TCP/UDP session
basis. This methodology is required to maintain in-order delivery of frames that
belong to the same socket conversation. Load balancing is supported on 2 to 8
ports. These ports can include any combination of add-in adapters and LAN on
Motherboard (LOM) devices. Transmit load balancing is achieved by creating a
hashing table using the source and destination IP addresses and TCP/UDP port
numbers. The same combination of source and destination IP addresses and
TCP/UDP port numbers will generally yield the same hash index and therefore
point to the same port in the team. When a port is selected to carry all the frames
of a given socket, the unique MAC address of the physical adapter is included in
the frame, and not the team MAC address. This is required to comply with the
IEEE 802.3 standard. If two adapters transmit using the same MAC address, then
a duplicate MAC address situation would occur that the switch could not handle.
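The following minimal Python sketch illustrates the idea of session-based transmit load balancing: every frame of one TCP/UDP conversation hashes to the same team port. The function name, the CRC-based hash, and the port count are illustrative assumptions, not the actual QLASP algorithm.

```python
# Conceptual sketch of session-based transmit load balancing (not the actual
# QLASP hash): hash the source/destination IP addresses and TCP/UDP ports so
# that all frames of one socket conversation leave through the same team port.
import zlib

def select_team_port(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                     num_ports: int) -> int:
    """Return the index of the physical adapter that carries this flow."""
    flow_key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    hash_value = zlib.crc32(flow_key)       # stable hash over the 4-tuple
    return hash_value % num_ports           # same flow -> same port

# Two TCP sessions to the same server differ only by source port, so they may
# land on different ports, but each session always maps to a single adapter.
print(select_team_port("10.0.0.5", "10.0.0.1", 49152, 80, 4))
print(select_team_port("10.0.0.5", "10.0.0.1", 49153, 80, 4))
```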
NOTE
IPv6-addressed traffic will not be load balanced by SLB because ARP is not
a feature of IPv6.
Receive load balancing is achieved through an intermediate driver by sending
gratuitous ARPs on a client-by-client basis using the unicast address of each
client as the destination address of the ARP request (also known as a directed
ARP). This is considered client load balancing and not traffic load balancing.
When the intermediate driver detects a significant load imbalance between the
physical adapters in an SLB team, it will generate G-ARPs in an effort to
redistribute incoming frames. The intermediate driver (QLASP) does not answer
ARP requests; only the software protocol stack provides the required ARP Reply.
It is important to understand that receive load balancing is a function of the
number of clients that are connecting to the system through the team interface.
SLB receive load balancing attempts to load balance incoming traffic for client
machines across physical ports in the team. It uses a modified gratuitous ARP to
advertise a different MAC address for the team IP Address in the sender physical
and protocol address. This G-ARP is unicast with the MAC and IP Address of a
client machine in the target physical and protocol address respectively. This
causes the target client to update its ARP cache with a new MAC address
mapping for the team IP address. G-ARPs are not broadcast because this would cause all
clients to send their traffic to the same port. As a result, the benefits achieved
through client load balancing would be eliminated, and could cause out-of-order
frame delivery. This receive load balancing scheme works as long as all clients
and the teamed system are on the same subnet or broadcast domain.
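As a rough illustration of this client-by-client approach, the sketch below distributes known clients across team members and lists the fields a directed (unicast) G-ARP would advertise to each client. The data structure, MAC addresses, and round-robin assignment are assumptions for illustration only, not the QLASP implementation.

```python
# Conceptual sketch of SLB receive (client) load balancing: each client on the
# local subnet is steered to a team member by a directed, unicast gratuitous
# ARP that advertises that member's MAC address for the team IP address.
TEAM_IP = "10.0.0.1"
TEAM_MEMBER_MACS = ["00:0E:1E:00:00:01", "00:0E:1E:00:00:02"]

def build_directed_garps(clients):
    """clients: list of (client_ip, client_mac) on the same broadcast domain."""
    garps = []
    for index, (client_ip, client_mac) in enumerate(clients):
        member_mac = TEAM_MEMBER_MACS[index % len(TEAM_MEMBER_MACS)]
        garps.append({
            "sender_ip": TEAM_IP,        # team IP address
            "sender_mac": member_mac,    # per-client choice of team member
            "target_ip": client_ip,      # unicast to one client only
            "target_mac": client_mac,
        })
    return garps

for garp in build_directed_garps([("10.0.0.20", "00:AA:BB:CC:00:14"),
                                  ("10.0.0.21", "00:AA:BB:CC:00:15")]):
    print(garp)
```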
When the clients and the system are on different subnets, and incoming traffic has
to traverse a router, the received traffic destined for the system is not load
balanced. The physical adapter that the intermediate driver has selected to carry
the IP flow carries all of the traffic. When the router sends a frame to the team IP
address, it broadcasts an ARP request (if not in the ARP cache). The server
software stack generates an ARP reply with the team MAC address, but the
intermediate driver modifies the ARP reply and sends it over a particular physical
adapter, establishing the flow for that session.
The reason is that ARP is not a routable protocol. It does not have an IP header
and therefore, is not sent to the router or default gateway. ARP is only a local
subnet protocol. In addition, since the G-ARP is not a broadcast packet, the router
will not process it and will not update its own ARP cache.
The only way that the router would process an ARP that is intended for another
network device is if it has Proxy ARP enabled and the host has no default
gateway. This is very rare and not recommended for most applications.
Transmit traffic through a router will be load balanced as transmit load balancing is
based on the source and destination IP address and TCP/UDP port number.
Since routers do not alter the source and destination IP address, the load
balancing algorithm works as intended.
Configuring routers for Hot Standby Router Protocol (HSRP) does not allow for
receive load balancing to occur in the adapter team. In general, HSRP allows for
two routers to act as one router, advertising a virtual IP and virtual MAC address.
One physical router is the active interface while the other is standby. Although
HSRP can also load share nodes (using different default gateways on the host
nodes) across multiple routers in HSRP groups, it always points to the primary
MAC address of the team.
Generic Trunking
Generic Trunking is a switch-assisted teaming mode and requires configuring
ports at both ends of the link: server interfaces and switch ports. This is often
referred to as Cisco Fast EtherChannel or Gigabit EtherChannel. In addition,
generic trunking supports similar implementations by other switch OEMs such as
Extreme Networks Load Sharing and Bay Networks or IEEE 802.3ad Link
Aggregation static mode. In this mode, the team advertises one MAC Address
and one IP Address when the protocol stack responds to ARP Requests. In
addition, each physical adapter in the team uses the same team MAC address
when transmitting frames. This is possible since the switch at the other end of the
link is aware of the teaming mode and will handle the use of a single MAC
address by every port in the team. The forwarding table in the switch will reflect
the trunk as a single virtual port.
In this teaming mode, the intermediate driver controls load balancing and failover
for outgoing traffic only, while incoming traffic is controlled by the switch firmware
and hardware. As is the case for Smart Load Balancing, the QLASP intermediate
driver uses the IP/TCP/UDP source and destination addresses to load balance
the transmit traffic from the server. Most switches implement an XOR hashing of
the source and destination MAC address.
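A minimal sketch of such a switch-side XOR hash is shown below; the exact fields and bit widths vary by switch vendor, so the low-byte XOR used here is an assumption for illustration.

```python
# Conceptual sketch of the switch-side XOR hash used by many trunking
# implementations: XOR the low bytes of the source and destination MAC
# addresses to pick the trunk member that receives the frame.
def switch_trunk_member(src_mac: str, dst_mac: str, num_members: int) -> int:
    src_low = int(src_mac.replace(":", ""), 16) & 0xFF   # low byte of source MAC
    dst_low = int(dst_mac.replace(":", ""), 16) & 0xFF   # low byte of destination MAC
    return (src_low ^ dst_low) % num_members

print(switch_trunk_member("00:0E:1E:00:00:01", "00:AA:BB:CC:00:14", 4))
```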
NOTE
Generic Trunking is not supported on iSCSI offload adapters.
Link Aggregation (IEEE 802.3ad LACP)
Link Aggregation is similar to Generic Trunking except that it uses the Link
Aggregation Control Protocol to negotiate the ports that will make up the team.
LACP must be enabled at both ends of the link for the team to be operational. If
LACP is not available at both ends of the link, 802.3ad provides a manual
aggregation that only requires both ends of the link to be in a link up state.
Because manual aggregation provides for the activation of a member link without
performing the LACP message exchanges, it should not be considered as reliable
and robust as an LACP negotiated link. LACP automatically determines which
member links can be aggregated and then aggregates them. It provides for the
controlled addition and removal of physical links for the link aggregation so that no
frames are lost or duplicated. The removal of aggregate link members is provided
by the marker protocol that can be optionally enabled for Link Aggregation Control
Protocol (LACP) enabled aggregate links.
The Link Aggregation group advertises a single MAC address for all the ports in
the trunk. The MAC address of the Aggregator can be the MAC address of one
of the ports that make up the group. LACP and marker protocols use a multicast
destination address.
The Link Aggregation control function determines which links may be aggregated
and then binds the ports to an Aggregator function in the system and monitors
conditions to determine if a change in the aggregation group is required. Link
aggregation combines the individual capacity of multiple links to form a high
performance virtual link. The failure or replacement of a link in an LACP trunk will
not cause loss of connectivity. The traffic will simply be failed over to the remaining
links in the trunk.
SLB (Auto-Fallback Disable)
This type of team is identical to the Smart Load Balance and Failover type of
team, with the following exception—when the standby member is active, if a
primary member comes back on line, the team continues using the standby
member rather than switching back to the primary member. This type of team is
supported only for situations in which the network cable is disconnected and
reconnected to the network adapter. It is not supported for situations in which the
adapter is removed/installed through Device Manager or Hot-Plug PCI.
If any primary adapter assigned to a team is disabled, the team functions as a
Smart Load Balancing and Failover type of team in which auto-fallback occurs.
Software Components
Teaming is implemented through an NDIS intermediate driver in the Windows
Operating System environment. This software component works with the miniport
driver, the NDIS layer, and the protocol stack to enable the teaming architecture
(see Figure 15-2). The miniport driver controls the host LAN controller directly to
enable functions such as sends, receives, and interrupt processing. The
intermediate driver fits between the miniport driver and the protocol layer,
multiplexing several miniport driver instances and creating a virtual adapter that
looks like a single adapter to the NDIS layer. NDIS provides a set of library
functions to enable the communications between either miniport drivers or
intermediate drivers and the protocol stack. The protocol stack implements IP, IPX
and ARP. A protocol address such as an IP address is assigned to each miniport
device instance, but when an Intermediate driver is installed, the protocol address
is assigned to the virtual team adapter and not to the individual miniport devices
that make up the team.
Hardware Requirements
- Repeater Hub
- Switching Hub
- Router
The various teaming modes described in this document place certain restrictions
on the networking equipment used to connect clients to teamed systems. Each
type of network interconnect technology has an effect on teaming as described in
the following sections.
Repeater Hub
A Repeater Hub allows a network administrator to extend an Ethernet network
beyond the limits of an individual segment. The repeater regenerates the input
signal received on one port onto all other connected ports, forming a single
collision domain. This means that when a station attached to a repeater sends an
Ethernet frame to another station, every station within the same collision domain
will also receive that message. If two stations begin transmitting at the same time,
a collision occurs, and each transmitting station must retransmit its data after
waiting a random amount of time.
The use of a repeater requires that each station participating within the collision
domain operate in half-duplex mode. Although half-duplex mode is supported for
Gigabit Ethernet adapters in the IEEE 802.3 specification, half-duplex mode is not
supported by the majority of Gigabit Ethernet adapter manufacturers. Therefore,
half-duplex mode is not considered here.
Teaming across hubs is supported for troubleshooting purposes (such as
connecting a network analyzer) for SLB teams only.
Switching Hub
Unlike a repeater hub, a switching hub (or more simply a switch) allows an
Ethernet network to be broken into multiple collision domains. The switch is
responsible for forwarding Ethernet packets between hosts based solely on
Ethernet MAC addresses. A physical network adapter that is attached to a switch
may operate in half-duplex or full-duplex mode.
If the switch does not support Generic Trunking and 802.3ad Link Aggregation, it
may still be used for Smart Load Balancing.
NOTE
All modes of network teaming are supported across switches when they
operate as a stackable switch.
Router
A router is designed to route network traffic based on Layer 3 or higher protocols,
although it often also works as a Layer 2 device with switching capabilities. The
teaming of ports connected directly to a router is not supported.
Teaming Support by Processor
All team types are supported by the IA-32 and EM64T processors.
Configuring Teaming
The QCC GUI is used to configure teaming in the supported operating system
environments and is designed to run on 32-bit and 64-bit Windows family of
operating systems. The QCC GUI is used to configure load balancing and fault
tolerance teaming, and VLANs. In addition, it displays the MAC address, driver
version, and status information about each network adapter. The QCC GUI also
includes a number of diagnostics tools such as hardware diagnostics, cable
testing, and a network topology test.
Supported Features by Team Type
Table 15-3 provides a feature comparison across the team types. Use this table to
determine the best type of team for your application. The teaming software
supports up to eight ports in a single team and up to 16 teams in a single system.
These teams can be any combination of the supported teaming types, but each
team must be on a separate network or subnet.
Table 15-3. Comparison of Team Types

Function | SLB with Standby (a) (fault tolerance) | SLB (load balancing) | Generic Trunking (switch-dependent static trunking) | Link Aggregation (switch-dependent dynamic link aggregation, IEEE 802.3ad)
Number of ports per team (same broadcast domain) | 2–16 | 2–16 | 2–16 | 2–16
Number of teams | 16 | 16 | 16 | 16
Adapter fault tolerance | Yes | Yes | Yes | Yes
Switch link fault tolerance (same broadcast domain) | Yes | Yes | Switch-dependent | Switch-dependent
TX load balancing | No | Yes | Yes | Yes
RX load balancing | No | Yes | Yes (performed by the switch) | Yes (performed by the switch)
Requires compatible switch | No | No | Yes | Yes
Heartbeats to check connectivity | No | No | No | No
Mixed media (adapters with different media) | Yes | Yes | Yes (switch-dependent) | Yes
Mixed speeds (adapters that do not support a common speed, but operate at different speeds) | Yes | Yes | No | No
Mixed speeds (adapters that support a common speed, but operate at different speeds) | Yes | Yes | No (must be the same speed) | Yes
Load balances TCP/IP | No | Yes | Yes | Yes
Mixed vendor teaming | Yes (b) | Yes (b) | Yes (b) | Yes (b)
Load balances non-IP | No | Yes (IPX outbound traffic only) | Yes | Yes
Same MAC address for all team members | No | No | Yes | Yes
Same IP address for all team members | Yes | Yes | Yes | Yes
Load balancing by IP address | No | Yes | Yes | Yes
Load balancing by MAC address | No | Yes (used for non-IP/IPX) | Yes | Yes

a. SLB with one primary and one standby member.
b. Requires at least one QLogic adapter in the team.
Selecting a Team Type
The following flow chart provides the decision flow when planning for Layer 2
teaming. The primary rationale for teaming is the need for additional network
bandwidth and fault tolerance. Teaming offers link aggregation and fault tolerance
to meet both of these requirements. Teaming preference should be selected in the
following order: Link Aggregation as the first choice, Generic Trunking as the
second choice, and SLB teaming as the third choice when using unmanaged
switches or switches that do not support the first two options. If switch fault
tolerance is a requirement, then SLB is the only choice (see Figure 15-1).
Figure 15-1. Process for Selecting a Team Type
Teaming Mechanisms
- Architecture
- Types of Teams
- Attributes of the Features Associated with Each Type of Team
- Speeds Supported for Each Type of Team
Architecture
The QLASP is implemented as an NDIS intermediate driver (see Figure 15-2). It
operates below protocol stacks such as TCP/IP and IPX and appears as a virtual
adapter. This virtual adapter inherits the MAC Address of the first port initialized in
the team. A Layer 3 address must also be configured for the virtual adapter. The
primary function of QLASP is to balance inbound (for SLB) and outbound traffic
(for all teaming modes) among the physical adapters installed on the system
selected for teaming. The inbound and outbound algorithms are independent and
orthogonal to each other. The outbound traffic for a particular session can be
assigned to a given port while its corresponding inbound traffic can be assigned to
a different port.
Figure 15-2. Intermediate Driver
Outbound Traffic Flow
The QLogic Intermediate Driver manages the outbound traffic flow for all teaming
modes. For outbound traffic, every packet is first classified into a flow, and then
distributed to the selected physical adapter for transmission. The flow
classification involves an efficient hash computation over known protocol fields.
The resulting hash value is used to index into an Outbound Flow Hash Table. The
selected Outbound Flow Hash Entry contains the index of the selected physical
adapter responsible for transmitting this flow. The source MAC address of the
packets will then be modified to the MAC address of the selected physical
adapter. The modified packet is then passed to the selected physical adapter for
transmission.
The outbound TCP and UDP packets are classified using Layer 3 and Layer 4
header information. This scheme improves the load distributions for popular
Internet protocol services using well-known ports such as HTTP and FTP.
Therefore, QLASP performs load balancing on a TCP session basis and not on a
packet-by-packet basis.
In the Outbound Flow Hash Entries, statistics counters are also updated after
classification. The load-balancing engine uses these counters to periodically
distribute the flows across teamed ports. The outbound code path has been
designed to achieve best possible concurrency where multiple concurrent
accesses to the Outbound Flow Hash Table are allowed.
For protocols other than TCP/IP, the first physical adapter will always be selected
for outbound packets. The exception is Address Resolution Protocol (ARP), which
is handled differently to achieve inbound load balancing.
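The following sketch models the Outbound Flow Hash Table described above: each entry records the assigned physical adapter and a statistics counter that a periodic rebalancing pass could inspect. The class names, table size, and hash are illustrative assumptions, not the driver's data structures.

```python
# Conceptual model of the Outbound Flow Hash Table: each hash bucket stores the
# physical adapter assigned to that flow plus a statistics counter used when
# flows are periodically redistributed across teamed ports.
import zlib

TABLE_SIZE = 256

class OutboundFlowEntry:
    def __init__(self, adapter_index: int):
        self.adapter_index = adapter_index   # selected physical adapter
        self.packets = 0                     # statistics counter

class OutboundFlowTable:
    def __init__(self, num_adapters: int):
        self.num_adapters = num_adapters
        self.entries = {}                    # hash index -> OutboundFlowEntry

    def classify(self, src_ip, dst_ip, src_port, dst_port) -> int:
        key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
        index = zlib.crc32(key) % TABLE_SIZE
        entry = self.entries.setdefault(
            index, OutboundFlowEntry(index % self.num_adapters))
        entry.packets += 1                   # updated after classification
        return entry.adapter_index           # adapter that transmits this packet

table = OutboundFlowTable(num_adapters=2)
print(table.classify("10.0.0.5", "10.0.0.1", 49152, 80))
```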
Inbound Traffic Flow (SLB Only)
The QLogic intermediate driver manages the inbound traffic flow for the SLB
teaming mode. Unlike outbound load balancing, inbound load balancing can only
be applied to IP addresses that are located in the same subnet as the
load-balancing server. Inbound load balancing exploits a unique characteristic of
Address Resolution Protocol (RFC0826), in which each IP host uses its own ARP
cache to encapsulate the IP Datagram into an Ethernet frame. QLASP carefully
manipulates the ARP response to direct each IP host to send the inbound IP
packet to the desired physical adapter. Therefore, inbound load balancing is a
plan-ahead scheme based on statistical history of the inbound flows. New
connections from a client to the server will always occur over the primary physical
adapter (because the ARP Reply generated by the operating system protocol
stack will always associate the logical IP address with the MAC address of the
primary physical adapter).
Like the outbound case, there is an Inbound Flow Head Hash Table. Each entry
inside this table has a singly linked list and each link (Inbound Flow Entries)
represents an IP host located in the same subnet.
When an inbound IP Datagram arrives, the appropriate Inbound Flow Head Entry
is located by hashing the source IP address of the IP Datagram. Two statistics
counters stored in the selected entry are also updated. These counters are used
in the same fashion as the outbound counters by the load-balancing engine
periodically to reassign the flows to the physical adapter.
On the inbound code path, the Inbound Flow Head Hash Table is also designed to
allow concurrent access. The link lists of Inbound Flow Entries are only
referenced in the event of processing ARP packets and the periodic load
balancing. There is no per-packet reference to the Inbound Flow Entries. Even
though the link lists are not bounded, the overhead in processing each non-ARP
packet is always a constant. The processing of ARP packets, both inbound and
outbound, however, depends on the number of links inside the corresponding link
list.
On the inbound processing path, filtering is also employed to prevent broadcast
packets from looping back through the system from other physical adapters.
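As a rough model of these inbound structures, the sketch below hashes the source IP address of an arriving datagram to a head entry whose per-host list is touched only during ARP processing and periodic rebalancing, so per-packet work stays constant. Names and sizes are assumptions for illustration only.

```python
# Conceptual model of the Inbound Flow Head Hash Table: the source IP address
# of an arriving IP datagram selects a head entry; each head entry keeps a list
# of known IP hosts on the local subnet and counters used for periodic
# rebalancing.
HEAD_TABLE_SIZE = 64

class InboundFlowHost:
    def __init__(self, ip):
        self.ip = ip
        self.assigned_adapter = 0        # adapter advertised to this host

class InboundFlowHead:
    def __init__(self):
        self.hosts = []                  # linked list in the driver; a list here
        self.packets = 0                 # statistics counters
        self.bytes = 0

heads = [InboundFlowHead() for _ in range(HEAD_TABLE_SIZE)]

def on_inbound_datagram(src_ip: str, length: int):
    head = heads[hash(src_ip) % HEAD_TABLE_SIZE]   # constant-time per packet
    head.packets += 1
    head.bytes += length                           # no per-packet walk of hosts

on_inbound_datagram("10.0.0.20", 1500)
```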
Protocol Support
ARP and IP/TCP/UDP flows are load balanced. If the packet is an IP protocol only,
such as ICMP or IGMP, then all data flowing to a particular IP address will go out
through the same physical adapter. If the packet uses TCP or UDP for the L4
protocol, then the port number is added to the hashing algorithm, so two separate
L4 flows can go out through two separate physical adapters to the same IP
address.
For example, assume the client has an IP address of 10.0.0.1. All IGMP and
ICMP traffic will go out the same physical adapter because only the IP address is
used for the hash. The flow would look something like this:
IGMP ------> PhysAdapter1 ------> 10.0.0.1
ICMP ------> PhysAdapter1 ------> 10.0.0.1
If the server also sends TCP and UDP flows to the same 10.0.0.1 address, they
can be on the same physical adapter as IGMP and ICMP, or on completely
different physical adapters from ICMP and IGMP. The stream may look like this:
IGMP ------> PhysAdapter1 ------> 10.0.0.1
ICMP ------> PhysAdapter1 ------> 10.0.0.1
TCP ------> PhysAdapter1 ------> 10.0.0.1
UDP ------> PhysAdapter1 ------> 10.0.0.1
Or the streams may look like this:
IGMP ------> PhysAdapter1 ------> 10.0.0.1
ICMP ------> PhysAdapter1 ------> 10.0.0.1
TCP ------> PhysAdapter2 ------> 10.0.0.1
UDP ------> PhysAdapter3 ------> 10.0.0.1
The actual assignment between adapters may change over time, but any protocol
that is not TCP/UDP based goes over the same physical adapter because only
the IP address is used in the hash.
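The sketch below illustrates why ICMP and IGMP always share one adapter while TCP and UDP flows can spread out: the hash key includes the port number only when the packet carries an L4 protocol. The key format is an assumption used only to make the distinction concrete.

```python
# Conceptual sketch: IP-only protocols (ICMP, IGMP) hash on the destination IP
# address alone, so they always select the same adapter; TCP/UDP flows also
# include the port number, so separate L4 flows can select different adapters.
def hash_key(dst_ip, protocol, dst_port=None):
    if protocol in ("TCP", "UDP") and dst_port is not None:
        return f"{dst_ip}/{protocol}/{dst_port}"   # port included in the key
    return dst_ip                                  # IP address only

print(hash_key("10.0.0.1", "ICMP"))       # same key as IGMP below
print(hash_key("10.0.0.1", "IGMP"))
print(hash_key("10.0.0.1", "TCP", 80))    # distinct keys -> may use other ports
print(hash_key("10.0.0.1", "UDP", 53))
```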
Performance
Modern network interface cards provide many hardware features that reduce CPU
use by offloading certain CPU intensive operations (see “Teaming and Other
Advanced Networking Properties” on page 199). In contrast, the QLASP
intermediate driver is a purely software function that must examine every packet
received from the protocol stacks and react to its contents before sending it out
through a particular physical interface. Though the QLASP driver can process
each outgoing packet in near constant time, some applications that may already
be CPU bound may suffer if operated over a teamed interface. Such an
application may be better suited to take advantage of the failover capabilities of
the intermediate driver rather than the load balancing features, or it may operate
more efficiently over a single physical adapter that provides a particular hardware
feature such as Large Send Offload.
Types of Teams
Switch-Independent
The QLogic Smart Load Balancing type of team allows two to eight physical
adapters to operate as a single virtual adapter. The greatest benefit of the SLB
type of team is that it operates on any IEEE compliant switch and requires no
special configuration.
Smart Load Balancing and Failover
SLB provides for switch-independent, bidirectional, fault-tolerant teaming and load
balancing. Switch independence implies that there is no specific support for this
function required in the switch, allowing SLB to be compatible with all switches.
Under SLB, all adapters in the team have separate MAC addresses. The
load-balancing algorithm operates on Layer 3 addresses of the source and
destination nodes, which enables SLB to load balance both incoming and
outgoing traffic.
The QLASP intermediate driver continually monitors the physical ports in a team
for link loss. In the event of link loss on any port, traffic is automatically diverted to
other ports in the team. The SLB teaming mode supports switch fault tolerance by
allowing teaming across different switches, provided the switches are on the
same physical network or broadcast domain.
Network Communications
The following are the key attributes of SLB:
- Failover mechanism – Link loss detection.
- Load Balancing Algorithm – Inbound and outbound traffic are balanced through a QLogic proprietary mechanism based on L4 flows.
- Outbound Load Balancing using MAC Address – No.
- Outbound Load Balancing using IP Address – Yes.
- Multivendor Teaming – Supported (must include at least one QLogic Ethernet adapter as a team member).
Applications
The SLB algorithm is most appropriate in home and small business environments
where cost is a concern or with commodity switching equipment. SLB teaming
works with unmanaged Layer 2 switches and is a cost-effective way of getting
redundancy and link aggregation at the server. Smart Load Balancing also
supports teaming physical adapters with differing link capabilities. In addition, SLB
is recommended when switch fault tolerance with teaming is required.
Configuration Recommendations
SLB supports connecting the teamed ports to hubs and switches if they are on the
same broadcast domain. It does not support connecting to a router or Layer 3
switches because the ports must be on the same subnet.
Switch-Dependent
Generic Static Trunking
This mode supports a variety of environments where the adapter link partners are
statically configured to support a proprietary trunking mechanism. This mode
could be used to support Lucent’s Open Trunk, Cisco’s Fast EtherChannel (FEC),
and Cisco’s Gigabit EtherChannel (GEC). In the static mode, as in generic link
aggregation, the switch administrator needs to assign the ports to the team, and
this assignment cannot be altered by the QLASP, as there is no exchange of the
Link Aggregation Control Protocol (LACP) frame.
With this mode, all adapters in the team are configured to receive packets for the
same MAC address. Trunking operates on Layer 2 addresses and supports load
balancing and failover for both inbound and outbound traffic. The QLASP driver
determines the load-balancing scheme for outbound packets, using Layer 4
protocols previously discussed, whereas the team link partner determines the
load-balancing scheme for inbound packets.
The attached switch must support the appropriate trunking scheme for this mode
of operation. Both the QLASP and the switch continually monitor their ports for link
loss. In the event of link loss on any port, traffic is automatically diverted to other
ports in the team.
Network Communications
The following are the key attributes of Generic Static Trunking:
- Failover mechanism – Link loss detection.
- Load Balancing Algorithm – Outbound traffic is balanced through a QLogic proprietary mechanism based on L4 flows. Inbound traffic is balanced according to a switch-specific mechanism.
- Outbound Load Balancing using MAC Address – No.
- Outbound Load Balancing using IP Address – Yes.
- Multivendor teaming – Supported (must include at least one QLogic Ethernet adapter as a team member).
Applications
Generic trunking works with switches that support Cisco Fast EtherChannel,
Cisco Gigabit EtherChannel, Extreme Networks Load Sharing and Bay Networks
or IEEE 802.3ad Link Aggregation static mode. Since load balancing is
implemented on Layer 2 addresses, all higher protocols such as IP, IPX, and
NetBEUI are supported. Therefore, this teaming mode is recommended over SLB
when the switch supports generic trunking modes.
Configuration Recommendations
Static trunking supports connecting the teamed ports to switches if they are on the
same broadcast domain and support generic trunking. It does not support
connecting to a router or Layer 3 switches since the ports must be on the same
subnet.
Dynamic Trunking (IEEE 802.3ad Link Aggregation)
This mode supports link aggregation through static and dynamic configuration
through the Link Aggregation Control Protocol (LACP). With this mode, all
adapters in the team are configured to receive packets for the same MAC
address. The MAC address of the first adapter in the team is used and cannot be
substituted for a different MAC address. The QLASP driver determines the
load-balancing scheme for outbound packets, using Layer 4 protocols previously
discussed, whereas the team’s link partner determines the load-balancing
scheme for inbound packets. Because the load balancing is implemented on
Layer 2, all higher protocols such as IP, IPX, and NetBEUI are supported. The
attached switch must support the 802.3ad Link Aggregation standard for this
mode of operation. The switch manages the inbound traffic to the adapter while
the QLASP manages the outbound traffic. Both the QLASP and the switch
continually monitor their ports for link loss. In the event of link loss on any port,
traffic is automatically diverted to other ports in the team.
Network Communications
The following are the key attributes of Dynamic Trunking:
- Failover mechanism – Link loss detection.
- Load Balancing Algorithm – Outbound traffic is balanced through a QLogic proprietary mechanism based on L4 flows. Inbound traffic is balanced according to a switch-specific mechanism.
- Outbound Load Balancing using MAC Address – No.
- Outbound Load Balancing using IP Address – Yes.
- Multivendor teaming – Supported (must include at least one QLogic Ethernet adapter as a team member).
Applications
Dynamic trunking works with switches that support IEEE 802.3ad Link
Aggregation dynamic mode using LACP. Inbound load balancing is switch
dependent. In general, the switch traffic is load balanced based on L2 addresses.
In this case, all network protocols such as IP, IPX, and NetBEUI are load
balanced. Therefore, this is the recommended teaming mode when the switch
supports LACP, except when switch fault tolerance is required. SLB is the only
teaming mode that supports switch fault tolerance.
Configuration Recommendations
Dynamic trunking supports connecting the teamed ports to switches as long as
they are on the same broadcast domain and support IEEE 802.3ad LACP
trunking. It does not support connecting to a router or Layer 3 switches since the
ports must be on the same subnet.
LiveLink
LiveLink is a feature of QLASP that is available for the Smart Load Balancing
(SLB) and SLB (Auto-Fallback Disable) types of teaming. The purpose of LiveLink
is to detect link loss beyond the switch and to route traffic only through team
members that have a live link. This function is accomplished through the teaming
software. The teaming software periodically probes (issues a link packet from
each team member) one or more specified target network device(s). The probe
target(s) responds when it receives the link packet. If a team member does not
detect the response within a specified amount of time, this indicates that the link
has been lost, and the teaming software discontinues passing traffic through that
team member. Later, if that team member begins to detect a response from a
probe target, this indicates that the link has been restored, and the teaming
software automatically resumes passing traffic through that team member.
LiveLink works only with TCP/IP.
LiveLink is supported in both 32-bit and 64-bit Windows operating systems. For
similar capabilities in Linux operating systems, see the Channel Bonding
information in your Red Hat documentation.
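The following sketch captures the LiveLink behavior described above, with periodic probes per team member and a response timeout. The probe transport, timeout value, and names are assumptions; the real probing is performed inside the teaming driver, not in user code.

```python
# Conceptual sketch of LiveLink: each team member periodically probes a target;
# a member that misses the response deadline is taken out of service until a
# later probe succeeds again.
import time

PROBE_TIMEOUT_S = 2.0

class TeamMember:
    def __init__(self, name):
        self.name = name
        self.passing_traffic = True

    def probe_target(self) -> bool:
        """Placeholder: send a link packet and wait for the target's reply."""
        return True

def livelink_cycle(members):
    for member in members:
        start = time.monotonic()
        responded = member.probe_target()
        timed_out = (time.monotonic() - start) > PROBE_TIMEOUT_S
        member.passing_traffic = responded and not timed_out

livelink_cycle([TeamMember("port0"), TeamMember("port1")])
```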
Attributes of the Features Associated with Each Type of Team
The attributes of the features associated with each type of team are summarized
in Table 15-4.
Table 15-4. Attributes

Smart Load Balancing
Feature | Attribute
User interface | QConvergeConsole GUI
Number of teams | Maximum 16
Number of adapters per team | Maximum 16
Hot replace | Yes
Hot add | Yes
Hot remove | Yes
Link speed support | Different speeds
Frame protocol | IP
Incoming packet management | QLASP
Outgoing packet management | QLASP
LiveLink support | Yes
Failover event | Loss of link
Failover time | <500 ms
Fallback time | 1.5 s (approximate) (a)
MAC address | Different
Multivendor teaming | Yes

Generic (Static) Trunking
Feature | Attribute
User interface | QConvergeConsole GUI
Number of teams | Maximum 16
Number of adapters per team | Maximum 16
Hot replace | Yes
Hot add | Yes
Hot remove | Yes
Link speed support | Different speeds (b)
Frame protocol | All
Incoming packet management | Switch
Outgoing packet management | QLASP
Failover event | Loss of link only
Failover time | <500 ms
Fallback time | 1.5 s (approximate) (a)
MAC address | Same for all adapters
Multivendor teaming | Yes

Dynamic LACP
Feature | Attribute
User interface | QConvergeConsole GUI
Number of teams | Maximum 16
Number of adapters per team | Maximum 16
Hot replace | Yes
Hot add | Yes
Hot remove | Yes
Link speed support | Different speeds
Frame protocol | All
Incoming packet management | Switch
Outgoing packet management | QLASP
Failover event | Loss of link only
Failover time | <500 ms
Fallback time | 1.5 s (approximate) (a)
MAC address | Same for all adapters
Multivendor teaming | Yes

a. Make sure that Port Fast or Edge Port is enabled.
b. Some switches require matching link speeds to correctly negotiate between trunk connections.
Speeds Supported for Each Type of Team
The various link speeds that are supported for each type of team are listed in
Table 15-5. Mixed speed refers to the capability of teaming adapters that are
running at different link speeds.
Table 15-5. Link Speeds in Teaming

Type of Team | Link Speed | Traffic Direction | Speed Support
SLB | 10/100/1000/10000 | Incoming/outgoing | Mixed speed
FEC | 100 | Incoming/outgoing | Same speed
GEC | 1000 | Incoming/outgoing | Same speed
IEEE 802.3ad | 10/100/1000/10000 | Incoming/outgoing | Mixed speed
Teaming and Other Advanced Networking Properties

- Checksum Offload
- IEEE 802.1p QoS Tagging
- Large Send Offload
- Jumbo Frames
- IEEE 802.1Q VLANs
- Preboot Execution Environment
Before creating a team, adding or removing team members, or changing
advanced settings of a team member, make sure each team member has been
configured similarly. Settings to check include VLANs and QoS Packet Tagging,
Jumbo Frames, and the various offloads. Advanced adapter properties and
teaming support are listed in Table 15-6.
Table 15-6. Advanced Adapter Properties and Teaming Support

Adapter Properties | Supported by Teaming Virtual Adapter
Checksum Offload | Yes
IEEE 802.1p QoS Tagging | No
Large Send Offload | Yes (a)
Jumbo Frames | Yes (b)
IEEE 802.1Q VLANs | Yes (c)
Preboot Execution Environment (PXE) | Yes (d)

a. All adapters on the team must support this feature. Some adapters may not support this feature if ASF/IPMI is also enabled.
b. Must be supported by all adapters in the team.
c. Only for QLogic adapters.
d. As a PXE server only, not as a client.
A team does not necessarily inherit adapter properties; rather, various properties
depend on the specific capability. For example, flow control is a physical adapter
property, has nothing to do with QLASP, and is enabled on a particular adapter if
the miniport driver for that adapter has flow control enabled.
NOTE
All adapters on the team must support the property listed in Table 15-6 for
the team to support the property.
Checksum Offload
Checksum Offload is a property of the QLogic network adapters that allows the
TCP/IP/UDP checksums for send and receive traffic to be calculated by the
adapter hardware rather than by the host CPU. In high-traffic situations, this can
allow a system to handle more connections more efficiently than if the host CPU
were forced to calculate the checksums. This property is inherently a hardware
property and would not benefit from a software-only implementation. An adapter
that supports Checksum Offload advertises this capability to the operating system
so that the checksum does not need to be calculated in the protocol stack.
Checksum Offload is only supported for IPv4 at this time.
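To make concrete what the adapter computes on the host's behalf, the following is the standard Internet ones'-complement checksum used for IPv4 headers; it is the generic algorithm, not QLogic driver code, and the sample header bytes are only an example.

```python
# Standard Internet (ones'-complement) checksum, as used for IPv4 headers.
# Checksum offload moves this per-packet arithmetic from the host CPU to the
# adapter hardware.
def internet_checksum(header: bytes) -> int:
    if len(header) % 2:                      # pad odd-length input with a zero byte
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

# Example 20-byte IPv4 header with its checksum field zeroed out.
example_header = bytes.fromhex("4500003c1c4640004006" "0000" "c0a80001c0a800c7")
print(hex(internet_checksum(example_header)))
```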
IEEE 802.1p QoS Tagging
The IEEE 802.1p standard includes a 3-bit field (supporting a maximum of 8
priority levels), which allows for traffic prioritization. The QLASP intermediate
driver does not support IEEE 802.1p QoS tagging.
Large Send Offload
Large Send Offload (LSO) is a feature provided by QLogic network adapters that
prevents an upper level protocol such as TCP from breaking a large data packet
into a series of smaller packets with headers appended to them. The protocol
stack need only generate a single header for a data packet as large as 64 KB, and
the adapter hardware breaks the data buffer into appropriately-sized Ethernet
frames with the correctly sequenced header (based on the single header originally
provided).
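The sketch below models the effect of LSO conceptually: one large send is cut into MSS-sized segments, each carrying a correctly advancing sequence number. The segmentation is actually performed by the adapter hardware; this Python model and its field names are assumptions for illustration.

```python
# Conceptual model of Large Send Offload: the stack hands down one large buffer
# with a single header, and the hardware emits MSS-sized frames whose headers
# carry correctly advancing sequence numbers.
def segment_large_send(payload: bytes, mss: int, start_seq: int):
    segments = []
    for offset in range(0, len(payload), mss):
        chunk = payload[offset:offset + mss]
        header = {"seq": start_seq + offset, "len": len(chunk)}  # per-frame header
        segments.append((header, chunk))
    return segments

frames = segment_large_send(b"\x00" * 65536, mss=1460, start_seq=1000)
print(len(frames), frames[0][0], frames[-1][0])
```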
Jumbo Frames
The use of jumbo frames was originally proposed by Alteon Networks, Inc. in 1998
and increased the maximum size of an Ethernet frame to 9600 bytes. Though
never formally adopted by the IEEE 802.3 Working Group, support
for jumbo frames has been implemented in QLogic 8400/3400 Series adapters.
The QLASP intermediate driver supports jumbo frames, provided that all of the
physical adapters in the team also support jumbo frames and the same size is set
on all adapters in the team.
IEEE 802.1Q VLANs
In 1998, the IEEE approved the 802.3ac standard, which defines frame format
extensions to support Virtual Bridged Local Area Network tagging on Ethernet
networks as specified in the IEEE 802.1Q specification. The VLAN protocol
permits insertion of a tag into an Ethernet frame to identify the VLAN to which a
frame belongs. If present, the 4-byte VLAN tag is inserted into the Ethernet frame
between the source MAC address and the length/type field. The first 2 bytes of
the VLAN tag consist of the IEEE 802.1Q tag type, whereas the second 2 bytes
include a user priority field and the VLAN identifier (VID). Virtual LANs (VLANs)
allow the user to split the physical LAN into logical subparts. Each defined VLAN
behaves as its own separate network, with its traffic and broadcasts isolated from
the others, thus increasing bandwidth efficiency within each logical group. VLANs
also enable the administrator to enforce appropriate security and quality of service
(QoS) policies. The QLASP supports the creation of 64 VLANs per team or
adapter: 63 tagged and 1 untagged. The operating system and system resources,
however, limit the actual number of VLANs. VLAN support is provided according
to IEEE 802.1Q and is supported in a teaming environment and on a single
adapter. Note that VLANs are supported only with homogeneous teaming and not
in a multivendor teaming environment. The QLASP intermediate driver supports
VLAN tagging. One or more VLANs may be bound to a single instance of the
intermediate driver.
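The sketch below assembles the 4-byte 802.1Q tag described above (the 2-byte tag type 0x8100 followed by the priority bits and the 12-bit VID) and inserts it between the source MAC address and the length/type field. It is a frame-format illustration only; the MAC addresses and helper name are assumptions.

```python
# Build an 802.1Q-tagged Ethernet header: the 4-byte VLAN tag (tag protocol
# identifier 0x8100, then 3 priority bits and the 12-bit VLAN ID) sits between
# the source MAC address and the original length/type field.
import struct

def tag_ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int,
                        vlan_id: int, priority: int = 0) -> bytes:
    tci = (priority << 13) | (vlan_id & 0x0FFF)      # tag control information
    vlan_tag = struct.pack("!HH", 0x8100, tci)       # 4-byte 802.1Q tag
    return dst_mac + src_mac + vlan_tag + struct.pack("!H", ethertype)

header = tag_ethernet_header(bytes.fromhex("00AABBCC0014"),
                             bytes.fromhex("000E1E000001"),
                             ethertype=0x0800, vlan_id=63)
print(header.hex())
```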
Preboot Execution Environment
The Preboot Execution Environment (PXE) allows a system to boot from an
operating system image over the network. By definition, PXE is invoked before an
operating system is loaded, so there is no opportunity for the QLASP intermediate
driver to load and enable a team. As a result, teaming is not supported as a PXE
client, though a physical adapter that participates in a team when the operating
system is loaded may be used as a PXE client. Whereas a teamed adapter
cannot be used as a PXE client, it can be used for a PXE server, which provides
operating system images to PXE clients using a combination of Dynamic Host
Configuration Protocol (DHCP) and the Trivial File Transfer Protocol (TFTP). Both of
these protocols operate over IP and are supported by all teaming modes.
General Network Considerations
- Teaming with Microsoft Virtual Server 2005
- Teaming Across Switches
- Spanning Tree Algorithm
- Layer 3 Routing/Switching
- Teaming with Hubs (for troubleshooting purposes only)
- Teaming with Microsoft NLB
Teaming with Microsoft Virtual Server 2005
The only supported QLASP team configuration when using Microsoft Virtual
Server 2005 is with a Smart Load Balancing team-type consisting of a single
primary QLogic adapter and a standby QLogic adapter. Make sure to unbind or
deselect “Virtual Machine Network Services” from each team member prior to
creating a team and prior to creating Virtual networks with Microsoft Virtual Server.
Additionally, a virtual network should be created in this software and subsequently
bound to the virtual adapter created by a team. Directly binding a Guest operating
system to a team virtual adapter may not render the desired results.
NOTE
As of this writing, Windows Server 2008 is not a supported operating system
for Microsoft Virtual Server 2005; thus, teaming may not function as
expected with this combination.
Teaming Across Switches
SLB teaming can be configured across switches. The switches, however, must be
connected together. Generic Trunking and Link Aggregation do not work across
switches because each of these implementations requires that all physical
adapters in a team share the same Ethernet MAC address. It is important to note
that SLB can only detect the loss of link between the ports in the team and their
immediate link partner. SLB has no way of reacting to other hardware failures in
the switches and cannot detect loss of link on other ports.
Switch-Link Fault Tolerance
The following diagrams describe the operation of an SLB team in a switch fault
tolerant configuration. They show the mapping of the ping request and ping replies
in an SLB team with two active members. All servers (Blue, Gray, and Red) have a
continuous ping to each other. Figure 15-3 is a setup without the interconnect
cable in place between the two switches. Figure 15-4 has the interconnect cable
in place, and Figure 15-5 is an example of a failover event with the Interconnect
cable in place. These scenarios describe the behavior of teaming across the two
switches and the importance of the interconnect link.
The diagrams show the secondary team member sending the ICMP echo
requests (yellow arrows) while the primary team member receives the respective
ICMP echo replies (blue arrows). This illustrates a key characteristic of the
teaming software. The load balancing algorithms do not synchronize how frames
are load balanced when sent or received. In other words, frames for a given
conversation can go out and be received on different interfaces in the team. This
is true for all types of teaming supported by QLogic. Therefore, an interconnect
link must be provided between the switches that connect to ports in the same
team.
In the configuration without the interconnect, an ICMP Request from Blue to Gray
goes out port 82:83 destined for Gray port 5E:CA, but the Top Switch has no way
to send it there because it cannot go along the 5E:C9 port on Gray. A similar
scenario occurs when Gray attempts to ping Blue. An ICMP Request goes out on
5E:C9 destined for Blue 82:82, but cannot get there. Top Switch does not have an
entry for 82:82 in its CAM table because there is no interconnect between the two
switches. Pings, however, flow between Red and Blue and between Red and
Gray.
Furthermore, a failover event would cause additional loss of connectivity.
Consider a cable disconnect on the Top Switch port 4. In this case, Gray would
send the ICMP Request to Red 49:C9, but because the Bottom switch has no
entry for 49:C9 in its CAM Table, the frame is flooded to all its ports but cannot find
a way to get to 49:C9.
Figure 15-3. Teaming Across Switches Without an Interswitch Link
The addition of a link between the switches allows traffic from/to Blue and Gray to
reach each other without any problems. Note the additional entries in the CAM
table for both switches. The link interconnect is critical for the proper operation of
the team. As a result, it is highly advisable to have a link aggregation trunk to
interconnect the two switches to ensure high availability for the connection.
Figure 15-4. Teaming Across Switches With Interconnect
Figure 15-5 represents a failover event in which the cable is unplugged on the Top
Switch port 4. This is a successful failover with all stations pinging each other
without loss of connectivity.
Figure 15-5. Failover Event
Spanning Tree Algorithm
- Topology Change Notice (TCN)
- Port Fast/Edge Port
In Ethernet networks, only one active path may exist between any two bridges or
switches. Multiple active paths between switches can cause loops in the network.
When loops occur, some switches recognize stations on both sides of the switch.
This situation causes the forwarding algorithm to malfunction allowing duplicate
frames to be forwarded. Spanning tree algorithms provide path redundancy by
defining a tree that spans all of the switches in an extended network and then
forces certain redundant data paths into a standby (blocked) state. At regular
intervals, the switches in the network send and receive spanning tree packets that
they use to identify the path. If one network segment becomes unreachable, or if
spanning tree costs change, the spanning tree algorithm reconfigures the
spanning tree topology and re-establishes the link by activating the standby path.
Spanning tree operation is transparent to end stations, which do not detect
whether they are connected to a single LAN segment or a switched LAN of
multiple segments.
Spanning Tree Protocol (STP) is a Layer 2 protocol designed to run on bridges
and switches. The specification for STP is defined in IEEE 802.1d. The main
purpose of STP is to ensure that you do not run into a loop situation when you
have redundant paths in your network. STP detects/disables network loops and
provides backup links between switches or bridges. It allows the device to interact
with other STP compliant devices in your network to ensure that only one path
exists between any two stations on the network.
After a stable network topology has been established, all bridges listen for hello
BPDUs (Bridge Protocol Data Units) transmitted from the root bridge. If a bridge
does not get a hello BPDU after a predefined interval (Max Age), the bridge
assumes that the link to the root bridge is down. This bridge then initiates
negotiations with other bridges to reconfigure the network to re-establish a valid
network topology. The process to create a new topology can take up to 50
seconds. During this time, end-to-end communications are interrupted.
The use of Spanning Tree is not recommended for ports that are connected to end
stations, because by definition, an end station does not create a loop within an
Ethernet segment. Additionally, when a teamed adapter is connected to a port
with Spanning Tree enabled, users may experience unexpected connectivity
problems. For example, consider a teamed adapter that has a lost link on one of
its physical adapters. If the physical adapter were to be reconnected (also known
as fallback), the intermediate driver would detect that the link has been
reestablished and would begin to pass traffic through the port. Traffic would be
lost if the port was temporarily blocked by the Spanning Tree Protocol.
Topology Change Notice (TCN)
A bridge/switch creates a forwarding table of MAC addresses and port numbers
by learning the source MAC addresses of frames received on a particular port. The table
is used to forward frames to a specific port rather than flooding the frame to all
ports. The typical maximum aging time of entries in the table is 5 minutes. Only
when a host has been silent for 5 minutes would its entry be removed from the
table. It is sometimes beneficial to reduce the aging time. One example is when a
forwarding link goes to blocking and a different link goes from blocking to
forwarding. This change could take up to 50 seconds. At the end of the STP
re-calculation a new path would be available for communications between end
stations. However, because the forwarding table would still have entries based on
the old topology, communications may not be reestablished until after 5 minutes
when the affected ports entries are removed from the table. Traffic would then be
flooded to all ports and re-learned. In this case it is beneficial to reduce the aging
time. This is the purpose of a topology change notice (TCN) BPDU. The TCN is
sent from the affected bridge/switch to the root bridge/switch. As soon as a
bridge/switch detects a topology change (a link going down or a port going to
forwarding) it sends a TCN to the root bridge through its root port. The root bridge
then advertises a BPDU with a Topology Change to the entire network. This
causes every bridge to reduce the MAC table aging time to 15 seconds for a
specified amount of time. This allows the switch to re-learn the MAC addresses as
soon as STP re-converges.
Topology Change Notice BPDUs are sent when a port that was forwarding
changes to blocking or transitions to forwarding. A TCN BPDU does not initiate an
STP recalculation. It only affects the aging time of the forwarding table entries in
the switch. It will not change the topology of the network or create loops. End
nodes such as servers or clients trigger a topology change when they power off
and then power back on.
Port Fast/Edge Port
To reduce the effect of TCNs on the network (for example, increasing flooding on
switch ports), end nodes that are powered on/off often should use the Port Fast or
Edge Port setting on the switch port they are attached to. Port Fast or Edge Port is
a command that is applied to specific ports and has the following effects:
 Ports coming from link down to link up will be put in the forwarding STP mode instead of going from listening to learning and then to forwarding. STP is still running on these ports.
 The switch does not generate a Topology Change Notice when the port is going up or down.
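For reference, a minimal sketch of this setting, assuming a Cisco IOS access switch (the interface name is illustrative, and other vendors provide an equivalent Edge Port command); consult your switch documentation for the exact syntax:

Switch# configure terminal
Switch(config)# interface GigabitEthernet1/0/10
Switch(config-if)# spanning-tree portfast
Switch(config-if)# end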
Layer 3 Routing/Switching
The switch that the teamed ports are connected to must not be a Layer 3 switch or
router. The ports in the team must be in the same network.
Teaming with Hubs (for troubleshooting purposes only)
 Hub Usage in Teaming Network Configurations
 SLB Teams
 SLB Team Connected to a Single Hub
 Generic and Dynamic Trunking (FEC/GEC/IEEE 802.3ad)
SLB teaming can be used with 10/100 hubs, but it is only recommended for
troubleshooting purposes, such as connecting a network analyzer when switch
port mirroring is not an option.
Hub Usage in Teaming Network Configurations
Although the use of hubs in network topologies is functional in some situations, it
is important to consider the throughput ramifications when doing so. Network hubs
have a maximum of 100Mbps half-duplex link speed, which severely degrades
performance in either a Gigabit or 100Mbps switched-network configuration. Hub
bandwidth is shared among all connected devices; as a result, as more devices are
connected to the hub, the bandwidth available to any single device is reduced in
direct proportion to the number of connected devices.
It is not recommended to connect team members to hubs; only switches should be
used to connect to teamed ports. An SLB team, however, can be connected
directly to a hub for troubleshooting purposes. Other team types can result in a
loss of connectivity if specific failures occur and should not be used with hubs.
SLB Teams
SLB teams are the only teaming type not dependent on switch configuration. The
server intermediate driver handles the load balancing and fault tolerance
mechanisms with no assistance from the switch. These elements of SLB make it
the only team type that maintains failover and fallback characteristics when team
ports are connected directly to a hub.
SLB Team Connected to a Single Hub
SLB teams configured as shown in Figure 15-6 maintain their fault tolerance
properties. Either server connection could fail without affecting the network.
Clients could be connected directly to the hub, and fault tolerance would still be
maintained; server performance, however, would be degraded.
Figure 15-6. Team Connected to a Single Hub
Generic and Dynamic Trunking (FEC/GEC/IEEE 802.3ad)
FEC/GEC and IEEE 802.3ad teams cannot be connected to any hub
configuration. These team types must be connected to a switch that has also
been configured for this team type.
Teaming with Microsoft NLB
Teaming does not work in Microsoft’s Network Load Balancing (NLB) unicast
mode, only in multicast mode. Due to the mechanism used by the NLB service,
the recommended teaming configuration in this environment is Failover (SLB with
a standby NIC) as load balancing is managed by NLB.
Application Considerations
 Teaming and Clustering
 Teaming and Network Backup
Teaming and Clustering
 Microsoft Cluster Software
 High-Performance Computing Cluster
 Oracle
Microsoft Cluster Software
In each cluster node, it is strongly recommended that customers install at least
two network adapters (on-board adapters are acceptable). These interfaces serve
two purposes. One adapter is used exclusively for intra-cluster heartbeat
communications. This is referred to as the private adapter and usually resides on
a separate private subnetwork. The other adapter is used for client
communications and is referred to as the public adapter.
Multiple adapters may be used for each of these purposes: private, intracluster
communications and public, external client communications. All QLogic teaming
modes are supported with Microsoft Cluster Software for the public adapter only.
Private network adapter teaming is not supported. Microsoft indicates that the use
of teaming on the private interconnect of a server cluster is not supported
because of delays that could possibly occur in the transmission and receipt of
heartbeat packets between the nodes. For best results, when you want
redundancy for the private interconnect, disable teaming and use the available
ports to form a second private interconnect. This achieves the same end result
and provides dual, robust communication paths for the nodes to communicate
over.
For teaming in a clustered environment, it is recommended that customers use the
same brand of adapters.
Figure 15-7 shows a 2-node Fibre-Channel cluster with three network interfaces
per cluster node: one private and two public. On each node, the two public
adapters are teamed, and the private adapter is not. Teaming is supported across
the same switch or across two switches. Figure 15-8 shows the same 2-node
Fibre-Channel cluster in this configuration.
Figure 15-7. Clustering With Teaming Across One Switch
NOTE
Microsoft Network Load Balancing is not supported with Microsoft Cluster
Software.
High-Performance Computing Cluster
Gigabit Ethernet is typically used for the following three purposes in
high-performance computing cluster (HPCC) applications:
 Inter-Process Communications (IPC): For applications that do not require low-latency, high-bandwidth interconnects (such as Myrinet, InfiniBand), Gigabit Ethernet can be used for communication between the compute nodes.
 I/O: Ethernet can be used for file sharing and serving the data to the compute nodes. This can be done simply using an NFS server or using parallel file systems such as PVFS.
 Management & Administration: Ethernet is used for out-of-band (ERA) and in-band (OMSA) management of the nodes in the cluster. It can also be used for job scheduling and monitoring.
In our current HPCC offerings, only one of the on-board adapters is used. If
Myrinet or IB is present, this adapter serves I/O and administration purposes;
otherwise, it is also responsible for IPC. In case of an adapter failure, the
administrator can use the Felix package to easily configure adapter 2. Adapter
teaming on the host side is neither tested nor supported in HPCC.
Advanced Features
PXE is used extensively for the deployment of the cluster (installation and
recovery of compute nodes). Teaming is typically not used on the host side and it
is not a part of our standard offering. Link aggregation is commonly used between
switches, especially for large configurations. Jumbo frames, although not a part of
our standard offering, may provide performance improvement for some
applications due to reduced CPU overhead.
Oracle
In our Oracle Solution Stacks, we support adapter teaming in both the private
network (interconnect between RAC nodes) and public network with clients or the
application layer above the database layer.
Figure 15-8. Clustering With Teaming Across Two Switches
Teaming and Network Backup
 Load Balancing and Failover
 Fault Tolerance
When you perform network backups in a nonteamed environment, overall
throughput on a backup server adapter can be easily impacted due to excessive
traffic and adapter overloading. Depending on the number of backup servers, data
streams, and tape drive speed, backup traffic can easily consume a high
percentage of the network link bandwidth, thus impacting production data and
tape backup performance. Network backups usually consist of a dedicated
backup server running with tape backup software such as NetBackup, Galaxy or
Backup Exec. Attached to the backup server is either a direct SCSI tape backup
unit or a tape library connected through a Fibre Channel storage area network
(SAN). Systems that are backed up over the network are typically called clients or
remote servers and usually have a tape backup software agent installed.
Figure 15-9 shows a typical 1Gbps nonteamed network environment with tape
backup implementation.
Figure 15-9. Network Backup without Teaming
Because there are four client servers, the backup server can simultaneously
stream four backup jobs (one per client) to a multidrive autoloader. Because of the
single link between the switch and the backup server, however, a four-stream
backup can easily saturate the adapter and link. If the adapter on the backup
server operates at 1Gbps (125 MBps), and each client is able to stream data at 20
MB/s during tape backup, the throughput between the backup server and switch
will be 80 MB/s (20 MB/s x 4), which is equivalent to 64% of the network
bandwidth. Although this is well within the network bandwidth range, the 64%
constitutes a high percentage, especially if other applications share the same link.
Load Balancing and Failover
As the number of backup streams increases, the overall throughput increases.
Each data stream, however, may not be able to maintain the same performance
as a single backup stream of 25 MB/s. In other words, even though a backup
server can stream data from a single client at 25 MB/s, it is not expected that four
simultaneously-running backup jobs will stream at 100 MB/s (25 MB/s x 4
streams). Although overall throughput increases as the number of backup
streams increases, each backup stream can be impacted by tape software or
network stack limitations.
For a tape backup server to reliably use adapter performance and network
bandwidth when backing up clients, the network infrastructure must implement
teaming features such as load balancing and fault tolerance. Data centers will incorporate
redundant switches, link aggregation, and trunking as part of their fault tolerant
solution. Although teaming device drivers will manipulate the way data flows
through teamed interfaces and failover paths, this is transparent to tape backup
applications and does not interrupt any tape backup process when backing up
remote systems over the network. Figure 15-10 shows a network topology that
demonstrates tape backup in a QLogic teamed environment and how smart load
balancing can load balance tape backup data across teamed adapters.
There are four paths that the client-server can use to send data to the backup
server, but only one of these paths will be designated during data transfer. One
possible path that Client-Server Red can use to send data to the backup server is:
Example Path: Client-Server Red sends data through Adapter A, Switch 1,
Backup Server Adapter A.
The designated path is determined by two factors:
 The Client-Server ARP cache, which points to the backup server MAC address. This is determined by the QLogic intermediate driver inbound load balancing algorithm.
 The physical adapter interface on Client-Server Red that is used to transmit the data. The QLogic intermediate driver outbound load balancing algorithm determines this (see “Outbound Traffic Flow” on page 190 and “Inbound Traffic Flow (SLB Only)” on page 190).
The teamed interface on the backup server transmits a gratuitous address
resolution protocol (G-ARP) to Client-Server Red, which in turn, causes the client
server ARP cache to get updated with the Backup Server MAC address. The load
balancing mechanism within the teamed interface determines the MAC address
embedded in the G-ARP. The selected MAC address is essentially the destination
for data transfer from the client server. On Client-Server Red, the SLB teaming
algorithm will determine which of the two adapter interfaces will be used to
transmit data. In this example, data from Client Server Red is received on the
backup server Adapter A interface. To demonstrate the SLB mechanisms when
additional load is placed on the teamed interface, consider the scenario when the
backup server initiates a second backup operation: one to Client-Server Red, and
one to Client-Server Blue. The route that Client-Server Blue uses to send data to
the backup server is dependent on its ARP cache, which points to the backup
server MAC address. Because Adapter A of the backup server is already under
load from its backup operation with Client-Server Red, the Backup Server invokes
its SLB algorithm to inform Client-Server Blue (through a G-ARP) to update its
ARP cache to reflect the backup server Adapter B MAC address. When
Client-Server Blue needs to transmit data, it uses either one of its adapter
interfaces, which is determined by its own SLB algorithm. What is important is that
data from Client-Server Blue is received by the Backup Server Adapter B
interface, and not by its Adapter A interface. This is important because with both
backup streams running simultaneously, the backup server must load balance
data streams from different clients. With both backup streams running, each
adapter interface on the backup server is processing an equal load, thus
load-balancing data across both adapter interfaces.
The same algorithm applies if a third and fourth backup operation is initiated from
the backup server. The teamed interface on the backup server transmits a unicast
G-ARP to backup clients to inform them to update their ARP cache. Each client
then transmits backup data along a route to the target MAC address on the
backup server.
Fault Tolerance
If a network link fails during tape backup operations, all traffic between the backup
server and client stops and backup jobs fail. If, however, the network topology is
configured for both QLogic SLB and switch fault tolerance, tape backup operations
can continue without interruption during the link failure. All
failover processes within the network are transparent to tape backup software
applications.
To understand how backup data streams are directed during network failover
process, consider the topology in Figure 15-10. Client-Server Red is transmitting
data to the backup server through Path 1, but a link failure occurs between the
backup server and the switch. Because the data can no longer be sent from
Switch #1 to the Adapter A interface on the backup server, the data is redirected
from Switch #1 through Switch #2, to the Adapter B interface on the backup
server. This occurs without the knowledge of the backup application because all
fault tolerant operations are handled by the adapter team interface and trunk
settings on the switches. From the client server perspective, it still operates as if it
is transmitting data through the original path.
Figure 15-10. Network Backup With SLB Teaming Across Two Switches
Troubleshooting Teaming Problems
 Teaming Configuration Tips
 Troubleshooting Guidelines
When running a protocol analyzer over a virtual adapter teamed interface, the
MAC address shown in the transmitted frames may not be correct. The analyzer
does not show the frames as constructed by QLASP and shows the MAC address
of the team and not the MAC address of the interface transmitting the frame. It is
suggested to use the following process to monitor a team:
 Mirror all uplink ports from the team at the switch.
 If the team spans two switches, mirror the interlink trunk as well.
 Sample all mirror ports independently.
 On the analyzer, use an adapter and driver that does not filter QoS and VLAN information.
Teaming Configuration Tips
When troubleshooting network connectivity or teaming issues, ensure that the
following information is true for your configuration.
1. Although mixed-speed SLB teaming is supported, it is recommended that all adapters in a team be the same speed (either all Gigabit Ethernet or all Fast Ethernet). For speeds of 10Gbps, it is highly recommended that all adapters in a team be the same speed.
2. If LiveLink is not enabled, disable Spanning Tree Protocol or enable an STP mode that bypasses the initial phases (for example, Port Fast, Edge Port) for the switch ports connected to a team.
3. All switches that the team is directly connected to must have the same hardware revision, firmware revision, and software revision to be supported.
4. To be teamed, adapters should be members of the same VLAN. When multiple teams are configured, each team should be on a separate network.
5. Do not assign a Locally Administered Address on any physical adapter that is a member of a team.
6. Verify that power management is disabled on all physical members of any team.
7. Remove any static IP address from the individual physical team members before the team is built.
8. A team that requires maximum throughput should use LACP or GEC/FEC. In these cases, the intermediate driver is only responsible for the outbound load balancing while the switch performs the inbound load balancing.
9. Aggregated teams (802.3ad/LACP and GEC/FEC) must be connected to only a single switch that supports IEEE 802.3ad, LACP, or GEC/FEC.
10. It is not recommended to connect any team to a hub, as hubs only support half duplex. Hubs should be connected to a team for troubleshooting purposes only. Disabling the device driver of a network adapter participating in an LACP or GEC/FEC team may have adverse effects on network connectivity. QLogic recommends that the adapter first be physically disconnected from the switch before disabling the device driver to avoid a network connection loss.
11. Verify that the base (miniport) and team (intermediate) drivers are from the same release package.
12. Test the connectivity to each physical adapter prior to teaming.
13. Test the failover and fallback behavior of the team before placing it into a production environment.
14. When moving from a nonproduction network to a production network, it is strongly recommended to test again for failover and fallback.
15. Test the performance behavior of the team before placing it into a production environment.
16. Network teaming is not supported when running iSCSI traffic through the Microsoft iSCSI initiator or iSCSI offload. MPIO should be used instead of QLogic network teaming for these ports.
17. For information on iSCSI boot and iSCSI offload restrictions, see Chapter 8, iSCSI Protocol.
Troubleshooting Guidelines
Before you call support, make sure you have completed the following steps for
troubleshooting network connectivity problems when the server is using adapter
teaming.
1. Make sure the link light is ON for every adapter and all the cables are attached.
2. Check that the matching base and intermediate drivers belong to the same release and are loaded correctly.
3. Check for a valid IP address using the Windows ipconfig command (see the example following this list).
4. Check that STP is disabled or Edge Port/Port Fast is enabled on the switch ports connected to the team or that LiveLink is being used.
5. Check that the adapters and the switch are configured identically for link speed and duplex.
6. If possible, break the team and check for connectivity to each adapter independently to confirm that the problem is directly associated with teaming.
7. Check that all switch ports connected to the team are on the same VLAN.
8. Check that the switch ports are configured properly for Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of teaming and that it matches the adapter teaming type. If the system is configured for an SLB type of team, make sure the corresponding switch ports are not configured for Generic Trunking (FEC/GEC)/802.3ad-Draft Static types of teams.
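As referenced in step 3, a quick way to confirm the IP configuration from the Windows command prompt (adapter names and output vary by system) is:

C:\>ipconfig /all

The output lists each adapter, including the team virtual adapter, with its description, MAC address, IP address, subnet mask, and default gateway. Verify that the valid IP address appears on the team virtual adapter rather than on an individual team member.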
Frequently Asked Questions
Question: Under what circumstances is traffic not load balanced? Why is all traffic
not load balanced evenly across the team members?
Answer: Traffic is not load balanced when the bulk of the traffic is not IP/TCP/UDP,
or when most of the clients are in a different network. Receive load balancing is not
a function of traffic load, but a function of the number of clients that are connected
to the server.
Question: What network protocols are load balanced when in a team?
Answer: QLogic’s teaming software only supports IP/TCP/UDP traffic. All other
traffic is forwarded to the primary adapter.
Question: Which protocols are load balanced with SLB and which ones are not?
Answer: Only IP/TCP/UDP protocols are load balanced in both directions: send
and receive. IPX is load balanced on the transmit traffic only.
Question: Can I team a port running at 100Mbps with a port running at
1000Mbps?
Answer: Mixing link speeds within a team is only supported for Smart Load
Balancing teams and 802.3ad teams.
Question: Can I team a fiber adapter with a copper Gigabit Ethernet adapter?
Answer: Yes with SLB, and yes with FEC/GEC and 802.3ad if the switch allows
for it.
Question: What is the difference between adapter load balancing and Microsoft’s
Network Load Balancing (NLB)?
Answer: Adapter load balancing is done at a network session level, whereas NLB
is done at the server application level.
Question: Can I connect the teamed adapters to a hub?
Answer: Teamed ports can be connected to a hub for troubleshooting purposes
only. However, this practice is not recommended for normal operation because
the performance would be degraded due to hub limitations. Connect the teamed
ports to a switch instead.
Question: Can I connect the teamed adapters to ports in a router?
Answer: No. All ports in a team must be on the same network; in a router,
however, each port is a separate network by definition. All teaming modes require
that the link partner be a Layer 2 switch.
Question: Can I use teaming with Microsoft Cluster Services?
Answer: Yes. Teaming is supported on the public network only, not on the private
network used for the heartbeat link.
Question: Can PXE work over a virtual adapter (team)?
Answer: A PXE client operates in an environment before the operating system is
loaded; as a result, virtual adapters have not been enabled yet. If the physical
adapter supports PXE, then it can be used as a PXE client, whether or not it is
part of a virtual adapter when the operating system loads. PXE servers may
operate over a virtual adapter.
Question: What is the maximum number of ports that can be teamed together?
Answer: Up to 16 ports can be assigned to a team, of which one port can be a
standby team member.
Question: What is the maximum number of teams that can be configured on the
same server?
Answer: Up to 16 teams can be configured on the same server.
Question: Why does my team lose connectivity for the first 30 to 50 seconds
after the Primary adapter is restored (fallback)?
Answer: Because Spanning Tree Protocol is bringing the port from blocking to
forwarding. You must enable Port Fast or Edge Port on the switch ports connected
to the team or use LiveLink to account for the STP delay.
Question: Can I connect a team across multiple switches?
Answer: Smart Load Balancing can be used with multiple switches because each
physical adapter in the system uses a unique Ethernet MAC address. Link
Aggregation and Generic Trunking cannot operate across switches because they
require all physical adapters to share the same Ethernet MAC address.
Question: How do I upgrade the intermediate driver (QLASP)?
Answer: The intermediate driver cannot be upgraded through the Local Area
Connection Properties. It must be upgraded using the QLogic Setup installer.
Question: How can I determine the performance statistics on a virtual adapter
(team)?
Answer: In the QCC GUI, click the Statistics tab for the virtual adapter.
Question: Can I configure NLB and teaming concurrently?
Answer: Yes, but only when running NLB in a multicast mode (NLB is not
supported with MS Cluster Services).
Question: Should both the backup server and client servers that are backed up
be teamed?
Answer: Because the backup server is under the most data load, it should always
be teamed for link aggregation and failover. A fully redundant network, however,
requires that both the switches and the backup clients be teamed for fault
tolerance and link aggregation.
Question: During backup operations, does the adapter teaming algorithm load
balance data at a byte-level or a session-level?
Answer: When using adapter teaming, data is only load balanced at a session
level and not a byte level to prevent out-of-order frames. Adapter teaming load
balancing does not work the same way as other storage load balancing
mechanisms such as EMC PowerPath.
Question: Is there any special configuration required in the tape backup software
or hardware to work with adapter teaming?
Answer: No special configuration is required in the tape software to work with
teaming. Teaming is transparent to tape backup applications.
Question: How do I know what driver I am currently using?
Answer: In all operating systems, the most accurate method for checking the
driver revision is to physically locate the driver file and check the properties.
Question: Can SLB detect a switch failure in a Switch Fault Tolerance
configuration?
Answer: No. SLB can only detect the loss of link between the teamed port and its
immediate link partner. SLB cannot detect link failures on other ports.
Question: Where can I get the latest supported drivers?
Answer: Go to driverdownloads.qlogic.com for driver package updates or support
documents.
Question: Why does my team lose connectivity for the first 30 to 50 seconds after
the primary adapter is restored (fall-back after a failover)?
Answer: During a fall-back event, link is restored causing Spanning Tree Protocol
to configure the port for blocking until it determines that it can move to the
forwarding state. You must enable Port Fast or Edge Port on the switch ports
connected to the team to prevent the loss of communications caused by STP.
Question: Where do I monitor real time statistics for an adapter team in a
Windows server?
Answer: Use the QCC GUI to monitor general, IEEE 802.3 and custom counters.
Question: What features are not supported on a multivendor team?
Answer: VLAN tagging and RSS are not supported on a multivendor team.
Event Log Messages
 Windows System Event Log Messages
 Base Driver (Physical Adapter/Miniport)
 Intermediate Driver (Virtual Adapter/Team)
 Virtual Bus Driver
Windows System Event Log Messages
The known base and intermediate Windows System Event Log status messages
for the QLogic 8400/3400 Series adapters are listed in Table 15-7 and Table 15-8.
As a QLogic adapter driver loads, Windows places a status code in the system
event viewer. There may be up to two classes of entries for these event codes
depending on whether both drivers are loaded (one set for the base or miniport
driver and one set for the intermediate or teaming driver).
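If it is more convenient to query these entries from a command prompt than from the Windows Event Viewer, a sketch using the built-in wevtutil utility follows; the source name L2ND is the base driver source described below, and the exact query syntax is an assumption that may need adjusting for your Windows version:

C:\>wevtutil qe System /q:"*[System[Provider[@Name='L2ND']]]" /rd:true /c:10 /f:text

This returns the ten most recent System log entries recorded by the base driver in plain text.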
Base Driver (Physical Adapter/Miniport)
The base driver is identified by source L2ND. Table 15-7 lists the event log
messages supported by the base driver, explains the cause for the message, and
provides the recommended action.
NOTE
In Table 15-7, message numbers 1 through 17 apply to both NDIS 5.x and
NDIS 6.x drivers; message numbers 18 through 23 apply only to the NDIS
6.x driver.
Table 15-7. Base Driver Event Log Messages
(Columns: Message Number, Severity, Message, Cause, Corrective Action)
1
Error
Failed to allocate
memory for the
device block. Check
system memory
resource usage.
The driver cannot allocate memory from the
operating system.
Close running applications to free memory.
2
Error
Failed to allocate
map registers.
The driver cannot allocate map registers from
the operating system.
Unload other drivers
that may allocate map
registers.
3
Error
Failed to access
configuration information. Reinstall the
network driver.
The driver cannot
access PCI configuration space registers on
the adapter.
For add-in adapters:
reseat the adapter in
the slot, move the
adapter to another PCI
slot, or replace the
adapter.
4
Warning
The network link is
down. Check to
make sure the network cable is properly connected.
The adapter has lost its
connection with its link
partner.
Check that the network
cable is connected, verify that the network
cable is the right type,
and verify that the link
partner (for example,
switch or hub) is working correctly.
5
Informational
The network link is
up.
The adapter has established a link.
No action is required.
6
Informational
Network controller
configured for 10Mb
half-duplex link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
7
Informational
Network controller
configured for 10Mb
full-duplex link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
8
Informational
Network controller
configured for
100Mb half-duplex
link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
9
Informational
Network controller
configured for
100Mb full-duplex
link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
10
Informational
Network controller
configured for 1Gb
half-duplex link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
11
Informational
Network controller
configured for 1Gb
full-duplex link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
12
Informational
Network controller
configured for 2.5Gb
full-duplex link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
13
Error
Medium not supported.
The operating system
does not support the
IEEE 802.3 medium.
Reboot the operating
system, run a virus
check, run a disk check
(chkdsk), and reinstall
the operating system.
14
Error
Unable to register
the interrupt service
routine.
The device driver cannot install the interrupt
handler.
Reboot the operating
system; remove other
device drivers that may
be sharing the same
IRQ.
15
Error
Unable to map IO
space.
The device driver cannot allocate memory-mapped I/O to
access driver registers.
Remove other adapters from the system,
reduce the amount of
physical memory
installed, and replace
the adapter.
16
Informational
Driver initialized successfully.
The driver has successfully loaded.
No action is required.
17
Informational
NDIS is resetting the
miniport driver.
The NDIS layer has
detected a problem
sending/receiving packets and is resetting the
driver to resolve the
problem.
Run QCC GUI diagnostics; check that the network cable is good.
18
Error
Unknown PHY
detected. Using a
default PHY initialization routine.
The driver could not
read the PHY ID.
Replace the adapter.
19
Error
This driver does not
support this device.
Upgrade to the latest
driver.
The driver does not
recognize the installed
adapter.
Upgrade to a driver version that supports this
adapter.
20
Error
Driver initialization
failed.
Unspecified failure
during driver initialization.
Reinstall the driver,
update to a newer
driver, run QCC GUI
diagnostics, or replace
the adapter.
21
Informational
Network controller
configured for 10Gb
full-duplex link.
The adapter has been
manually configured for
the selected line speed
and duplex settings.
No action is required.
22
Error
Network controller
failed initialization
because it cannot
allocate system
memory.
Insufficient system
memory prevented the
initialization of the
driver.
Increase system memory.
23
Error
Network controller
failed to exchange
the interface with the
bus driver.
The driver and the bus
driver are not compatible.
Update to the latest
driver set, ensuring the
major and minor versions for both NDIS and
the bus driver are the
same.
Intermediate Driver (Virtual Adapter/Team)
The intermediate driver is identified by source BLFM, regardless of the base
driver revision. Table 15-8 lists the event log messages supported by the
intermediate driver, explains the cause for the message, and provides the
recommended action.
Table 15-8. Intermediate Driver Event Log Messages
(Columns: System Event Message Number, Severity, Message, Cause, Corrective Action)
1
Informational
Event logging enabled
for QLASP driver.
–
No action is required.
2
Error
Unable to register with
NDIS.
The driver cannot register with the NDIS
interface.
Unload other NDIS
drivers.
3
Error
Unable to instantiate
the management interface.
The driver cannot create a device instance.
Reboot the operating
system.
4
Error
Unable to create symbolic link for the management interface.
Another driver has created a conflicting
device name.
Unload the conflicting
device driver that uses
the name Blf.
5
Informational
QLASP Driver has
started.
The driver has started.
No action is required.
6
Informational
QLASP Driver has
stopped.
The driver has
stopped.
No action is required.
7
Error
Could not allocate
memory for internal
data structures.
The driver cannot allocate memory from the
operating system.
Close running applications to free memory.
8
Warning
Could not bind to
adapter.
The driver could not
open one of the team
physical adapters.
Unload and reload the
physical adapter
driver, install an
updated physical
adapter driver, or
replace the physical
adapter.
9
Informational
Successfully bind to
adapter.
The driver successfully opened the physical adapter.
No action is required.
10
Warning
Network adapter is
disconnected.
The physical adapter
is not connected to the
network (it has not
established link).
Check that the network cable is connected, verify that the
network cable is the
right type, and verify
that the link partner
(switch or hub) is
working correctly.
11
Informational
Network adapter is
connected.
The physical adapter
is connected to the
network (it has established link).
No action is required.
12
Error
QLASP Features
Driver is not designed
to run on this version
of Operating System.
The driver does not
support the operating
system on which it is
installed.
Consult the driver
release notes and
install the driver on a
supported operating
system or update the
driver.
13
Informational
Hot-standby adapter is
selected as the primary adapter for a
team without a load
balancing adapter.
A standby adapter has
been activated.
Replace the failed
physical adapter.
14
Informational
Network adapter does
not support Advanced
Failover.
The physical adapter
does not support the
QLogic NIC Extension
(NICE).
Replace the adapter
with one that does
support NICE.
15
Informational
Network adapter is
enabled through management interface.
The driver has successfully enabled a
physical adapter
through the management interface.
No action is required.
16
Warning
Network adapter is
disabled through management interface.
The driver has successfully disabled a
physical adapter
through the management interface.
No action is required.
17
Informational
Network adapter is
activated and is participating in network traffic.
A physical adapter has
been added to or activated in a team.
No action is required.
18
Informational
Network adapter is
de-activated and is no
longer participating in
network traffic.
The driver does not
recognize the installed
adapter.
No action is required.
19
Informational
The LiveLink feature in
QLASP connected the
link for the network
adapter.
The connection with
the remote target(s)
for the
LiveLink-enabled team
member has been
established or restored.
No action is required.
20
Informational
The LiveLink feature in
QLASP disconnected
the link for the network
adapter.
The LiveLink-enabled
team member is
unable to connect with
the remote target(s).
No action is required.
Virtual Bus Driver
Table 15-9. VBD Event Log Messages
(Columns: Message Number, Severity, Message, Cause, Corrective Action)
1
Error
Failed to allocate
memory for the device
block. Check system
memory resource
usage.
The driver cannot allocate memory from the
operating system.
Close running applications to free memory.
2
Informational
The network link is
down. Check to make
sure the network cable
is properly connected.
The adapter has lost
its connection with its
link partner.
Check that the network cable is connected, verify that the
network cable is the
right type, and verify
that the link partner
(for example, switch or
hub) is working correctly.
3
Informational
The network link is up.
The adapter has
established a link.
No action is required.
4
Informational
Network controller
configured for 10Mb
half-duplex link.
The adapter has been
manually configured
for the selected line
speed and duplex settings.
No action is required.
5
Informational
Network controller
configured for 10Mb
full-duplex link.
The adapter has been
manually configured
for the selected line
speed and duplex settings.
No action is required.
6
Informational
Network controller
configured for 100Mb
half-duplex link.
The adapter has been
manually configured
for the selected line
speed and duplex settings.
No action is required.
7
Informational
Network controller
configured for 100Mb
full-duplex link.
The adapter has been
manually configured
for the selected line
speed and duplex settings.
No action is required.
8
Informational
Network controller
configured for 1Gb
half-duplex link.
The adapter has been
manually configured
for the selected line
speed and duplex settings.
No action is required.
9
Informational
Network controller
configured for 1Gb
full-duplex link.
The adapter has been
manually configured
for the selected line
speed and duplex settings.
No action is required.
10
Error
Unable to register the
interrupt service routine.
The device driver cannot install the interrupt
handler.
Reboot the operating
system; remove other
device drivers that
may be sharing the
same IRQ.
11
Error
Unable to map IO
space.
The device driver cannot allocate memory-mapped
I/O to access driver
registers.
Remove other adapters from the system,
reduce the amount of
physical memory
installed, and replace
the adapter.
12
Informational
Driver initialized successfully.
The driver has successfully loaded.
No action is required.
13
Error
Driver initialization
failed.
Unspecified failure
during driver initialization.
Reinstall the driver,
update to a newer
driver, run QCC GUI
diagnostics, or replace
the adapter.
14
Error
This driver does not
support this device.
Upgrade to the latest
driver.
The driver does not
recognize the installed
adapter.
Upgrade to a driver
version that supports
this adapter.
15
Error
This driver fails initialization because the
system is running out
of memory.
Insufficient system
memory prevented the
initialization of the
driver.
Increase system memory.
16 Configuring Teaming in Windows Server
 QLASP Overview
 Load Balancing and Fault Tolerance
NOTE
This chapter describes teaming for adapters in Windows Server systems.
For more information on a similar technology on Linux operating systems
(called “Channel Bonding”), refer to your operating system documentation.
QLASP Overview
QLASP is the QLogic teaming software for the Windows family of operating
systems. QLASP settings are configured by QCC GUI.
QLASP provides heterogeneous support for adapter teaming that includes QLogic
8400/3400 Series adapters and QLogic-shipped Intel NIC adapters/LOMs.
QLASP supports four types of teams for Layer 2 teaming:
 Smart Load Balancing and Failover with Auto-Fallback Enabled (SLB)
 Link Aggregation (802.3ad)
 Generic Trunking (FEC/GEC)/802.3ad-Draft Static
 SLB (with Auto-Fallback Disable)
For more information on network adapter teaming concepts, see Chapter 15,
QLogic Teaming Services.
NOTE
Windows Server 2012 and later provide built-in teaming support, called NIC
Teaming. It is not recommended that users enable teams through NIC
Teaming and QLASP at the same time on the same adapters.
Load Balancing and Fault Tolerance
Teaming provides traffic load balancing and fault tolerance (redundant adapter
operation when a network connection fails). When multiple Ethernet network
adapters are installed in the same system, they can be grouped into teams,
creating a virtual adapter.
A team can consist of two to eight network interfaces, and each interface can be
designated as a primary interface or a standby interface (standby interfaces can
be used only in a “Smart Load Balancing and Failover” on page 235 type of team,
and only one standby interface can be designated per SLB team). If traffic is not
identified on any of the adapter team member connections due to failure of the
adapter, cable, switch port, or switch (where the teamed adapters are attached to
separate switches), the load distribution is reevaluated and reassigned among the
remaining team members. If all of the primary adapters are down, the hot standby
adapter becomes active. Existing sessions are maintained and there is no impact
on the user.
NOTE
Although a team can be created with one adapter, it is not recommended
since this defeats the purpose of teaming. A team consisting of one adapter
is automatically created when setting up VLANs on a single adapter, and this
should be the only case in which a team is created with one adapter.
Types of Teams
The available types of teams for the Windows family of operating systems are:
 Smart Load Balancing and Failover with Auto-Fallback Enabled (SLB)
 Link Aggregation (802.3ad)
 Generic Trunking (FEC/GEC)/802.3ad-Draft Static
 SLB (with Auto-Fallback Disable)
Smart Load Balancing and Failover
Smart Load Balancing and Failover is the QLogic implementation of load
balancing based on IP flow. This feature supports balancing IP traffic across
multiple adapters (team members) in a bidirectional manner. In this type of team,
all adapters in the team have separate MAC addresses. This type of team
provides automatic fault detection and dynamic failover to another team member or
to a hot standby member. This is done independently of Layer 3 protocol (IP, IPX,
NetBEUI); rather, it works with existing Layer 2 and 3 switches. No switch
configuration (such as trunk, link aggregation) is necessary for this type of team to
work.
NOTE
 If you do not enable LiveLink when configuring SLB teams, disabling STP or enabling Port Fast at the switch or port is recommended. This minimizes the downtime due to spanning tree loop determination when failing over. LiveLink mitigates such issues.
 TCP/IP is fully balanced and IPX balances only on the transmit side of the team; other protocols are limited to the primary adapter.
 If a team member is linked at a higher speed than another, most of the traffic is handled by the adapter with the higher speed rate.
Link Aggregation (802.3ad)
This mode supports link aggregation and conforms to the IEEE 802.3ad (LACP)
specification. Configuration software allows you to dynamically configure which
adapters you want to participate in a given team. If the link partner is not correctly
configured for 802.3ad link configuration, errors are detected and noted. With this
mode, all adapters in the team are configured to receive packets for the same
MAC address. The outbound load-balancing scheme is determined by our QLASP
driver. The team link partner determines the load-balancing scheme for inbound
packets. In this mode, at least one of the link partners must be in active mode.
NOTE
Link aggregation team type is not supported on ports with NPAR mode
enabled or iSCSI-Offload enabled. Some switches support FCoE-Offload in
LACP teaming mode. Consult your switch documentation for more
information.
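For reference, a minimal switch-side sketch, assuming a Cisco IOS switch with illustrative interface and channel-group numbers (a statically configured Generic Trunking team would use channel-group 1 mode on instead of mode active); consult your switch documentation for the exact syntax:

Switch(config)# interface range GigabitEthernet1/0/1 - 2
Switch(config-if-range)# channel-group 1 mode active
Switch(config-if-range)# end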
Generic Trunking (FEC/GEC)/802.3ad-Draft Static
The Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of team is very similar
to the Link Aggregation (802.3ad) type of team in that all adapters in the team are
configured to receive packets for the same MAC address. The Generic Trunking
(FEC/GEC)/802.3ad-Draft Static type of team, however, does not provide LACP
or marker protocol support. This type of team supports a variety of environments
in which the adapter link partners are statically configured to support a proprietary
trunking mechanism. For instance, this type of team could be used to support
Lucent’s OpenTrunk or Cisco’s Fast EtherChannel (FEC). Basically, this type of
team is a light version of the Link Aggregation (802.3ad) type of team. This
approach is much simpler, in that there is not a formalized link aggregation control
protocol (LACP). As with the other types of teams, the creation of teams and the
allocation of physical adapters to various teams is done statically through user
configuration software.
The Generic Trunking (FEC/GEC/802.3ad-Draft Static) type of team supports load
balancing and failover for both outbound and inbound traffic.
NOTE
Generic Trunk (FEC/GEC/802.3ad Draft Static) team type is not supported
for ports with NPAR mode or FCoE-Offload or iSCSI-Offload enabled.
SLB (Auto-Fallback Disable)
The SLB (Auto-Fallback Disable) type of team is identical to the Smart Load
Balancing and Failover type of team, with the following exception—when the
standby member is active, if a primary member comes back on line, the team
continues using the standby member, rather than switching back to the primary
member.
All primary interfaces in a team participate in load-balancing operations by
sending and receiving a portion of the total traffic. Standby interfaces take over
when all primary interfaces have lost their links.
Failover teaming provides redundant adapter operation (fault tolerance) when a
network connection fails. If the primary adapter in a team is disconnected because
of failure of the adapter, cable, or switch port, the secondary team member
becomes active, redirecting both inbound and outbound traffic originally assigned
to the primary adapter. Sessions will be maintained, causing no impact to the user.
Limitations of Smart Load Balancing and Failover/SLB
(Auto-Fallback Disable) Types of Teams
Smart Load Balancing is a protocol-specific scheme. The level of support for IP,
IPX, and NetBEUI protocols is listed in Table 16-1.
Table 16-1. Smart Load Balancing

                               Failover/Fallback —          Failover/Fallback —
                               All QLogic                   Multivendor
Operating System               IP     IPX    NetBEUI        IP     IPX    NetBEUI
Windows Server 2008            Y      Y      N/S            Y      N      N/S
Windows Server 2008 R2         Y      Y      N/S            Y      N      N/S
Windows Server 2012/2012 R2    Y      Y      N/S            Y      N      N/S

                               Load Balance —               Load Balance —
                               All QLogic                   Multivendor
Operating System               IP     IPX    NetBEUI        IP     IPX    NetBEUI
Windows Server 2008            Y      Y      N/S            Y      N      N/S
Windows Server 2008 R2         Y      Y      N/S            Y      N      N/S
Windows Server 2012/2012 R2    Y      Y      N/S            Y      N      N/S

Legend: Y = yes, N = no, N/S = not supported
The Smart Load Balancing type of team works with all Ethernet switches without
having to configure the switch ports to any special trunking mode. Only IP traffic is
load-balanced in both inbound and outbound directions. IPX traffic is
load-balanced in the outbound direction only. Other protocol packets are sent and
received through one primary interface only. Failover for non-IP traffic is
supported only for QLogic network adapters. The Generic Trunking type of team
requires the Ethernet switch to support some form of port trunking mode (for
example, Cisco's Gigabit EtherChannel or other switch vendor's Link Aggregation
mode). The Generic Trunking type of team is protocol-independent, and all traffic
should be load-balanced and fault-tolerant.
NOTE
If you do not enable LiveLink when configuring SLB teams, disabling STP or
enabling Port Fast at the switch is recommended. This minimizes the
downtime due to the spanning tree loop determination when failing over.
LiveLink mitigates such issues.
Teaming and Large Send Offload/Checksum Offload Support
Large Send Offload (LSO) and Checksum Offload are enabled for a team only
when all of the members support and are configured for the feature.
17 User Diagnostics in DOS
 Introduction
 System Requirements
 Performing Diagnostics
 Diagnostic Test Descriptions
Introduction
QLogic 8400/3400 Series User Diagnostics is an MS-DOS based application that
runs a series of diagnostic tests (see Table 17-2) on the QLogic 8400/3400 Series
network adapters in your system. QLogic 8400/3400 Series User Diagnostics also
allows you to update device firmware and to view and change settings for
available adapter properties.
To run QLogic 8400/3400 Series User Diagnostics, create an MS-DOS 6.22
bootable disk containing the uediag.exe file. Next, start the system with the boot
disk in drive A. See “Performing Diagnostics” on page 240 for further instructions
on running diagnostic tests on QLogic network adapters.
System Requirements
Operating System: MS-DOS 6.22
Software: uediag.exe
Performing Diagnostics
At the MS-DOS prompt, type uediag followed by the command options. The
uediag command options are shown in Table 17-1. For example, to run all
diagnostic tests on adapter #1 except Group B tests:
C:\>uediag -c 1 -t b
NOTE
You must include uediag at the beginning of the command string each
time you type a command.
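As an additional illustration (the option combination and log file name are illustrative; each option used is documented in Table 17-1), the following command runs all tests on adapter #1, continues after any failure, and saves the results to a log file:

C:\>uediag -c 1 -cof -log uediag.log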
Table 17-1. uediag Command Options
(Columns: Command Options, Description)
uediag
Performs all tests on all QLogic 8400/3400 Series adapters in
your system.
uediag -c
<device#>
Specifies the adapter (device#) to test. Similar to -dev (for backward compatibility).
uediag -cof
Allows tests to continue after detecting a failure.
uediag -dev
<device#>
Specifies the adapter (device#) to test.
uediag -F
Forces an upgrade of the image without checking the version.
uediag -fbc
<bc_image>
Specifies the bin file to update the bootcode.
uediag -fbc1
<bc1_image>
Specifies the bin file to update bootcode 1.
uediag -fbc2
<bc2_image>
Specifies the bin file to update bootcode 2.
uediag -fl2b
<l2b_image>
Specifies the bin file for L2B firmware.
uediag -fib
<ib_image>
Specifies the bin file for iSCSI boot.
uediag -fibc
Programs iSCSI configuration block 0. Used only with -fib
<ib_image>.
uediag -fibc2
Programs iSCSI configuration block 1. Used only with -fib
<ib_image>.
uediag -fibp
Programs iSCSI configuration software. Used only with -fib
<ib_image>.
uediag -fmba
<mba_image>
Specifies the bin file to update the MBA.
uediag -fnvm
<raw_image>
Programs the raw image into NVM.
uediag -fump
<ump_image>
Specifies the bin file to update UMP firmware.
uediag -help
Displays the QLogic 8400/3400 Series User Diagnostics (uediag)
command options.
uediag -I
<iteration#>
Specifies the number of iterations to run on the selected tests.
uediag -idmatch
Enables matching of VID, DID, SVID, and SSID from the image
file with device IDs: Used only with -fnvm <raw_image>.
uediag -log
<logfile>
Logs the tests results to a specified log file.
uediag -mba
<1/0>
Enables/disables Multiple Boot Agent (MBA) protocol.
1 = Enable
0 = Disable
uediag -mbap
<n>
Sets the MBA boot protocol.
0 = PXE
1 = RPL
2 = BOOTP
3 = iSCSI_Boot
4 = FCoE_Boot
7 = None
uediag -mbav
<1/0>
Enables/disables MBA VLAN.
1 = Enable
0 = Disable
uediag -mbavval
<n>
Sets MBA VLAN (<65536).
uediag -mfw
<1/0>
Enables/disables management firmware.
1 = Enable
0 = Disable
uediag -t
<groups/tests>
Disables certain groups/tests.
uediag -T
<groups/tests>
Enables certain groups/tests.
uediag -ver
Displays the version of QLogic 8400/3400 Series User Diagnostics (uediag) and all installed adapters.
Diagnostic Test Descriptions
The diagnostic tests are divided into four groups: Basic Functional Tests (Group
A), Memory Tests (Group B), Block Tests (Group C), and Ethernet Traffic Tests
(Group D). The diagnostic tests are listed and described in Table 17-2.
Table 17-2. Diagnostic Tests
(Columns: Test Number, Name, Description)
Group A: Basic Functional Tests
A1
Register
Verifies that registers accessible through the PCI/PCIe
interface implement the expected read-only or
read/write attributes by attempting to modify those registers.
A2
PCI Configuration
Checks the PCI Base Address Register (BAR) by varying the amount of memory requested by the BAR and
verifying that the BAR actually requests the correct
amount of memory (without actually mapping the BAR
into system memory). Refer to PCI or PCI-E specifications for details on the BAR and its addressing space.
A3
Interrupt
Generates a PCI interrupt and verifies that the system
receives the interrupt and invokes the correct ISR. A
negative test is also performed to verify that a masked
interrupt does not invoke the ISR.
A5
MSI
Verifies that a Message Signaled Interrupt (MSI)
causes an MSI message to be DMA’d to host memory.
A negative test is also performed to verify that when an
MSI is masked, it does not write an MSI message to
host memory.
A6
Memory BIST
Invokes the internal chip Built-In Self Test (BIST) command to test internal memory.
Group B: Memory Tests
B1       TXP Scratchpad
B2       TPAT Scratchpad
B3       RXP Scratchpad
B4       COM Scratchpad
B5       CP Scratchpad
B6       MCP Scratchpad
B7       TAS Header Buffer
B8       TAS Payload Buffer
B9       RBUF via GRC
B10      RBUF via Indirect Access
B11      RBUF Cluster List
B12      TSCH List
B13      CSCH List
B14      RV2P Scratchpads
B15      TBDC Memory
B16      RBDC Memory
B17      CTX Page Table
B18      CTX Memory
The Group B tests verify all memory blocks of the QLogic 8400/3400 Series adapters by writing various data patterns (0x55aa55aa, 0xaa55aa55, walking zeros, walking ones, address, and so on) to each memory location, reading back the data, and then comparing it to the value written. The fixed data patterns are used to ensure that no memory bit is stuck high or low, while the walking zeros/ones and address tests are used to ensure that memory writes do not corrupt adjacent memory locations.
Group C: Block Tests
C1       CPU Logic and DMA Interface      Verifies the basic logic of all the on-chip CPUs. It also exercises the DMA interface exposed to those CPUs. The internal CPU tries to initiate DMA activities (both read and write) to system memory and then compares the values to confirm that the DMA operation completed successfully.
C2       RBUF Allocation                  Verifies the RX buffer (RBUF) allocation interface by allocating and releasing buffers and checking that the RBUF block maintains an accurate count of the allocated and free buffers.
C3       CAM Access                       Verifies the content-addressable memory (CAM) block by performing read, write, add, modify, and cache hit tests on the CAM associative memory.
C4       TPAT Cracker                     Verifies the packet cracking logic block (the ability to parse TCP, IP, and UDP headers within an Ethernet frame) and the checksum/CRC offload logic. In this test, packets are submitted to the chip as if they were received over Ethernet, and the TPAT block cracks the frame (identifying the TCP, IP, and UDP header data structures) and calculates the checksum/CRC. The TPAT block results are compared with the values expected by QLogic 8400/3400 Series User Diagnostics, and any errors are displayed.
C5       FIO Register                     Verifies the Fast IO (FIO) register interface that is exposed to the internal CPUs.
C6       NVM Access and Reset-Corruption  Verifies non-volatile memory (NVM) accesses (both read and write) initiated by one of the internal CPUs. It tests for appropriate access arbitration among multiple entities (CPUs). It also checks for possible NVM corruption by issuing a chip reset while the NVM block is servicing data.
C7       Core-Reset Integrity             Verifies that the chip performs its reset operation correctly by resetting the chip multiple times and checking that the bootcode and the internal uxdiag driver load and unload correctly.
C8       DMA Engine          Verifies the DMA engine block by performing numerous DMA read and write operations to various system and internal memory locations (and byte boundaries) with varying lengths (from 1 byte to over 4 KB, crossing the physical page boundary) and different data patterns (incremental, fixed, and random). CRC checks are performed to ensure data integrity. The DMA write test also verifies that DMA writes do not corrupt the neighboring host memory.
C9       VPD                 Exercises the Vital Product Data (VPD) interface using PCI configuration cycles and requires a proper bootcode to be programmed into the non-volatile memory. If no VPD data is present (the VPD NVM area is all 0s), the test first initializes the VPD data area with non-zero data before starting the test and restores the original data after the test completes.
C11      FIO Events          Verifies that the event bits in the CPU's Fast IO (FIO) interface trigger correctly when particular chip events occur, such as a VPD request initiated by the host, an expansion ROM request initiated by the host, a timer event generated internally, toggling any GPIO bits, or accessing NVM.
Group D: Ethernet Traffic Tests
D1       MAC Loopback        Enables MAC loopback mode in the adapter and transmits 5000 Layer 2 packets of various sizes. As the packets are received back by QLogic 8400/3400 Series User Diagnostics, they are checked for errors. Packets are returned through the MAC receive path and never reach the PHY. The adapter should not be connected to a network.
D2       PHY Loopback        Enables PHY loopback mode in the adapter and transmits 5000 Layer 2 packets of various sizes. As the packets are received back by QLogic 8400/3400 Series User Diagnostics, they are checked for errors. Packets are returned through the PHY receive path and never reach the wire. The adapter should not be connected to a network.
D4       LSO                 Verifies the adapter's Large Send Offload (LSO) support by enabling MAC loopback mode and transmitting large TCP packets. As the packets are received back by QLogic 8400/3400 Series User Diagnostics, they are checked for proper segmentation (according to the selected MSS size) and any other errors. The adapter should not be connected to a network.
D5       EMAC Statistics     Verifies that the basic statistics information maintained by the chip is correct by enabling MAC loopback mode and sending Layer 2 packets of various sizes. The adapter should not be connected to a network.
D6       RPC                 Verifies the Receive Path Catch-up (RPC) block by sending packets to different transmit chains. The packets traverse the RPC logic (though not the entire MAC block) and return to the receive buffers as received packets. This is another loopback path that is used by Layer 4 and Layer 5 traffic within the MAC block. As packets are received back by QLogic 8400/3400 Series User Diagnostics, they are checked for errors. The adapter should not be connected to a network.
18  Troubleshooting

• Hardware Diagnostics
• Checking Port LEDs
• Troubleshooting Checklist
• Checking if Current Drivers are Loaded
• Possible Problems and Solutions
Hardware Diagnostics
Loopback diagnostic tests are available for testing the adapter hardware. These tests provide access to the adapter's internal and external diagnostics, where packet information is transmitted across the physical link (for instructions and information on running these tests in an MS-DOS environment, see Chapter 17, User Diagnostics in DOS).
QCC GUI Diagnostic Test Failures
If any of the following tests fail while you are running diagnostics from the QCC GUI, the failure may indicate a hardware issue with the NIC or LOM installed in the system:
• Control Registers
• MII Registers
• EEPROM
• Internal Memory
• On-Chip CPU
• Interrupt
• Loopback - MAC
• Loopback - PHY
• Test LED
The following troubleshooting steps may help correct the failure:
1. Remove the failing device and reseat it in the slot, ensuring that the card is firmly seated in the slot from front to back.
2. Rerun the test.
3. If the card still fails, replace it with a different card of the same model and run the test. If the test passes on the known good card, contact your hardware vendor for assistance with the failing device.
4. Power down the machine, remove AC power from the machine, and then reboot the system.
5. Remove and reinstall the diagnostic software.
6. Contact your hardware vendor.
QCC Network Test Failures
Typically, QCC network test failures are the result of a configuration problem on the network or with the IP addresses. The following are common steps to perform when troubleshooting the network:
1. Verify that the cable is attached and that you have a proper link.
2. Verify that the drivers are loaded and enabled.
3. Replace the cable that is attached to the NIC or LOM.
4. Verify that the IP address is assigned correctly, using the ipconfig command or the OS IP assignment tool (example commands are shown after this list).
5. Verify that the IP address is correct for the network to which the adapter(s) is connected.
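For example, the current address assignment can be checked from a command prompt; the interface name eth2 below is only a placeholder for the adapter's actual interface:

Windows:   ipconfig /all
Linux:     ip addr show dev eth2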
Checking Port LEDs
See “Adapter Specifications” on page 7 to check the state of the network link and
activity.
Troubleshooting Checklist
CAUTION
Before you open the cabinet of your server to add or remove the adapter, review "Safety Precautions" on page 9.
The following checklist provides recommended actions to take to resolve problems installing the QLogic 8400/3400 Series adapters or running them in your system.
• Inspect all cables and connections. Verify that the cable connections at the network adapter and the switch are attached properly. Verify that the cable length and rating comply with the requirements listed in "Connecting the Network Cables" on page 11.
• Check the adapter installation by reviewing "Installation of the Network Adapter" on page 10. Verify that the adapter is properly seated in the slot. Check for specific hardware problems, such as obvious damage to board components or the PCI edge connector.
• Check the configuration settings and change them if they are in conflict with another device.
• Verify that your server is using the latest BIOS.
• Try inserting the adapter in another slot. If the new position works, the original slot in your system may be defective.
• Replace the failed adapter with one that is known to work properly. If the second adapter works in the slot where the first one failed, the original adapter is probably defective.
• Install the adapter in another functioning system and run the tests again. If the adapter passes the tests in the new system, the original system may be defective.
• Remove all other adapters from the system and run the tests again. If the adapter passes the tests, the other adapters may be causing contention.
Checking if Current Drivers are Loaded
Windows
Use the QConvergeConsole GUI to view vital information about the adapter, link
status, and network connectivity.
Linux
To verify that the bnx2.o driver is loaded properly, run:
lsmod | grep -i <module name>
If the driver is loaded, the output of this command shows the size of the driver in
bytes and the number of adapters configured and their names. The following
example shows the drivers loaded for the bnx2/bnx2x module:
[root@test1]# lsmod | grep -i bnx2
bnx2                  199238  0
bnx2fc                133775  0
libfcoe                39764  2 bnx2fc,fcoe
libfc                 108727  3 bnx2fc,fcoe,libfcoe
scsi_transport_fc      55235  3 bnx2fc,fcoe,libfc
bnx2i                  53488  11
cnic                   86401  6 bnx2fc,bnx2i
libiscsi               47617  8 be2iscsi,bnx2i,cxgb4i,cxgb3i,libcxgbi,ib_iser,iscsi_tcp,libiscsi_tcp
scsi_transport_iscsi   53047  8 be2iscsi,bnx2i,libcxgbi,ib_iser,iscsi_tcp,libiscsi
bnx2x                1417947  0
libcrc32c               1246  1 bnx2x
mdio                    4732  2 cxgb3,bnx2x
If you reboot after loading a new driver, you can use the following command to verify that the currently loaded driver is the correct version:
modinfo bnx2
[root@test1]# lsmod | grep -i bnx2
bnx2                  199238  0
Or, you can use the following command:
[root@test1]# ethtool -i eth2
driver: bnx2x
version: 1.78.07
firmware-version: bc 7.8.6
bus-info: 0000:04:00.2
If you loaded a new driver but have not yet rebooted, the modinfo command will not show the updated driver information. Instead, you can view the logs to verify that the proper driver is loaded and will be active upon reboot:
dmesg | grep -i "QLogic" | grep -i "bnx2"
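As a quick check, the commands above can be combined into one short sequence. This is only an illustrative sketch; eth2 is a placeholder for the adapter's actual interface name, and bnx2x is used as the module name:

lsmod | grep -i bnx2x
modinfo bnx2x | grep -i version
ethtool -i eth2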
Possible Problems and Solutions
This section presents a list of possible problems and solutions for the following components and categories:
• Multi-boot Agent
• QLASP
• Linux
• NPAR
• Miscellaneous
Multi-boot Agent
Problem: Unable to obtain network settings through DHCP using PXE.
Solution: For proper operation, make sure that STP is disabled or that portfast mode (for Cisco switches) is enabled on the port to which the PXE client is connected. For instance, set spantree portfast 4/12 enable.
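The set spantree command shown above uses CatOS syntax. On an IOS-based Cisco switch, a roughly equivalent configuration (the interface name here is hypothetical) would be:

interface GigabitEthernet1/0/12
 spanning-tree portfast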
QLASP
Problem: After physically removing a NIC that was part of a team and then
rebooting, the team did not perform as expected.
Solution: To physically remove a teamed NIC from a system, you must first delete
the NIC from the team. Not doing this before shutting down could result in
breaking the team on a subsequent reboot, which may result in unexpected team
behavior.
Problem: After deleting a team that uses IPv6 addresses and then re-creating the
team, the IPv6 addresses from the old team are used for the re-created team.
Solution: This is a third-party issue. To remove the old team’s IPv6 addresses,
locate the General tab for the team’s TCP/IP properties from your system’s
Network Connections. Either delete the old addresses and type in new IPv6
addresses or select the option to automatically obtain IP addresses.
Problem: Adding an NLB-enabled 8400/3400 Series adapter to a team may
cause unpredictable results.
Solution: Prior to creating the team, unbind NLB from the 8400/3400 Series
adapter, create the team, and then bind NLB to the team.
Problem: A system containing an 802.3ad team logs a Netlogon service failure in the system event log and cannot communicate with the domain controller during boot up.
Solution: Microsoft Knowledge Base Article 326152 (http://support.microsoft.com/kb/326152/en-us) indicates that Gigabit Ethernet adapters may experience problems with connectivity to a domain controller due to link fluctuation while the driver initializes and negotiates link with the network infrastructure. Link negotiation is further affected when the Gigabit adapters are participating in an 802.3ad team because of the additional negotiation with a switch required for this team type. As suggested in the Knowledge Base article above, disabling media sense as described in the separate Knowledge Base Article 938449 (http://support.microsoft.com/kb/938449) has been shown to be a valid workaround when this problem occurs.
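As an illustration only (confirm the exact steps against the Microsoft articles cited above), DHCP media sense can typically be disabled through the DisableDHCPMediaSense registry value:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableDHCPMediaSense /t REG_DWORD /d 1 /f

A reboot is typically required for the change to take effect.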
Problem: A Generic Trunking (GEC/FEC) 802.3ad-Draft Static type of team may
lose some network connectivity if the driver to a team member is disabled.
Solution: If a team member supports underlying management software
(ASF/UMP), the link may be maintained on the switch for the adapter despite its
driver being disabled. This may result in the switch continuing to pass traffic to the
attached port rather than route the traffic to an active team member port.
Disconnecting the disabled adapter from the switch will allow traffic to resume to
the other active team members.
Problem: Large Send Offload (LSO) and Checksum Offload are not working on
my team.
Solution: If one of the adapters on a team does not support LSO, LSO does not
function for the team. Remove the adapter that does not support LSO from the
team, or replace it with one that does. The same applies to Checksum Offload.
Problem: The advanced properties of a team do not change after changing the
advanced properties of an adapter that is a member of the team.
Solution: If an adapter is included as a member of a team and you change any
advanced property, then you must rebuild the team to ensure that the team’s
advanced properties are properly set.
Linux
Problem: 8400/3400 Series devices with SFP+ Flow Control default to Off rather
than Rx/Tx Enable.
Solution: The Flow Control default setting for revision 1.6.x and newer has been
changed to Rx Off and Tx Off because SFP+ devices do not support
Autonegotiation for Flow Control.
Problem: Routing does not work for 8400/3400 Series 10 GbE network adapters
installed in Linux systems.
Solution: For 8400/3400 Series 10 GbE network adapters installed in systems
with Linux kernels older than 2.6.26, disable TPA with either ethtool (if available)
or with the driver parameter (see “disable_tpa” on page 35). Use ethtool to disable
TPA (LRO) for a specific 8400/3400 Series 10 GbE network adapter.
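For illustration (eth2 is a placeholder for the affected interface, and the module parameter takes effect when the driver is loaded), TPA/LRO can be disabled either per interface with ethtool or through the driver parameter:

ethtool -K eth2 lro off
modprobe bnx2x disable_tpa=1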
Problem: On an 8400/3400 Series adapter in a CNIC environment, flow control does not work.
Solution: Flow control is working, but in a CNIC environment it can appear not to be. The network adapter is capable of sending pause frames when the on-chip buffers are depleted, but the adapter also prevents head-of-line blocking of other receive queues. Because the on-chip firmware discards packets inside the on-chip receive buffers whenever a particular host queue is depleted, the on-chip receive buffers are rarely depleted; therefore, it may appear that flow control is not functioning.
Problem: A bnx2id error appears when installing SLES 10 SP3 SBUU build 36.
Solution: bnx2id is a user-space component that needs to be compiled at the time the package is installed. See your OS documentation for instructions on installing a compiler.
Problem: How do I disable bnx2id service on a system that does not have iSCSI
enabled?
Solution: Type service bnx2id stop. Change the bnx2id runlevels to off
using chkconfig or with the GUI.
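For example, the two commands named in the solution can be run together:

service bnx2id stop
chkconfig bnx2id off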
Problem: How do I rebuild the bnx2id daemon after installing a compiler?
Solution: Change the directory to /usr/src/netxtreme2-version/current/driver
and type make install_usr.
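As a sketch (the version string in the path is a placeholder for the installed netxtreme2 package version):

cd /usr/src/netxtreme2-<version>/current/driver
make install_usr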
Problem: Errors appear when compiling driver source code.
Solution: Some installations of Linux distributions do not install the development
tools by default. Ensure the development tools for the Linux distribution you are
using are installed before compiling driver source code.
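For example, on RHEL-based distributions the standard build tools can usually be installed with the following command (the package group name is an assumption; use your distribution's equivalent on SLES or other distributions):

yum groupinstall "Development Tools"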
NPAR
Problem: The following error message appears if the storage configurations are
not consistent for all four ports of the device in NPAR mode:
PXE-M1234: NPAR block contains invalid configuration during boot.
A software defect can cause the system to be unable to BFS boot to an iSCSI or
FCoE target if an iSCSI personality is enabled on the first partition of one port,
whereas an FCoE personality is enabled on the first partition of another port. The
MBA driver performs a check for this configuration and prompts the user when it is
found.
Solution: If you are using the 7.6.x firmware and driver, to work around this error, configure the NPAR block so that if iSCSI or FCoE is enabled on the first partition, the same protocol is enabled on all partitions of all four ports of that device.
Miscellaneous
Problem: iSCSI Crash Dump is not working in Windows.
Solution: After upgrading the device drivers using the installer, the iSCSI crash
dump driver is also upgraded, and iSCSI Crash Dump must be re-enabled from
the Advanced section of the QCC GUI Configuration tab.
Problem: In Windows Server 2008 R2, if the OS is running as an iSCSI boot OS,
the VolMgr error, “The system could not successfully load the crash dump driver,”
appears in the event log.
Solution: Enable iSCSI Crash Dump from the Advanced section of the QCC GUI
Configuration tab.
Problem: A QLogic 8400/3400 Series adapter may not perform at an optimal level on some systems if it is added after the system has booted.
Solution: The system BIOS in some systems does not set the cache line size and
the latency timer if the adapter is added after the system has booted. Reboot the
system after the adapter has been added.
Problem: Cannot configure Resource Reservations in the QCC GUI after SNP is
uninstalled.
Solution: Reinstall SNP. Prior to uninstalling SNP from the system, ensure
that NDIS is enabled. If NDIS is disabled and SNP is removed, there is no
access to re-enable the device.
Problem: A DCOM error message (event ID 10016) appears in the System Event Log during the installation of the QLogic adapter drivers.
Solution: This is a Microsoft issue. For more information, see Microsoft
knowledge base KB913119 at http://support.microsoft.com/kb/913119.
Problem: Remote installation of Windows Server 2008 to an iSCSI target through iSCSI offload fails to complete, and the computer restarts repeatedly.
Solution: This is a Microsoft issue. For more information on applying the
Microsoft hotfix, see Microsoft knowledge base article KB952942 at
http://support.microsoft.com/kb/952942.
Problem: The network adapter has shut down and an error message appears
indicating that the fan on the network adapter has failed.
Solution: The network adapter was shut down to prevent permanent damage.
Contact QLogic Support for assistance.
A  Adapter LEDs
For copper-wire Ethernet connections, the state of the network link and activity is
indicated by the LEDs on the RJ-45 connector, as described in Table A-1. For fiber
optic Ethernet connections and SFP+, the state of the network link and activity is
indicated by a single LED located adjacent to the port connector, as described in
Table A-2. The QCC GUI also provides information about the status of the
network link and activity.
Table A-1. Network Link and Activity Indicated by the RJ-45 Port LEDs
Port LED       LED Appearance             Network State
Link LED       Off                        No link (cable disconnected)
               Continuously illuminated   Link
Activity LED   Off                        No network activity
               Blinking                   Network activity
Table A-2. Network Link and Activity Indicated by the Port LED
LED Appearance             Network State
Off                        No link (cable disconnected)
Continuously illuminated   Link
Blinking                   Network activity
Corporate Headquarters QLogic Corporation 26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656 949.389.6000
www.qlogic.com
International Offices UK | Ireland | Germany | France | India | Japan | China | Hong Kong | Singapore | Taiwan | Israel
© 2014-2016 QLogic Corporation. All rights reserved worldwide. QLogic, the QLogic logo, QConvergeConsole, and FastLinQ are trademarks or registered trademarks of QLogic Corporation.
Microsoft, Windows, and Hyper-V are registered trademarks of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. Smart Load Balancing and LiveLink are trademarks of
Broadcom Corporation. SUSE and SLES are registered trademarks of Novell, Inc. Red Hat, RHEL, and CentOS are service marks or registered trademarks of Red Hat, Inc. PCIe is a registered
trademark of PCI-SIG Corporation. Alteon is a registered trademark of Nortel Networks, Inc. Citrix and XenServer are registered trademarks of Citrix Systems, Inc. Ubuntu is a registered
trademark of Canonical Limited Company. VMware, ESX, vSphere, and vCenter are trademarks or registered trademarks of VMware, Inc. Cisco is a registered trademark of Cisco Systems,
Inc. Brocade is a registered trademark of Brocade Communications Systems, Inc. Intel is a registered trademark of Intel Corporation. SANBLAZE is a registered trademark of SANBlaze
Technology, Inc. IET is a registered trademark of iET Solutions, LLC. EqualLogic is a registered trademark of Dell, Inc. All other brand and product names are trademarks or registered trademarks
of their respective owners.