Technical white paper
FC Cookbook for HP Virtual Connect
Version 4.40 Firmware Enhancements
February 2015
Table of contents
Change History
7
Abstract
8
Considerations and concepts
VC SAN module descriptions
Virtual Connect Fibre Channel
support
Supported VC SAN Fabric
configuration
Multi-enclosure stacking
configuration
9
10
12
13
36
Scenario 1: Simplest scenario with
multipathing
Overview
Benefits
Considerations
Requirements
Installation and configuration
Blade Server configuration
Verification
Summary
39
39
40
40
40
41
43
43
44
Scenario 2: VC Fabric-Attach SAN
fabrics with Dynamic Login Balancing
connected to the same redundant
SAN fabric
Overview
Benefits
Considerations
45
45
47
47
Click here to verify the latest version of this document
Requirements
Installation and configuration
Blade Server configuration and
verification
Summary
Scenario 3: Multiple VC Fabric-Attach
SAN fabrics with Dynamic Login
Balancing connected to the same
redundant SAN fabric with different
priority tiers
Overview
Benefits
Considerations
Requirements
Installation and configuration
Verification of the SAN Fabrics
configuration
Blade Server configuration
Verification
Summary
Scenario 4: Multiple VC Fabric-Attach
SAN fabrics with Dynamic Login
Balancing connected to several
redundant SAN fabric with different
priority tiers
Overview
Benefits
Considerations
Requirements
Installation and configuration
Verification of the SAN Fabrics
configuration
Blade Server configuration
Verification
Summary
Scenario 5: Fabric-Attach SAN fabrics
connectivity with HP Virtual Connect
FlexFabric 10Gb/24-Port module
Overview
Benefits
2
47
48
51
51
52
52
53
54
54
54
59
60
60
60
61
61
62
62
63
63
67
68
68
68
69
69
69
Considerations
Virtual Connect FlexFabric Uplink
Port Mappings
Requirements
Installation and configuration
Verification of the SAN Fabrics
configuration
Blade Server configuration
Summary
Scenario 6: Flat SAN connectivity
with HP Virtual Connect FlexFabric
10Gb/24-Port modules and HP 3PAR
Storage Systems
Overview
Benefits
Considerations
Physical view of a Direct-Attach
configuration
Requirements
Installation and configuration
Configuration of the 3PAR controller
ports
Verification of the 3PAR connection
Blade Server configuration
OS Configuration
Summary
Scenario 7: Mixed Fabric-Attach and
Flat SAN connectivity with HP Virtual
Connect FlexFabric 10Gb/24-Port
modules and HP 3PAR Storage
Systems
Overview
Considerations
Physical view of a mix Flat SAN and
Fabric-Attach configuration
Requirements
Installation and configuration
Configuration of the 3PAR controller
ports
Verification of the 3PAR connection
3
70
71
72
72
77
78
78
79
79
80
80
81
81
82
85
87
87
89
89
90
90
91
91
92
92
98
100
Server Profile configuration
OS Configuration
Summary
100
102
102
Scenario 8: Adding VC fabric uplink
ports with Dynamic Login Balancing
to an existing VC Fabric-Attach SAN
fabric
Overview
Benefits
Initial configuration
Adding an additional uplink port
Login Redistribution
Verification
Summary
103
103
103
103
106
108
109
110
Scenario 9: Cisco MDS Dynamic Port
VSAN Membership
Overview
Benefits
Requirements
Installation and configuration
Summary
111
111
111
111
111
114
Scenario 10: Fabric-Attach SAN
fabrics connectivity with HP Virtual
Connect FlexFabric-20/40 F8 module
Overview
Benefits
Considerations
Virtual Connect FlexFabric-20/40 F8
Uplink Port Mappings
Requirements
Installation and configuration
Verification of the SAN Fabrics
configuration
Blade Server configuration
Summary
Scenario 11: Enhanced N-port
Tunking with HP Virtual Connect 16G
24-Port Fibre Channel Module
Overview
Benefits
4
115
115
116
116
117
118
118
123
124
124
125
125
125
Considerations
Compatibility support
Requirements
Installation and configuration
Verification of the trunking
configuration
Blade Server configuration
Trunking information under VCM
Summary
Appendix A: Blade Server
configuration with Virtual Connect
Fibre Channel Modules
Defining a Server Profile with FC
Connections, using the GUI
Defining a Server Profile with FC
Connections, via CLI
Defining a Boot from SAN Server
Profile using the GUI
Defining a Boot from SAN Server
Profile via CLI
Appendix B: Blade Server
configuration with Virtual Connect
FlexFabric Modules
Defining a Server Profile with FCoE
Connections, using the GUI
Defining a Server Profile with FCoE
Connections, via CLI
Defining a Boot from SAN Server
Profile using the GUI
Defining a Boot from SAN Server
Profile using the CLI
126
126
127
130
134
135
135
136
137
137
138
139
141
142
142
144
145
146
Appendix C: Brocade SAN switch NPIV
configuration
Enabling NPIV using the GUI
Enabling NPIV using the CLI
Recommendations
147
147
148
150
Appendix D: Cisco MDS SAN switch
NPIV configuration
Enabling NPIV using the GUI
Enabling NPIV using the CLI
151
151
153
5
Appendix E: Connecting VC FlexFabric
to Cisco Nexus 50xx and 55xx series
Support information
Fibre Channel functions on Nexus
Configuration of the VC SAN Fabric
Configuration of the Nexus switches
Appendix F: Connectivity verification
and testing
Uplink Port connectivity verification
Uplink Port connection issues
Server Port connectivity verification
Server Port connection issues
Connectivity verification from the
upstream SAN switch
Testing the loss of uplink ports
155
155
155
157
157
160
160
163
165
166
166
168
Appendix G: Boot from SAN
troubleshooting
Verification during POST
Troubleshooting
172
172
176
Appendix H: Fibre Channel Port
Statistics
FC Uplink Port statistics
FC Server Port statistics
177
177
181
Acronyms and abbreviations
184
Support and Other Resources
Contacting HP
Related documentation
185
185
186
6
Change History
The following Change History log contains a record of changes made to this document:
February 2015 (Edition 3 – Rev 5)
• Added information on the HP VC 16Gb 24-Port FC Module
• New scenario (11): Enhanced N-port trunking with the HP Virtual Connect 16Gb 24-Port Fibre Channel Module
• Added the HP VC 16Gb 24-Port FC to the Fabric-Attach support list
• Changed the FC statistics support information (VC 8Gb/16Gb 24-port FC module statistics are available with VC 4.40)
• Added information on FC statistics available through SNMP
• Other minor changes

September 2014 (Edition 3 – Rev 4)
• Added HP 3PAR Persistent Ports support
• Added the VC FlexFabric-20/40 F8 Module to the Flat SAN and Fabric-Attach support lists
• Other minor changes

May 2014 (Edition 3 – Rev 3)
• Added information on the VC FlexFabric-20/40 F8 Module
• New scenario with HP Virtual Connect FlexFabric-20/40 F8 Modules
• Other minor changes

November 2013 (Edition 3 – Rev 2)
• Added 3PAR StoreServ 7000 series support
• New section on the maximum number of c-Class enclosures connected to an HP 3PAR storage system
• Added connection best practices and physical views for configurations in which multiple FlexFabric uplinks connect to 3PAR controller nodes
Abstract
This guide provides concepts and implementation steps for integrating HP Virtual Connect Fibre Channel modules
and HP Virtual Connect FlexFabric Modules into an existing SAN Fabric.
The scenarios in this guide cover a range of typical building blocks to use when designing a solution.
For more information on BladeSystem and Virtual Connect, go to http://www.hp.com/go/blades/
Considerations and concepts
The following concepts apply when using the HP Virtual Connect Fibre Channel or FlexFabric modules:
• To manage an HP Virtual Connect Fibre Channel Module, you must also install an HP Virtual Connect Ethernet Module. The VC Ethernet module contains the processor on which the Virtual Connect Manager firmware runs.
• Virtual Connect now supports direct storage attachment to reduce storage networking costs and to remove the complexity of FC switch management.
• NPIV support is required in the FC switches that connect to the Virtual Connect Fibre Channel and FlexFabric Modules.
• Since VC 4.01, Fibre Channel over Ethernet (FCoE) can be used as an alternative to native Fibre Channel (FC). For more information on configuring FCoE switches and scenarios, see the FCoE Cookbook for HP Virtual Connect in the Virtual Connect Information Library (http://www.hp.com/go/vc/manuals).
VC SAN module descriptions
HP Virtual Connect Fibre Channel Modules:
• HP Virtual Connect 8Gb 20-Port Fibre Channel Module:
  – 4 Uplink ports, 8Gb FC [2/4/8 Gb]
  – 16 Downlink ports, 8Gb FC [1/2/4/8 Gb]
  – 128 NPIV connections per server
  – 255 NPIV connections per uplink port
  (In other words, up to 128 virtual machines running on the same physical server can access separate storage resources.)

• HP Virtual Connect 8Gb 24-Port Fibre Channel Module:
  – 8 Uplink ports, 8Gb FC [2/4/8 Gb]
  – 16 Downlink ports, 8Gb FC [1/2/4/8 Gb]
  – 255 NPIV connections per server
  – 255 NPIV connections per uplink port
  (In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)

• HP Virtual Connect 16Gb 24-Port Fibre Channel Module:
  – 8 Uplink ports, 16Gb FC [4/8/16 Gb]
  – 16 Downlink ports, 16Gb FC [8/16 Gb]¹
  – 255 NPIV connections per server
  – 255 NPIV connections per uplink port
  (In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)

¹ 16Gb operation requires a c7000 Platinum Enclosure (SKUs 6XXXXX-B21 and 7XXXXX-B21, except 68661X-B21²). If the module is inserted in a non-Platinum enclosure, the maximum supported downlink speed is 8Gb, regardless of the HBA.
² The enclosure SKU can be found in the OA Rack Overview screen. For more information, see the Rack View section in the OA user guide. Note: the SKU is listed as a "part number".
HP Virtual Connect FlexFabric Modules:
• HP Virtual Connect FlexFabric 10Gb/24-Port Module:
  – 4 Uplink ports: FC [2/4/8 Gb] or FCoE [10 Gb]
  – 16 Downlink ports [FlexHBA: any speed]
  – 255 NPIV connections per server
  – 255 NPIV connections per uplink port
  (In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)
  Uplink ports X1–X4 are available for FC/FCoE/Enet connections [FC: 2/4/8 Gb; FCoE: 10 Gb; Enet: 1/10 Gb].

• HP Virtual Connect FlexFabric-20/40 F8 Module:
  – 8 Uplink ports: FC [2/4/8 Gb] or FCoE [10 Gb]
  – 16 Downlink ports [FlexHBA: any speed]
  – 255 NPIV connections per server
  – 255 NPIV connections per uplink port
  (In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)
  Uplink ports X1–X8 are available for FC/FCoE/Enet connections [FC: 2/4/8 Gb; FCoE: 10 Gb; Enet: 1/10 Gb]. Flexports X5–X6 and X7–X8 are paired and can only be configured to carry the same traffic type (either FC or Ethernet).
Virtual Connect Fibre Channel support
Virtual Connect Connectivity Stream documents that describe the different supported configurations are available
from the SPOCK webpage:
www.hp.com/storage/spock
(Requires HP Passport; if you do not have an HP Passport account, follow the instructions on the webpage).
For any specific, supported Fabric OS, SAN-OS, or NX-OS versions for SAN that involve 3rd-party equipment, consult
the equipment vendor.
Virtual Connect
Virtual Connect support matrix documents are available from the following Virtual Connect SPOCK webpage:
http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=hw_virtual_connect.html
Fibre Channel connectivity stream documents are available in the Virtual Connect FC Modules section of that page.
Since VC 4.01, Virtual Connect provides the ability to pass FCoE to an external FCoE capable network switch.
However this guide is only focused on Virtual Connect with Fibre Channel. For FCoE connectivity guidance, see the
FCoE Cookbook for HP Virtual Connect in the Virtual Connect Information Library
(http://www.hp.com/go/vc/manuals).
FC and FCoE switches
The SPOCK Switch page contains the supported configurations of the upstream switch connected to Virtual Connect.
Details about firmware and OS versions are usually provided.
http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=hw_switches.html
Supported VC SAN Fabric configuration
Beginning with Virtual Connect 3.70, there are two supported VC SAN fabric types: Fabric-Attach fabrics and Direct-Attach fabrics. A Fabric-Attach fabric uses the traditional method of connecting VC-FC and VC FlexFabric modules,
which requires an upstream NPIV-enabled SAN switch. A Direct-Attach fabric reduces storage networking costs and
removes the complexity of FC switch management by enabling you to directly connect a VC FlexFabric module to a
supported HP 3PAR Storage System.
A VC SAN fabric can only contain uplink ports of one type, either attached to an external SAN switch or directly connected to a supported storage device. VC isolates ports that do not match the specified fabric type. An isolated port degrades the status of the VC SAN fabric, of all associated server profiles, and of the overall VC domain.
Fabric-Attach support
Fabric-Attach Fibre Channel (FC) prerequisites
• The Fabric-Attach fabric is supported with the HP VC FlexFabric 10Gb/24-Port Module, HP VC FlexFabric-20/40 F8 Module, HP VC 16Gb 24-Port Fibre Channel Module, HP VC 8Gb 24-Port Fibre Channel Module, HP VC 8Gb 20-Port Fibre Channel Module, and HP VC 4Gb 20-Port Fibre Channel Module.
• The Fabric-Attach fabric is only supported if directly connected to Fibre Channel SAN switches.
• NPIV support is required in the FC switches that connect to the Virtual Connect Modules.
Note: Visit the SPOCK website for the latest Fabric-Attach support information; see the "Virtual Connect Fibre Channel support" section.
Fabric-Attach Fibre Channel (FC) details
• The Fabric-Attach option is the default fabric type.
• Select Fabric-Attach if the FlexFabric module (or VC-FC module) is connected to a Fibre Channel SAN switch.
• N_Port with NPIV is used to connect to the SAN Fabric.
• Once a fabric is defined, its type cannot be changed until the fabric is deleted and recreated.
Virtual Connect Fabric-Attach SAN Fabric support
When using the Virtual Connect Fabric-Attach mode, participating uplinks must be connected to the same SAN fabric
in order to correctly form a Virtual Connect SAN fabric.
Figure 1: Participating uplinks must connect to the same SAN fabric
[Diagram: four VC SAN fabrics (VC SAN A-1, A-2, B-1, B-2) defined in the VC Domain; the uplinks of each VC-FC module (Bays 3 and 4) connect to the external datacenter SAN fabrics 1A, 1B, 2A, and 2B, one SAN fabric per VC SAN fabric.]
Figure 2: A single SAN fabric can consist of several SAN switches. In the following configuration, each SAN fabric has two SAN switches, and the previous "participating uplinks must connect to the same SAN fabric" prerequisite is met.
[Diagram: VC SAN fabrics A-1 and B-1 defined in the VC Domain (modules in Bays 1 and 2); each connects to an external datacenter SAN fabric (Fabric 1A or 1B) built from two HP StorageWorks 4/32B SAN switches, with MSA, EVA, XP, and 3PAR storage behind the fabrics.]
Figure 3: Connecting uplinks of a single VC SAN Fabric to two different SAN fabrics is not supported
[Diagram: a single VC SAN fabric whose uplinks (VC-FC modules in Bays 3 and 4) are split across two different external datacenter SAN fabrics; this configuration is not supported.]
To provide granular control over which server blades use each uplink port, different VC SAN fabrics can be connected to the same SAN fabric. This configuration enables the distribution of servers according to I/O workloads, as shown in Figure 4.
Figure 4: Servers distributed according to I/O workloads
[Diagram: four VC SAN fabrics (A-1, A-2, B-1, B-2) on the VC-FC modules in Bays 3 and 4, connected in pairs to the same external SAN fabrics (1A and 1B).]
Direct attached Storage Systems are not supported with the Fabric-Attach mode.
Figure 5: Direct Attached Storage Systems are not supported
[Diagram: a Fabric-Attach VC SAN fabric (VC SAN 1) on a VC FlexFabric module cabled directly to a 3PAR, MSA, EVA, or XP array instead of an external datacenter SAN fabric; this configuration is not supported.]
Figures 6 and 7 show two typical, supported configurations for Virtual Connect Fibre Channel modules and Virtual Connect FlexFabric modules.
Figure 6: Typical Fabric-Attach SAN configuration with VC-FC 8Gb 20-port modules. Redundant paths - Server-to-Fabric uplink ratio 4:1
[Diagram: a storage array with two controllers behind redundant SAN fabrics (Fabric-1 and Fabric-2, each an HP StorageWorks 4/32B SAN switch); SAN uplinks run from SAN Switches A and B to the VC-FC modules of an HP BladeSystem c7000, alongside HP VC Flex-10 Ethernet modules with Ethernet uplinks (SUS-1 and SUS-2) to LAN Switches A and B.]
Figure 7: Typical Fabric-Attach SAN configuration with VC FlexFabric modules. Redundant paths - Server-to-Fabric uplink ratio 4:1
[Diagram: the same redundant topology as Figure 6, but with HP VC FlexFabric 10Gb/24-Port modules carrying both the SAN uplinks (to SAN Switches A and B) and the Ethernet uplinks (SUS-1 and SUS-2 to LAN Switches A and B) from the HP BladeSystem c7000.]
Multiple Fabric-Attach fabric Support
Support for multiple SAN Fabric-Attach fabrics per VC-FC and FlexFabric modules allows the Storage administrator
to assign any of the available VC SAN uplinks to a different SAN fabric and dynamically assign server HBAs to the
desired SAN fabric.
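For illustration, a minimal VCM CLI sketch of this concept is shown below; the fabric and profile names are examples, and the exact parameter syntax should be verified against the VC CLI reference for your firmware:

    # Group uplinks into two VC SAN fabrics on the Bay 3 module
    add fabric VC_SAN_1 Bay=3 Ports=1,2
    add fabric VC_SAN_2 Bay=3 Ports=3,4
    # Assign each server HBA to the desired fabric through its profile
    add fc-connection Profile_Server1 Fabric=VC_SAN_1
    add fc-connection Profile_Server2 Fabric=VC_SAN_2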
Figure 8: Multiple SAN Fabric-Attach fabrics support
[Diagram: four servers, each with two HBA ports, mapped across four VC SAN fabrics (VC SAN 1-4) on a VC-FC module; each VC SAN fabric connects to a different external fabric (Fabric 1-4).]
Figure 9: The Virtual Connect 8Gb 20-Port Fibre Channel module supports up to 4 SAN fabrics
[Diagram: a VC-FC 20-port module with its four uplinks connected to Fabrics 1-4.]
Figure 10: The Virtual Connect 8Gb 24-Port and 16Gb 24-Port Fibre Channel modules support up to 8 SAN fabrics
[Diagram: a VC-FC 8Gb 24-Port module with its eight uplinks connected to Fabrics 1-8.]
Figure 11: The Virtual Connect FlexFabric module supports up to 4 SAN fabrics
[Diagram: a VC FlexFabric module with its four FC-capable uplinks connected to Fabrics 1-4.]
Figure 12: The Virtual Connect FlexFabric-20/40 F8 module supports up to 8 SAN fabrics
[Diagram: a VC FlexFabric-20/40 F8 module with its eight FC-capable uplinks connected to Fabrics 1-8.]
NPIV requirements for VC Fabric-Attach fabrics
Uplink ports within a VC Fabric-Attach fabric can only be connected to a Fibre Channel switch that supports N_Port ID Virtualization (NPIV).
The VC-FC and VC FlexFabric modules are FC standards-based and are compatible with all other NPIV standard-compliant switch products.
Due to the use of NPIV, special features that are available in standard FC switches, such as Brocade ISL Trunking, Cisco SAN Port Channels, QoS, and extended distances, are not supported with VC-FC and VC FlexFabric modules.
For more information about NPIV support for your Fibre Channel switch, refer to the switch vendor documentation.
The SAN switch ports connecting to the VC Fabric uplink ports must be configured to accept NPIV logins. For additional information about NPIV configuration, see "Appendix C: Brocade SAN switch NPIV configuration", "Appendix D: Cisco MDS SAN switch NPIV configuration", or "Appendix E: Connecting VC FlexFabric to Cisco Nexus 50xx and 55xx series", depending on your switch model.
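As a quick reference (the appendices give the full GUI and CLI procedures), enabling NPIV on the upstream switch typically looks like the sketch below; these are standard Brocade FOS and Cisco MDS NX-OS commands, but verify them against your switch firmware documentation:

    # Brocade FOS: enable NPIV on port 4, then verify with the port configuration display
    portcfgnpivport 4, 1
    portcfgshow
    # Cisco MDS (NX-OS): enable NPIV switch-wide, then verify
    configure terminal
    feature npiv
    show npiv status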
Port Group in a Fabric-Attach fabric
Virtual Connect Manager version 1.31 and later allows users to group multiple VC Fabric uplinks logically into a
Virtual Connect fabric when attached to the same Fibre Channel SAN fabric.
Figure 13: Fabric-Attach fabrics using 4 uplink ports
[Diagram: two VC SAN fabrics (VC SAN 1 and VC SAN 2), each grouping the four uplinks of a VC-FC module into a single logical connection to Fabric 1 or Fabric 2.]
There are several benefits with Fabric port grouping:
• Bandwidth is increased.
• The server-to-uplink ratio is improved.
• Better redundancy is provided with automatic port failover.
Increased bandwidth
Depending on the VC module and the number of uplinks used, the server-to-uplink ratio (oversubscription ratio) is adjustable to 2:1, 4:1, 8:1, or 16:1, so that as few as two or as many as 16 servers share one physical uplink on a fully populated enclosure with 16 servers.
Using multiple uplinks reduces the risk of congestion.
Figure 14: 2:1 oversubscription with Virtual Connect 8Gb 24-Port and 16Gb 24-Port Fibre Channel modules and with Virtual Connect FlexFabric-20/40 F8 modules
[Diagram: 16 servers sharing 8 uplinks on a VC-FC 24-port module.]
Figure 15: 4:1 oversubscription with Virtual Connect 8Gb 20-Port Fibre Channel modules and with Virtual Connect FlexFabric modules
[Diagram: 16 servers sharing 4 uplinks on a VC-FC 20-port module.]
Dynamic Login Balancing Distribution and increased redundancy
When VC Fabric uplinks are grouped into a single fabric, the module uses Dynamic Login Balancing Distribution to
load balance the server connections across all available uplink ports.
The module uses the port with the least number of logins across the VC SAN Fabric or, when the number of logins is
equal, VC makes a round-robin decision.
VC version 3.0 and later do not offer Static Uplink Login Distribution.
Figure 16: Dynamic Login Balancing Distribution with a VC-FC module
[Diagram: the HBA logins of four servers spread evenly across the uplinks of a single VC SAN fabric (VC SAN 1) into Fabric 1.]
Uplink port path failover
The module uses Dynamic Login Balancing Distribution to provide an uplink port path failover that enables server
connections to fail over within the Virtual Connect fabric.
If a fabric uplink port in the group becomes unavailable, hosts logged in through that uplink are reconnected
automatically to the fabric through the remaining uplinks in the group, resulting in auto-failover.
Figure 17: Uplink port path failover
[Diagram: the same four-server topology as Figure 16; when one uplink in the group fails, the hosts logged in through it are reconnected through the remaining uplinks.]
This automatic failover saves time and effort whenever there is a link failure between an uplink port on VC and an external fabric, and it allows a smooth transition with minimal traffic disruption. However, the hosts must log in again before resuming their I/O operations.
Login Redistribution
It might be necessary to redistribute server logins if an uplink that was previously down is available again, if you added an uplink to a fabric, or if the number of logins through each available uplink has become unbalanced for any reason. Virtual Connect Login Redistribution supports two modes, Manual and Automatic, and is enabled on a per-Fabric-Attach-fabric basis.
Table 1: Manual and Automatic Login Redistribution support

Login Re-Distribution Mode | Auto-failover* | Auto-failback**                 | VC-FC support | VC FlexFabric and VC FlexFabric-20/40 F8 support
MANUAL                     | YES            | NO                              | YES (default) | YES (default)
AUTOMATIC                  | YES            | YES, after link stability delay | NO            | YES

* When a port in the SAN Fabric group becomes unavailable.
** When a failed port returns to a good working condition.
• Manual Login Re-Distribution: the default for all FC modules. You must initiate a Login Re-Distribution request through the VC GUI or CLI interfaces. For all standard VC-FC modules, this is the only supported mode.
To manually redistribute the logins, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN Fabric, and click Redistribute.
To manually redistribute logins on a VC SAN fabric through the VC CLI, enter:
set fabric MyFabric -loadBalance
The Redistribute option is only available on a VC SAN fabric with Manual Login Re-Distribution.
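A minimal manual-redistribution session from the VC CLI might therefore look like the sketch below; the fabric name is an example and the output format varies by firmware release:

    show fabric MyFabric          # review the current login count per uplink
    set fabric MyFabric -loadBalance
    show fabric MyFabric          # confirm the logins are now spread across the uplinks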
• Automatic Login Re-Distribution: only available with VC FlexFabric and VC FlexFabric-20/40 F8 modules in a Fabric-Attach fabric. With VC-FC modules, Login Redistribution is manual only.
When Automatic Login Redistribution is selected, the VC FlexFabric module initiates Login Re-Distribution automatically when the specified Link Stability time interval expires.
The Link Stability Interval parameter is defined on a VC domain basis in the Fibre Channel → WWN Settings → Miscellaneous tab. This interval defines the number of seconds that the VC fabric uplinks have to stabilize before the VC module attempts to load-balance the logins.
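As a sketch, switching a fabric to Automatic Login Re-Distribution from the VC CLI might look like the line below. The LinkDist parameter name is an assumption based on the add/set fabric syntax of recent VC firmware; confirm it in your VC CLI reference:

    set fabric MyFabric LinkDist=Auto    # assumed parameter; enables Automatic Login Re-Distribution
    # The Link Stability Interval itself is a domain-wide setting
    # (GUI: Fibre Channel -> WWN Settings -> Miscellaneous)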
Flat SAN Support
With HP Virtual Connect for 3PAR with Flat SAN technology, you can connect HP 3PAR Storage Systems directly to HP Virtual Connect FlexFabric modules with no need for an intermediate SAN fabric. This significantly reduces complexity and cost, and reduces latency between servers and storage, by eliminating multitier storage area networks (SANs).
Direct attachment removes the need for an expensive intermediate SAN fabric between Virtual Connect and the HP 3PAR Storage Systems. In addition to being more cost-efficient, this simplifies management of the storage solution and frees up valuable IT resources.
Flat SAN support
• Direct-Attach fabrics are only supported with the HP Virtual Connect FlexFabric 10Gb/24-Port and HP Virtual Connect FlexFabric-20/40 F8 modules.
• Direct-Attach fabrics are only supported if directly connected to HP 3PAR storage systems.
• Supported storage systems are the HP 3PAR StoreServ 10400/10800, 7000/7450, T400/800, and F200/400.
• Minimum required/supported: HP Virtual Connect 3.70
• Minimum required/supported: HP 3PAR InForm OS v3.1.1 MU1
Unsupported storage systems
• HP MSA/EVA/XP
• HP StoreOnce B6200
• HP LeftHand Storage
• HP Tape Storage
• HP Virtual Tape Libraries
• 3rd-party storage solutions such as EMC, IBM, Hitachi, NetApp, and so on
Flat SAN details
• The Flat SAN option is available only after adding a FlexFabric module port to a VC SAN Fabric.
• No additional licenses or fees apply.
• The Virtual Connect SAN fabric uplink port is set as an F_Port (F for Fabric).
• FlexFabric modules run lightweight FC SAN services such as name server, zoning, etc.
• Once a fabric is defined, its type cannot be changed until the fabric is deleted and recreated.
Note: Visit the SPOCK website for the latest Flat SAN support information; see the "Virtual Connect Fibre Channel support" section.
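As an illustration, creating a Direct-Attach (Flat SAN) fabric from the VC CLI might look like the sketch below. The Type parameter name is an assumption based on the GUI's fabric-type selector; check the VC CLI reference for your firmware:

    # Assumed syntax: uplinks X3/X4 on the Bay 1 module cabled straight to 3PAR host ports
    add fabric 3PAR_SAN_A Bay=1 Ports=3,4 Type=DirectAttach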
Figure 18: HP Virtual Connect for 3PAR with Flat SAN technology allows a direct attachment to HP 3PAR Storage
Systems
[Diagram: an HP 3PAR StoreServ 7450 with Flat SAN (Direct-Attach) uplink connections running straight to the two HP VC FlexFabric 10Gb/24-Port modules of an HP BladeSystem c7000.]
Maximum number of c-Class enclosures connected to an HP 3PAR storage system
Connecting several BladeSystem c-Class enclosures to the same 3PAR Storage system is fully supported. The
number of host ports available on the HP 3PAR controller nodes defines the maximum number of enclosures
supported.
• With 4 controller nodes, a 3PAR StoreServ 10400 can support up to 96 Fibre Channel host ports. This means that 24 enclosures can be connected to the storage system when using 4 Virtual Connect uplinks per enclosure.
• With 2 controller nodes, a 3PAR StoreServ 7200 can support up to 12 Fibre Channel host ports. This means that 3 enclosures can be connected to the storage system when using 4 Virtual Connect uplinks per enclosure.
Figure 19: Maximum c-Class enclosures connected to an HP 3PAR StoreServ 10400 and StoreServ 7200
[Diagram: a StoreServ 7200 (2 controller nodes) supporting a maximum of 3 c-Class enclosures and a StoreServ 10400 (4 controller nodes) supporting a maximum of 24; each enclosure uses 4 Virtual Connect uplinks from dual VC FlexFabric modules.]
Table 2: Maximum c-Class enclosures connected to a 3PAR Storage System when using 4 Virtual Connect uplinks per enclosure (with 2 x FlexFabric modules):

Storage Array   | Number of FC host ports | Maximum number of enclosures
StoreServ 10800 | 192                     | 48 (192 / 4)
StoreServ 10400 | 96                      | 24 (96 / 4)
T800            | 128                     | 32 (128 / 4)
T400            | 64                      | 16 (64 / 4)
StoreServ 7400  | 24                      | 6 (24 / 4)
StoreServ 7200  | 12                      | 3 (12 / 4)
F400            | 24                      | 6 (24 / 4)
F200            | 12                      | 3 (12 / 4)
Important: To avoid storage networking issues and potential loss of data associated with duplicate WWNs on the 3PAR system, all VC Domains connected to the same 3PAR Storage System must use different HP Pre-Defined ranges of WWN addresses.
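A hedged sketch of selecting a different VC-Defined WWN range on each domain is shown below; the WwnType/WwnPool parameter names are recalled from the VC CLI and should be confirmed against your firmware's CLI reference:

    # On the first VC Domain connected to the 3PAR system:
    set domain WwnType=VC-Defined WwnPool=1
    # On the second VC Domain connected to the same 3PAR system:
    set domain WwnType=VC-Defined WwnPool=2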
Virtual Connect Direct-Attach SAN Fabric support
When using the Virtual Connect Direct-Attach mode, participating uplinks must be directly connected to the same 3PAR Storage System in order to correctly form a Virtual Connect SAN fabric.
Figure 20: Direct-Attach SAN Fabric uplinks connected to a 3PAR array
[Diagram: two Direct-Attach VC SAN fabrics (A-1 and B-1) defined in the VC Domain, one per VC FlexFabric module (Bays 1 and 2), both cabled directly to the same 3PAR Storage System.]
Note:
• When a Virtual Connect Direct-Attach fabric uses multiple uplinks, the concepts of login-balancing and login-redistribution are not applicable. These concepts apply only to uplinks within a VC Fabric-Attach fabric.
• The zoning between the server ports and the VC SAN uplink ports is automatically configured based on the VC SAN Fabric and server profile definitions. This implicit zoning restricts servers connected to a given Direct-Attach fabric to accessing only the storage attached to uplinks in the same Direct-Attach fabric. Both Name Server scans and RSCN messages are limited to this zone.
• A VC SAN fabric may only contain uplink ports of one type.
  - VC isolates ports that do not match the specified fabric type.
  - An isolated port degrades the fabric state and all associated profile and domain states.
• HP 3PAR Peer Motion (which allows non-disruptive data migration between any 3PAR Storage Arrays) is not supported at this time with Direct-Attach Flat SAN. For the time being, Peer Motion requires an external SAN fabric.
• HP 3PAR Persistent Ports (which provides transparent and uninterrupted failover in response to firmware upgrades, a node failure, an array port being taken offline administratively, or a hardware failure in the SAN fabric that causes the storage array to lose physical connectivity to the fabric) is supported if the configuration fulfills the necessary requirements. For more information, see the HP 3PAR StoreServ Persistent Ports technical white paper: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA4-4545ENW.pdf
In order to give more granular control over which server blades use each uplink port, several Direct-Attach VC SAN
fabrics can be connected to the same 3PAR storage System. This configuration can enable the distribution of servers
according to their I/O workloads.
Figure 21: Multiple Direct-Attach VC SAN Fabrics connected to the same 3PAR array
[Diagram: four Direct-Attach VC SAN fabrics (A-1, A-2, B-1, B-2) defined in the VC Domain, two per VC FlexFabric module (Bays 1 and 2), all connected to the same 3PAR Storage System.]
Up to four HP 3PAR Storage Systems can be directly connected to a redundant pair of VC FlexFabric modules; this is
because only 4 uplink ports are available for FC connection on the FlexFabric module.
Figure 22: Maximum 3PAR arrays connected to a VC Domain
[Diagram: a redundant pair of VC FlexFabric modules (Bays 1 and 2) with their Direct-Attach VC SAN fabrics cabled to four directly attached 3PAR Storage Systems, the maximum for one module pair.]
To support more than four direct-attached 3PAR arrays, it is necessary to add more pairs of VC FlexFabric modules in
a c7000 chassis.
Figure 23: Direct-Attach configuration with more than a pair of VC modules
[Diagram: two pairs of VC FlexFabric modules (Bays 1-4) in the same c7000, each pair carrying its own Direct-Attach VC SAN fabrics (A-1/B-1 and A-2/B-2) to additional directly attached 3PAR Storage Systems.]
For more granularity and control, the 3PAR Storage Systems can be connected to different VC SAN fabrics; Figure 24 shows another supported configuration.
Figure 24: Different SAN Fabrics can be used to connect 3PAR arrays
[Diagram: four Direct-Attach VC SAN fabrics (A-1, A-2, B-1, B-2) across the two VC FlexFabric modules (Bays 1 and 2), split between two directly attached 3PAR Storage Systems.]
Unsupported Virtual Connect Direct-Attach SAN Fabric configurations
When using the Direct-Attach mode, participating uplinks must be directly connected to a 3PAR Storage System; no other storage systems are supported.
Figure 25: Direct-Attach uplinks can only be connected to a 3PAR array
[Diagram: a Direct-Attach VC SAN fabric on a VC FlexFabric module cabled to an MSA, EVA, XP, or third-party array; this configuration is not supported.]
The Direct-Attach mode does not support connecting participating uplinks to a SAN Fabric.
Figure 26: Direct-Attach uplinks cannot be connected to a SAN Fabric
[Diagram: Direct-Attach VC SAN fabrics (A-1 and A-2) on a VC FlexFabric module cabled to an external datacenter SAN fabric; this configuration is not supported.]
Physical view of a Direct-Attach Flat SAN configuration
Figure 27: Virtual Connect Flat SAN with VC FlexFabric modules for HP 3PAR Storage Systems. Redundant paths - Server-to-Direct-Attach uplink ratio 16:1
[Diagram: an HP 3PAR StoreServ 10400 (Controller Nodes 0 and 1) with one Direct-Attach SAN uplink to each of the two HP VC FlexFabric 10Gb/24-Port modules of an HP BladeSystem c7000; Ethernet uplinks (SUS-1 and SUS-2) run to LAN Switches A and B.]
Figure 28: Virtual Connect Flat SAN with VC FlexFabric modules for HP 3PAR Storage Systems. Redundant paths - Server-to-Direct-Attach uplink ratio 8:1
[Diagram: the HP 3PAR StoreServ 10400 with two Direct-Attach SAN uplinks per VC FlexFabric module, each module cabled in a crisscross manner to both Controller Node 0 and Controller Node 1; Ethernet uplinks (SUS-1 and SUS-2) run to LAN Switches A and B.]
Note: The 8:1 oversubscription with 2 FC cables per FlexFabric module is the most common use case.
Note: To improve redundancy, it is recommended to connect the FC cables in a crisscross manner (that is, each FlexFabric module is connected to two different controller nodes).
Remote replication design with Virtual Connect Flat SAN technology
HP 3PAR Data replication services provide real-time replication and disaster recovery technology that allows the
protection and sharing of data. You can implement the data replication services between HP 3PAR Storage arrays to
distribute data between local and remote arrays or data centers, even if they are geographically dispersed.
HP 3PAR Remote Copy can be set up between several Direct-Attached 3PAR storage systems; the maximum number of supported arrays is 4 sources to 1 target, or 1 source to 2 targets.
Remote replication is delivered using either RCIP (Remote Copy over IP) or RCFC (Remote Copy over Fibre Channel).
HP recommends using RCIP with a Direct-Attach 3PAR configuration because it doesn’t require a SAN Fabric. Use of a
SAN Fabric can result in increased complexity and IT infrastructure costs.
The remote copy over IP port on the HP 3PAR StoreServ 10000 controller node is port E1 (RJ45/1Gb).
Figure 29: HP 3PAR Remote replication services with Virtual Connect Flat SAN technology
[Diagram: two sites, each with a c7000 enclosure whose VC FlexFabric modules have Direct-Attach SAN uplinks to a local 3PAR Storage System; native IP-based Remote Copy replicates primary (P) and secondary (S) volumes over the replication link between Site 1 and Site 2.]
Supported distances and latencies:
• Synchronous IP: maximum distance 210 km / 130 miles; maximum supported latency 2.6 ms.
• Asynchronous Periodic IP: long-distance implementation; maximum supported latency 150 ms round trip.
Note: RCFC is supported but it requires an external SAN Fabric.
Details about the 3PAR controller connectivity
• HP 3PAR Controller Nodes are always installed in pairs, and a system can support from 2 to 8 controllers.
• 9 PCI-e slots are available per controller node and are used for both host and drive-chassis connections using HP 3PAR Host/Disk Adapters.
• As a best practice, HP recommends using PCI slots 2, 5, 8, 1, 4, 7 for the Direct-Attach FlexFabric connections. These PCI slots are also recommended for the host connections.
• The 9 PCI-e slots are equally balanced across 3 PCI-e buses, so it is better to connect the FlexFabric uplinks (from 2 up to 7 of them) in the following order: 2 → 5 → 8 → 1 → 4 → 7.
• If only two HP 3PAR Host Adapters are available, the connections should be load-balanced across the two adapters.
• The remote copy port (port E1) used with a Direct-Attach 3PAR configuration is also shown in Figure 28.
Figure 30: HP 3PAR V-Class controller node recommended connection
[Diagram: a 3PAR StoreServ 10000 controller node, showing the PCI slots recommended for Virtual Connect FlexFabric connections to the c-Class enclosures, the E1 Remote Copy Ethernet 1Gb port (RCIP), and the recommended connection order: slot 2 → 5 → 8 → 1 → 4 → 7. The diagram shows only the first controller's connections.]
Mixed Fabric-Attach and Flat SAN mode support
You can mix Virtual Connect Fabric-Attach and Direct-Attach Flat SAN fabrics in the same Virtual Connect domain. This mix is useful if an administrator needs to attach additional storage systems that are not supported today with the Direct-Attach mode.
Figure 31: Mixed Fabric-Attach and Direct-Attach SAN Fabrics configuration
[Diagram: one VC FlexFabric module with a Fabric-Attach fabric (VC SAN A-1, N_Port, through an HP StorageWorks 4/32B SAN switch to fabric-attached MSA, EVA, XP, and StoreOnce Backup systems) and a Direct-Attach fabric (VC SAN A-2, F_Port, straight to a 3PAR Storage System).]
Mixing Fabric-Attach and Direct-Attach fabrics requires the creation of two different fabrics, because a VC SAN fabric can only contain uplink ports of one type. The configuration shown in Figure 32 is therefore not supported.
Figure 32: VC SAN fabric uplinks cannot be connected at the same time to a SAN fabric and to a storage system (true for both Fabric-Attach and Direct-Attach fabrics)
[Diagram: a single VC SAN fabric (VC SAN 1) on a VC FlexFabric module with one uplink to an HP StorageWorks 4/32B SAN switch and another directly to a 3PAR Storage System; this configuration is not supported.]
Physical view of a mixed Flat SAN and Fabric-Attach configuration
Figure 33: Typical mixed Fabric-Attach and Direct-Attach FC configuration with VC FlexFabric modules. Redundant paths - Server-to-Fabric-Attach uplink ratio 8:1 - Server-to-Direct-Attach uplink ratio 8:1
[Diagram: an HP XP 24000 and HP EVA 8400 reached through Fabric-Attach SAN uplinks (SAN Switches A and B, HP StorageWorks 4/32B), and an HP 3PAR StoreServ 10400 reached through Direct-Attach SAN uplinks, all from the two VC FlexFabric 10Gb/24-Port modules of an HP BladeSystem c7000; Ethernet uplinks (SUS-1 and SUS-2) run to LAN Switches A and B.]
Backup design for a Direct-Attached 3PAR solution
Disk or tape backup systems cannot be directly attached to FlexFabric modules. Instead, you must use a SAN fabric or a LAN-based solution, both of which are commonly used with backup solutions.
Figure 34: Backup design example for a direct attached 3PAR solution
[Diagram: an HP 3PAR StoreServ 10800 connected through Direct-Attach SAN uplinks to the VC FlexFabric modules, while a disk backup solution (HP StoreOnce B6000 Backup Systems) and a tape backup solution (HP StoreEver ESL G3 Tape Library) are reached through Fabric-Attach SAN uplinks to SAN Switches A and B (HP StorageWorks 4/32B); Ethernet uplinks (SUS-1 and SUS-2) run to LAN Switches A and B.]
For more information about 3PAR implementation, support, and services, contact your HP representative or see http://www.hp.com/go/3par
For HP 3PAR documentation, see
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&contentType=SupportManual&prodTypeId=18964&prodSeriesId=5044394&docIndexId=179
Multi-enclosure stacking configuration
Virtual Connect version 2.10 and higher supports the connection of up to four c7000 enclosures, which can reduce the number of network connections per rack and also enables a single VC manager to control multiple enclosures. For more information, see the HP Virtual Connect Multi-Enclosure Stacking Reference Guide in the "Related documentation" section at the end of this document.
Multi-enclosure stacking with Fabric-Attach FC Storage
When utilizing Virtual Connect Fabric-Attach FC storage in a VC Multi-Enclosure environment, it is important to remember that:
• All Virtual Connect Fibre Channel modules (or each SAN uplink port of VC FlexFabric modules) must be connected to the SAN fabrics.
• When Virtual Connect Fibre Channel modules (or FlexFabric modules using SAN connections) are implemented in a multi-enclosure domain, all enclosures must have identical VC-FC module (or VC FlexFabric module) placement and cabling. This ensures that profile mobility is maintained, so that when a profile is moved from one enclosure to another within the stacked VC Domain, SAN connectivity is preserved.
Figure 35: Multi-Enclosure Stacking with Fabric-Attach SAN Fabrics requires all VC FC modules to be connected to
the SAN
[Diagram: a two-enclosure stacked VC Domain (10Gb stack links between the VC Flex-10 Ethernet modules); both enclosures have identical VC-FC module placement and cabling, with every VC-FC module's SAN uplinks connected to SAN Switches A and B (HP StorageWorks 4/32B) in front of the fabric-attached storage systems.]
Figure 36: Unsupported Multi-Enclosure Stacking configuration
[Diagram: a two-enclosure stacked VC Domain in which one enclosure uses VC-FC modules and the other uses VC FlexFabric modules, with different module placement and cabling; only the first enclosure's SAN uplinks reach the fabric-attached storage systems. This configuration is not supported.]
Multi-enclosure stacking with Flat SAN Technology
When using HP Virtual Connect for 3PAR with Flat SAN technology in a VC Multi Enclosure environment:
 All VC FlexFabric modules must be connected to the 3PAR Storage System(s).
 Server profile migration of a SAN-booted server between enclosures is not supported.
 With domains managed by Virtual Connect Enterprise Manager, a server profile migration of a SAN-booted server
between enclosures within the same Domain Group or between different Domain Groups is not supported.
To perform a server profile migration of a SAN-booted server between enclosures in a VC Multi-Enclosure environment, implement the following manual steps when utilizing Virtual Connect for 3PAR with Flat SAN technology:
1. Power off the server.
2. Un-assign the server profile.
3. Change the Primary and Secondary Target WWNs in the FC Boot Parameters section of the profile to reflect the WWNs of the 3PAR storage array ports connected to the destination enclosure. For more information about the FC Boot parameters, see Server Profile Configuration.
4. Assign the profile to the destination location.
5. Power on the destination server.
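For reference, these manual steps can also be scripted from the Virtual Connect Manager CLI. The following is a minimal sketch only, assuming a profile named Profile1 moving from device bay enc0:1 to enc1:1; the profile name, bay references, boot WWNs and LUN are placeholders, and the exact fc-connection parameter names should be verified against the Virtual Connect CLI guide for your firmware:

poweroff server enc0:1
unassign profile Profile1
set fc-connection Profile1 1 BootPriority=Primary BootPort=<destination-3PAR-port-WWN> BootLun=0
set fc-connection Profile1 2 BootPriority=Secondary BootPort=<destination-3PAR-port-WWN> BootLun=0
assign profile Profile1 enc1:1
poweron server enc1:1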
Figure 37: Multi-Enclosure Stacking with Direct-Attach SAN Fabrics requires all VC FlexFabric modules to be
connected to the 3PAR Storage System
[Diagram: two HP BladeSystem c7000 enclosures stacked into one VC Domain over 10Gb stack links and the enclosure interlink; every HP VC FlexFabric 10Gb/24-Port module in both enclosures has Direct-Attach SAN uplink connections to the 3PAR Storage System, plus Ethernet uplink connections to LAN Switch A and LAN Switch B.]
Scenario 1: Simplest scenario with multipathing
Overview
This scenario covers the setup and configuration of two VC Fabric-Attach SAN Fabrics, each using a single uplink
connected to a redundant Fabric.
Figure 38: Logical view
[Diagram: 16 servers, each with HBA1 and HBA2, connect through VC SAN fabrics Fabric-1 and Fabric-2 on two VC-FC 8Gb 20-port modules, each with a single uplink to external Fabric 1 and Fabric 2; 16:1 server-to-uplink ratio on a fully populated enclosure with 16 servers.]
Figure 39: Physical view
[Diagram: a ProLiant BL460c blade with HBA 1 and HBA 2 connects through two HP 4Gb VC-FC modules, each with a single uplink to one of two HP StorageWorks 4/32B SAN switches (Fabric-1 and Fabric-2), which connect to the storage array.]
Benefits
This configuration offers the simplicity of managing only one redundant fabric with a single uplink. Transparent
failover is managed by a multipathing I/O driver running on the server Operating System.
This scenario maximizes the use of the VC Fabric uplink ports, reduces the total number of switch ports needed in
the datacenter and saves money (Fabric ports can be expensive).
Considerations
In a fully populated c7000 enclosure, the server-to-uplink ratio is 16:1. This configuration can result in poor
response time and sometimes requires particular performance monitoring attention.
You can use more uplink ports, both for better performance and because doing so provides login balancing or login redistribution. Also, the use of more than one uplink per VC SAN fabric provides uplink failover in case of failure.
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts have to perform re-login before resuming their I/O
operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server
Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of
NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are
not supported with VC-FC and VC FlexFabric.
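For reference, on a Brocade SAN switch, NPIV is typically enabled per port with the portcfgnpivport command; the port number below is only an example, and the full procedure is in Appendix C:

portcfgshow 10
portcfgnpivport 10, 1

The first command displays the port configuration, including whether NPIV capability is ON; the second enables NPIV on port 10.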
Requirements
This configuration requires:
 Two VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV.
 At least two VC-FC modules.
 A minimum of two VC fabric uplink ports connected to the redundant SAN fabric.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV
configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch
NPIV configuration" depending on your switch model.
Installation and configuration
Switch configuration
Appendices C, D and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, many of the configuration settings within VC can be established using a CLI command set. In
order to connect to VC using a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
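For example, a CLI session might look like the following; the IP address and account are placeholders for your active VCM:

ssh Administrator@192.168.0.120
show fabric
help

The show fabric command lists the currently defined VC SAN fabrics, and help displays the available command set.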
Configuring the VC module
 Physically connect Port 1 on the first VC-FC module to a switch port in SAN Fabric 1.
 Physically connect Port 1 on the second VC-FC module to a switch port in SAN Fabric 2.
Defining a new VC Fabric-Attach SAN Fabric using the GUI
Configure the VC-FC modules and create a VC SAN Fabric.
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. From the Define SAN Fabric dialog, provide the VC Fabric Name, in this case Fabric-1, and add the Fabric uplink Port 1 from the first VC-FC module.
3. Click Apply.
4. On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric named Fabric-2.
6. Under Enclosure Uplink Ports, add Port 1 from the second VC-FC module, and then click Apply.
Two VC SAN fabrics have been created, each with one uplink port allocated from one VC module.
Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI.
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1 Bay=5 Ports=1
add fabric Fabric-2 Bay=6 Ports=1
3. When complete, run the show fabric command.
Blade Server configuration
Server profile configuration steps can be found in Appendix A.
Verification
See Appendix F and Appendix G for verifications and troubleshooting steps.
Summary
In this scenario you have created two FC SAN Fabrics, utilizing a single uplink each; this is the simplest scenario that
can be used to maximize the use of the VC-FC uplink ports and reduce the number of datacenter SAN ports. A
multipathing driver is required for transparent failover between the two server HBA ports.
Additional uplinks could be added to the SAN fabrics which could increase performance and/or availability. This is
covered in the following scenario.
Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic
Login Balancing connected to the same redundant SAN
fabric
Overview
This scenario covers the setup and configuration of two VC Fabric-Attach SAN Fabrics with Dynamic Login Balancing
Distribution, each utilizing two to eight uplink ports connected to a redundant Fabric.
Figure 40: 8:1 oversubscription with VC-FC 8Gb 20-Port modules using 4 uplink ports
[Diagram: 16 servers with dual HBAs connect through Fabric-1 and Fabric-2 on two VC-FC 8Gb 20-port modules, each fabric using two uplinks to external Fabric 1 and Fabric 2; 8:1 server-to-uplink ratio on a fully populated enclosure with 16 servers.]
Note: Static Login Distribution has been removed since VC firmware 3.00 but is the only method available in VC
firmware 1.24 and earlier. Dynamic Login Balancing capabilities are included in VC firmware 1.3x and later.
Figure 41: 4:1 oversubscription with VC-FC 8Gb 20-Port modules using 8 uplink ports
[Diagram: the same topology with each fabric using four uplinks on VC-FC 8Gb 20-port modules; 4:1 server-to-uplink ratio on a fully populated enclosure with 16 servers.]
Figure 42: 2:1 oversubscription with VC-FC 8Gb 24-Port modules using 16 uplink ports
[Diagram: the same topology with each fabric using eight uplinks on VC-FC 8Gb 24-port modules; 2:1 server-to-uplink ratio on a fully populated enclosure with 16 servers.]
Figure 43: Physical view
[Diagram: a ProLiant BL460c blade with HBA 1 and HBA 2 connects through two HP 4Gb VC-FC modules, each with four uplinks to one of two HP StorageWorks 4/32B SAN switches (Fabric-1 and Fabric-2), which connect to the storage array.]
Benefits
Using multiple ports in each VC Fabric-Attach SAN fabric allows dynamic distribution of server logins across the ports in a round-robin fashion. Dynamic Login Distribution performs auto-failover for the server logins if the corresponding uplink port becomes unavailable. Servers that were logged in to the failed port are reconnected to one of the remaining ports in the VC SAN fabric.
This configuration offers increased performance and better availability. The server-to-uplink ratio is adjustable, up
to 2:1 with the VC-FC 8Gb 24-port module (as few as two servers share one physical Fabric uplink) and up to 4:1 with
the VC-FC 20-port and FlexFabric modules.
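As a quick worked example: with a fully populated enclosure, 16 server HBA ports log in through each module, so four uplinks per fabric give 16 / 4 = 4 logins per uplink (4:1), and eight uplinks on a 24-port module give 16 / 8 = 2 logins per uplink (2:1).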
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O
operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server
Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of
NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are
not supported with VC-FC and VC FlexFabric.
Requirements
This configuration requires:
 Two VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
 At least two VC-FC modules
 A minimum of four VC fabric uplink ports connected to the redundant SAN fabric.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV
configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch
NPIV configuration" depending on your switch model.
Installation and configuration
Switch configuration
Appendices C, D and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC using a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC module
 Physically connect the uplink ports on the first VC-FC module to switch ports in SAN Fabric 1
 Physically connect the uplink ports on the second VC-FC module to switch ports in SAN Fabric 2
Defining a new VC SAN Fabric using the GUI
Configure the VC-FC modules from the HP Virtual Connect Manager home screen.
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric screen, provide the VC Fabric Name, in this case Fabric-1.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port 1, Port 2, Port 3 and Port 4 from the first VC-FC module (Bay 5).
4. On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric named Fabric-2 and add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port 1, Port 2, Port 3 and Port 4 from the second VC-FC module (Bay 6).
You have created two VC Fabric-Attach SAN fabrics, each with four uplink ports: one fabric on the VC module in Bay 5 and one on the VC module in Bay 6.
Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1 Bay=5 Ports=1,2,3,4
add fabric Fabric-2 Bay=6 Ports=1,2,3,4
3. When complete, run the show fabric command.
Blade Server configuration and verification
See Appendix A for server profile configuration steps. See Appendix F and Appendix G for verifications and
troubleshooting steps.
Summary
This scenario shows two FC SAN Fabric-Attach fabrics with multiple uplink ports using Dynamic Login Distribution
which allows for login balancing and host connectivity auto failover. This configuration enables increased
performance and improved availability. Host login connections to the VC Fabric uplink ports are handled
dynamically, and the load is balanced across all available ports in the group.
A multipathing driver is required for transparent failover between the two server HBA ports.
Scenario 3: Multiple VC Fabric-Attach SAN fabrics with
Dynamic Login Balancing connected to the same redundant
SAN fabric with different priority tiers
Overview
This scenario covers the setup and configuration of four VC Fabric-Attach SAN fabrics with Dynamic Login Balancing
Distribution that are all connected to the same redundant SAN fabric.
Figure 44: Multiple VC Fabric-Attach SAN Fabrics with different priority tiers connected to the same fabrics
[Diagram: 16 servers with dual HBAs; servers 1 to 15 connect through Fabric-1-Tier1 and Fabric-2-Tier1 (5:1 server-to-uplink ratio), and server 16 connects through Fabric-1-Tier2 and Fabric-2-Tier2 (1:1 ratio), all on two VC-FC 8Gb 20-port modules to external Fabric 1 and Fabric 2.]
Figure 45: Physical view
[Diagram: blade servers 1 to 15 and blade 16 connect through two HP 4Gb VC-FC modules to two HP StorageWorks 4/32B SAN switches (Fabric-1 and Fabric-2) and the storage array; the Tier1 fabrics use three uplinks each and the Tier2 fabrics one uplink each.]
Note: Static Login Distribution has been removed since VC firmware 3.00 but is the only method available in VC
firmware 1.24 and earlier. Dynamic Login Balancing capabilities are included in VC firmware 1.3x and later.
Benefits
This configuration can guarantee non-blocking throughput for a particular application or set of blades by creating a
separate VC SAN Fabric for that important traffic, and ensuring that the total aggregate uplink throughput for that
particular fabric is greater than or equal to the throughput for the HBAs used.
In other words, this is a way to adjust the server-to-uplink ratio, to control more granularly which server blades use
which VC uplink port, and also to enable the distribution of servers according to their I/O workloads.
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O
operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server
Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of
NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are
not supported with VC-FC and VC FlexFabric.
Requirements
This configuration requires:
 Four VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
 At least two VC-FC modules
 A minimum of four VC fabric uplink ports connected to the redundant SAN fabric.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV
configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch
NPIV configuration" depending on your switch model.
Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC-FC ports.
Installation and configuration
Switch configuration
Appendices C, D and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC module
 Physically connect the uplink ports on the first VC-FC module to switch ports in SAN Fabric 1
 Physically connect the uplink ports on the second VC-FC module to switch ports in SAN Fabric 2
Defining a new VC SAN Fabric using the GUI
Configure the VC-FC modules from the HP Virtual Connect Manager home screen:
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. From the Define SAN Fabric dialog, provide the VC Fabric Name, in this case Fabric-1-Tier1.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port 1, Port 2 and Port 3 from the first VC-FC module (Bay 5).
4. On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric named Fabric-1-Tier2.
6. Under Enclosure Uplink Ports, add Bay 5, Port 4, and then click Apply.
You have created two VC Fabric-Attach fabrics, each with uplink ports allocated from one VC module in Bay 5. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
7. Create two additional VC SAN Fabrics, Fabric-2-Tier1 and Fabric-2-Tier2, attached this time to the second VC-FC module:
- Fabric-2-Tier1 with 3 ports
- Fabric-2-Tier2 with one port
You have created four VC Fabric-Attach fabrics, two for the server group Tier1 with three uplink ports and two for the guaranteed throughput server Tier2 with one uplink port.
Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-Tier1 Bay=5 Ports=1,2,3
add fabric Fabric-1-Tier2 Bay=5 Ports=4
add fabric Fabric-2-Tier1 Bay=6 Ports=1,2,3
add fabric Fabric-2-Tier2 Bay=6 Ports=4
3. When complete, run the show fabric command.
Verification of the SAN Fabrics configuration
Make sure all the SAN fabrics belonging to the same Bay are connected to the same core FC SAN fabric switch.
1. Go to the SAN Fabrics screen.
2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the entries from the same Bay number are all the same, indicating a single SAN fabric.
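The same check can be made from the CLI; show fabric reports the status of each VC SAN fabric, and show uplinkport reports the upstream device each uplink port is connected to:

show fabric
show uplinkport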
For additional verification and troubleshooting steps, see Appendix F.
Blade Server configuration
For server profile configuration steps, see Appendix A.
After you have configured the VC SAN fabrics, you can select a Server Profile and choose the VC fabric to which you
would like your HBA ports to connect.
1. Select the server profile, in this case esx4-1.
2. Under FC HBA Connections, select the FC SAN fabric name to which you would like Port 1 Bay 5 to connect.
Verification
See Appendix F and Appendix G for verifications and troubleshooting steps.
Summary
This scenario shows how you can create multiple VC Fabric-Attach SAN fabrics that are all connected to the same
redundant SAN fabric. This configuration enables you to control the throughput for a particular application or set of
blades.
Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers
Overview
This scenario covers the setup and configuration of four VC Fabric-Attach SAN fabrics with Dynamic Login Balancing Distribution that are connected to different redundant SAN fabrics.
Figure 46: Multiple VC SAN Fabrics with different priority tiers connected to different SAN Fabrics
[Diagram: 16 servers with dual HBAs; servers 1 to 14 connect through Fabric-A-1 and Fabric-B-1 (5:1 server-to-uplink ratio) to external Fabric 1A and Fabric 1B, and servers 15 to 16 connect through Fabric-A-2 and Fabric-B-2 (1:1 ratio) to Fabric 2A and Fabric 2B, all on two VC-FC 8Gb 20-port modules.]
Figure 47: Physical view
[Diagram: blade servers 1 to 14 and blades 15 to 16 connect through two HP 4Gb VC-FC modules; the first redundant fabric (two HP StorageWorks 4/32B SAN switches, Fabric 1A and 1B) serves Storage Array 1, and the second redundant fabric (two Cisco MDS 9140 switches, Fabric 2A and 2B) serves Storage Array 2 (HP EVA HSV300 and HP 3PAR F-class shown).]
Note: Static Login Distribution has been removed since VC firmware 3.00 but is the only method available in VC
firmware 1.24 and earlier. Dynamic Login Balancing capabilities are included in VC firmware 1.3x and later.
Benefits
This configuration offers the ability to connect different redundant SAN Fabrics to the VC-FC modules, which gives you more granular control over which server blades use each VC-FC port, while also enabling the distribution of servers according to their I/O workloads.
Considerations
Each Virtual Connect 4Gb 20-Port Fibre Channel module, 8Gb 20-Port Fibre Channel module, and FlexFabric module supports up to 4 SAN fabrics. Each Virtual Connect 8Gb 24-Port Fibre Channel module supports up to 8 SAN fabrics.
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts must re-login before resuming their I/O operations. Only
a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System
can provide a completely transparent transition.
The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of
NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are
not supported with VC-FC and VC FlexFabric.
Requirements
This configuration requires:
 At least two Fabric-Attach SAN fabrics with one or more switches that support NPIV
 At least one VC-FC module
 A minimum of two VC fabric uplinks connected to each of the SAN fabrics.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV
configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch
NPIV configuration" depending on your switch model.
Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC-FC ports.
Installation and configuration
Switch configuration
Appendices C, D and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC using a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC module
Physically connect some uplink ports as follows:
 On the first VC-FC module to switch ports in SAN Fabric 1A
 On the first VC-FC module to switch ports in SAN Fabric 2A
 On the second VC-FC module to switch ports in SAN Fabric 1B
 On the second VC-FC module to switch ports in SAN Fabric 2B
Defining a new VC SAN Fabric via the GUI
Configure the VC-FC modules from the HP Virtual Connect Manager home screen.
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric screen, provide the VC Fabric Name, in this case Fabric-A-1.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port 1, Port 2 and Port 3 from the first VC-FC module (Bay 5).
4. On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric named Fabric-A-2. Under Enclosure Uplink Ports, add Bay 5, Port 4, and then click Apply.
You have created two VC Fabric-Attach fabrics, each with uplink ports allocated from one VC module in Bay 5. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
6. Follow the same steps to create two additional VC SAN Fabrics, Fabric-B-1 and Fabric-B-2, attached this time to the second VC-FC module:
- Fabric-B-1 with 3 ports
- Fabric-B-2 with one port
You have created four VC Fabric-Attach fabrics, each with uplink ports allocated from the VC modules in Bay 5 and Bay 6. Two of the fabrics are configured with three uplinks each, and two are configured with a single uplink.
Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-A-1 Bay=5 Ports=1,2,3
add fabric Fabric-A-2 Bay=5 Ports=4
add fabric Fabric-B-1 Bay=6 Ports=1,2,3
add fabric Fabric-B-2 Bay=6 Ports=4
3. When complete, run the show fabric command.
Verification of the SAN Fabrics configuration
Make sure the SAN fabrics belonging to the same Bay are connected to different core FC SAN fabric switches:
1. Go to the SAN Fabrics screen.
2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the VC module uplink ports are physically connected to four independent FC SAN switches.
The first redundant Fabric is connected to two Brocade Silkworm 300 SAN switches:
 Fabric-A-1 uplink ports are connected to FC switch 10:00:00:05:1E:5B:2C:14
 Fabric-B-1 uplink ports are connected to FC switch 10:00:00:05:1E:5B:DC:82
The second redundant Fabric is connected to two Cisco Nexus 5010 switches:
 Fabric-A-2 uplink ports are connected to FC switch 20:01:00:0D:EC:CD:F1:C1
 Fabric-B-2 uplink ports are connected to FC switch 20:01:00:0D:EC:CF:B4:C1
For additional verification and troubleshooting steps, see Appendix G.
Blade Server configuration
For server profile configuration steps, see Appendix A.
After you have configured the VC SAN fabrics, you can select a Server Profile and choose the VC fabric to which you
would like your HBA ports to connect.
1. Select the server profile, in this case esx4-1.
2. Under FC HBA Connections, select the FC SAN fabric name to which you would like Port 1 Bay 5 to connect.
Verification
See Appendix F and Appendix G for verifications and troubleshooting steps.
Summary
This scenario shows how you can create multiple VC Fabric-Attach SAN fabrics that are connected to independent
SAN fabric switches; for example, a first VC SAN Fabric can be connected to a Brocade SAN environment while a
second one is connected to a Cisco SAN Fabric. This configuration enables you to granularly control the server
connections to independent SAN fabrics.
Scenario 5: Fabric-Attach SAN fabrics connectivity with HP
Virtual Connect FlexFabric 10Gb/24-Port module
Overview
Virtual Connect FlexFabric is an extension of Virtual Connect Flex-10 which leverages the new FCoE (Fibre Channel
over Ethernet) protocols. By leveraging FCoE for connectivity to existing Fibre Channel SAN networks, you can
reduce the number of HBAs required within the server blade and the Fibre Channel modules. This further reduces
cost, complexity, power and administrative overhead.
Figure 48: Multiple VC Fabric-Attach SAN Fabrics with different priority tiers connected to the same fabrics
[Diagram: 16 servers with FlexHBA CNA ports 1 and 2; servers 1 to 15 connect through Fabric-1-Tier1 and Fabric-2-Tier1 (5:1 server-to-uplink ratio), and server 16 connects through Fabric-1-Tier2 and Fabric-2-Tier2 (1:1 ratio), on two VC FlexFabric 10Gb/24-Port modules to external Fabric 1 and Fabric 2; ports X5 to X8 are Ethernet-only ports.]
This scenario is similar to scenario 3, which connects multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing to the same redundant SAN fabric with different priority tiers. Instead of using the legacy Virtual Connect Fibre Channel modules with HBAs inside the servers, however, this scenario uses CNAs (Converged Network Adapters) with adjustable FlexHBAs (Flexible Host Bus Adapters) and the VC FlexFabric modules.
You can also implement scenarios 1 through 4 with the VC FlexFabric modules.
Benefits
Using FlexFabric technology with Fibre Channel over Ethernet, these modules converge traffic over high speed 10 Gb
connections to servers with HP FlexFabric Adapters. Each module provides four adjustable connections (three data
and one storage or all data) to each 10 Gb server port. You avoid the confusion of traditional and other converged
network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules,
cables and software licenses. Also, Virtual Connect wire-once connection management is built-in, enabling server
adds, moves, and replacement in minutes instead of days.
With the HP Virtual Connect FlexFabric 10Gb/24-port Module, you can fine-tune the bandwidth speed of the storage
connection between the FlexFabric adapter and VC FlexFabric module just as you can for Ethernet Flex-10 NIC
connections. You can adjust the FlexHBA port in 100Mb increments up to a full 10Gb connection. On external uplinks,
Fibre Channel ports will auto-negotiate 2, 4 or 8Gb speeds based on the upstream switch port setting.
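As a sketch of the CLI equivalent, the FlexHBA bandwidth is set on the profile's FCoE connection; the profile name and connection number below are placeholders, and the SpeedType/CustomSpeed parameter names are an assumption to verify against the Virtual Connect CLI guide for your firmware:

set fcoe-connection MyProfile:1 SpeedType=Custom CustomSpeed=4000

This would pin the first FlexHBA of profile MyProfile at 4Gb of the 10Gb server port.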
This configuration offers the simplest, most converged and flexible way to connect to any network, eliminating 95% of network sprawl and improving performance without disrupting operations.
The server-to-uplink ratio is adjustable, up to 4:1 with the FlexFabric modules.
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O
operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server
Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC FlexFabric modules must be configured to accept NPIV logins. Due to the
use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended
distances are not supported with VC-FC and VC FlexFabric.
Figure 49: Physical view
[Diagram: ProLiant BL460c blades with CNA 1 and CNA 2 connect over DCB/FCoE downlinks to two HP VC FlexFabric 10Gb/24-Port modules, which use native Fibre Channel uplinks to two HP StorageWorks 4/32B SAN switches (Fabric-1 and Fabric-2) and on to the storage array.]
Virtual Connect FlexFabric Uplink Port Mappings
It is important to note how the external uplink ports on the FlexFabric module are configured. The graphic below
outlines the type and speed each port can be configured as:
 Ports X1 – X4: Can be configured as 10Gb Ethernet or Fibre Channel.
FC speeds supported: 2Gb, 4Gb or 8Gb using 4Gb or 8Gb FC SFP modules; refer to the FlexFabric QuickSpecs for a list of supported SFP modules.
 Ports X5 – X8: Can be configured as 1Gb or 10Gb Ethernet
 Ports X7 – X8: Are also shared as internal cross connect
Note: Even though the Virtual Connect FlexFabric module supports Stacking, stacking only applies to Ethernet
traffic. FC uplinks cannot be consolidated, as it is not possible to stack the FC ports.
Figure 50: FlexFabric Module port configuration, speeds and types
[Diagram: FlexFabric module faceplate; ports X1 to X4 can be enabled for SAN connection, and the 16 downlink connections to FlexFabric CNAs are individually configurable as 10Gb Ethernet, Flex-10/FCoE or Flex-10/iSCSI.]
Four Flexible Uplink Ports (X1-X4):
 Individually configurable as FC/FCoE/Ethernet
 Ethernet 10Gb (only): SR/LR/LRM SFP+ Transceivers, Copper DAC
 Fibre Channel 2/4/8Gb: Short/Long Wave FC Transceivers
 FC uplinks: N_Ports, just like legacy VC-FC module uplinks
 Flat SAN support with 3PAR
Four Ethernet Uplink Ports (X5-X8):
 Ethernet only (1/10 GbE)
 SFP+ SR/LR/ELR/LRM/Copper DAC
 Stacking supported for Ethernet only (FCoE future upgrade)
Note: Since VC 4.01, the Virtual Connect FlexFabric SAN uplinks can be connected to an upstream DCB port with
Dual-hop FCoE support. For more information about FCoE support, see Dual-Hop FCoE with HP Virtual Connect
modules Cookbook in the Related Documentation section.
Requirements
With Virtual Connect FlexFabric this configuration requires:
 Four VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
 At least two VC FlexFabric modules
 A minimum of four VC fabric uplink ports connected to the redundant SAN fabric.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV
configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch
NPIV configuration" depending on your switch model.
Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC FlexFabric FC ports.
Installation and configuration
Switch configuration
Appendices C, D and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC FlexFabric module
With Virtual Connect FlexFabric modules, you can use only uplink ports X1, X2, X3 and X4 for Fibre Channel
connectivity.
Physically connect some uplink ports (X1, X2, X3 or X4) as follows:
 On the first VC FlexFabric module to switch ports in SAN Fabric 1
 On the second VC FlexFabric module to switch ports in SAN Fabric 2
Defining a new VC SAN Fabric via the GUI
Configure the VC FlexFabric module from the HP Virtual Connect Manager home screen:
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-Tier1. Leave the default Fabric-Attach fabric type.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port X1, Port X2 and Port X3 from the first Virtual Connect FlexFabric module (Bay 1).
Show Advanced Settings is only available with VC FlexFabric modules and provides the option to enable Automatic Login Re-Distribution.
The Automatic Login Re-Distribution method allows FlexFabric modules to fully control the login allocation between the servers and Fibre uplink ports. A FlexFabric module automatically re-balances the server logins at the time interval defined in the Fibre Channel → WWN Settings → Miscellaneous tab.
4. Select either Manual or Automatic and click Apply.
If you select Manual Login Re-Distribution, the login allocation between the servers and Fabric uplink ports never changes, even after the recovery of a port failure. This remains true until an administrator decides to initiate the server login re-balancing.
To initiate server login re-balancing, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN Fabric, and click Redistribute.
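The login re-distribution setting can also be applied from the CLI when a fabric is created or edited. A sketch, assuming the fabric parameter is named LinkDist (verify the exact parameter name in the Virtual Connect CLI guide for your firmware):

set fabric Fabric-1-Tier1 LinkDist=Auto

Setting it back to Manual leaves existing logins in place until an administrator triggers a redistribution.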
On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric-Attach fabric named Fabric-1-Tier2. Under Enclosure Uplink Ports, add Bay 1, Port X4, and then click Apply.
You have created two VC fabrics, each with uplink ports allocated from one VC FlexFabric module in Bay 1. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
6. Create two additional VC Fabric-Attach SAN Fabrics, Fabric-2-Tier1 and Fabric-2-Tier2, attached this time to the second VC FlexFabric module:
- Fabric-2-Tier1 with 3 ports
- Fabric-2-Tier2 with one port
You have created four VC fabrics, two for the server group Tier1 with three uplink ports and two for the guaranteed throughput server Tier2 with one uplink port.
Defining a new VC SAN Fabric using the CLI
Configure the VC FlexFabric modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-Tier1 Bay=1 Ports=1,2,3
add fabric Fabric-1-Tier2 Bay=1 Ports=4
add fabric Fabric-2-Tier1 Bay=2 Ports=1,2,3
add fabric Fabric-2-Tier2 Bay=2 Ports=4
3. When complete, run the show fabric command.
Verification of the SAN Fabrics configuration
Make sure all of the SAN fabrics belonging to the same Bay are connected to the same core FC SAN fabric switch:
1. Go to the SAN Fabrics screen.
2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the entries from the same Bay number are all the same, indicating a single SAN fabric.
For additional verification and troubleshooting steps, see Appendix F.
Blade Server configuration
See Appendix B for server profile configuration steps with VC FlexFabric.
Summary
This VC FlexFabric scenario shows how you can create multiple VC SAN fabrics that are all connected to the same
redundant SAN fabric. This configuration enables you to control the throughput for a particular application or set of
blades.
Scenario 6: Flat SAN connectivity with HP Virtual Connect
FlexFabric 10Gb/24-Port modules and HP 3PAR Storage
Systems
Overview
HP Virtual Connect provides the industry's first direct-attach connection to Fibre Channel storage that does not require dedicated Fibre Channel switches. This technology, called Flat SAN, significantly reduces complexity and cost while reducing latency between servers and storage by eliminating the need for multitier storage area networks (SANs). Designed for virtual and cloud workloads, this solution reduces storage networking costs by 50% and enables 2.5X faster provisioning compared to competitive offerings.
Figure 51: HP Virtual Connect Flat SAN technology with HP 3PAR Storage Systems
[Diagram: 16 servers with FlexHBA CNA ports connect through Direct-Attach Fabric-1 and Direct-Attach Fabric-2 on two VC FlexFabric modules straight to the 3PAR Storage System; 16:1 server-to-uplink ratio on a fully populated enclosure with 16 servers; ports X5 to X8 are Ethernet-only ports.]
Benefits
Storage solutions usually include components such as server HBAs (Host Bus Adapters), SAN switches/directors,
optical transceivers/cables, and storage systems. You may have concerns about management and efficiency,
because of the sheer number of components. Moreover, different components require different tools—such as SAN
fabric management, storage management (for each type of storage), and HBA management.
HP Virtual Connect Flat SAN technology for HP 3PAR Storage Systems helps you to:
Reduce costs
 Do away with the need for expensive SAN fabrics, HBAs, and cables.
 Save on operating costs and cut down on capital expenditure.
 Scale with the “pay-as-you-grow” model, which lets you pay for only what you need now.
Overcome complexity
 Connect Virtual Connect FlexFabric Fibre Channel directly to HP 3PAR FC storage to simplify your server connections.
 Improve efficiency with automated fabric management with simplified management tools.
 Configure your Virtual Connect as “direct attach” or “fabric attach,” depending on your solution design.
Simplify management
 Manage through a single pane of glass with Virtual Connect Manager Web-based and Command Line Interfaces.
 Use Virtual Connect technology to further improve management efficiency.
 Reduce the disparity of separate fabric and device management.
Considerations
In a fully populated c7000 enclosure, the server-to-uplink ratio is 16:1. This configuration can result in poor
response time and might also require particular performance monitoring attention. You can use additional uplink
ports for better performance.
A properly configured multipathing I/O driver must be running on the server Operating System to prevent server I/O
disruption when a failure occurs somewhere between VC and the external storage system.
Be sure to configure the SAN 3PAR host ports that connect to the VC FlexFabric modules to accept the connection.
Physical view of a Direct-Attach configuration
Figure 52: Virtual Connect Flat SAN with VC FlexFabric modules for HP 3PAR Storage Systems, redundant paths, server-to-Direct-Attach uplink ratio 16:1
[Diagram: an HP BladeSystem c7000 with two HP VC FlexFabric 10Gb/24-Port modules; each module has one Direct-Attach SAN uplink connection to a controller node (Node 0 and Node 1) of an HP 3PAR StoreServ 10400, plus Ethernet uplink connections to LAN Switch A and LAN Switch B.]
Requirements
This configuration with Virtual Connect FlexFabric requires:
 Two VC Direct-Attach SAN fabrics.
 At least two VC FlexFabric modules.
 A minimum of two VC fabric uplink ports connected to the 3PAR Controller nodes.
For more information about implementing and configuring a 3PAR storage system, contact your HP representative
or see http://www.hp.com/go/3par
Installation and configuration
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC FlexFabric module
With Virtual Connect FlexFabric modules, you can only use uplink ports X1, X2, X3 and X4 for Fibre Channel
connectivity.
Make the following physical connections:
 Uplink port X1 on the first VC FlexFabric module to the first HP 3PAR Controller node.
 Uplink port X1 on the second VC FlexFabric module to the second HP 3PAR Controller node.
See 'Details about the 3PAR controller connectivity' to properly connect the 3PAR V-Class controller node.
Defining a new VC SAN Fabric via the GUI
Configure the VC FlexFabric module from the HP Virtual Connect Manager home screen:
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-3PAR.
3. Add Port X1 from the first Virtual Connect FlexFabric module (Bay 1).
4. Change the default Fabric-Attach fabric type to DirectAttach and then click Apply.
5. On the SAN Fabrics screen, click Add to create the second fabric.
6. Create a new VC Direct-Attach fabric named Fabric-2-3PAR.
7. Add Port X1 from the second Virtual Connect FlexFabric module (Bay 2).
8. For the fabric type, select DirectAttach and then click Apply.
You have created two VC Direct-Attach fabrics, each with one uplink port allocated from one VC module. The connection status is red for both fabrics because you still need to configure the 3PAR controller ports.
Defining a new VC SAN Fabric using the CLI
Configure the VC Direct-Attach SAN Fabrics from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-3PAR Bay=1 Ports=1 Type=DirectAttach
add fabric Fabric-2-3PAR Bay=2 Ports=1 Type=DirectAttach
3. When complete, run the show fabric command.
Configuration of the 3PAR controller ports
Configure the 3PAR Controller ports to accept the Direct-Attach connection to the FlexFabric modules.
1. Start the 3PAR InForm Management Console.
2. Go to the Ports folder and select the first port used by the VC modules.
3. Right-click the port and click Configure…
4. Change the Connection Mode from Disk to Host.
5. Change the Connection Type from Loop to Point and click Ok.
6. Modifying the FC Port configuration can disrupt partner hosts using the same PCI slot. If all of your hosts use a redundant connection to the 3PAR array, you can click Yes to the warning message. If they don't, do not continue until every host is also connected to another 3PAR controller node.
7. Repeat the same steps for the second 3PAR port connected to the second FlexFabric module.
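The same port change can be made from the 3PAR InForm CLI. A minimal sketch, assuming the VC uplink lands on controller port 0:1:1; command forms vary by InForm OS version, so verify them against the 3PAR CLI reference:

controlport offline 0:1:1
controlport config host -ct point -f 0:1:1
controlport rst 0:1:1

The port is taken offline, reconfigured as a point-to-point host port, and then reset to bring the new settings into effect.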
Verification of the 3PAR connection
After you have completed the 3PAR port configuration, the External Connections tab of the SAN Fabrics window
should indicate a green status and a green port status for each of the Direct-Attach Fabrics, and show the link speed
and the WWN of the controller node port.
If the port is unlinked and no connectivity exists, check the Port Status for the reason. For more information about
possible causes, see Appendix F.
Blade Server configuration
Configure a server profile with a Direct-Attached 3PAR array.
1. From the server profile interface, select the 3PAR Direct-Attach Fabrics for the two FC/FCoE HBA connections.
2. For a Boot from SAN configuration, click Fibre Channel Boot Parameters.
3. Select Primary and Secondary in the SAN Boot column.
4. Enter the WWN of your 3PAR Controller node ports and LUN numbers, as reported for controller ports 1 and 2 by the 3PAR InForm Management Console.
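A sketch of the equivalent profile setup from the VCM CLI, assuming the FCoE connections of a FlexFabric profile accept the same boot parameters as fc-connections (the profile name, WWNs and LUN are placeholders; verify the parameter names against the Virtual Connect CLI guide):

add profile esx4-1
add fcoe-connection esx4-1 Fabric=Fabric-1-3PAR
add fcoe-connection esx4-1 Fabric=Fabric-2-3PAR
set fcoe-connection esx4-1:1 BootPriority=Primary BootPort=<3PAR-node-port-WWN> BootLun=0
set fcoe-connection esx4-1:2 BootPriority=Secondary BootPort=<3PAR-node-port-WWN> BootLun=0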
OS Configuration
The Operating System requires installation of MPIO for high availability with load balancing of I/O.
In this scenario, where each Direct-Attach fabric has one VC uplink port, the Operating System should discover two different paths to the 3PAR volume.
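To confirm that both paths are visible, standard host-side multipath tools can be used; for example, on Linux with device-mapper-multipath or on VMware ESXi:

multipath -ll
esxcli storage core path list

Each 3PAR volume should report two paths, one through each Direct-Attach fabric.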
For more information about 3PAR implementation or configuring an HP 3PAR Storage System with Windows, Linux, or VMware, see http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&taskId=135&prodClassId=-1&contentType=SupportManual&docIndexId=64180&prodTypeId=18964&prodSeriesId=5044394#2
Summary
This VC Direct-Attach FlexFabric scenario shows how you can easily create multiple VC SAN fabrics that connect directly to the same 3PAR Storage System. This configuration enables you to reduce the complexity of your enterprise storage solution.
Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity
with HP Virtual Connect FlexFabric 10Gb/24-Port modules
and HP 3PAR Storage Systems
Overview
You can mix the HP Virtual Connect direct-attach Flat SAN connection to Fibre Channel storage with a traditional VC Fabric-Attach SAN Fabric connected to Fibre Channel switches. This capability of the Virtual Connect FlexFabric modules can be useful for backup and data migration, because attaching additional storage systems or backup solutions is not supported today with the Direct-Attach mode.
This scenario is particularly useful for migrating existing legacy SAN storage systems to 3PAR using the SAN Fabric connection.
Figure 53: Mixed Fabric-Attach and Direct-Attach FC configuration with VC FlexFabric modules
[Diagram: 16 servers with FlexHBA CNA ports; Fabric-Attach Fabric-1 and Fabric-2 uplinks (7:1 server-to-uplink ratio) connect through two HP StorageWorks 4/32B SAN switches to MSA, EVA and XP arrays plus StoreOnce Disk Backup and StoreEver Tape Library, while Direct-Attach Fabric-1 and Fabric-2 uplinks (1:1 ratio) connect straight to the 3PAR Storage System; optional FC links can also join the 3PAR to the SAN switches.]
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabrics. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts must re-login before resuming their I/O operations. Only
a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System
can provide a completely transparent transition.
The SAN switch ports connecting to the VC Fabric-Attach SAN Uplinks must be configured to accept NPIV logins. Due
to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended
distances are not supported with VC FlexFabric.
Be sure to properly configure the SAN 3PAR host ports connecting to the VC FlexFabric modules to accept the
connection.
You can also connect the 3PAR to the SAN switches to offer 3PAR access to all servers connected to the SAN.
Physical view of a mixed Flat SAN and Fabric-Attach configuration
Figure 54: Mixed Fabric-Attach and Direct-Attach FC configuration with VC FlexFabric modules, redundant paths, server-to-Fabric-Attach uplink ratio 8:1, server-to-Direct-Attach uplink ratio 8:1
[Diagram: an HP BladeSystem c7000 with two HP VC FlexFabric 10Gb/24-Port modules; Fabric-Attach SAN uplink connections run through two HP StorageWorks 4/32B SAN switches (SAN Switch A and SAN Switch B, Fabric-1 and Fabric-2) to HP XP 24000 and HP EVA 8400 arrays, while Direct-Attach SAN uplink connections run crisscrossed to controller nodes 0 and 1 of an HP 3PAR StoreServ 10400; Ethernet uplink connections go to LAN Switch A and LAN Switch B.]
Note: An 8:1 oversubscription with 2 FC cables per FlexFabric module is the most common use case.
Note: To improve redundancy, it is recommended to connect the FC cables in a crisscross manner (i.e., each FlexFabric module is connected to two different controller nodes).
Requirements
This configuration with Virtual Connect FlexFabric requires:
 Two VC Direct-Attach SAN fabrics
 Two VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
 At least two VC FlexFabric modules
 A minimum of two VC fabric uplink ports connected to the 3PAR Controller nodes
 A minimum of two VC fabric uplink ports connected to the redundant SAN fabric
For more information about implementing and configuring a 3PAR storage system, contact your HP representative
or see http://www.hp.com/go/3par.
Installation and configuration
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
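For example, a first session typically looks like the following. This is a minimal sketch: the IP address is a placeholder for your active VCM address, and Administrator stands for any account with domain privileges.
ssh Administrator@192.168.0.120
->help          # list the available CLI commands
->show fabric   # display the SAN fabrics currently defined in the domain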
Configuring the VC FlexFabric module
With Virtual Connect FlexFabric modules, you can only use uplink ports X1, X2, X3 and X4 for Fibre Channel
connectivity.
We recommend the following physical connections:
 Uplink ports X1 and X2 on the FlexFabric module in Bay 1 to switch ports in SAN Fabric 1.
 Uplink ports X1 and X2 on the FlexFabric module in Bay 2 to switch ports in SAN Fabric 2.
 Uplink port X3 on the FlexFabric module in Bay 1 to the first HP 3PAR Controller node.
 Uplink port X4 on the FlexFabric module in Bay 1 to the second HP 3PAR Controller node.
 Uplink port X3 on the FlexFabric module in Bay 2 to the first HP 3PAR Controller node.
 Uplink port X4 on the FlexFabric module in Bay 2 to the second HP 3PAR Controller node.
See ‘Details about the 3PAR controller connectivity’ to properly connect the 3PAR controller node.
Defining a new VC SAN Fabric using the GUI
Configure the VC FlexFabric module from the HP Virtual Connect Manager home screen:
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-3PAR.
3. Add Port X3 and Port X4 from the first Virtual Connect FlexFabric module (Bay 1).
4. Change the default Fabric-Attach fabric type to DirectAttach.
5. You can set a Preferred and Maximum FCoE connection speed that can be applied to server profiles when an FCoE connection is used.
6. Click Apply.
7. On the SAN Fabrics screen, click Add to create the second fabric.
8. Create a new VC Direct-Attach fabric named Fabric-2-3PAR and then add Port X3 and Port X4 from the second Virtual Connect FlexFabric module (Bay 2).
9. For the fabric type, select DirectAttach and, if required, select the proper Preferred and Maximum FCoE connection speed, then click Apply.
You have created two VC Direct-Attach fabrics, each with two uplink ports allocated from one VC module but connected to two different 3PAR controller nodes for better redundancy.
Now you can create the two Fabric-Attach SAN Fabrics:
1. On the SAN Fabrics screen, click Add to create the third fabric.
2. Create a new VC Fabric-Attach fabric named Fabric-1 and then add Port X1 and X2 from the first Virtual Connect FlexFabric module (Bay 1).
3. For the fabric type, keep the default FabricAttach.
4. If necessary, select Show Advanced Settings to change the default Login Re-Distribution or the Preferred/Maximum FCoE connection speed, then click Apply.
5. Create the last VC Fabric-Attach fabric named Fabric-2 and then add Port X1 and X2 from the second Virtual Connect FlexFabric module (Bay 2).
6. For the fabric type, keep the default FabricAttach, change the advanced settings options if needed, then click Apply.
You have created four VC fabrics, two for the Direct-Attach 3PAR and two for the SAN Fabric connectivity. The
connection status is red for both Flat SAN fabrics because we still need to configure the 3PAR controller ports.
Defining a new VC SAN Fabric using the CLI
Configure the VC Direct-Attach SAN Fabrics from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-3PAR Bay=1 Ports=3,4 Type=DirectAttach
add fabric Fabric-2-3PAR Bay=2 Ports=3,4 Type=DirectAttach
add fabric Fabric-1 Bay=1 Ports=1,2
add fabric Fabric-2 Bay=2 Ports=1,2
3. When complete, run the show fabric command.
Configuration of the 3PAR controller ports
Configure the 3PAR Controller ports to accept the Direct-Attach connection to the FlexFabric modules.
1. Start the HP 3PAR Management Console.
2. Go to the Ports folder and select the first port used by the VC modules.
3. Right-click this port and select Configure…
4. Change the Connection Mode from Disk to Host.
5. Change the Connection Type from Loop to Point and click Ok.
6. Modifying the FC Port configuration can disrupt partner hosts using the same PCI slot. If all of your hosts use a redundant connection to the 3PAR array, you can click Yes to the warning message. If they don't, do not continue until every host is also connected to another 3PAR controller node.
7. Repeat the same steps for the three other 3PAR ports connected to the FlexFabric modules.
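The same change can also be made from the HP 3PAR OS CLI. The following is a minimal sketch, where 0:1:2 is a hypothetical node:slot:port position; substitute the positions of the ports cabled to your VC modules.
showport                                   # identify the ports connected to the VC modules
controlport offline 0:1:2                  # take the port offline before changing its configuration
controlport config host -ct point 0:1:2   # Connection Mode = Host, Connection Type = Point
controlport rst 0:1:2                      # reset the port to apply the new settings
Repeat for each of the four ports, then run showport again to confirm they report the host/point configuration.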
Verification of the 3PAR connection
After you have completed the 3PAR port configuration, the External Connections tab of the SAN Fabrics window
should indicate a green status and a green port status for each of the Direct-Attach Fabrics, and show the link speed
and the WWN of the controller node port.
If the port is unlinked and no connectivity exists, the cause is displayed in the Port Status column. For more
information about possible causes, see Appendix F.
Server Profile configuration
Configure a server profile with a Direct-Attached 3PAR array.
1. From the server profile interface, select the 3PAR Direct-Attach fabrics for the two FC/FCoE HBA connections.
2. For a Boot from SAN server configuration, click Fibre Channel Boot Parameters.
3. Select Primary and Secondary in the SAN Boot column.
4. For the Target Port Name, enter the WWN of your 3PAR Controller node ports and LUN numbers. For each Direct-Attach fabric, it is recommended to choose 3PAR port WWNs located on different controller nodes (Controller Node 0 and Controller Node 1), as VC supports only two SAN boot targets per VC profile.
OS Configuration
The Operating System requires installation of an MPIO driver for high availability with load balancing of I/O.
In this scenario, where each Direct-Attach fabric has two VC uplink ports, the Operating System should discover four different paths to the 3PAR volume.
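As a quick check on a Linux host running device-mapper multipath, for example, you can list the multipath topology; this is a sketch, and device names and output formatting vary by distribution and driver version.
multipath -ll   # each 3PAR volume should be listed with four active paths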
For more information about 3PAR implementation or configuring an HP 3PAR Storage System with Windows, Linux, or VMware, see
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&taskId=135&prodClassId=-1&contentType=SupportManual&docIndexId=64180&prodTypeId=18964&prodSeriesId=5044394#2
Summary
This VC Direct-Attach and Fabric-Attach FlexFabric scenario shows how you can create multiple VC SAN fabrics that
are connected directly to a 3PAR Storage System and to a SAN Fabric with other Storage arrays. This configuration
enables you to support the heterogeneous environments that are often found in a datacenter, and is also useful for
migrating an existing, costly SAN to a less complex and lower cost storage solution.
Scenario 8: Adding VC fabric uplink ports with Dynamic
Login Balancing to an existing VC Fabric-Attach SAN fabric
Overview
This scenario provides instructions for adding VC fabric uplink ports to an existing VC Fabric-Attach SAN fabric using VC-FC modules, and for manually redistributing the server blade HBA logins.
Benefits
With this configuration, you can add additional VC fabric uplink ports to an existing VC SAN fabric and redistribute the server blade HBA logins. Adding ports decreases the number of server blades sharing each VC fabric uplink and so provides increased bandwidth.
Initial configuration
The following figure (Figure 55) shows the use of two uplink ports per VC-FC module to connect to a redundant SAN fabric.
Figure 55: Initial configuration with 2 uplink ports
(Diagram: Servers 1 to 3, each with HBA1 and HBA2, connect through the VC domain's Fabric-1 and Fabric-2 on two VC-FC 8Gb 20-port modules, each using two uplink ports into the redundant SAN Fabric 1 and Fabric 2.)
1. In HP Virtual Connect Manager, click Interconnect Bays on the left side of the screen.
2. Select the first VC-FC module.
As shown in the following image, the screen displays the VC-FC Uplink port information (port speed, connection
status, the upstream switch WWN ports) and provides the server port details. The Uplink Port column identifies
which VC-FC uplink port a server is using.
In this example, two uplink ports are used for Fabric-A-1.
Note: VC 3.70 and later include the “Uplink Port” information, so it is no longer necessary to look at the upstream SAN switch port information to identify which VC uplink FC port a server is using.
Figure 56: Actual distributed logins across the uplink ports of VC-FC Bay 1
(Diagram: on the VC-FC module in Bay 1, the logins of Servers 2, 7, and 11 use one uplink port of Fabric-1 while the logins of Servers 1 and 3 use the other.)
Adding an additional uplink port
Add an additional uplink port to the VC SAN Fabric to increase the server bandwidth.
Figure 57: Configuration with 3 uplink ports
(Diagram: the same configuration as Figure 55, with a third uplink port added from the Bay 1 VC-FC 8Gb 20-port module to SAN Fabric 1.)
Using the GUI
Add an additional uplink port from the GUI.
1. Select the VC-FC SAN fabric to which you want to add a port and click Edit.
2. Select the port you want to add and click Apply.
Using the CLI
Add an additional uplink port from the CLI.
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following command to add one uplink port (Port 3) to the existing fabric (Fabric-A-1), where ports 1 and 2 are already members of that fabric:
set fabric Fabric-A-1 Ports=1,2,3
Login Redistribution
Login redistribution is not automatic in the Manual ‘Login Re-Distribution’ mode, so the current logins may not have changed yet on the module if you are using this mode.
Connect back to Interconnect Bays / VC-FC Module 1 to see that every server is still using the same uplink ports.
Note: When Automatic Login Re-Distribution is configured (only supported with the FlexFabric modules), the login redistribution is automatically initiated when the defined time interval expires (for more information, see ‘Considerations and concepts’).
Figure 58: Hot adding one uplink port to the existing VC SAN fabric does not automatically redistribute the server
logins with VC-FC (except with VC FlexFabric)
(Diagram: after adding the third uplink port, Servers 2, 7, and 11 still share the first uplink port and Servers 1 and 3 the second; the newly added port carries no logins yet.)
Manual Login Redistribution using the GUI
To manually redistribute the logins, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN
Fabric, and click Redistribute.
Manual Login Redistribution using the CLI
Enter the following command to redistribute the logins using the VC CLI:
set fabric MyFabric -loadBalance
Verification
Go back to the VC-FC Module 1 port information to check again which server9s) is connected to each port.
Figure 59: Newly distributed logins
(Diagram: after redistribution, Servers 2 and 11 use the first uplink port, Servers 1 and 3 the second, and Server 7 the newly added third port.)
Summary
This scenario demonstrates how to add an uplink port to an existing VC-FC fabric with Dynamic Login Balancing
enabled. VCM allows you to manually redistribute the HBA host logins to balance the FC traffic through a newly
added uplink port. New logins go to the newly added port until the number of logins becomes equal. After this, the
login distribution uses a round robin method of assigning new host logins.
Logins are not redistributed if you do not use this manual method, with one exception: with the Virtual Connect FlexFabric module, automatic login redistribution is available by configuring a link stability interval parameter. This interval defines the number of seconds that the VC module waits for the VC fabric uplinks to stabilize before the module attempts to re-load balance the server logins.
Login redistribution can affect the server traffic because the hosts must re-login before resuming their I/O
operations. A smooth transition without much traffic disruption can occur with a redundant Fabric connection and an
appropriate server MPIO driver.
Scenario 9: Cisco MDS Dynamic Port VSAN Membership
Overview
This scenario covers the steps to configure Cisco MDS family FC switches to operate in Dynamic Port VSAN Membership (DPVM) mode. This allows you to configure VSAN membership based on the WWN of an HBA instead of the physical port, which allows the multiple WWNs on an NPIV-based VC fabric uplink to be placed in separate VSANs. With the standard VSAN membership configuration based on physical ports, you must configure all the HBAs on the uplink port in the same VSAN.
Knowledge of Cisco Fabric Manager is needed to complete this scenario. For more information, see the Cisco website http://www.cisco.com.
Additional information about setting up this scenario can be found in the MDS Cookbook 3.1 (http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/rel_3_x/cookbook/MDScookbook31.pdf).
Benefits
This configuration allows you to assign Virtual Connect fabric hosts to different VSANs.
Requirements
This configuration requires:
 A single SAN fabric with one or more switches that support NPIV.
 At least one VC-FC module.
 At least one VC fabric uplink connected to the SAN fabric.
 A Cisco MDS switch running minimum SAN-OS 2.x or NX-OS 4.x.
Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC-FC ports. For more information about configuring Cisco SAN switches for NPIV, see ‘Appendix D: Cisco MDS SAN switch NPIV configuration’.
Installation and configuration
1. Log in to Fabric Manager for the MDS FC switch.
2. Click DPVM Setup.
3. From the DPVM Setup Wizard, select the Master Switch, and click Next.
4. To enable the manual selection of device WWN to VSAN assignment, be sure that Create Configuration from Currently Logged in End Devices is unchecked. If you want to accept the current VSAN assignments, check the box; this presents all the WWNs and VSAN assignments from the fabric.
5. Click Next.
6. Click Insert to add the WWN and VSAN assignments.
7. Select all VC-FC fabric devices in the fabric for interface FC2/10. You must configure each one individually and assign the VSAN ID with which you want that WWN associated.
8. After all WWNs are configured, click Finish to activate the database. The DPVM database configuration overrides any settings made in the VSAN port configuration at the physical port.
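The same WWN-to-VSAN assignments can also be made from the MDS CLI. The following is a minimal sketch in NX-OS syntax; the pWWN and VSAN ID are placeholders, on SAN-OS the feature is enabled with dpvm enable instead of feature dpvm, and dpvm commit is only needed when CFS distribution is enabled.
CiscoSANswitch# configure terminal
CiscoSANswitch(config)# feature dpvm
CiscoSANswitch(config)# dpvm database
CiscoSANswitch(config-dpvm-db)# pwwn 50:06:0b:00:00:c2:62:00 vsan 10
CiscoSANswitch(config-dpvm-db)# exit
CiscoSANswitch(config)# dpvm activate
CiscoSANswitch(config)# dpvm commit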
Summary
This scenario gives a quick overview of how to follow the DPVM Setup Wizard to enable the MDS switch for the assignment of VSANs based on the WWN of the device logging into the fabric, rather than by the configuration of the physical port. For additional details and steps not covered here, see the MDS 3.x Cookbook (www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/rel_3_x/cookbook/MDScookbook31.pdf).
Scenario 10: Fabric-Attach SAN fabrics connectivity with
HP Virtual Connect FlexFabric-20/40 F8 module
Overview
HP Virtual Connect FlexFabric-20/40 F8 Modules are the simplest, most flexible way to connect virtualized server blades to data or storage networks. VC FlexFabric-20/40 F8 modules eliminate up to 95% of network sprawl at the server edge with one device that converges traffic inside enclosures and directly connects to external LANs and SANs. Using Flex-10 and Flex-20 technology with Fibre Channel over Ethernet and accelerated iSCSI, these modules converge traffic over high-speed 10Gb/20Gb connections to servers with HP FlexFabric Adapters. Each redundant pair of Virtual Connect FlexFabric modules provides eight adjustable downlink connections (six Ethernet and two Fibre Channel, six Ethernet and two iSCSI, or eight Ethernet) to dual-port 10Gb/20Gb FlexFabric Adapters on servers. Up to twelve uplinks, with eight Flexport and four QSFP+ interfaces, are available without splitter cables for connection to upstream Ethernet and Fibre Channel switches; including splitter cables, up to 24 uplinks are available. VC FlexFabric-20/40 F8 modules avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables, and software licenses. Also, Virtual Connect wire-once connection management is built in, enabling server adds, moves, and replacements in minutes instead of days or weeks.
Figure 60: Multiple VC Fabric-Attach SAN Fabrics with different priority tiers connected to the same fabrics
(Diagram: a fully populated enclosure with 16 servers, contrasting a 5:1 and a 1:1 server-to-uplink ratio. Server CNAs connect through two VC FlexFabric-20/40 F8 modules to four VC SAN fabrics: Fabric-1-Tier1 and Fabric-1-Tier2 into SAN Fabric 1, and Fabric-2-Tier1 and Fabric-2-Tier2 into SAN Fabric 2. The QSFP+ ports are Ethernet-only.)
This scenario is similar to scenario 5, which has multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers. Instead of the legacy Virtual Connect FlexFabric modules, however, this scenario uses the Virtual Connect FlexFabric-20/40 F8 Modules with HP FlexFabric 20Gb Adapters, using Flex-20 technology with Fibre Channel over Ethernet to provide adjustable FlexHBAs at up to 20Gb speed.
Benefits
Using FlexFabric technology with Fibre Channel over Ethernet, these modules converge traffic over high-speed 10Gb and 20Gb connections to servers with HP FlexFabric 10/20Gb Adapters. Each module provides four adjustable connections (three data and one storage, or all data) to each 10Gb/20Gb server port. You avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables, and software licenses. Also, Virtual Connect wire-once connection management is built in, enabling server adds, moves, and replacements in minutes instead of days.
With the HP Virtual Connect FlexFabric-20/40 F8 Module, you can fine-tune the bandwidth of the storage connection between the FlexFabric adapter and the VC FlexFabric module just as you can for Ethernet Flex-10 NIC connections. You can adjust the FlexHBA port in 100Mb increments up to a full 20Gb connection with 20Gb FlexFabric adapters. On external uplinks, Fibre Channel ports auto-negotiate 2, 4 or 8Gb speeds based on the upstream switch port setting.
This configuration offers the simplest, most converged and flexible way to connect to any network, eliminating 95% of network sprawl and improving performance without disrupting operations.
The server-to-uplink ratio is adjustable, up to 2:1 with the FlexFabric-20/40 F8 modules.
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows a smooth transition with little disruption to the traffic. However, the hosts must re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC FlexFabric modules must be configured to accept NPIV logins. Due to the
use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended
distances are not supported with VC-FC and VC FlexFabric.
Figure 61: Physical view
(Diagram: physical view. A storage array with two controllers connects over native Fibre Channel to two HP StorageWorks 4/32B SAN switches (Fabric-1 and Fabric-2), which connect to the uplinks of two VC FlexFabric-20/40 F8 modules. HP ProLiant BL460c blade servers 1 to 16, each with two CNA ports, connect to the modules over DCB/FCoE downlinks.)
Virtual Connect FlexFabric-20/40 F8 Uplink Port Mappings
It is important to note how the external uplink ports on the FlexFabric-20/40 F8 module can be configured. The figure below outlines the type and speed each port supports:
 Flexible ports X1–X8: can be configured as 1/10Gb Ethernet or Fibre Channel. Supported FC speeds are 2Gb, 4Gb, or 8Gb using 4Gb or 8Gb FC SFP modules; refer to the FlexFabric-20/40 F8 QuickSpecs for a list of supported SFP+ transceivers and DAC cables.
 Ports Q1–Q4: can be configured as 10Gb or 40Gb Ethernet only (each port can run as 1x40, 1x10 or 4x10).
 Ports X5/X6 and X7/X8: are paired and can only be configured to carry the same traffic type (either FC or Ethernet).
Note: Even though the Virtual Connect FlexFabric-20/40 F8 module supports Stacking, stacking only applies to
Ethernet traffic. FC uplinks cannot be consolidated, as it is not possible to stack the FC ports.
Figure 62: FlexFabric-20/40 F8 Module port configuration, speeds and types
Eight flexible uplink ports (X1–X8):
 Individually configurable as FC/FCoE/Ethernet
 Ethernet 1/10Gb: SR/LR/LRM SFP+ transceivers, Copper/AOC DAC
 Fibre Channel 2/4/8Gb: Short/Long Wave FC transceivers
 FC uplinks: N_Ports, just like legacy VC-FC module uplinks
 Flat SAN support with 3PAR
Four 40Gb QSFP+ uplink ports (Q1–Q4):
 Ethernet (only): 1x40, 1x10 or 4x10 (FCoE future upgrade)
 QSFP+ transceivers: SR4 100m/SR4 300m/LR4/Quad-to-SFP+
 QSFP+ cables: Copper/AOC QSFP+ DAC, QSFP+ to 4x10G SFP+
Downlinks provide 16 connections to FlexFabric CNAs, individually configurable as 10Gb Ethernet or Flex-10/FCoE or Flex-10/iSCSI, or 20Gb Ethernet or Flex-20/FCoE or Flex-20/iSCSI. Ports X5/X6 and X7/X8 are paired; X1–X8 are the ports that can be enabled for SAN connection.
Note: Since VC 4.01, the Virtual Connect FlexFabric SAN uplinks can be connected to an upstream DCB port with
Dual-hop FCoE support. For more information about FCoE support, see Dual-Hop FCoE with HP Virtual Connect
modules Cookbook in the Related Documentation section.
Requirements
With Virtual Connect FlexFabric-20/40 F8, this configuration requires:
 Four VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
 At least two VC FlexFabric-20/40 F8 modules
 A minimum of four VC fabric uplink ports connected to the redundant SAN fabric.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV
configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch
NPIV configuration" depending on your switch model.
Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC FlexFabric-20/40 F8 FC ports.
Installation and configuration
Switch configuration
Appendices C, D, and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS, or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In
order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC
provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following
scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC FlexFabric-20/40 F8 module
With Virtual Connect FlexFabric-20/40 F8 modules, you can use only uplink ports X1 to X8 for Fibre Channel
connectivity.
Physically connect some uplink ports (X1, X2, X3 or X4) as follows:
 On the first VC FlexFabric-20/40 F8 module to switch ports in SAN Fabric 1A
 On the first FlexFabric-20/40 F8 module to switch ports in SAN Fabric 2A
 On the second FlexFabric-20/40 F8 module to switch ports in SAN Fabric 1B
 On the second FlexFabric-20/40 F8 module to switch ports in SAN Fabric 2B
118
Defining a new VC SAN Fabric via GUI
Configure the VC FlexFabric-20/40 F8 module from the HP Virtual Connect Manager home screen:
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-Tier1. Leave the default Fabric-Attach fabric type.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port X1, Port X2 and Port X3 from the first Virtual Connect FlexFabric module (Bay 1).
Show Advanced Settings is only available with VC FlexFabric modules and provides the option to enable Automatic Login Re-Distribution.
The Automatic Login Re-Distribution method allows FlexFabric modules to fully control the login allocation between the servers and Fibre uplink ports. A FlexFabric module automatically re-balances the server logins once every time interval defined in the Fibre Channel → WWN Settings → Miscellaneous tab.
4. Select either Manual or Automatic and click Apply.
If you select Manual Login Re-Distribution, the login allocation between the servers and fabric uplink ports never changes, even after the recovery of a port failure. This remains true until an administrator decides to initiate the server login re-balancing.
To initiate server login rebalancing, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN Fabric, and click Redistribute.
On the SAN Fabrics screen, click Add to create the second fabric:
5. Create a new VC Fabric-Attach fabric named Fabric-1-Tier2. Under Enclosure Uplink Ports, add Bay 1, Port X4, and then click Apply.
You have created two VC fabrics, each with uplink ports allocated from one VC FlexFabric module in Bay 1. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
6. Create two additional VC Fabric-Attach SAN fabrics, Fabric-2-Tier1 and Fabric-2-Tier2, attached this time to the second VC FlexFabric module:
- Fabric-2-Tier1 with three ports
- Fabric-2-Tier2 with one port
You have created four VC fabrics, two for the server group Tier1 with three uplink ports and two for the guaranteed-throughput server Tier2 with one uplink port.
Defining a new VC SAN Fabric using the CLI
Configure the VC FlexFabric modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-Tier1 Bay=1 Ports=1,2,3
add fabric Fabric-1-Tier2 Bay=5 Ports=4
add fabric Fabric-2-Tier1 Bay=6 Ports=1,2,3
add fabric Fabric-2-Tier2 Bay=6 Ports=4
3. When complete, run the show fabric command.
Verification of the SAN Fabrics configuration
Make sure all of the SAN fabrics belonging to the same Bay are connected to the same core FC SAN fabric switch:
1. Go to the SAN Fabrics screen.
2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the entries from the same Bay number are all the same, indicating a single upstream SAN fabric.
For additional verification and troubleshooting steps, see Appendix F.
Blade Server configuration
See Appendix B for server profile configuration steps with VC FlexFabric.
Summary
This VC FlexFabric-20/40 F8 scenario shows how you can create multiple VC SAN fabrics that are all connected to the
same redundant SAN fabric. This configuration enables you to control the throughput for a particular application or
set of blades.
Scenario 11: Enhanced N-port Trunking with HP Virtual
Connect 16G 24-Port Fibre Channel Module
Overview
This scenario describes the setup and requirements for installing HP Virtual Connect 16G 24-Port Fibre Channel
Modules with Enhanced N-port Trunking support.
Figure 63: HP Virtual Connect 16G 24-Port Fibre Channel Modules connected to Brocade switches using N_Port
Trunking
(Diagram: Servers 1 to 16, each with HBA1 and HBA2, at an 8:1 server-to-uplink ratio on a fully populated enclosure. The VC domain's Fabric-1 and Fabric-2 on two VC-FC 16Gb 24-Port Modules connect through N_Port trunks to Brocade Fabric 1 and Brocade Fabric 2.)
This scenario with two VC Fabric-Attach SAN fabrics is similar to scenario 2, but it uses the Enhanced N-port Trunking support of the HP Virtual Connect 16G 24-Port Fibre Channel Module with external Fabric OS switches (Brocade or HP B-series switches).
Note: With any other type of fabric switch, Dynamic Login Balancing Distribution must be used.
Benefits
Enhanced N-port Trunking support with external Fabric OS switches (Brocade or HP B-series switches) provides
higher bandwidth to enable demanding applications and high density server virtualization. A single N_Port trunk
made up of up to eight SAN ports can provide a total of up to 128 Gbps balanced throughput.
The HP Virtual Connect 16G 24-Port Fibre Channel Module introduces 16Gb FC technology on both internal and
external facing ports enabling high-performance connectivity with aggregate bandwidth of 896 Gbps (28 ports x 16
Gbps x 2 for full duplex).
16Gb FC technology provides enough capacity for emerging technologies and new hardware demands while
lowering costs through fewer SFPs and cables.
With eight 16Gb external SAN-facing ports, the server-to-uplink ratio can reach up to 2:1 for best performance.
Considerations
The HP Virtual Connect 16Gb 24-Port Fibre Channel Module requires a Virtual Connect Ethernet Module installed in
the system for management and administration.
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a
failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition
without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O
operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server
Operating System can provide a completely transparent transition.
Compatibility support
 The HP BladeSystem c7000 Platinum Enclosure is required to permit 16 Gbps speed on internal ports. Other HP
BladeSystem c7000 Enclosures will have a maximum speed of 8 Gbps.
 The VC 16G 24-Port Fibre Channel Module and VC 8G Fibre Channel Module (20-Port or 24-Port) are not supported
side-by-side nor are they swappable.
 The VC 16G 24-Port Fibre Channel Module and VC 8G Fibre Channel Module (20-Port or 24-Port) are not supported
together in same bay group in stacked domain.
 Double Dense mode is not supported with the VC 16G 24-Port Fibre Channel Module.
 The VC 16G 24-Port Fibre Channel Module is not supported in c3000 enclosures.
 The VC 16G 24-Port Fibre Channel Module does not support 4Gb FC HBA in any server.
 The VC 16G 24-Port Fibre Channel Module supports 8Gb FC HBA in G7 or later.
 HP LPe1605 16Gb Fibre Channel HBA (Emulex) and HP QMH2672 16Gb FC HBA (QLogic) are supported in Gen8 servers or later.
For more information, refer to the Virtual Connect 16Gb 24-Port Fibre Channel Module QuickSpecs.
Figure 64: Physical view
(Diagram: physical view. A storage array with two controllers connects to two HP StorageWorks 4/32B SAN switches (Brocade or HP B-series). The F_Ports of each switch form a trunk with the N_Ports of one of the two VC-FC 16Gb 24-Port Modules. HP ProLiant BL460c blade servers 1 to 16, each with HBA 1 and HBA 2, connect to the modules.)
Requirements
This configuration requires:
 Two VC 16G 24-Port Fibre Channel Modules
 A minimum of four (2x2) VC fabric uplink ports
 Two external Fabric OS switches (Brocade or HP B-series switches) with an ISL Trunking license
 VC 4.40 or later is required for the 16Gb 24-Port Fibre Channel Module support
Trunking Requirements
N_Port Trunking allows the creation of a trunk group between N_Ports on the VC 16G 24-Port Fibre Channel Module
and F_Ports on a Fabric OS switch (Brocade or HP B-series switches).
On the Fabric OS switch side, it requires an F_Port trunking configuration.
F_Port trunking was originally only supported when Brocade ports were connected to a Brocade Access Gateway
module or to a Brocade Host Bus Adapter. Starting with VC 4.40, the VC 16G 24-port Fibre Channel Module is added
to this supported list.
(Diagram: VC SAN A. The N_Ports of the 16G 24-Port VC FC Module in Bay 1 form an N_Port trunk with the F_Ports, configured for F_Port trunking, of the Fabric OS switch.)
Note: This feature does not require any particular license on the Virtual Connect Module but you must install an ISL
Trunking license (usually included in the Power Pack+ Software Bundle) on the external Fabric OS switch.
Note: When the external switch is not configured for trunking, VC uses the legacy Dynamic Login Balancing
Distribution.
N-port Trunking vs. Dynamic Login Balancing Distribution
The following table describes the Pros and Cons between Enhanced N-port Trunking and Dynamic Login Balancing
Distribution.
Table 3: Pros & Cons between N-port Trunking and Dynamic Login Balancing Distribution
Enhanced N-port Trunking
Pros:
 Optimized bandwidth utilization
 Optimizes fabric-wide performance and load balancing distribution
 No traffic disruption during uplink failure
Cons:
 Is only compatible with Fabric OS (Brocade) switches
 Requires additional configuration steps on the external SAN switches
 Requires a Brocade ISL license
Dynamic Login Balancing Distribution
Pros:
 Does not require any external switch configuration
 Is compatible with any switch vendor
 Does not require any license
Cons:
 Wasted bandwidth
 Limited performance
 Short traffic disruption during uplink failure
 Some limitations during workload peaks
The following table describes the differences between N-port Trunking and Dynamic Login Balancing Distribution.
Table 4: N-port Trunking vs. Dynamic Login Balancing Distribution comparison
Traffic Distribution
Enhanced N-port Trunking:
 Optimizes uplink usage by evenly distributing traffic across all uplinks at the frame level
 Uses the same mechanism as E_Port to E_Port trunking
 Managed as a single logical link
 Maintains in-order delivery to ensure data reliability
Dynamic Login Balancing Distribution:
 Routes are assigned to uplinks with the least number of logins, or in a round-robin fashion when the number of logins is equal
 Load balancing does not look at the port utilization
 Server traffic stays on the dedicated server’s uplink
Performance
Enhanced N-port Trunking:
 Server performance is limited to the speed of the entire trunk
 Each uplink inside the trunk is loaded equally
 Provides a better high-performance solution for network and data intensive applications
 Optimizes fabric-wide performance and load balancing with Dynamic Path Selection (DPS)
Dynamic Login Balancing Distribution:
 Server performance is limited to the speed of one uplink
 Some uplinks can experience congestion while others are underutilized
Bandwidth
Enhanced N-port Trunking:
 Optimized bandwidth utilization
Dynamic Login Balancing Distribution:
 Wasted bandwidth by less efficient traffic routing
Transient workload peaks impact
Enhanced N-port Trunking:
 Much less likely to impact the performance of other servers
Dynamic Login Balancing Distribution:
 Likely to impact the performance of other servers, as uplink port utilization is not a login balancing criterion
Availability / Fault Tolerance
Enhanced N-port Trunking:
 Enables seamless failover during individual link failures
 Prevents reassignments of the Address Identifier when N_Ports go offline
Dynamic Login Balancing Distribution:
 Short traffic disruption during individual link failures (kept seamless by the MPIO driver when the second path is available)
 Failover during individual link failures causes hosts to re-login
Max server performance with 16G HBA and two 8G uplinks
Enhanced N-port Trunking:
 Limited to the speed of the logical link
 Max server bandwidth: 2x8G = 16G
Dynamic Login Balancing Distribution:
 Limited to the speed of one uplink
 Max server bandwidth: 1x8G = 8G
The use of Dynamic Login Balancing Distribution can impact the performance of servers whereas N_Port Trunking
can significantly optimize the overall bandwidth utilization as illustrated in Figure 65.
Figure 65 : Comparing the server bandwidth impact with Dynamic Login Balancing Distribution vs. N_Port Trunking
(Diagram: with Dynamic Login Balancing Distribution, several server flows pinned to the same 8G uplink exceed its capacity and cause congestion while the other uplink is underutilized; with N_Port Trunking, the same flows are spread at the frame level across both 8G uplinks of the trunk, avoiding the congestion.)
N_Port Trunking dynamically performs load sharing, at the frame level, across the VC uplinks with the adjacent
Brocade switch. One uplink port is used to assign traffic for the trunking group, and is referred to as the trunking
master. The trunking master represents and identifies the entire trunking group. The rest of the group members are
referred to as slave links that help the trunking master direct traffic across uplinks, allowing efficient and balanced
in-order communication.
Figure 66 : With trunking only the N_Port Master has the F_Ports mapped to it
(Diagram: hosts, VC-FC 16G modules, and switches. With Dynamic Login Balancing Distribution, each N_Port maps to its own F_Port area (Area 1, 2, and 3 F_Ports with NPIV). With N_Port Trunking, only the N_Port master has the F_Ports mapped to it: the Area 1 F_Port master plus the Area 1 F_Port slaves.)
With N_Port Trunking, whenever a link within the trunk goes offline or becomes disabled, the trunk remains fully
functional, the traffic is automatically rerouted in less than a second and there are no reconfiguration requirements.
A failure does not completely “break the pipe,” but simply makes the pipe thinner. As a result, data traffic is much
less likely to be affected by link failures, and the bandwidth automatically increases when the link is repaired.
Trunking support
 N_Port Trunking is only supported between VC 16G 24-Port Fibre Channel Module and F_Ports on a Fabric OS
switch (Brocade or HP B-series switches).
 The Fabric OS switch must be running in Native mode. You cannot configure trunking between the VC 16G 24-port
Fibre Channel Module and the F_Ports of a Fabric OS switch running in Access Gateway mode.
 All of the ports in a trunk group must belong to the same port group.
 N_Port Trunking is only supported between ports belonging to the same VC Module and ports belonging to the
same external Fabric OS switch.
(Diagram: valid and invalid trunk cabling across Fabric A and Fabric B. An FC trunk can only be formed between a 16G VC-FC Module and one Brocade device.)
 All of the ports in a trunk group must meet the following conditions:
o They must be running at the same speed.
o They must be configured for the same distance.
o The F_Ports on the Fabric OS switch must have the same encryption, compression, QoS, and FEC settings.
 There must be a direct connection between the Fabric OS switch and VC FC Module.
For more information about F_Port trunking on a Fabric OS switch, see the latest version of the Brocade Fabric OS
Administrators Guide.
Installation and configuration
Configuring NPIV on the External Switch
It is necessary to configure the external Fabric OS switches for NPIV support; see "Appendix C: Brocade SAN switch NPIV configuration" for the steps required to configure NPIV on a Brocade SAN switch.
Configuring F_Port trunking on the External Switch
To create a trunk between VC and Brocade, you must configure the external Brocade switches for F_Port trunking; no configuration is required on the Virtual Connect Fibre Channel Module.
For each external switch, you must first configure an F_Port trunk group and statically assign an Area_ID to the trunk group. Assigning a Trunk Area (TA) to a port or trunk group enables F_Port masterless trunking on that port or trunk group.
This section describes the configuration steps you must perform on the two switches.
1. Connect to the first switch and log in using an account assigned to the admin role.
2. Ensure that the switch has the trunking licenses enabled (a quick check is shown after these steps).
3. Configure both ports for trunking by using the portcfgtrunkport port mode command.
switch:admin> portcfgtrunkport 2 1
switch:admin> portcfgtrunkport 3 1
Note: Mode 1 enables trunking on the specified port; mode 0 disables trunking.
Note: Ensure that the ports within a trunk have the same speed. For more information on this command, see help portcfgtrunkport.
4. Disable the ports to be used for trunking by using the portdisable command.
switch:admin> portdisable 2-3
5. Enable the trunk on the ports by using the porttrunkarea command. The following example enables a TA (Trunk Area) for ports 2 and 3 with a port index of 2.
switch:admin> porttrunkarea --enable 2-3 -index 2
Trunk index 2 enabled for ports 2 and 3.
6. Enable the ports disabled in step 4 using the portenable command.
switch:admin> portenable 2-3
7. Repeat the same steps on switch 2. You can use an identical or different port index.
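For step 2, the installed licenses can be listed with the licenseshow command; verify that a Trunking license appears in the output (the exact license names vary by Fabric OS release):
switch:admin> licenseshow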
Connecting the VC module
 Physically connect the uplink ports on the first VC 16G 24-Port Fibre Channel Module to switch ports in Brocade
Fabric 1
 Physically connect the uplink ports on the second VC 16G 24-Port Fibre Channel Module to switch ports in Brocade
Fabric 2
Defining a new VC SAN Fabric via GUI
As previously stated, there is no configuration required on the Virtual Connect Fibre Channel Module in order to
create a Trunk between VC and Brocade. So we will simply create a standard VC SAN Fabric using the VC 16G 24-Port
Fibre Channel module.
131
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. Create a first VC Fabric named Fabric-1 and add Port 1 and Port 2 from the first VC 16G 24-Port Fibre Channel Module (Bay 3), and then click Apply.
3. On the SAN Fabrics screen, click Add to create the second fabric.
4. Create a second VC Fabric named Fabric-2 and add Port 1 and Port 2 from the second VC 16G 24-Port Fibre Channel Module (Bay 4), and then click Apply.
5. You have created two VC Fabric-Attach SAN fabrics, each with two uplink ports allocated from the VC modules in Bays 3 and 4.
Defining a new VC SAN Fabric using the CLI
To configure the VC 16G 24-Port Fibre Channel Modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1 Bay=3 Ports=1,2
add fabric Fabric-2 Bay=4 Ports=1,2
3. When complete, run the show fabric command.
Verification of the trunking configuration
Virtual Connect Manager does not display any trunking state; however, trunking monitoring and verification can be done on the Brocade switches:
1. Connect to the first switch and log in using an account assigned to the admin role.
2. Using the porttrunkarea --show enabled command, verify that the ports you enabled for F_Port trunking appear in the output:
switch:admin> porttrunkarea --show enabled
Port Type    State   Master  TI  DI
-------------------------------------
   2 F-port  Slave        3   2   2
   3 F-port  Master       3   2   3
-------------------------------------
3. Enter switchshow to display the switch and port information.
switch:admin> switchshow
switchName:   SW6505
switchType:   118.1
switchState:  Online
switchMode:   Native
switchRole:   Principal
switchDomain: 1
switchId:     fffc01
switchWwn:    10:00:00:27:f8:49:6c:da
zoning:       ON (Test_Compellent)
switchBeacon: OFF
FC Router:    OFF
FC Router BB Fabric ID: 1
Address Mode: 0
Index Port Address Media Speed State    Proto
==============================================
  0    0   010000  id    N8    Online   FC  F-Port  1 N Port + 1 NPIV public
  1    1   010100  id    N8    No_Light FC
  2    2   010200  id    N16   Online   FC  F-Port  (Trunk port, master is Port 3)
  2    3   010200  id    N16   Online   FC  F-Port  (Trunk master)
  4    4   010400  id    N8    No_Light FC
  5    5   010500  id    N8    No_Light FC
/…/
4. Enter porttrunkarea --show trunk to display the trunking information.
switch:admin> porttrunkarea --show trunk
Trunk Index 2: 3->18 sp: 16.000G bw: 32.000G deskew 15 MASTER
    Tx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
    Rx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
    Tx+Rx: Bandwidth 64.00Gbps, Throughput 0.00bps (0.00%)
  2->17 sp: 16.000G bw: 16.000G deskew 15
    Tx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
    Rx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
    Tx+Rx: Bandwidth 64.00Gbps, Throughput 0.00bps (0.00%)
Note: Additional trunking information is also available under VCM but only when a server is logged into the
fabric.
Blade Server configuration
See Appendix A for server profile configuration steps. See Appendix F and Appendix G for verifications and
troubleshooting steps.
Trunking information under VCM
Once a server profile is created and a server is logged into the fabric, Virtual Connect displays the Trunk Area index in
the Interconnect Bays / Module / Server Ports section:
1. Click Interconnect Bays at the bottom of the left navigation menu.
2. Click on the bay number corresponding to the 16G Fibre Channel Module.
3. Then go to the Server Ports section.
The Uplink Port column displays the Trunk Area index configured on the Brocade switch (i.e., 2 for both ports).
Note: When trunking is neither configured nor enabled on the Brocade switch, Dynamic Login Balancing Distribution is used and the Uplink Port column displays the uplink port used by each profile.
Summary
This N_Port Trunking scenario provides a high-performance solution for network and data-intensive applications by
optimizing application performance and availability across the network, and simplifying network design and
management.
Appendix A: Blade Server configuration with Virtual
Connect Fibre Channel Modules
Defining a Server Profile with FC Connections, using the GUI
1. On the Virtual Connect Manager screen, click Define / Server Profile to create a Server Profile.
2. Enter a Profile name.
3. In the Network Connections section, select the required networks.
4. In the FC HBA Connections section, expand the Port 1 drop-down menu and select the first fabric.
5. Then expand the Port 2 drop-down menu and select the second fabric.
Note: HP recommends using redundant FC connections for failover to improve availability. If a SAN failure
occurs, the multipath connection uses the alternate path so that servers can still access data. FC performance
can also be improved with I/O load balancing mechanisms.
6. The following screen illustrates the creation of the Profile_1 server profile.
Note: WWN addresses are provided by Virtual Connect. Although this is not recommended, you can override this setting and use the WWNs that were assigned to the hardware during manufacture by selecting the Use Server Factory Defaults for Fibre Channel WWNs checkbox. This action applies to every Fibre Channel connection in the profile.
7. Assign the Profile to a Server Bay and click Apply.
Defining a Server Profile with FC Connections, via CLI
You can copy and paste the following commands into an SSH based CLI session (the command syntax might be
different with an earlier VC version).
# Create and Assign Server Profile “Profile_1” to server bay 1
add profile Profile_1
set enet-connection Profile_1 1 pxe=Enabled Network=1-management-vlan
set enet-connection Profile_1 2 pxe=Disabled Network=2-management-vlan
set fc-connection Profile_1 1 Fabric=Fabric-A-1 Speed=Auto
set fc-connection Profile_1 2 Fabric=Fabric-B-1 Speed=Auto
assign profile Profile_1 enc0:1
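To confirm the connections after the profile is assigned, you can display it from the same CLI session (a minimal sketch):
->show profile Profile_1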
Defining a Boot from SAN Server Profile using the GUI
1. On the Virtual Connect Server Profile screen, click the Fibre Channel Boot Parameters checkbox to configure the Boot from SAN parameters.
2. A new section pops up; click the drop-down arrow in the SAN Boot box for Port 1, then select the boot order: Primary.
3. Enter a valid Boot Target name (WWN) and LUN number for the Primary Port.
4. Click the second port drop-down menu, and select Secondary.
5. Enter a valid Boot Target name and LUN number for the Secondary Port, and then click Apply.
Note: The Target Port name can be entered in the following formats:
mm-mm-mm-mm-mm-mm-mm-mm or mm:mm:mm:mm:mm:mm:mm:mm or mmmmmmmmmmmmmmmm
6. Assign the profile to a server bay, and then click Apply.
7. The server can now be powered on (using the OA, the iLO, or the Power button).
Note: To view the Option ROM boot details on servers with a recent System BIOS, press any key as the system is booting.
8. While the server starts up, a screen similar to the following appears, showing the Boot from SAN disk correctly detected during POST.
Defining a Boot from SAN Server Profile via CLI
You can copy and paste the following commands into an SSH based CLI session (the command syntax might be
different with an earlier VC version).
# Create and Assign Server Profile “BfS_Profile_1” Booting from SAN to server bay 1
add profile BfS_Profile_1
set enet-connection BfS_Profile_1 1 pxe=Enabled Network=1-management-vlan
set enet-connection BfS_Profile_1 2 pxe=Disabled Network=2-management-vlan
set fc-connection BfS_Profile_1 1 Fabric=Fabric_1 Speed=Auto
set fc-connection BfS_Profile_1 1 BootPriority=Primary BootPort=50:01:43:80:02:5D:19:78 BootLun=1
set fc-connection BfS_Profile_1 2 Fabric=Fabric_2 Speed=Auto
set fc-connection BfS_Profile_1 2 BootPriority=Secondary BootPort=50:01:43:80:02:5D:19:7D BootLun=1
assign profile BfS_Profile_1 enc0:1
Appendix B: Blade Server configuration with Virtual
Connect FlexFabric Modules
Defining a Server Profile with FCoE Connections, using the GUI
1. On the Virtual Connect Manager screen, click Define, Server Profile to create a Server Profile.
2. Enter a Profile name.
3. In the Network Connections section, select the required networks.
4. Select the FC SAN Name box of the FCoE HBA Connections:
- For Port 1, select Fabric_1.
- For Port 2, select Fabric_2.
Note: HP recommends using redundant Fabric connections for failover to improve availability. If a SAN fails, the multipath connection uses the alternate path so that servers can still access their data. You can also improve FC performance using I/O load balancing mechanisms.
Also, you do not need to configure any iSCSI connection when using a single CNA, because the CNA Physical Function 2 can only be configured as Ethernet or FCoE or iSCSI.
5. The following screen illustrates the creation of the Profile_1 server profile.
Note: WWNs for the domain are provided by Virtual Connect. You can override this setting and use the WWNs that were assigned to the hardware during manufacture by selecting the Use Server Factory Defaults for Fibre Channel WWNs checkbox. This action applies to every Fibre Channel connection in the profile.
6. Assign the Profile to a Server Bay and click Apply.
Defining a Server Profile with FCoE Connections, via CLI
You can copy and paste the following commands into an SSH based CLI session with Virtual Connect v3.15; however,
the command syntax might be different with an earlier VC version.
# Create and Assign Server Profile “Profile_1” to server bay 1
add profile Profile_1
set enet-connection Profile_1 1 pxe=Enabled Network=1-management-vlan
set enet-connection Profile_1 2 pxe=Disabled Network=2-management-vlan
set fcoe-connection Profile_1:1 Fabric=Fabric_1 SpeedType=4Gb
set fcoe-connection Profile_1:2 Fabric=Fabric_2 SpeedType=4Gb
assign profile Profile_1 enc0:1
Defining a Boot from SAN Server Profile using the GUI
1. From the FCoE HBA Connections section of the Virtual Connect Server Profile screen, select Fibre Channel Boot Parameters to configure the Boot from FCoE SAN parameters.
2. From the FCoE HBA Connections pop-up, click the drop-down arrow in the SAN Boot box for Port 1 and select the boot order Primary.
3. Enter a valid Boot Target name and LUN number for the Primary Port.
4. Optionally, select the second port, click the drop-down arrow in the SAN Boot box, and then select the boot order Secondary.
5. Enter a valid Boot Target name and LUN number for the Secondary Port and click Apply.
Note: The Target Port name can be entered in the following formats:
mm-mm-mm-mm-mm-mm-mm-mm or mm:mm:mm:mm:mm:mm:mm:mm or mmmmmmmmmmmmmmmm
6. Assign the profile to a server bay and click Apply. You can now power on the server (using either the OA, the iLO, or the Power button).
Note: To view the Option ROM boot details on servers with a recent System BIOS, press any key as the system boots up.
7. While the server starts up, a screen similar to the following should be displayed, showing the SAN volume correctly detected during POST by the two adapters.
Defining a Boot from SAN Server Profile using the CLI
You can copy and paste the following commands into an SSH based CLI session with Virtual Connect v3.15; however,
the command syntax might be different with an earlier VC version.
# Create and Assign Server Profile “Profile_1” Booting from SAN to server bay 1
add profile BfS_Profile_1
set enet-connection BfS_Profile_1 1 pxe=Enabled Network=1-management-vlan
set enet-connection BfS_Profile_1 2 pxe=Disabled Network=2-management-vlan
set fcoe-connection BfS_Profile_1:1 Fabric=Fabric_1 SpeedType=4Gb
set fcoe-connection BfS_Profile_1:2 Fabric=Fabric_2 SpeedType=4Gb
set fcoe-connection BfS_Profile_1:1 BootPriority=Primary
BootPort=50:01:43:80:02:5D:19:78 BootLun=1
set fcoe-connection BfS_Profile_1:2 BootPriority=Secondary
BootPort=50:01:43:80:02:5D:19:7D BootLun=1
assign profile BfS_Profile_1 enc0:1
Appendix C: Brocade SAN switch NPIV configuration
Enabling NPIV using the GUI
1.
Log on to the SAN switch using the IP address and a web browser. After you are authenticated, the switch home
page appears.
2.
Click Port Admin. The Port Administration screen appears.
3.
If you are in Basic Mode, click Show Advanced Mode in the top right corner. When you are in Advanced Mode,
the Show Basic Mode button appears.
4.
Select the port you want to enable with NPIV, in this case, Port 13. When NPIV is disabled, the NPIV Enabled field
shows a value of false.
5.
To enable NPIV on this port, click Enable NPIV under the General tab, and then confirm your selection. The NPIV
Enabled entry shows a value of true.
Enabling NPIV using the CLI
1.
Initiate a telnet session to the switch, and then authenticate your account. The Brocade Fabric OS CLI appears.
2.
To enable or disable NPIV on a port-by-port basis, use the portCfgNPIVPort command.
For example, to enable NPIV on port 13, enter the following command:
Brocade:admin> portCfgNPIVPort 13 1
where 1 indicates that NPIV is enabled (0 indicates that NPIV is disabled).
3.
To confirm that the port is enabled, enter the switchshow command. The output shows that NPIV is enabled and
detected on that port.
4.
To be sure that NPIV is enabled and operational on a specific port, use the portshow command.
For example, to display information for Port 13, enter the following:
Brocade:admin> portshow 13
In the portWwn of device(s) connected entry, more than one HBA appears. This indicates a successful
implementation of VC-FC. On the enclosure, two server blade HBAs are installed and powered on, and either an
HBA driver is loaded or the HBA BIOS utility is active.
The third WWN on the port is the VC module (currently, all VC-FC modules use the 20:00… range). In this example,
two servers are currently connected, and the port WWN of the VC-FC module is also listed.
Recommendations
When Fibre Channel uplink ports on VC-FC 8Gb 20-port module or VC FlexFabric module are configured to operate at
8Gb speed and are connecting to HP B-series Fibre Channel SAN switches, the minimum supported version of the
Brocade FOS is v6.4.x.
In addition, “FillWord” on those switch ports must be configured with option Mode 3 to prevent connectivity issues
at 8Gb speed.
On HP B-series FC switches, use the portCfgFillWord (portCfgFillWord <port#> <Mode>) command to configure
this setting:
Mode     Link Init/Fill Word
Mode 0   IDLE / IDLE
Mode 1   ARBF / ARBF
Mode 2   IDLE / ARBF
Mode 3   If ARBF / ARBF fails, use IDLE / ARBF
Modes 2 and 3 are compliant with FC-FS-3 specifications (the standards specify the IDLE/ARBF behavior of Mode 2,
which is used by Mode 3 if ARBF/ARBF fails after 3 attempts).
For most environments, Brocade recommends using Mode 3 because it provides more flexibility and compatibility
with a wide range of devices. If the default setting or Mode 3 does not work with a particular device, contact your
switch vendor for further assistance.
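As a minimal sketch, assuming the uplink-facing port is port 13 (as in the NPIV example earlier in this appendix), the fill word can be set to Mode 3 and the port configuration then reviewed:
Brocade:admin> portcfgfillword 13 3
Brocade:admin> portcfgshow 13
The portcfgshow command displays the current port settings so you can confirm the fill word value; whether Mode 3 is actually required depends on your switch model and FOS version.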
Appendix D: Cisco MDS SAN switch NPIV configuration
Enabling NPIV using the GUI
Most Cisco MDS Fibre Channel switches running SAN-OS 3.1 (2a) or later support NPIV.
To enable NPIV on Cisco Fibre Channel switches:
1.
From the Cisco Device Manager, click Admin, and then select Feature Control.
2.
From the Feature Control screen, click npiv.
3.
In the Action column select enable, and then click Apply.
4.
Click Close to return to the Device Manager screen.
5.
To verify that NPIV is enabled on a specific port, double-click the port you want to check.
6.
Click the FLOGI tab.
In the PortName column, more than one HBA appears. This indicates a successful implementation of VC-FC.
Enabling NPIV using the CLI
1.
To verify that NPIV is enabled, enter the following command:
CiscoSANswitch# show running-config
If the npiv enable entry does not appear in the list, NPIV is not enabled on the switch.
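To narrow the output to the relevant entry, you can filter the running configuration with the standard Cisco pipe (shown here as a sketch):
CiscoSANswitch# show running-config | include npiv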
2.
To enable NPIV, use the following commands from global config mode:
CiscoSANswitch# config terminal
CiscoSANswitch(config)# npiv enable
CiscoSANswitch(config)# exit
CiscoSANswitch# copy running-config startup-config
NPIV is enabled globally on the switch on all ports and all VSANs.
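You can then confirm the global state directly; a quick check, assuming the show npiv status command available on recent SAN-OS/NX-OS releases:
CiscoSANswitch# show npiv status
NPIV is enabled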
3.
To disable NPIV, enter the no npiv enable command.
4.
To verify that NPIV is enabled on a specific port, enter the following command for port ext1:
CiscoSANswitch# show flogi database interface ext1
In this example, four servers are currently connected, and the port WWN of the VC-FC module is also listed.
In the PORT NAME column, more than one HBA appears. This indicates a successful implementation of VC-FC.
On the enclosure, two server blade HBAs are installed and powered on, and either an HBA driver is loaded or the
HBA BIOS utility is active. The third WWN on the port is the VC module (currently, all VC-FC modules use the
20:00…range).
5.
If the VC module is the only device on the port, verify that:
- A VC profile is applied to at least one server blade.
- At least one server blade with a profile applied is powered on.
- At least one server blade with a profile applied has an HBA driver loaded.
- You are using the latest BIOS version on your HBA.
Appendix E: Connecting VC FlexFabric to Cisco Nexus 50xx
and 55xx series
Since VC 4.01, Virtual Connect provides the ability to pass FCoE (Dual-Hop) to an external FCoE-capable network
switch such as the Nexus switches.
VC 3.70 and later allow you to connect VC FlexFabric modules to Nexus 50xx and 55xx series using Native Fibre
Channel.
For information about the FCoE integration between Virtual Connect and Nexus switches see Dual-Hop FCoE with HP
Virtual Connect modules Cookbook in the Related Documentation section.
For information about the Ethernet integration between Virtual Connect and Nexus switches see HP Virtual Connect
Flex-10 & FlexFabric Cisco Nexus 5000 & 2000 series Integration
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02656171/c02656171.pdf
Support information
Visit the “C-Series FCoE Switch Connectivity stream” from the SPOCK website to get the latest support information
for Virtual Connect. See http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=hw_switches.html
Fibre Channel functions on Nexus
Support of native Fibre Channel functions on Nexus switches has the following options:
 You can configure all Unified ports as 8/4/2/1G Native Fibre Channel. The SPS (Storage Protocol Services) license
is required to enable the use of Native FC operations. Unified ports are identified by their orange color:
Figure 67: Nexus N55-M16UP Expansion module for Nexus 5548 with 16 Unified Ports
 The expansion module N55-M8P8FP for Nexus 55xx series provides eight ports as native Fibre Channel ports. The
SPS license is also required to enable Native Fibre Channel operation. Fibre Channel ports are identified by their
green color:
Figure 68: Nexus N55-M8P8FP Expansion module for Nexus 5548 with 8 FC Ports
Figure 69: Nexus N5K-M1008 Expansion module for Nexus 50xx with 8 FC Ports
Note: The Ethernet Nexus ports on the base chassis, as well as those on the expansion modules, cannot be used to
support Fibre Channel functions. They are not colored, unlike the 8 ports on the left-hand side of the N55-M8P8FP
expansion module.
Figure 70: Nexus 5548 connected to FlexFabric modules using Native FC ports from the N55-M8P8FP Expansion
module
Figure 71: Nexus 5548 connected to FlexFabric modules using Unified ports
Configuration of the VC SAN Fabric
The configuration of the VC Domain will follow one of the scenarios described in this cookbook.
After you have configured the VC Domain with two or more VC SAN Fabrics (Fabric-Attach), configure the Nexus
switches.
Configuration of the Nexus switches
Enabling the FC protocol on Nexus ports
Before using FC capabilities, make sure that a valid Storage Protocol Services license is installed. If the license is
not found, the software loads the FC plugins with a grace period of 180 days. If the license is found, all Fibre Channel
and FCoE related CLI commands are available.
To enable Fibre Channel on the switch (including FCoE), enter the following commands from the CLI:
switch# configure terminal
switch(config)# feature fcoe
Note: Cisco SAN port channels that are useful to bond multiple Fibre Channel interfaces together for both
redundancy and increased aggregate throughput are not supported with Virtual Connect.
Configuring Fibre Channel Ports
When using expansion modules with native FC ports, no specific configuration is needed. Interfaces fc2/1, fc2/2, and
so on are automatically presented.
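As a quick sanity check (a sketch using the standard NX-OS command and pipe filter), you can list the interfaces and confirm that the expansion-module FC ports appear:
switch# show interface brief | include fc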
Configuring Unified Ports
By default, the Unified ports are Ethernet ports but you can change the port mode to native Fibre Channel on any
port on the Cisco Nexus 55xx switch.
Note: Fibre Channel ports cannot be enabled randomly; they must be configured in a specific order, starting from the
last Unified port of the module to the first one (1/32 > 1/31 > … > 1/1).
If you do not follow this order, the system displays the following error:
ERROR: FC range should end on last port of the module
To change the Unified port mode to Fibre Channel for ports 31 and 32, enter the following commands in the CLI:
switch# configure terminal
switch(config)# slot 1
switch(config-slot)# port 31-32 type fc
switch(config-slot)# copy running-config startup-config
switch(config-slot)# reload
When the switch comes back, two new Fibre Channel interfaces fc1/31 and fc1/32 are available.
To enable the two Fibre Channel interfaces, enter:
Nexus-switch(config)# interface fc1/31
Nexus-switch(config-if)# no shutdown
Nexus-switch(config)# interface fc1/32
Nexus-switch(config-if)# no shutdown
Complete the same configuration on the second Nexus switch.
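After the reload, a brief interface listing should show the two converted ports as online; as a sketch:
Nexus-switch# show interface fc1/31 brief
Nexus-switch# show interface fc1/32 brief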
Physical connection
Physically connect the VC SAN uplinks to the Nexus Fibre Channel interfaces. Make sure to use Fibre Channel
transceivers on the Nexus ports and on the FlexFabric uplinks.
Enabling NPIV on the Nexus switches
You must enable NPIV on the Nexus switches in order to connect to the VC FlexFabric modules.
Enabling NPIV using the GUI
1. From the Cisco Device Manager, click Admin, and then select Feature Control.
2.
From the Feature Control screen, click npiv.
3.
In the Action column select enable, and then click Apply.
4.
Click Close to return to the Device Manager screen.
Enabling NPIV using the CLI
1. To verify that NPIV is enabled, enter the following command:
Nexus-switch# show npiv status
NPIV is disabled
2.
To enable NPIV, use the following commands from global config mode:
Nexus-switch# config terminal
Nexus-switch(config)# feature npiv
Nexus-switch(config)# exit
Nexus-switch# copy running-config startup-config
NPIV is enabled globally on the switch on all ports and all VSANs.
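Re-running the status check from step 1 should now report the feature as enabled:
Nexus-switch# show npiv status
NPIV is enabled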
Connectivity checking
1.
To check an FC interface status, enter:
Nexus-switch# show interface fc2/2
or
Nexus-switch# show interface fc1/31
The FC interface must be up and displaying the link speed.
2.
To verify that NPIV is properly detected on a specific port, enter the following command for port fc2/2:
Nexus-switch# show flogi database interface fc2/2
The PORT NAME column displays two WWNs. This indicates a successful SAN connection between VC and two
installed and powered-on blade servers with either an FC driver loaded or the Emulex BIOS utility running.
The third WWN is the Virtual Connect module WWN. In this example, two servers are currently connected, and the
port WWN of the VC-FC module is also listed.
Note: If NPIV is not detected, a “No flogi sessions found” message is returned.
3.
If the VC module is the only device on the FC port, verify that:
- A VC profile is applied to at least one server blade.
- At least one blade server with a profile assigned is powered on.
- At least one blade server with a profile assigned has a CNA/HBA driver loaded.
- You are using the latest BIOS version on your CNA/HBA.
Appendix F: Connectivity verification and testing
Uplink Port connectivity verification
Use the VCM GUI to verify that the VC Fabric uplink ports and the server blades are logged in to the fabric.
1.
Click the Interconnect Bays link on the VCM left-menu.
2.
Click either the first VC-FC module (shown below as Bay 5), or the first FlexFabric module (shown in the second
image as Bay 1).
- With VC-FC module:
- With FlexFabric module: click Uplink Ports to see the uplink ports information.
3.
Make sure that all uplink ports are logged in to the fabric:
- With VC-FC module:
- With FlexFabric module:
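The same verification can also be made from the VCM CLI; as a sketch using the commands documented in the VC CLI User Guide (output omitted):
# List the VC SAN fabrics and their overall status
show fabric
# List the uplink ports with their link and login status
show uplinkport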
Uplink Port connection issues
Several issues can lead to a VC Fabric uplink port “NOT-LOGGED-IN” status:
 Faulty cable, transceiver failure, wrong or incompatible SFP used (port populated with an SFP module that does
not match the usage assigned to the port, such as a Fibre Channel SFP connected to a port designated for
Ethernet network traffic).
 Upstream switch does not support NPIV or NPIV is not enabled (See Appendix C, D, and E for more information
about configuring NPIV on FC switches).
 Using an unsupported configuration (see “Supported VC SAN fabric configuration”). For example:
- Uplink ports that are members of the same Fabric-Attach SAN Fabric have been connected to different SAN
switches belonging to different SAN Fabrics.
- Uplink ports have been connected directly to a Storage Disk Array when using a Fabric-Attach VC Fabric.
- Uplink ports that are members of a Direct-Attach fabric have been connected to a non-supported Storage Disk Array.
 The Direct-Attach fabric ports have been connected to un-configured 3PAR ports.
 You are connected to a Brocade SAN switch at 8Gb and you have not configured the fill word of the 8Gb FC port
(see Appendix C).
The Fibre Channel SFP transceiver brand is usually not the cause of a link-level connectivity issue, because many
FC SFPs are allowed to interoperate with VC-FC modules and FlexFabric modules regardless of whether they are HP
or third-party branded. In VC 3.30 and later, FlexFabric modules detect whether SFPs are from the supported list of HP
part numbers documented in the VC QuickSpecs. If the SFPs are not one of the HP part numbers, such as A7446B,
AJ715A, AJ716A, or AJ718A, they are reported as “uncertified”. If a customer has an issue with these pluggable
modules, HP support recommends using only officially supported models.
Some useful information that can help to troubleshoot connectivity issues is displayed in the SAN Fabrics or
Interconnect Bays / Module VCM pages:
Note: Port status information appears on several screens throughout the GUI.
If a port status is unlinked and no connectivity exists, one of the following causes may appear:
 Not Linked/E-Key—Port is not linked due to an electronic keying error. For example, a mismatch in the type of
technology exists between the server and module ports.
 Not Logged In—Port is not logged in to the remote device.
 Incompatible—Port is populated with an SFP module that does not match the usage assigned to the port, such as
a Fibre Channel SFP connected to a port designated for Ethernet network traffic.
Note: A port that is not assigned to a specific function is assumed to be designated for Ethernet network traffic. An
FCoE-capable port that has an SFP-FC module connected that is not yet assigned to a fabric or network is designated
for a network, and the status is "Incompatible". When a fabric is created on that port, the status changes to "Linked".
 Unsupported—Port is populated with an SFP module that is not supported. For example:
- An unsupported module is connected.
- A 1Gb or 10Gb Ethernet module is connected to a port that does not support that particular speed.
- An LRM module is connected to a port that is not LRM-capable.
- An FC module is connected to a port that is not FC-capable.
 Administratively Disabled—Port has been disabled by an administrative action, such as setting the uplink port
speed to ‘disabled.’
 Unpopulated—Port does not have an SFP module connected.
 Unrecognized—SFP module connected to the port cannot be identified.
 Failed Validation—SFP module connected to the port failed HPID validation.
 Smart Link—Smart Link feature is enabled.
 Not Linked/Loop Protected—VCM is intercepting BPDU packets on the server downlink ports and has disabled
the server downlink port to prevent a loop condition.
 Linked/Uncertified—Port is linked to another port, but the connected SFP module is not certified by HP to be
fully compatible. In this case, the SFP module might not work properly. Use certified modules to ensure reliable
server traffic.
Server Port connectivity verification
Boot the server before verifying that a blade server can successfully log in to the fabrics.
1.
Start the blade server. The HBA logs in to the SAN fabric right after the HBA BIOS screen is shown during POST.
2.
From the Interconnect Bays / Module, verify that the server is logged in to the fabric:
- With VC-FC module:
- With FlexFabric module, by going to the Server Ports tab:
Server Port connection issues
There are several issues that can lead to a server ‘NOT-LOGGED-IN’ status:
- The VC Fabric uplink ports are also in a NOT-LOGGED-IN state (see the previous section).
- Server is turned off or is rebooting.
- HBA/CNA firmware is out of date.
Connectivity verification from the upstream SAN switch
You can also verify connectivity from the upstream SAN switch.
1.
Log in to the Brocade SAN switch GUI.
2.
Click Name Server on the left side of the screen. The Name Server list appears.
3.
From the Name Server screen, locate the PWWN the server uses (such as 50:06:0B:00:00:C2:62:2C) and then
identify the Brocade Port used by the VC-FC uplink (port 11).
4.
From a Command Prompt, open a Telnet session to the Brocade SAN switch and enter:
Brocade:admin> switchshow
The comment “NPIV public” on port 11 means that port 11 is detected as using NPIV.
5.
To get more information on port 11, type:
Brocade:admin> portshow 11
Testing the loss of uplink ports
This section provides details for testing the loss of uplink ports in a VC SAN Fabric, confirming the port failover in the
same VC SAN Fabric, testing the loss of a complete VC SAN Fabric (all ports), and checking the working status of the
MPIO Driver.
Note: To find more information about MPIO, visit the MPIO manuals web page.
This test is made on a Boot from SAN server running Windows 2008 R2 with an MPIO driver installed for HP EVA. This
server VC profile has a redundant FCoE connection to reach the EVA.
The WWPN of this server is 50:06:0B:00:C3:1A:04 for port 1 and 50:06:0B:00:C3:1A:06 for port 2.
Each SAN Fabric has been configured with two uplink ports (X1 & X2) belonging to two different modules.
The HP MPIO for EVA properly detects the four active paths to the C:\ drive.
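The path count can also be confirmed from the Windows command line with the built-in MPIO utility; a sketch, where the disk number depends on the host:
C:\> mpclaim -s -d
C:\> mpclaim -s -d 0
The first command lists the MPIO disks; the second displays the individual paths for disk 0.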
The server is currently logged in to the Fabric through Port 1 of the upstream Brocade SAN switch. This Brocade port 1
is physically connected to the VC FlexFabric module 1 uplink port X1.
To verify the VC uplink port failover:
1.
Simulate a failure of VC fabric uplink port X1 by disabling the upstream Brocade port 1.
Note: When the link is recovered from the SAN switch, there is no failback to the original port. The host stays
logged in to the current uplink port. The next host that logs in is balanced to another available port on the VC-FC
module.
2.
The VC Uplink Port X1 is now disconnected.
3.
Back at the Brocade command line, the port 2 information shows the new login distribution: the server is now
using Brocade Port 2 instead of Port 1.
The server has been automatically reconnected (failed over) to another uplink port. The failover only takes a
few seconds to complete.
4.
The server MPIO Manager shows no degraded state, as VC Fabric_1 remains healthy with one port still
connected.
5.
Disconnect the remaining port of Fabric_1 by shutting down port 2 on the Brocade SAN Switch.
Note: Before you turn off the port, make sure both server HBA ports are correctly presented to the Storage
Array.
6.
From the Brocade command line, disable port 2.
7.
The new VC SAN Fabric status is now ‘Failed’ as all port members have been disconnected.
8.
On the server side, the Boot from SAN server is still up and running, with the server MPIO Manager showing a
‘Degraded’ state because half of the active paths have been lost.
The failover to the second HBA port took only a few seconds to complete and has not affected the Operating
System.
Appendix G: Boot from SAN troubleshooting
Verification during POST
If you are having Boot from SAN issues with Virtual Connect, you can gather useful information during the server
Power-On Self-Test (POST).
Boot from SAN not activated
During POST, the “BIOS is not installed” message sometimes means that Boot from SAN is not activated.
Figure 72: Qlogic HBA showing Boot from SAN error during POST (no SAN volume; Boot from SAN is not activated)
Figure 73: Emulex showing Boot from SAN deactivated during POST
Figure 74: Emulex OneConnect Utility (press CTRL+E) showing Boot from SAN deactivated during POST
Boot from SAN activated
Figure 75: Qlogic HBA showing Boot from SAN activated and SAN volume detected during POST
Figure 76: Emulex showing Boot from SAN activated and SAN volume detected during POST with an EVA Storage
array (SAN volume detected by the two adapters)
Figure 77: Emulex showing Boot from SAN activated and SAN volume detected during POST with a Direct-Attach
3PAR Storage array (SAN volume detected by the two adapters)
Boot from SAN misconfigured
Figure 78: A “BIOS is not installed” message can also be shown when Boot from SAN is activated but the Boot
target WWPN is incorrectly configured
Troubleshooting
Main points to check when facing a Boot from SAN error (a CLI sketch for reviewing the profile follows this list):
 Make sure the storage presentation and zoning configuration are correct.
 Under the VC Profile, check the Boot from SAN configuration; make sure the WWPN of the Storage target and the
LUN number are correct.
 Make sure the VC Fabric uplink ports are logged in to the fabric (see Appendix F).
 Make sure the FC/FCoE server ports are logged in to the fabric (see Appendix F).
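To review the configured SAN boot parameters quickly, the profile's FCoE connections can also be listed from the VCM CLI; a sketch assuming the BfS_Profile_1 profile from Appendix B (the show fcoe-connection command is documented in the VC CLI User Guide; verify the exact syntax for your firmware):
# Display the FCoE connections of the profile, including the SAN boot settings
show fcoe-connection BfS_Profile_1:1
show fcoe-connection BfS_Profile_1:2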
Appendix H: Fibre Channel Port Statistics
Fibre Channel port statistics are available in the Virtual Connect GUI and CLI to provide improved reporting and
metrics for both FC server ports and FC uplink ports of the VC modules (i.e. FlexFabric modules and VC 8G/16G FC
modules).
Note: FC Statistics are not available on the HP Virtual Connect 8Gb and 4Gb 20-Port Fibre Channel Modules.
Note: Throughput Statistics are only available for the Ethernet traffic at this time.
Note: FC statistics are also available through SNMP using the Fibre Alliance MIB (also known as FCMGMT-MIB, RFC
4044). You can download the Fibre Alliance MIB from the Fibre Alliance website at the following link:
http://www.fibrealliance.org/fb/mib_intro.htm. Once FCMGMT-MIB is imported into an SNMP tool, you can collect
all necessary FC statistics from the Virtual Connect modules. With FlexFabric adapters, FlexHBA ports can be
monitored using the Bridge MIB (RFC 4188) and the Interface MIB (RFC 2863).
 For more information about SNMP and how to enable SNMP, refer to the Virtual Connect User Guide.
 For more information about FC statistics that are available for the different VC modules and their detailed
descriptions, refer to the Virtual Connect User Guide.
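As a sketch of such a collection, assuming the Net-SNMP tools, an SNMP v2c read community of public, and a module management address of 192.0.2.10 (all placeholders), the Fibre Alliance MIB subtree can be walked directly:
snmpwalk -v 2c -c public 192.0.2.10 1.3.6.1.3.94
Here 1.3.6.1.3.94 is the experimental OID root of the Fibre Alliance (fcmgmt) MIB.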
FC Uplink Port statistics
To access FC uplink port statistics from the VC GUI:
1.
Click Interconnect Bays at the bottom of the left navigation menu.
2.
Click on the bay number corresponding to the module on which you need statistics:
3.
On the next page, select the Uplink Ports tab (shown here with a VC FlexFabric module):
4.
Detailed information and statistics are available for each FC Uplink port:
 With VC FlexFabric module:
 With VC 8G/16G Fibre Channel module:
Note: FC statistics from the VC 16Gb 24-Port FC Module and VC 8Gb 24-Port FC Module require VC 4.40 and
later.
5.
When you click on the Detailed Stats / Info link, the following FC statistics are displayed:
Note: For a detailed description of every counter, refer to the Virtual Connect User Guide.
6.
The same statistics are also available from the VC CLI:
# Show the statistics of FC uplink port X1 of VC FlexFabric module in bay 2
show statistics enc0:2:X1
FC Server Port statistics
Fibre Channel statistics are collected for all server ports on the VC 8G/16G 24-port FC Module.
Important: On Virtual Connect FlexFabric Modules, all server downlink ports are Enhanced Ethernet ports only, so
native FC statistics are not provided; Fibre Channel over Ethernet (FCoE) is used to transport FC frames. The
following screenshot shows the Ethernet detailed statistics of the first FlexHBA of the server in bay 3 (i.e. LOM1:1b of d3):
To access FC server port statistics from the VC GUI:
1.
Click in the left navigation menu: Interconnect Bays > bay number.
2.
Detailed information and statistics are available for each FC Server port:
Note: FC statistics from the VC 16Gb 24-Port FC Module and VC 8Gb 24-Port FC Module require VC 4.40 and later.
3.
When you click on the Detailed Stats / Info link, the following FC statistics are displayed:
Note: For a detailed description of every counter, refer to the Virtual Connect User Guide.
4.
The same statistics are also available from the VC CLI:
# Show statistics of FC server port d3, physical function 2 (FCoE enabled port) of VC FlexFabric module in bay 1
show statistics enc0:1:d3:v2
Acronyms and abbreviations
Term                   Definition
BIOS                   Basic Input/Output System
CLI                    Command Line Interface
CNA                    Converged Network Adapter
DCB                    Data Center Bridging (new enhanced lossless Ethernet fabric)
GUI                    Graphical User Interface
FC                     Fibre Channel
FCoE                   Fibre Channel over Ethernet
Flex-10 NIC Port*      A physical 10Gb port that is capable of being partitioned into 4 Flex NICs
FlexHBA**              Flexible Host Bus Adapter. Physical function 2 of a FlexFabric CNA can act as either an Ethernet NIC, FCoE connection, or iSCSI NIC with boot and iSCSI offload capabilities.
FOS                    Fabric OS, Brocade Fibre Channel operating system
HBA                    Host Bus Adapter
I/O                    Input / Output
IOS                    Cisco OS (originally Internetwork Operating System)
IP                     Internet Protocol
iSCSI                  Internet Small Computer System Interface
LACP                   Link Aggregation Control Protocol (see IEEE802.3ad)
LOM                    LAN-on-Motherboard. Embedded network adapter on the system board
LUN                    Logical Unit Number
MPIO                   Multipath I/O
MZ1 or MEZZ1; LOM      Mezzanine Slot 1; (LOM) LAN Motherboard/Systemboard NIC
NPIV                   N_Port ID Virtualization
NXOS                   Cisco OS for Nexus series
OS                     Operating System
POST                   Power-On Self-Test
RCFC                   Remote Copy over Fibre Channel
RCIP                   Remote Copy over IP
ROM                    Read-only memory
SAN                    Storage Area Network
SCSI                   Small Computer System Interface
SFP                    Small form-factor pluggable transceiver
SPS                    Storage Protocol Services
SSH                    Secure Shell
VC                     Virtual Connect
VC-FC                  Virtual Connect Fibre Channel module
VCM                    Virtual Connect Manager
VLAN                   Virtual Local-area network
VSAN                   Virtual storage-area network
vNIC                   Virtual NIC port. A software-based NIC used by Virtualization Managers
vNet                   Virtual Connect Network used to connect server NICs to the external Network
WWN                    World Wide Name
WWPN                   World Wide Port Name

*This feature was added for Virtual Connect Flex-10
**This feature was added for Virtual Connect FlexFabric
Support and Other Resources
Contacting HP
Before you contact HP
Be sure to have the following information available before you contact HP:
 Technical support registration number (if applicable)
 Product serial number
 Product model name and number
 Product identification number
 Applicable error message
 Add-on boards or hardware
 Third-party hardware or software
 Operating system type and revision level
HP contact information
For help with HP Virtual Connect, see the HP Virtual Connect webpage: http://www.hp.com/go/virtualconnect
For the name of the nearest HP authorized reseller:
See the Contact HP worldwide (in English) webpage:
http://www.hp.com/country/us/en/wwcontact.html
For HP technical support:
In the United States, for contact options see the Contact HP United States webpage:
http://welcome.hp.com/country/us/en/contact_us.html
To contact HP by phone:
 Call 1-800-HP-INVENT (1-800-474-6836). This service is available 24 hours a day, 7 days a week. For continuous
quality improvement, calls may be recorded or monitored.
 If you have purchased a Care Pack (service upgrade), call 1-800-633-3600. For more information about Care
Packs, refer to the HP website:
http://www.hp.com/hps
 In other locations, see the Contact HP worldwide (in English) webpage:
http://welcome.hp.com/country/us/en/wwcontact.html
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/country/us/en/contact_us.html
After registering, you will receive email notification of product enhancements, new driver versions, firmware
updates, and other product resources.
Related documentation
HP Virtual Connect Manager 4.40 Release Notes
http://h20564.www2.hp.com/portal/site/hpsc/public/kb/docDisplay/?docId=c04568350
HP Virtual Connect for c-Class BladeSystem Version 4.40 User Guide
http://h20564.www2.hp.com/portal/site/hpsc/public/kb/docDisplay/?docId=c04562188
HP Virtual Connect Version 4.40 CLI User Guide
http://h20564.www2.hp.com/portal/site/hpsc/public/kb/docDisplay/?docId=c04562191
HP Virtual Connect for c-Class BladeSystem Setup and Installation Guide Version 4.40
http://h20564.www2.hp.com/portal/site/hpsc/public/kb/docDisplay/?docId=c04562185
HP Virtual Connect FlexFabric Cookbook
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf
FCoE Cookbook for HP Virtual Connect
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03808925/c03808925.pdf
iSCSI Cookbook for HP Virtual Connect
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02533991/c02533991.pdf
HP Virtual Connect Multi-Enclosure Stacking Reference Guide
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02102153/c02102153.pdf
Implementing HP Virtual Connect Direct-Attach Fibre Channel with HP 3PAR StoreServ Systems,
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c03605173/c03605173.pdf
HP Boot from SAN Configuration Guide
http://h20564.www2.hp.com/bc/docs/support/SupportManual/c01861120/c01861120.pdf
HP welcomes your feedback. To make comments and suggestions about product documentation, send a message
to docsfeedback@hp.com. Include the document title and manufacturing part number. All submissions become
the property of HP.
Get connected
hp.com/go/getconnected
Current HP driver, support, and security alerts
delivered directly to your desktop
© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only
warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing
herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained
herein.
c01702940, Edition 3 - Updated February 2015