HP 3PAR VMware ESX/ESXi Implementation Guide
Abstract
This implementation guide provides information for establishing communication between an HP 3PAR StoreServ Storage and
a VMware ESX host. General information is provided on the basic procedures required to allocate storage on the HP 3PAR
StoreServ Storage that is accessed by the ESX host.
HP Part Number: QL226-98122
Published: March 2015
© Copyright 2015 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Oracle® is a registered trademark of Oracle and/or its affiliates.
Red Hat® is a registered trademark of Red Hat, Inc. in the United States and other countries.
VMware®, VMware® vCenter™, and VMware vSphere® are registered trademarks or trademarks of VMware, Inc. in the United States and/or
other jurisdictions.
Microsoft® and Windows® are trademarks of the Microsoft group of companies.
Contents
1 Introduction...............................................................................................6
Supported Configurations..........................................................................................................6
HP 3PAR OS Upgrade Considerations.........................................................................................7
Audience.................................................................................................................................7
2 Configuring the HP 3PAR StoreServ Storage for FC.........................................8
Configuring the HP 3PAR StoreServ Storage Port Running HP 3PAR OS 3.2.x or 3.1.x .......................8
Setting Up the Ports..............................................................................................................9
Creating the Host Definition................................................................................................10
Setting Up and Zoning the Fabric.............................................................................................11
HP 3PAR Coexistence.........................................................................................................12
Configuration Guidelines for FC Switch Vendors....................................................................13
Target Port Limits and Specifications for FC............................................................................14
HP 3PAR Priority Optimization for FC...................................................................................14
HP 3PAR Persistent Ports for FC............................................................................................15
HP 3PAR Persistent Ports Setup and Connectivity Guidelines for FC......................................15
3 Configuring the HP 3PAR StoreServ Storage for iSCSI....................................16
Setting Up the Ports for an iSCSI Connection..............................................................................16
Creating the iSCSI Host Definition on an HP 3PAR StoreServ Storage.............................................17
Target Port Limits and Specifications..........................................................................................19
HP 3PAR Priority Optimization for iSCSI.....................................................................................19
HP 3PAR Persistent Ports for iSCSI.............................................................................................20
4 Configuring the HP 3PAR StoreServ Storage for FCoE....................................21
Setting Up the FCoE Switch, FCoE Initiator, and FCoE target ports.................................................21
Creating the Host Definition.....................................................................................................23
Target Port Limits and Specifications..........................................................................................25
HP 3PAR Priority Optimization for FCoE.....................................................................................25
HP 3PAR Persistent Ports for FCoE.............................................................................................25
HP 3PAR Persistent Ports Setup and Connectivity Guidelines for FCoE.....................26
5 Configuring the Host for an FC Connection..................................................27
Installing the HBA and Drivers..................................................................................................27
Installing Virtual Machine Guest OS..........................................................................................28
Multipath Failover Considerations and I/O Load Balancing..........................................................30
Configuring Round Robin Multipathing on ESX 4.x or later for FC............................................31
Configuring ESX/ESXi Multipathing for Round Robin via SATP PSP............................................33
ESX/ESXi 4.0 GA - 4.0 MUx..........................................................................................34
ESX/ESXi 4.1 GA - 4.1 MUx...........................................................................................35
ESXi 5.x and ESXi 6.0...................................................................................................35
SATP Info Commands.........................................................................................................36
Default SATP Rules and Their Current Default PSP...............................................................36
SATP Custom Rules and Associated Defined Parameters......................................................37
Show Device Information................................................................................................37
Script Alternative for Path Policy Changes on Storage Devices without a Host Reboot..............38
Performance Considerations for Multiple Host Configurations........................................................39
ESX/ESXi Handling SCSI Queue Full and Busy Messages from the HP 3PAR StoreServ Storage Array...............................................................................................39
VMware ESX Release ESX 4.x, ESXi 5.0 and 5.0 Updates, ESXi 5.5 and 5.5 Updates, and ESXi 6.0......................................................................................39
VMware ESXi Release 5.1...............................................................................................39
ESX/ESXi 4.1, ESXi 5.x, and ESXi 6.0 Additional Feature Considerations........................................40
Storage I/O Control...........................................................................................................40
VAAI (vStorage APIs for Array Integration).............................................................................40
HP 3PAR VAAI Plug-in 1.1.1 for ESX 4.1.................................................................................41
HP 3PAR VAAI Plug-in 2.2.0 for ESXi 5.x and ESXi 6.0...........................................................41
UNMAP (Space Reclaim) Storage Hardware Support for ESXi 5.x or ESXi 6.0...........................41
Out-of-Space Condition for ESX 4.1, ESXi 5.x, or ESXi 6.0.......................................................42
Additional New Primitives Beginning with ESXi 5.x.................................................................44
VAAI and New Feature Support Table..................................................................................44
VAAI Plug-in Verification.....................................................................................................44
6 Configuring the Host as an FCoE Initiator Connecting to an FC target or an FCoE Target........................................................................................47
Configuring the FCoE Switch....................................................................................................47
Using system BIOS to configure FCoE........................................................................................47
Configuring an HP 3PAR StoreServ Storage Port for an FCoE Host Connection................................49
Configuring Initiator FCoE to FC Target.....................................................................................50
Configuring Initiator FCoE to FCoE Target..................................................................................51
7 Configuring the Host for an iSCSI Connection..............................................52
Setting Up the Switch and iSCSI Initiator....................................................................................52
Installing Software iSCSI on VMware ESX..................................................................................52
Installing Virtual Machine Guest Operating System.....................................................................53
Creating a VMkernel Port........................................................................................................54
Configuring a Service Console Connection for the iSCSI Storage..................................................57
Configuring the VMware Software iSCSI Initiator........................................................................59
Setting Up and Configuring Challenge-Handshake Authentication Protocol.....................................62
Configuring CHAP on ESX/ESXi Host........................................................................................63
Hardware iSCSI Support .........................................................................................................64
Independent Hardware iSCSI..............................................................................................64
Dependent Hardware iSCSI.................................................................................................67
iSCSI Failover Considerations and Multipath Load Balancing........................................................71
Performance Considerations for Multiple Host Configurations........................................................71
ESX/ESXi Additional Feature Considerations..............................................................................71
8 Allocating Storage for Access by the ESX Host.............................................72
Creating Storage on the HP 3PAR StoreServ Storage...................................................................72
Creating Thinly Deduplicated Virtual Volumes........................................................................72
Creating Virtual Volumes....................................................................................................72
Exporting LUNs to an ESX Host.................................................................................................73
Creating a VLUN for Export................................................................................................74
Discovering LUNs on VMware ESX Hosts...................................................................................75
Removing Volumes..................................................................................................................75
Host and Storage Usage.........................................................................................................77
Eventlog and Host Log Messages.........................................................................................77
9 Booting the VMware ESX Host from the HP 3PAR StoreServ Storage................78
10 Using VVOLs (VMware Virtual Volumes) with HP 3PAR StoreServ Storage.......79
VMware VVOL Protocol Endpoint and HP 3PAR..........................................................................79
The HP 3PAR VASA Provider................................................................................................81
VVOL Administrator User ID and Storage Container Setup.......................................................81
Defining CPGs for VVOLs....................................................................................................82
Default HP 3PAR CPGs for VVOLs........................................................................................83
Registering the HP 3PAR VASA Provider with vCenter..............................................................84
11 Support and Other Resources...................................................................88
Contacting HP........................................................................................................................88
HP 3PAR documentation..........................................................................................................88
Typographic conventions.........................................................................................................91
HP 3PAR branding information.................................................................................................91
12 Documentation feedback.........................................................................92
Index.........................................................................................................93
1 Introduction
This implementation guide provides information for establishing communication between an HP 3PAR
StoreServ Storage and a VMware ESX host. General information is provided on the basic procedures
for allocating storage on the HP 3PAR StoreServ Storage that is accessed by the ESX host.
Required
For predictable performance and results with HP 3PAR StoreServ Storage, use the information in
this guide in concert with the documentation set provided by HP 3PAR for the HP 3PAR StoreServ
Storage and the documentation provided by the vendor for their respective products.
HP 3PAR OS versions
For purposes of this guide, listed below are the HP 3PAR OS versions in chronological release
order:
HP 3PAR OS 2.3.1→HP 3PAR OS 3.1.1→HP 3PAR OS 3.1.2→HP 3PAR OS 3.1.3→HP 3PAR OS 3.2.1
For example, where the guidance suggests HP 3PAR OS 3.1.x and later, it applies to HP 3PAR
OS 3.1.1, HP 3PAR OS 3.1.2, HP 3PAR OS 3.1.3, HP 3PAR OS 3.2.1 and any associated MUs.
Supported Configurations
The following types of host connections are supported between the HP 3PAR StoreServ Storage
and hosts running a VMware ESX OS:
• Fibre Channel (FC)
• iSCSI
  ◦ As a Software iSCSI initiator
  ◦ As a Hardware iSCSI initiator
• Fibre Channel over Ethernet (FCoE) connections
  ◦ FCoE initiator connecting to an FC target or an FCoE target
FC connections are supported between the HP 3PAR StoreServ Storage and the ESX host in both
a fabric-attached and direct-connect topology.
For information about supported hardware and software platforms, see HP SPOCK:
http://www.hp.com/storage/spock
For more information about HP 3PAR storage products, use the links provided in the following
table:
Table 1 HP 3PAR Storage Products
Product                               See...
HP 3PAR StoreServ 7000 Storage        HP Support Center
HP 3PAR StoreServ 10000 Storage       HP Support Center
HP 3PAR Storage Systems               HP Support Center
HP 3PAR Software — Device Management  HP Support Center
HP 3PAR Software — Replication        HP Support Center
HP 3PAR OS Upgrade Considerations
This implementation guide refers to new installations. For information about planning an online
HP 3PAR OS upgrade, refer to the HP 3PAR Operating System Upgrade Pre-Planning Guide, at
the HP Storage Information Library:
http://www.hp.com/go/storage/docs
For complete details about supported host configurations and interoperability, refer to the Support
Matrix at HP SPOCK:
http://www.hp.com/storage/spock
Audience
This implementation guide is intended for system and storage administrators who monitor and
direct system configurations and resource allocation for the HP 3PAR StoreServ Storage.
This guide provides basic information for establishing communication between the HP 3PAR
StoreServ Storage and the VMware ESX host and allocating the required storage for a given
configuration. Refer to the appropriate HP documentation in conjunction with the ESX host and
Host Bus Adapter (HBA) documentation for specific details and procedures.
2 Configuring the HP 3PAR StoreServ Storage for FC
This chapter explains how to establish an FC connection between the HP 3PAR StoreServ Storage
and a VMware ESX host and covers HP 3PAR OS versions 3.2.x, 3.1.x and 2.3.x. For information
on setting up the physical connection for a particular HP 3PAR StoreServ Storage, see the
appropriate HP 3PAR installation manual.
Configuring the HP 3PAR StoreServ Storage Port Running HP 3PAR OS
3.2.x or 3.1.x
This section describes how to connect the HP 3PAR StoreServ Storage with the HP 3PAR OS to
an ESX host over an FC network.
By default, the QLogic, Emulex, and Brocade drivers for the VMware ESX server support failover.
For failover support using the QLogic, Emulex, or Brocade driver, VVs should be simultaneously
exported down multiple paths to the host. To do this, create a host definition on the HP 3PAR
StoreServ Storage that includes the WWNs of multiple HBA ports on the host, and then export the
VLUNs to that host definition. If each ESX/ESXi server within a cluster has its own host definition,
the VLUNs must be exported to multiple host definitions.
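For example, the following is a minimal sketch of this flow using the HP 3PAR CLI; the WWNs, volume name, and LUN number are hypothetical, and the createhost and VLUN export steps are covered in detail later in this guide:

# createhost -persona 11 ESXserver1 10000000C9724AB2 10000000C97244FE
# createvlun testvv.0 1 ESXserver1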
NOTE:
• Before connecting the HP 3PAR StoreServ Storage port to the host, complete the following setup.
• When deploying HP Virtual Connect Direct-Attach Fibre Channel storage for HP 3PAR StoreServ Storage systems, where the HP 3PAR StoreServ Storage ports are cabled directly to the uplink ports on the HP Virtual Connect FlexFabric 10 Gb/24-port Module for c-Class BladeSystem, follow the procedure for configuring the HP 3PAR StoreServ Storage ports for a fabric connection.
• For more information about HP Virtual Connect, HP Virtual Connect interconnect modules, and the HP Virtual Connect Direct-Attach Fibre Channel feature, see the HP Support Center:
  http://h20566.www2.hp.com/portal/site/hpsc/
  Refer also to the HP SAN Design Reference Guide at HP SPOCK:
  http://www.hp.com/storage/spock
• Download Brocade HBA drivers, firmware, and the BCU utility from the QLogic website at http://www.qlogic.com. Specific Brocade drivers are named with a BR prefix, such as BR-xxx HBA model.
  This link will take you outside the Hewlett-Packard website. HP does not control and is not responsible for information outside of HP.com.
Setting Up the Ports
Before connecting the HP 3PAR StoreServ Storage to a host, the connection type and mode must
be specified. To set up the HP 3PAR StoreServ Storage ports for a direct or fabric connection,
complete the following procedure for each port.
1. Determine if a port is configured in host mode:
# showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2   UniqNodeWwn VCN      IntCoal
2:0:1 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
2:0:2 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
2:4:1 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
2:4:2 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
3:0:1 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
3:0:2 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
3:4:1 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
3:4:2 disk     loop     auto    4Gbps   disabled disabled    disabled enabled
A host port is essentially a target mode port where the initiator or host can log in to the HP 3PAR
StoreServ Storage.
2. If the port was not configured, take the port offline before configuring it for the ESX host:
# controlport offline <node:slot:port>
CAUTION: Before taking a port offline in preparation for a direct or fabric connection, verify that the port has not been previously defined and that it is not already connected to a host, as this would interrupt the existing host connection.
If an HP 3PAR StoreServ Storage port is already configured for a direct or fabric connection, skip this step; it is not necessary to take the port offline.
3. Configure the port for the host by using the -ct parameter:
# controlport config host -ct [loop|point] <node:slot:port>
With a direct connection, use the -ct loop parameter. With a fabric connection, use the -ct point parameter.
4. Use the controlport rst <node:slot:port> command to reset and register the new port definitions.
The following example shows how to set up a fabric-connected port:
# controlport offline 1:5:1
# controlport config host -ct point 1:5:1
# controlport rst 1:5:1
Creating the Host Definition
Before connecting the ESX host to the HP 3PAR StoreServ Storage, a host definition needs to be
created that specifies a valid host persona for each HP 3PAR StoreServ Storage port that is to be
connected to a host HBA port through a fabric or direct connection.
• ESX/ESXi uses the generic legacy host persona of 6 for HP 3PAR OS 3.1.1 or earlier.
• As of HP 3PAR OS 3.1.2, a second host persona, 11 (VMware), which enables asymmetric logical unit access (ALUA), is available.
  ◦ Host persona 11 (VMware) is recommended for new ESX/ESXi installations and is required for ESX/ESXi hosts configured as part of HP 3PAR Peer Persistence and VMware vSphere Metro Storage Cluster (vMSC) configurations.
• ESX/ESXi hosts performing HP 3PAR single volume Peer Motion are required to use HP 3PAR OS 3.1.3 or later and host persona 11.
• For ESX/ESXi hosts with HP 3PAR Remote Copy, refer to the Remote Copy Users Guide for the appropriate host persona to use in specific Remote Copy configurations, at the HP Storage Information Library:
  http://www.hp.com/go/storage/docs
• Host persona 6 (Generic-legacy) will not be supported for any version of VMware ESX deployed with HP 3PAR OS versions after HP 3PAR OS 3.1.3. With HP 3PAR OS 3.1.2 or later, HP recommends migrating ESX configurations to host persona 11 (VMware).
NOTE: When changing an existing host persona from 6 to 11, a host restart is required for the change to take effect. The host persona change should coincide with changing the SATP rules on the host as well (see the sketch below).
For information on migrating the ESX host from host persona 6 (Generic-Legacy) to host persona 11 (VMware), refer to the HP 3PAR StoreServ Storage VMware ESX Host Persona Migration Guide at the HP Storage Information Library:
http://www.hp.com/go/storage/docs
For both host persona 6 and persona 11, see the appropriate chapters in this guide for iSCSI, FC, or FCoE setup considerations.
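As a sketch of the SATP change that accompanies the persona migration (covered in detail in "Configuring ESX/ESXi Multipathing for Round Robin via SATP PSP"), a custom rule of the following form sets round robin as the default path policy for HP 3PAR ALUA devices on an ESXi 5.x host; the rule description string is illustrative:

# esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom Rule"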
1. Display available host personas:
# showhost -listpersona
2. Create host definitions by using the createhost command with the -persona option to specify the persona and the host name.
With HP 3PAR OS 3.1.1 or earlier:
# createhost -persona 6 ESXserver1 10000000C9724AB2 10000000C97244FE
With HP 3PAR OS 3.1.2 or later:
# createhost -persona 11 ESXserver1 10000000C9724AB2 10000000C97244FE
3. Verify that the host has been created by using the showhost command.
With HP 3PAR OS 3.1.1 or earlier, using persona 6:
# showhost
Id Name       Persona        -WWN/iSCSI_Name-  Port
 0 ESXserver1 Generic-legacy 10000000C9724AB2  ---
                             10000000C97244FE  ---
With HP 3PAR OS 3.1.2 or later, using persona 11:
# showhost
Id Name       Persona -WWN/iSCSI_Name-  Port
 0 ESXserver2 VMware  100000051EC33E00  ---
                      100000051EC33E01  ---
Use showhost -persona to show the persona name and Id relationship.
# showhost -persona
Id Name       Persona_Id Persona_Name   Persona_Caps
 0 ESXserver1 6          Generic-legacy --
 1 Esxserver2 11         VMware         SubLun, ALUA
NOTE: If the persona is not correctly set, use the sethost -persona <host number> <hostname> command to correct the issue, where host number is 6 (for HP 3PAR OS 3.1.1 or earlier) or 11 (for HP 3PAR OS 3.1.2 or later).
A restart of the ESX host is required if the host persona is changed to 11.
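For example, a minimal sketch using the host name from the examples above:

# sethost -persona 11 ESXserver1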
NOTE:
As of the HP 3PAR OS 3.2.1 release, an in-band device known as a PE LUN (Protocol Endpoint
LUN) is exposed to all hosts connected using persona 11 (VMware). The PE LUN is presented as
part of the support for VMware vSphere 6.0 VVOLs. Depending on the host HBA/CNA driver in
use, the PE LUN might appear as LUN number 256 in the host device list. In addition, a warning
message related to the PE LUN might appear in the ESXi 6.0 vmkernel logs when using drivers
that do not support VVOLs. However, the error does not interfere with normal, non-VVOL operation.
NOTE: Refer to the HP 3PAR Command Line Interface Reference or the HP 3PAR Management
Console User Guide for complete details about using the controlport, createhost, and
showhost commands.
These documents are available at the HP Storage Information Library:
http://www.hp.com/go/storage/docs
Setting Up and Zoning the Fabric
NOTE: This section does not apply to HP 3PAR storage systems where the HP 3PAR StoreServ
Storage ports are cabled directly to the uplink ports on the HP Virtual Connect FlexFabric 10
Gb/24-port Module for c-Class BladeSystem. Zoning is automatically configured based on the
Virtual Connect SAN Fabric and server profile definitions.
For more information about HP Virtual Connect, HP Virtual Connect interconnect modules, HP Virtual
Connect Direct-Attach Fibre Channel feature, and the HP SAN Design Reference Guide, see
HP SPOCK:
http://www.hp.com/storage/spock
Fabric zoning controls which FC end-devices have access to each other on the fabric. Zoning also
isolates the host and HP 3PAR StoreServ Storage ports from Registered State Change Notifications
(RSCNs) that are irrelevant to these ports.
Set up fabric zoning by associating the device World Wide Names (WWNs) or the switch ports
with specified zones in the fabric. Use either the WWN method or the port zoning method with
the HP 3PAR StoreServ Storage. The WWN zoning method is recommended because the zone
survives switch port changes when cables are moved around on the fabric.
Required:
Employ fabric zoning, by using the methods provided by the switch vendor, to create relationships
between host HBA ports and HP 3PAR StoreServ Storage ports before connecting the host HBA
ports or HP 3PAR StoreServ Storage ports to the fabrics.
FC switch vendors support the zoning of the fabric end-devices in different zoning configurations.
There are advantages and disadvantages with each zoning configuration, so determine what is
needed before choosing a zoning configuration.
The HP 3PAR arrays support the following zoning configurations:
• One initiator to one target per zone.
• One initiator to multiple targets per zone (zoning by HBA). This zoning configuration is recommended for the HP 3PAR StoreServ Storage. Zoning by HBA is required for coexistence with other HP Storage arrays (see the sketch at the end of this section).
NOTE:
◦ For high availability and clustered environments that require multiple initiators to access the same set of target ports, HP recommends creating separate zones for each initiator with the same set of target ports.
◦ The storage targets in the zone can be from the same HP 3PAR StoreServ Storage, multiple HP 3PAR StoreServ Storages, or a mixture of HP 3PAR and other HP storage systems.
For more information about using one initiator to multiple targets per zone, refer to the HP SAN Design Reference Guide at HP SPOCK:
http://www.hp.com/storage/spock
If an issue occurs while using an unsupported zoning configuration, HP might require implementing one of the supported zoning configurations as part of the corrective action.
After completing the following tasks, verify the switch and zone configurations by using the HP 3PAR CLI showhost command to make sure that each initiator is zoned with the correct targets:
• Complete the configuration of the storage port to the host and connect to the switch.
• Create a zone configuration on the switch following the HP SAN Design Reference Guide, and enable the zone set configuration.
• Use the showhost command to verify that the host is seen on the storage node.
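As an illustration of zoning by HBA, the following is a minimal sketch for a Brocade fabric; the WWNs and the zone and configuration names are hypothetical (one host HBA port zoned with two HP 3PAR target ports):

zonecreate "esx1_hba1_3par", "10:00:00:00:c9:72:4a:b2; 20:31:00:02:ac:00:01:21; 21:31:00:02:ac:00:01:21"
cfgcreate "fabric_a_cfg", "esx1_hba1_3par"
cfgenable "fabric_a_cfg"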
HP 3PAR Coexistence
The HP 3PAR StoreServ Storage array can coexist with other HP array families.
For supported HP array combinations and rules, refer to the HP SAN Design Reference Guide at
the HP Storage Single Point of Connectivity Knowledge (HP SPOCK):
http://www.hp.com/storage/spock
Configuration Guidelines for FC Switch Vendors
Use the following FC switch vendor guidelines before configuring ports on fabrics to which the
HP 3PAR StoreServ Storage connects.
• Brocade switch ports that connect to a host HBA port or to an HP 3PAR StoreServ Storage port should be set to their default mode. On Brocade 3xxx switches running Brocade firmware 3.0.2 or later, verify that each switch port is in the correct mode by using the Brocade telnet interface and the portcfgshow command, as follows:
brocade2_1:admin> portcfgshow
Ports            0  1  2  3    4  5  6  7
-----------------+--+--+--+--+----+--+--+--
Speed            AN AN AN AN   AN AN AN AN
Trunk Port       ON ON ON ON   ON ON ON ON
Locked L_Port    .. .. .. ..   .. .. .. ..
Locked G_Port    .. .. .. ..   .. .. .. ..
Disabled E_Port  .. .. .. ..   .. .. .. ..
where AN:AutoNegotiate, ..:OFF, ??:INVALID.
The following fill-word modes are supported on a Brocade 8 Gb switch running FOS firmware 6.3.1a and later:
admin> portcfgfillword
Usage: portCfgFillWord PortNumber Mode [Passive]
Mode: 0/-idle-idle   - IDLE in Link Init, IDLE as fill word (default)
      1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
      2/-idle-arbff  - IDLE in Link Init, ARBFF as fill word (SW)
      3/-aa-then-ia  - If ARBFF/ARBFF failed, then do IDLE/ARBFF
HP recommends setting the fill word to mode 3 (aa-then-ia), which is the preferred mode, by using the portcfgfillword command. If the fill word is not correctly set, er_bad_os counters (invalid ordered set) will increase when using the portstatsshow command while connected to 8 Gb HBA ports, because those ports require the ARBFF-ARBFF fill word. Mode 3 also works correctly for lower-speed HBAs, such as 4 Gb and 2 Gb HBAs. For more information, refer to the Fabric OS Command Reference Manual and the FOS release notes at the Brocade website:
https://www.brocade.com
NOTE: This link will take you outside the Hewlett-Packard website. HP does not control and
is not responsible for information outside of HP.com.
In addition, some HP switches, such as the HP SN8000B 8-slot SAN backbone director switch,
the HP SN8000B 4-slot SAN director switch, the HP SN6000B 16 Gb FC switch, or the
HP SN3000B 16 Gb FC switch automatically select the proper fill-word mode 3 as the default
setting.
• McDATA switch or director ports should be in their default modes as G or GX-port (depending on the switch model), with their speed setting permitting them to autonegotiate.
• Cisco switch ports that connect to HP 3PAR StoreServ Storage ports or host HBA ports should be set to AdminMode = FX and AdminSpeed = auto, with the port speed set to autonegotiate (see the sketch following this list).
• QLogic switch ports should be set to port type GL-port and port speed auto-detect. QLogic switch ports that connect to the HP 3PAR StoreServ Storage should be set to I/O Stream Guard disable or auto, but never enable.
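For the Cisco setting above, a minimal sketch on a Cisco MDS switch follows; the interface number is hypothetical:

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport mode FX
switch(config-if)# switchport speed auto
switch(config-if)# no shutdown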
Target Port Limits and Specifications for FC
To avoid overwhelming a target port and to ensure continuous I/O operations, observe the following limitations on a target port:
• For information on host ports per target port and the maximum total host ports per array, refer to the HP 3PAR Support Matrix documentation at HP SPOCK:
  http://www.hp.com/storage/spock
• Maximum I/O queue depth on each HP 3PAR StoreServ Storage HBA model, as follows:
  ◦ Emulex 4 Gb: 959
  ◦ HP 3PAR HBA 4 Gb: 1638
  ◦ HP 3PAR HBA 8 Gb: 3276 (HP 3PAR StoreServ 10000 and HP 3PAR StoreServ 7000 systems only)
  ◦ HP 3PAR HBA 16 Gb: 3072 (HP 3PAR StoreServ 10000 and HP 3PAR StoreServ 7000 systems only)
• The I/O queues are shared among the connected host HBA ports on a first-come, first-served basis.
• When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue full response from the HP 3PAR StoreServ Storage port. This condition can result in erratic I/O performance on each host. If this condition occurs, each host should be throttled so that it cannot overrun the HP 3PAR StoreServ Storage port's queues when all hosts are delivering their maximum number of I/O requests (see the sketch at the end of this section).
NOTE:
◦ When host ports can access multiple targets on fabric zones, the target number assigned by the host driver for each discovered target can change when the host is booted and some targets are not present in the zone. This situation might change the device node access point for devices during a host restart. This issue can occur with any fabric-connected storage, and is not specific to the HP 3PAR StoreServ Storage.
◦ The maximum number of supported I/O paths is 16.
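As an example of such host-side throttling, the following is a minimal sketch for an ESXi 5.5 host with a QLogic FC HBA; the module and parameter names vary by HBA driver and ESXi version, the queue depth value of 64 is illustrative only, and a host reboot is required for the change to take effect:

# esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"
# esxcli system module parameters list -m qlnativefc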
HP 3PAR Priority Optimization for FC
The HP 3PAR Priority Optimization feature introduced in HP 3PAR OS 3.1.2 MU2 is a more efficient
and dynamic solution for managing server workloads and can be utilized as an alternative to
setting host I/O throttles. When using this feature, a storage administrator is able to share storage
resources more effectively by enforcing quality of service limits on the array. No special settings
are needed on the host side to obtain the benefit of HP 3PAR Priority Optimization although certain
per target or per adapter throttle settings might need to be adjusted in rare cases. For complete
details of how to use HP 3PAR Priority Optimization (Quality of Service) on HP 3PAR StoreServ
Storage arrays, refer to the HP 3PAR Priority Optimization technical whitepaper at HP.com:
http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA4-7604ENW
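As an illustration of this array-side alternative, the following is a minimal sketch of setting a QoS rule with the HP 3PAR CLI; the virtual volume set name and the IOPS and bandwidth limits are hypothetical:

# setqos -io 5000 -bw 200M vvset:esx_cluster1_vvs
# showqos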
HP 3PAR Persistent Ports for FC
The HP 3PAR Persistent Ports (or virtual ports) feature minimizes I/O disruption during an HP 3PAR
StoreServ Storage online upgrade or node-down event. Port shutdown or reset events do not trigger
this feature.
Each FC target storage array port has a partner array port automatically assigned by the system.
Partner ports are assigned across array node pairs.
HP 3PAR Persistent Ports allows an HP 3PAR StoreServ Storage FC port to assume the identity of
a failed port while retaining its own identity. Where a given physical port assumes the identity of
its partner port, the assumed port is designated as a persistent port. Array port failover and failback
with HP 3PAR Persistent Ports is transparent to most host-based multipathing software, which can
keep all of its I/O paths active.
NOTE: Use of HP 3PAR Persistent Ports technology does not negate the need for properly installed,
configured, and maintained host multipathing software.
For a more complete description of the HP 3PAR Persistent Ports feature, its operation, and a
complete list of required setup and connectivity guidelines, refer to the following documents:
• HP technical white paper HP 3PAR StoreServ Persistent Ports (HP document #4AA4-4545ENW) at the HP Support Center:
  http://h20195.www2.hp.com/v2/GetPDF.aspx%2F4AA4-4545ENW.pdf
• HP 3PAR Command Line Interface Administrator's Manual, “Using Persistent Ports for Nondisruptive Online Software Upgrades” chapter, at the HP Storage Information Library:
  http://www.hp.com/go/storage/docs
HP 3PAR Persistent Ports Setup and Connectivity Guidelines for FC
Starting with HP 3PAR OS 3.1.2, the HP 3PAR Persistent Ports feature is supported for FC target
ports.
Starting with HP 3PAR OS 3.1.3, the Persistent Port feature has additional functionality to minimize
I/O disruption during an array port loss_sync event triggered by a loss of array port connectivity
to the fabric.
Follow the specific cabling setup and connectivity guidelines so that HP 3PAR Persistent Ports
function properly:
• HP 3PAR StoreServ Storage FC partner ports must be connected to the same FC fabric, and preferably to different FC switches on the fabric.
• The FC fabric must support NPIV, and NPIV must be enabled (see the sketch following this list).
• Configure the host-facing HBAs for point-to-point fabric connection (there is no support for direct-connect “loops”).
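For the NPIV requirement above, a minimal sketch of verifying and enabling NPIV on a Brocade switch port follows; the port number is hypothetical, and the exact command syntax varies by FOS version:

switch:admin> portcfgshow 10     (verify that NPIV capability is ON)
switch:admin> portcfgnpivport --enable 10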
For information regarding the Persistent Ports feature for an FCoE initiator to FC target configuration
(FCoE to FC switched), see “Configuring the HP 3PAR StoreServ Storage for FC” (page 8).
3 Configuring the HP 3PAR StoreServ Storage for iSCSI
This chapter explains how to establish an iSCSI connection between the HP 3PAR StoreServ Storage
and the VMware ESX host. With specific CNA cards, a Software iSCSI or Hardware iSCSI initiator
can be used.
NOTE: HP recommends using the default values to configure the HP 3PAR StoreServ Storage, unless
otherwise specified in the following procedures.
Setting Up the Ports for an iSCSI Connection
To establish an iSCSI connection between the HP 3PAR StoreServ Storage and the ESX host, set
up each HP 3PAR StoreServ Storage iSCSI target port that will be connected to an iSCSI initiator
as described in the following procedure:
1. Set up a one-time configuration for the iSCSI ports on the HP 3PAR StoreServ storage by using
the HP 3PAR CLI controlport config iscsi <n:s:p> command.
First, use the showport and showport -i commands to check the current CNA
configuration. For example:
# showport
N:S:P Mode   State   -Node_WWN-- -Port_WWN/HW_Addr- Type  Protocol Label Partner FailoverState
0:1:1 target offline -           2C27D754521E       iscsi iSCSI    -     -       -
1:1:1 target offline -           2C27D754521A       iscsi iSCSI    -     -       -

# showport -i
N:S:P Brand  Model   Rev Firmware Serial         HWType
0:1:1 QLOGIC QLE8242 58  0.0.0.0  PCGLT0ARC1K3SK CNA
1:1:1 QLOGIC QLE8242 58  0.0.0.0  PCGLT0ARC1K3SK CNA
If State=config_wait or Firmware=0.0.0.0, use the HP 3PAR CLI controlport
config iscsi <n:s:p> command to configure, and then use the showport and
showport -i commands to verify the configuration setting.
For example:
# controlport config iscsi 0:1:1
# controlport config iscsi 1:1:1
# showport
N:S:P Mode   State -Node_WWN-- -Port_WWN/HW_Addr- Type  Protocol Label Partner FailoverState
0:1:1 target ready -           2C27D754521E       iscsi iSCSI    -     -       -
1:1:1 target ready -           2C27D754521A       iscsi iSCSI    -     -       -

# showport -i
N:S:P Brand  Model   Rev Firmware     Serial         HWType
0:1:1 QLOGIC QLE8242 58  4.8.76.48015 PCGLT0ARC1K3SK CNA
1:1:1 QLOGIC QLE8242 58  4.8.76.48015 PCGLT0ARC1K3SK CNA
2. Configure the networking of the iSCSI target ports by using the HP 3PAR CLI showport -iscsi command to check the current settings of the iSCSI ports:
# showport -iscsi
N:S:P State   IPAddr  Netmask Gateway TPGT MTU  Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:1:1 offline 0.0.0.0 0.0.0.0 0.0.0.0 11   1500 n/a  0    0.0.0.0   0.0.0.0  3205
1:1:1 offline 0.0.0.0 0.0.0.0 0.0.0.0 11   1500 n/a  0    0.0.0.0   0.0.0.0  3205
3. Use the HP 3PAR CLI controliscsiport addr <ipaddr> <netmask> [-f] <node:slot:port> command:
# controliscsiport addr 10.1.1.100 255.255.255.0 -f 0:1:1
# controliscsiport addr 10.1.1.102 255.255.255.0 -f 1:1:1
16
Configuring the HP 3PAR StoreServ Storage for iSCSI
NOTE:
• For information on host ports per target port and the maximum total host ports per array, refer to the HP 3PAR Support Matrix documentation at HP SPOCK:
  http://www.hp.com/storage/spock
• When the host initiator port and the HP 3PAR StoreServ Storage iSCSI target port are in different IP subnets, configure the gateway address for the HP 3PAR StoreServ Storage iSCSI port to avoid unexpected behavior, as follows:
  controliscsiport gw <gw_address> [-f] <node:slot:port>
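For example, a minimal sketch using a hypothetical gateway address:

# controliscsiport gw 10.1.1.1 -f 0:1:1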
4. Use the HP 3PAR CLI showport -iscsi command to verify the settings.
5. If not completed previously, configure the ESX host iSCSI initiator according to “Configuring the Host for an iSCSI Connection” (page 52). From the ESX host, use the vmkping command to verify communication with the HP 3PAR StoreServ Storage iSCSI target ports, as shown in the following example:
# vmkping 10.1.1.100
6. From the HP 3PAR StoreServ Storage array, use the controliscsiport ping <ipaddr> <node:slot:port> command to verify communication with the ESX host initiator ports, as shown in the following example:
# controliscsiport ping 10.1.1.10 0:1:1
Creating the iSCSI Host Definition on an HP 3PAR StoreServ Storage
Create a host definition that ties all of the connections from a single host to a host name. Before
creating a host definition, the HP 3PAR StoreServ Storage iSCSI target ports must be set up and
an iSCSI connection established. The iSCSI connection is established by following the procedure
in “Setting Up the Ports for an iSCSI Connection” (page 16) and the procedure in “Configuring
the Host for an iSCSI Connection” (page 52) through “Configuring the VMware Software iSCSI
Initiator” (page 59) (ESX host setup).
• ESX/ESXi uses the generic legacy host persona of 6 for HP 3PAR OS 3.1.1 or earlier.
• As of HP 3PAR OS 3.1.2, a second host persona, 11 (VMware), enables asymmetric logical unit access (ALUA).
  ◦ Host persona 11 (VMware) is recommended for new ESX/ESXi installations and is required for ESX/ESXi hosts configured as part of an HP 3PAR Peer Persistence configuration.
• ESX/ESXi hosts performing HP 3PAR single volume Peer Motion are required to use HP 3PAR OS 3.1.3 or later and host persona 11.
• For ESX/ESXi hosts with HP 3PAR Remote Copy, refer to the Remote Copy Users Guide for the appropriate host persona to use in specific Remote Copy configurations. See the HP Storage Information Library:
  www.hp.com/go/storage/docs
• Host persona 6 (Generic-legacy) will not be supported for any version of VMware ESX deployed with HP 3PAR OS versions after HP 3PAR OS 3.1.3. With HP 3PAR OS 3.1.2 or later, HP recommends migrating ESX configurations to host persona 11 (VMware).
The following example for creating a host definition uses a VMware iSCSI initiator
(iqn.1998-01.com.vmware:dl360g8-02-42b20fff) on an ESX host connecting through a
VLAN to a pair of HP 3PAR StoreServ Storage iSCSI ports.
1. Verify that the host iSCSI initiators are connected to the HP 3PAR StoreServ Storage iSCSI
target ports by using the HP 3PAR CLI showhost command.
# showhost
Id Name Persona ----------------WWN/iSCSI_Name---------------- Port
   --           iqn.1998-01.com.vmware:dl360g8-02-42b20fff     0:1:2
   --           iqn.1998-01.com.vmware:dl360g8-02-42b20fff     1:1:2
2. Create the appropriate host definition entry by using the HP 3PAR CLI createhost -iscsi -persona <hostpersona> <hostname> <iscsi_initiator_name> command.
# createhost -iscsi -persona 11 ESX2 iqn.1998-01.com.vmware:dl360g8-02-42b20fff
3. Verify that the host entry was created by using the HP 3PAR CLI showhost command.
# showhost
Id Name Persona ----------------WWN/iSCSI_Name---------------- Port
 1 ESX2 VMware  iqn.1998-01.com.vmware:dl360g8-02-42b20fff     0:1:2
                iqn.1998-01.com.vmware:dl360g8-02-42b20fff     1:1:2
4. To test the connection, create some temporary VVs and then export the VLUNs to the host (see the sketch after these steps).
NOTE: See “Allocating Storage for Access by the ESX Host” (page 72) for complete details on creating, exporting, and discovering storage for an iSCSI connection.
5. On the ESX iSCSI initiator host, perform a rescan and then verify that the VLUNs were discovered.
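For step 4, a minimal sketch using hypothetical CPG, volume, and LUN values (see the chapter referenced above for complete details):

# createvv -tpvv FC_r5_cpg testvv 10G
# createvlun testvv 1 ESX2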
NOTE:
As of the HP 3PAR OS 3.2.1 release, an in-band device known as a PE LUN (Protocol Endpoint
LUN) is exposed to all hosts connected using persona 11 (VMware). The PE LUN is presented as
part of the support for VMware vSphere 6.0 VVOLs. Depending on the host HBA/CNA driver in
use, the PE LUN might appear as LUN number 256 in the host device list. In addition, a warning
message related to the PE LUN might appear in the ESXi 6.0 vmkernel logs when using drivers
that do not support VVOLs. However, the error does not interfere with normal, non-VVOL operation.
Target Port Limits and Specifications
To avoid overwhelming a target port and to ensure continuous I/O operations, observe the following limitations on a target port:
• For information on host ports per target port and the maximum total host ports per array, refer to the HP 3PAR Support Matrix documentation at HP SPOCK:
  http://www.hp.com/storage/spock
• I/O queue depth on each HP 3PAR StoreServ Storage HBA model, as follows:
  ◦ QLogic 1G: 512
  ◦ QLogic 10G: 2048 (HP 3PAR StoreServ 10000 and HP 3PAR StoreServ 7000 systems only)
• I/O queues are shared among the connected host HBA ports on a first-come, first-served basis.
• When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue full response from the HP 3PAR StoreServ Storage port. This condition can result in erratic I/O performance on each host. If this condition occurs, each host should be throttled so that it cannot overrun the HP 3PAR StoreServ Storage port's queues when all hosts are delivering their maximum number of I/O requests.
HP 3PAR Priority Optimization for iSCSI
The HP 3PAR Priority Optimization feature introduced in HP 3PAR OS 3.1.2.MU2 is an efficient
and dynamic solution for managing server workloads and can be used as an alternative to setting
host I/O throttles. When using this feature, a storage administrator can share storage resources
effectively by enforcing quality of service limits on the array. No special settings are needed on
the host side to obtain the benefit of HP 3PAR Priority Optimization although certain per target or
per adapter throttle settings might need to be adjusted. For complete details on how to use HP 3PAR
Priority Optimization (Quality of Service) on HP 3PAR StoreServ Storage arrays, refer to the HP 3PAR
Priority Optimization technical whitepaper (HP document #4AA4-7604ENW) at HP.com:
http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA4-7604ENW
HP 3PAR Persistent Ports for iSCSI
Starting with HP 3PAR OS 3.1.3, the HP 3PAR Persistent Ports feature is supported for iSCSI. The
HP 3PAR Persistent Ports (or virtual ports) feature minimizes I/O disruption on an HP 3PAR StoreServ
Storage in response to the following events:
• HP 3PAR OS firmware upgrade
• Node maintenance that requires the node to be taken offline (e.g., adding a new HBA)
• HP 3PAR node failure
• Array host ports being taken offline administratively
Each iSCSI target storage array port has a partner array port automatically assigned by the system.
Partner ports are assigned across array node pairs.
HP 3PAR Persistent Ports allows an HP 3PAR StoreServ Storage iSCSI port to assume the identity
of a failed port while retaining its own identity. Where a given physical port assumes the identity
of its partner port, the assumed port is designated as a persistent port. Array port failover and
failback with HP 3PAR Persistent Ports is transparent to most host-based multipathing software,
which can keep all of its I/O paths active.
NOTE:
• Use of HP 3PAR Persistent Ports technology does not negate the need for properly installed, configured, and maintained host multipathing software.
• A key element for iSCSI connectivity is that partner ports must share the same IP network (see the sketch below).
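As a minimal sketch of that requirement, partner ports on a node pair (for example, 0:1:1 and 1:1:1) would be addressed in the same subnet; the addresses are hypothetical:

# controliscsiport addr 10.1.1.100 255.255.255.0 -f 0:1:1
# controliscsiport addr 10.1.1.101 255.255.255.0 -f 1:1:1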
For a more complete description of the HP 3PAR Persistent Ports feature, its operation, and a
complete list of required setup and connectivity guidelines, refer to the following documents:
• HP technical white paper HP 3PAR StoreServ Persistent Ports (HP document #4AA4-4545ENW) at the HP Support Center:
  http://h20195.www2.hp.com/v2/GetPDF.aspx%2F4AA4-4545ENW.pdf
• HP 3PAR Command Line Interface Administrator's Manual, “Using Persistent Ports for Nondisruptive Online Software Upgrades,” at the HP Storage Information Library:
  http://www.hp.com/go/storage/docs
4 Configuring the HP 3PAR StoreServ Storage for FCoE
Setting Up the FCoE Switch, FCoE Initiator, and FCoE target ports
Connect the ESX host FCoE initiator ports and the HP 3PAR StoreServ Storage FCoE target ports
to the FCoE switches.
NOTE: FCoE switch VLANs, routing setup, and configuration are beyond the scope of this document.
For instructions on setting up VLANs and routing, refer to the manufacturer's guide for the switch.
1. CNA ports on HP 3PAR StoreServ 10000 and HP 3PAR StoreServ 7000 arrays require a one-time configuration by using the controlport command. (HP 3PAR T-Class and F-Class arrays do not require this one-time setting.)
For example, on a new FCoE configuration:
# showport
N:S:P Mode      State       ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol
0:3:1 suspended config_wait -                -                  cna  -
0:3:2 suspended config_wait -                -                  cna  -
# showport -i
N:S:P Brand  Model   Rev Firmware Serial         HWType
0:3:1 QLOGIC QLE8242 58  0.0.0.0  PCGLT0ARC1K3U4 CNA
0:3:2 QLOGIC QLE8242 58  0.0.0.0  PCGLT0ARC1K3U4 CNA
2. If State=config_wait or Firmware=0.0.0.0, use the controlport config fcoe <n:s:p> command to configure. Use the showport and showport -i commands to check the configuration setting.
For example:
# controlport config fcoe 0:3:1
# controlport config fcoe 0:3:2
# showport 0:3:1 0:3:2
N:S:P Mode   State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner FailoverState
0:3:1 target ready 2FF70002AC000121 20310002AC000121   host FCoE     -     -       -
0:3:2 target ready 2FF70002AC000121 20320002AC000121   free FCoE     -     -       -
# showport -i 0:3:1 0:3:2
N:S:P Brand  Model   Rev Firmware Serial         HWType
0:3:1 QLOGIC QLE8242 58  4.11.122 PCGLT0ARC1K3U4 CNA
0:3:2 QLOGIC QLE8242 58  4.11.122 PCGLT0ARC1K3U4 CNA
3. Check the current settings of the FCoE ports by using the showport -fcoe command.
For example:
# showport -fcoe
N:S:P ENode_MAC_Address PFC_Mask
0:3:1 00-02-AC-07-01-21 0x08
0:3:2 00-02-AC-06-01-21 0x00
NOTE: To change the configuration from iSCSI to FCoE:
1. Use the showport command:
# showport
0:3:1 target ready   - 000E1E05BEE6 iscsi iSCSI - - -
0:3:2 target ready   - 000E1E05BEE2 iscsi iSCSI - - -
2. Turn off the iSCSI ports by using the controlport offline <node:slot:port> command:
# controlport offline 0:3:1
# controlport offline 0:3:2
# showport
0:3:1 target offline - 000E1E05BEE6 iscsi iSCSI - - -
0:3:2 target offline - 000E1E05BEE2 iscsi iSCSI - - -
3. Change the topology to FCoE by using the controlport config fcoe <node:slot:port> and controlport rst <node:slot:port> commands:
# controlport config fcoe 0:3:1
# controlport config fcoe 0:3:2
# controlport rst 0:3:1
# controlport rst 0:3:2
# showport
0:3:1 target ready 2FF70002AC000121 20310002AC000121 host FCoE - - -
0:3:2 target ready 2FF70002AC000121 20320002AC000121 free FCoE - - -
4. Check the current settings of the FCoE ports with the showport -fcoe command.
For example:
# showport -fcoe
N:S:P ENode_MAC_Address PFC_Mask
0:3:1 00-02-AC-07-01-21 0x08
0:3:2 00-02-AC-06-01-21 0x00
Creating the Host Definition
Before connecting the ESX host to the HP 3PAR StoreServ Storage, a host definition must be created.
The host definition must specify a valid host persona (host mode) for each HP 3PAR StoreServ
Storage port connected to a host HBA port either through a fabric or direct connection.
• ESX/ESXi uses the generic legacy host persona of 6 for HP 3PAR OS 3.1.1 or earlier.
• Beginning with HP 3PAR OS 3.1.2, a second host persona, 11 (VMware), is available. Host persona 11 (VMware) enables asymmetric logical unit access (ALUA).
  ◦ Host persona 11 (VMware) is recommended for new ESX/ESXi installations and is required for ESX/ESXi hosts configured as part of an HP 3PAR Peer Persistence configuration.
  ◦ Host persona 11 (VMware) is required for FCoE end-to-end (FCoE target) configurations.
• ESX/ESXi hosts performing HP 3PAR single volume Peer Motion are required to use HP 3PAR OS 3.1.3 or later and host persona 11.
• For the appropriate host persona to use for ESX/ESXi hosts with HP 3PAR Remote Copy configurations, refer to the HP 3PAR Remote Copy Software Users Guide at the HP Storage Information Library:
  www.hp.com/go/storage/docs
• Host persona 6 (Generic-legacy) will not be supported for any version of VMware ESX deployed with HP 3PAR OS versions after HP 3PAR OS 3.1.3. With HP 3PAR OS 3.1.2 or later, HP recommends migrating ESX configurations to host persona 11 (VMware).
NOTE: When changing an existing host persona from 6 to 11, a host reboot is required for the change to take effect. This is an offline process. The host persona change should coincide with changing the SATP rules on the host.
See the HP 3PAR StoreServ Storage VMware ESX Host Persona Migration Guide at the HP Storage Information Library for details about migrating the ESX host from host persona 6 (Generic-Legacy) to host persona 11 (VMware).
With both host persona 6 and persona 11, see the appropriate chapters in this guide for host FCoE setup considerations.
1. Display all available host personas:
# showhost -listpersona
2. To create host definitions, use the createhost command with the -persona option to specify the persona and the host name.
With HP 3PAR OS 3.1.1 or earlier:
# createhost -persona 6 ESXserver1 10000000C9724AB2 10000000C97244FE
With HP 3PAR OS 3.1.2 or later:
# createhost -persona 11 ESXserver1 10000000C9724AB2 10000000C97244FE
3. Verify that the host was created by using the showhost command.
With HP 3PAR OS 3.1.1 or earlier, using persona 6:
# showhost
Id Name       Persona        -WWN/iSCSI_Name-  Port
 0 ESXserver1 Generic-legacy 10000000C9724AB2  ---
                             10000000C97244FE  ---
With HP 3PAR OS 3.1.2 or later, using persona 11:
# showhost
Id Name       Persona -WWN/iSCSI_Name-  Port
 0 ESXserver2 VMware  100000051EC33E00  ---
                      100000051EC33E01  ---
Show the persona name and Id relationship by using the showhost -persona command.
# showhost -persona
Id Name       Persona_Id Persona_Name   Persona_Caps
 0 ESXserver1 6          Generic-legacy --
 1 Esxserver2 11         VMware         SubLun, ALUA
NOTE:
• If the host persona is not correctly set, use the sethost -persona <host number> <hostname> command to correct the issue, where host number is 6 (for HP 3PAR OS 3.1.1 or earlier) or 11 (recommended for HP 3PAR OS 3.1.2 or later).
• When changing the host persona to 11, reboot the ESX host. The host must be offline or not connected in order to change the host persona from 6 to 11 or from 11 to 6.
NOTE: As of the HP 3PAR OS 3.2.1 release, an in-band device known as a PE LUN (Protocol
Endpoint LUN) is exposed to all hosts connected using persona 11 (VMware). The PE LUN is
presented as part of the support for VMware vSphere 6.0 VVOLs. Depending on the host HBA/CNA
driver in use, the PE LUN might appear as LUN number 256 in the host device list. In addition,
a warning message related to the PE LUN might appear in the ESXi 6.0 vmkernel logs when using
drivers that do not support VVOLs. However, the error does not interfere with normal, non-VVOL
operation.
NOTE: For complete details about using the controlport, createhost, and showhost
commands, refer to the HP 3PAR OS Command Line Interface Reference or the HP 3PAR
Management Console User Guide at the HP Storage Information Library:
www.hp.com/go/storage/docs
Target Port Limits and Specifications
To avoid overwhelming a target port and to ensure continuous I/O operations, observe the following limitations on a target port:
• For information on host ports per target port and the maximum total host ports per array, refer to the HP 3PAR Support Matrix documentation at HP SPOCK:
  http://www.hp.com/storage/spock
• I/O queue depth on each HP 3PAR StoreServ Storage HBA model, as follows:
  ◦ QLogic CNA: 1748 (HP 3PAR StoreServ 10000 and HP 3PAR StoreServ 7000 systems only)
• I/O queues are shared among the connected host HBA ports on a first-come, first-served basis.
• When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue full response from the HP 3PAR StoreServ Storage port. This condition can result in erratic I/O performance on each host. If this condition occurs, each host should be throttled so that it cannot overrun the HP 3PAR StoreServ Storage port's queues when all hosts are delivering their maximum number of I/O requests.
NOTE: When host ports can access multiple targets on fabric zones, the target number assigned
by the host driver for each discovered target can change when the host is booted and some targets
are not present in the zone. This situation might change the device node access point for devices
during a host reboot. This issue can occur with any fabric-connected storage, and is not specific
to the HP 3PAR StoreServ Storage.
HP 3PAR Priority Optimization for FCoE
The HP 3PAR Priority Optimization feature introduced in HP 3PAR OS 3.1.2.MU2 is an efficient
and dynamic solution for managing server workloads and can be utilized as an alternative to
setting host I/O throttles. When using this feature, a storage administrator can share storage
resources effectively by enforcing quality-of-service limits on the array. No special settings are
needed on the host side to obtain the benefit of HP 3PAR Priority Optimization, although certain
per-target or per-adapter throttle settings might need to be adjusted. For complete details of how
to use HP 3PAR Priority Optimization (Quality of Service) on HP 3PAR StoreServ Storage arrays,
refer to the HP 3PAR Priority Optimization technical whitepaper (HP document #4AA4-7604ENW)
at the HP website:
http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA4-7604ENW
HP 3PAR Persistent Ports for FCoE
The HP 3PAR Persistent Ports (or virtual ports) feature minimizes I/O disruption during an HP 3PAR
StoreServ Storage online upgrade, node-down or cable pull event. Port shutdown or reset events
do not trigger this feature.
Each FCoE target storage array port has a partner array port automatically assigned by the system.
Partner ports are assigned across array node pairs.
HP 3PAR Persistent Ports allow an HP 3PAR StoreServ Storage FCoE port to assume the identity of
a failed port while retaining its own identity. Where a given physical port assumes the identity of
its partner port, the assumed port is designated as a persistent port. Array port failover and failback
with HP 3PAR Persistent Ports is transparent to most host-based multipathing software, which can
keep all of its I/O paths active.
NOTE: Use of HP 3PAR Persistent Ports technology does not negate the need for properly installed,
configured, and maintained host multipathing software.
Target Port Limits and Specifications
25
For a more complete description of the HP 3PAR Persistent Ports feature, its operation, and a
complete list of required setup and connectivity guidelines, refer to the following documents:
•   HP technical white paper HP 3PAR StoreServ Persistent Ports (HP document #4AA4-4545ENW)
    at the HP Support Center:
    http://h20195.www2.hp.com/v2/GetPDF.aspx%2F4AA4-4545ENW.pdf
•   HP 3PAR Command Line Interface Administrator's Manual, “Using Persistent Ports for
    Nondisruptive Online Software Upgrades” at the HP Storage Information Library:
    http://www.hp.com/go/storage/docs
HP 3PAR Persistent Ports Setup and Connectivity Guidelines for FCoE
Starting with HP 3PAR OS 3.1.3:
•   The HP 3PAR Persistent Ports feature is supported for FCoE target ports (FCoE end-to-end
    configurations).
•   The HP 3PAR Persistent Ports feature is enabled by default for HP 3PAR StoreServ Storage
    FCoE ports during node-down events.
Follow the specific cabling setup and connectivity guidelines for HP 3PAR Persistent Ports to function
properly. Key elements of the HP 3PAR Persistent Ports feature setup and connectivity are:
•   HP 3PAR StoreServ Storage FCoE partner ports must be connected to the same FCoE network.
•   The same CNA port on host-facing HBAs in the nodes of a node pair must be connected to
    the same FCoE network, and preferably to different FCoE switches on the network.
•   The FCoE network must support NPIV, and NPIV must be enabled.
5 Configuring the Host for an FC Connection
This chapter describes the procedures and considerations required to set up an ESX host to
communicate with an HP 3PAR StoreServ Storage over an FC connection.
Installing the HBA and Drivers
Before setting up the ESX host, make sure the host adapters are installed and operating properly.
If necessary, see the documentation provided by the HBA vendor for instructions.
Drivers for VMware supported HBAs are included as part of the ESX OS installation package
supplied by VMware. Updates and patches for the HBA drivers are available through VMware
support.
For Brocade FC HBAs, the default Path TOV (Time-out Value) parameter is set to 30 seconds. HP
recommends changing this value to 14 seconds with the VMware Native Multipathing Plug-in (NMP).
To change the value of this parameter, use the Brocade BCU command-line utility. For more
information, see the VMware website:
http://www.vmware.com
This link will take you outside of the Hewlett-Packard website. HP does not control and is not
responsible for information outside of HP.com
To download Brocade HBA drivers, firmware, and the BCU utility, see the QLogic website:
http://www.qlogic.com
Brocade-specific drivers are named with a BR prefix, for example, BR-xxx HBA models.
To display the list of adapter ports:
# esxcli brocade bcu --command="port --list"
--------------------------------------------------------------------------
Port#  FN  Type  PWWN/MAC                 FC Addr/  Media  State   Spd  Eth dev
--------------------------------------------------------------------------
1/0        fc    10:00:00:05:1e:dc:f3:2f  091e00    sw     Linkup  8G
       0   fc    10:00:00:05:1e:dc:f3:2f  091e00    sw     Linkup  8G
1/1        fc    10:00:00:05:1e:dc:f3:30  673000    sw     Linkup  8G
       1   fc    10:00:00:05:1e:dc:f3:30  673000    sw     Linkup  8G
To query a port number from the previous output:
# esxcli brocade bcu --command="vhba --query 1/0"
PCI Function Index   : 1/0/0
Firmware Ver         : 3.0.0.0
Port type            : FC
Bandwidth            : 8 Gbps
IOC state            : operational
PWWN                 : 10:00:00:05:1e:dc:f3:2f
NWWN                 : 20:00:00:05:1e:dc:f3:2f
Path TOV             : 30 seconds
Portlog              : Enabled
IO Profile           : Off
Interrupt coalescing : on
Interrupt delay      : 0 us
Interrupt latency    : 0 us
To change the Path TOV value, follow the example below, repeating the command for each port.
The command can also be included in the ESX host startup so that it runs automatically; a sketch
follows the example.
# esxcli brocade bcu --command="fcpim --pathtov 1/0 14"
path timeout is set to 14
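To have the change reapplied at each boot, a hedged sketch, assuming an ESXi host where local
startup commands are kept in /etc/rc.local.d/local.sh (earlier releases use /etc/rc.local); add one
line per adapter port before the final exit statement:
esxcli brocade bcu --command="fcpim --pathtov 1/0 14"
esxcli brocade bcu --command="fcpim --pathtov 1/1 14"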
To query a port number after a change was made, follow the example below:
# esxcli brocade bcu --command="vhba --query 1/0"
PCI Function Index   : 1/0/0
Firmware Ver         : 3.0.0.0
Port type            : FC
Bandwidth            : 8 Gbps
IOC state            : operational
PWWN                 : 10:00:00:05:1e:dc:f3:2f
NWWN                 : 20:00:00:05:1e:dc:f3:2f
Path TOV             : 14 seconds
Portlog              : Enabled
IO Profile           : Off
Interrupt coalescing : on
Interrupt delay      : 0 us
Interrupt latency    : 0 us
Installing Virtual Machine Guest OS
The VMware ESX host documentation provides a list of recommended VM (virtual machine) GOSs
(guest operating systems) and covers their installation and setup as VMs. See the VMware ESX host
documentation for information about setting up the VM configuration.
CAUTION:
•   In VMware KB 51306, VMware identifies a problem with RHEL 5 (GA), RHEL 4 U4, RHEL 4 U3,
    SLES 10 (GA), and SLES 9 SP3 GOSs. Their file systems might become read-only in the event
    of busy I/O retry or path failover of the ESX host’s SAN or iSCSI storage. For more information,
    refer to KB 51306 at the VMware Knowledge Base website:
    http://kb.vmware.com
    HP does not recommend and does not support RHEL 5 (GA), RHEL 4 U4, RHEL 4 U3, SLES 10
    (GA), and SLES 9 SP3 as GOSs for VMs on VMware ESX hosts attached to HP 3PAR StoreServ
    Storage systems.
•   HP does not recommend and does not support the use of the NPIV (N-Port ID Virtualization)
    feature, introduced with VMware ESX 3.5 - 5.0, that allows virtual ports/WWNs to be assigned
    to individual VMs.
NOTE:
•   VMware and HP recommend the LSI Logic adapter emulation for Windows 2003 Servers. The
    LSI Logic adapter is also the default option for Windows 2003 when creating a new VM. HP
    testing has noted a high incidence of Windows 2003 VM failures during an ESX multipath
    failover/failback event when the BUS Logic adapter is used with Windows 2003 VMs.
•   HP testing found that the SCSI timeout value for VM GOSs should be 60 seconds to successfully
    manage path failovers at the ESX layer. Most GOSs supported by VMware have a default
    SCSI timeout value of 60 seconds, but this value should be checked and then verified for each
    GOS installation. In particular, Red Hat 4.x GOSs should have the SCSI timeout value changed
    from the default 30 seconds to 60 seconds.
    Use the following command to set the SCSI timeout on all SCSI devices presented to a Red
    Hat 4.x VM to 60 seconds:
    find /sys -name timeout | grep "host.*target.*timeout" | xargs -n 1 echo "echo 60 >" | sh
    Add this line to the /etc/rc.local file of the Red Hat 4.x guest OS to maintain the timeout
    change across VM reboots.
Example of a modified /etc/rc.local file:
# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
find /sys -name timeout | grep "host.*target.*timeout" | xargs -n 1 echo "echo 60 >"|sh
touch /var/lock/subsys/local
On Windows Server, check the TimeOutValue by using the following command:
C:\> reg query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\disk /v TimeOutValue

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\disk
    TimeOutValue    REG_DWORD    0x3c
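If the value must be changed, the reg add command can set it. A hedged sketch (0x3c hexadecimal
is 60 seconds; run from an elevated command prompt):
C:\> reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\disk /v TimeOutValue /t REG_DWORD /d 60 /f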
Multipath Failover Considerations and I/O Load Balancing
NOTE: This section about multipathing and configuring the Round Robin policy applies to all
connectivity types: FC, FCoE, and iSCSI.
VMware ESX 3.0 - 3.5 includes failover with multipath support to maintain a constant connection
between the ESX host and the HP 3PAR StoreServ Storage array. VMware provides this multipath
support with a choice of two path policies, Fixed and MRU. Starting with ESX 4.0, a third path
policy choice, Round Robin, is available. The path policies can be modified on a
per-HP 3PAR StoreServ Storage-volume (LUN) basis by right-clicking the device listing and selecting
the Properties function from the VI/vSphere client menu. A pop-up window allows for managing
paths, so that the properties of the paths to the selected volume can be modified. When using this
control, select the path policy, and specify which path is the active preferred path to a volume on
the storage array, or which path is the standby path within the Fixed path policy scheme.
Additionally, paths can be disabled to prevent any traffic over a specific path to a volume on the
storage array.
The VI/vSphere client GUI allows for settings to be changed only on a volume-by-volume
(LUN-by-LUN) basis. The GUI is appropriate and preferred for use in managing I/O paths within
the Fixed path policy scheme. For procedures on implementing and configuring the round-robin
path policy on ESX/ESXi 4.0 and later with an HP 3PAR StoreServ Storage, see “Configuring
Round Robin Multipathing on ESX 4.x or later for FC” (page 31).
•   A path policy of Round Robin is the preferred multipath implementation for ESX/ESXi 4.0
    and later. For procedures on implementing and configuring the round-robin path policy on
    ESX/ESXi 4.0 and later with an HP 3PAR StoreServ Storage, see “Configuring Round Robin
    Multipathing on ESX 4.x or later for FC” (page 31).
•   A path policy of Fixed, with the preferred/active paths manually set to balance I/O load evenly
    across all paths, is the preferred multipath implementation for ESX 3.0 - 3.5.
    ◦   In the event the active path fails or is disabled, either at the fabric switch or on the storage
        array, all ESX host I/O to the storage array continues by failing over to a standby path.
        When the ESX host detects that the preferred path is recovered or is enabled, I/O from
        the ESX host resumes on the preferred path, as long as a preferred path policy path was
        set.
    ◦   I/O from the ESX host should be manually distributed or balanced when two or more
        paths exist to more than one HP 3PAR StoreServ Storage volume on the storage array.
        Manually balancing the loads across available paths might improve I/O performance.
        This path load balancing to the storage array is dependent on the number of I/Os targeted
        for specific volumes on the storage array. Tuning I/Os to specific volumes on specific
        paths to the storage array varies from configuration to configuration and is totally
        dependent on the workload from the ESX host and the VMs to the devices on the storage
        array.
“vSphere Client” (page 31) shows a LUN with five I/O paths in a FIXED I/O policy scheme. The
path marked Active (I/O) * is the preferred path, and is the path to which all I/O is currently
assigned for the given LUN. The other paths are listed as active, but are in standby mode. The
paths in active standby mode will not be used for I/O traffic for this LUN unless the preferred path
fails.
Figure 1 vSphere Client
•   A path policy of MRU (most recently used) does not maintain or reinstate balancing of I/O
    load after a failover/failback multipath event. This could leave I/O in an unplanned, unbalanced
    state, which might yield significant I/O performance issues. HP does not recommend the
    implementation of an MRU path policy.
NOTE: If I/O is active to a LUN and an attempt is made to modify the path policy, a failure can
occur, indicating:
"error during the configuration of the host: sysinfoException;
Status=Busy: Message=Unable to Set".
If this problem occurs while attempting to change the path policy, reduce the I/Os to that LUN and
then try making the change again.
For additional information on this topic, refer to the chapter on "Multipathing" in the SAN
Configuration Guide at the VMware website:
http://www.vmware.com
This link will take you outside of the Hewlett-Packard website. HP does not control and is not
responsible for information outside of HP.com
Configuring Round Robin Multipathing on ESX 4.x or later for FC
With ESX version 4.0 and later, VMware supports a round-robin I/O path policy for active/active
storage arrays such as HP 3PAR StoreServ Storage. A round-robin I/O path policy is the preferred
configuration for ESX 4.0 and later; however, this path policy is not enabled by default for HP 3PAR
devices.
CAUTION: When running Windows Server 2012 or Windows Server 2008 VM Cluster with
RDM shared LUNs on pre-ESX 5.5 hosts, individually change these specific RDM LUNs from Round
Robin policy to FIXED or MRU path policy.
“LUN Set to Round Robin” (page 32) shows output from an FC configuration: a LUN with its path
policy set to Round Robin (VMware).
NOTE: Each path status is shown as Active (I/O). The path status for an iSCSI configuration
would be the same.
Figure 2 LUN Set to Round Robin
Managing a round robin I/O path policy scheme through the VI/vSphere client GUI for a large
network can be cumbersome and challenging to maintain because the policy must be specified
for each LUN individually and then updated when new devices are added. Alternatively, VMware
provides a way for the server administrator to use ESX CLI, vCLI, or vSphere Management Assistant
(vMA) commands to manage I/O path policy for storage devices on a per-host basis by using
parameters defined in a set of native ESX/ESXi storage plug-ins.
The VMware native multipathing has two important plug-ins:
•   The Storage Array Type Plug-in (SATP), which handles path failover and monitors path health.
•   The Path Selection Plug-in (PSP), which chooses the best path and routes I/O requests for a
    specific logical device; that is, the PSP defines the path policy.
The correct ESX/ESXi host Storage Array Type Plug-in (SATP) to be used is related to the HP 3PAR
array host persona:
•   When HP 3PAR host persona 6/Generic-legacy is the host persona in use with an ESX/ESXi
    host, use the SATP VMW_SATP_DEFAULT_AA.
•   When HP 3PAR host persona 11/VMware is the host persona in use with an ESX/ESXi host,
    use the SATP VMW_SATP_ALUA.
For ESX/ESXi 4.0 versions (4.0 GA through all 4.0 updates), the default SATP rules must be edited
in order to automatically achieve a round robin I/O path policy for storage devices.
Beginning with ESX/ESXi 4.1, additional custom SATP rules can be created that target SATP/PSP
to specific vendors while leaving the default SATP rules unmodified. The custom SATP can be used
to automatically achieve a round robin I/O path policy for storage devices.
Configuring ESX/ESXi Multipathing for Round Robin via SATP PSP
As part of the PSP Round Robin configuration, the value of IOPS can be specified. IOPS is the
number of I/O operations sent down each path before the PSP switches to the next path within
the Round Robin path selection scheme. The default IOPS value is 1000. HP recommends IOPS=1
as an initial value and starting point for further optimization of I/O throughput with PSP Round
Robin. With the exception of ESX/ESXi 4.0 versions, it is preferable to set the IOPS value within
a SATP custom rule.
CAUTION: VMware specifically warns not to directly edit the esx.conf file.
NOTE:
•   The setting of IOPS=1 is a new recommended initial value, as documented in the HP technical
    white paper HP 3PAR StoreServ Storage and VMware vSphere 5 Best Practice (HP document
    #4AA4-3286ENW). Change this setting to suit the demands of various workloads.
•   SATP rule changes cannot be made through the vSphere GUI.
•   SATP rule changes made through ESX CLI commands populate the esx.conf file.
•   A custom SATP rule is an additional SATP rule that modifies or redefines parameters of an
    existing SATP default rule, defines the targeted devices affected, and has a unique custom
    rule name.
•   A custom SATP rule cannot be changed or edited. A custom SATP rule must be removed and
    a new one created with the changes added in order to change the parameters of the custom
    rule.
•   SATP and PSP creation, changes, additions, or removals take effect for any new devices
    presented afterward, without the need for a server reboot.
•   The host must be rebooted for SATP rule creation, changes, additions, or removals to take
    effect on existing, previously presented devices.
•   Path policy changes made on an individual device basis, whether via vCenter or an ESX CLI
    command, supersede the PSP path policy defined in a SATP rule, and such path policy changes
    to individual devices are maintained through host reboots.
•   Valid PSPs for SATP VMW_SATP_DEFAULT_AA rules are:
    ◦   VMW_PSP_RR
    ◦   VMW_PSP_FIXED
    ◦   VMW_PSP_MRU
    VMW_PSP_RR is preferred.
•   Valid PSPs for SATP VMW_SATP_ALUA rules are:
    ◦   VMW_PSP_RR
    ◦   VMW_PSP_MRU
    VMW_PSP_FIXED is not a valid PSP to be defined within an ALUA SATP rule. VMW_PSP_RR
    is preferred.
Changing from HP 3PAR host persona 6/Generic-legacy to host persona 11/VMware or vice
versa:
•   To change from persona 6 to 11, or from persona 11 to 6, the HP 3PAR OS requires either
    taking the affected array ports offline, or disconnecting the host for which the persona is being
    changed.
•   For existing devices targeted in a custom SATP rule to be claimed by the rule, the ESX/ESXi
    OS requires a host reboot.
HP recommends using the following procedure to change from persona 6 to 11, or from persona
11 to 6:
1. Stop all host I/O and apply the necessary SATP changes (create custom SATP rule and/or
modify default SATP rule PSP defaults) to the ESX/ESXi host.
2. Shut down the host.
3. Change the host persona on the array.
4. Boot the host.
5. Verify that the target devices have been claimed properly by the SATP rule.
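For example, on ESXi 5.x or ESXi 6.0, step 5 can be spot-checked on a single device (the device
ID below is hypothetical):
# esxcli storage nmp device list -d naa.50002ac0005800ac
In the output, Storage Array Type should report VMW_SATP_ALUA for persona 11 (or
VMW_SATP_DEFAULT_AA for persona 6), and Path Selection Policy should report VMW_PSP_RR.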
ESX/ESXi 4.0 GA - 4.0 MUx
NOTE:
•   Although ESX 4.0 GA - 4.0 MUx supports custom SATP rules, the -P option for setting the PSP
    (path policy) within the custom rule is not supported. The PSP must be defined within the default
    SATP rules.
•   ESX 4.0 GA - 4.0 MUx has a known issue where no attempt should be made to change iops
    from its default value of 1000 (iops = 1000). If the iops value is changed, an invalid value for
    iops is retrieved at the next host reboot, and iops behavior becomes unpredictable.
•   If the custom rule is not created for ALUA, 3PARdata VVs are claimed by the
    VMW_SATP_DEFAULT_AA SATP rule even though the array persona is 11/VMware
    (ALUA-compliant array port presentation).
HP 3PAR SATP rules for use with persona 6/Generic-legacy (Active-Active array port presentation)
A “custom” SATP rule is not used. The PSP (path policy) is changed on the default active-active
SATP rule. The default multipath policy for VMW_SATP_DEFAULT_AA is VMW_PSP_FIXED (Fixed
path). The default is changed to the preferred PSP (path policy) of round-robin.
# esxcli nmp satp setdefaultpsp -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR
HP 3PAR SATP rules for use with persona 11/VMware (ALUA compliant array port presentation)
# esxcli nmp satp setdefaultpsp -s VMW_SATP_ALUA -P VMW_PSP_RR
# esxcli nmp satp addrule -s "VMW_SATP_ALUA" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
To remove the above ALUA custom SATP rule:
# esxcli nmp satp deleterule -s "VMW_SATP_ALUA" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
CAUTION: The procedure for changing the default SATP rules to use the round robin I/O
multipathing policy is intended to apply only to VMware hosts using HP 3PAR StoreServ Storage
LUNs. If the host shares storage from other vendors, before making any I/O policy changes,
consider the effect that changing the default rules will have on the storage environment as a whole.
A change of the default PSP for a given SATP affects all storage devices (FC, FCoE, iSCSI) that
use the same default SATP rule. If a host shares multiple storage vendors in addition to an HP 3PAR
StoreServ Storage, and the other connected storage does not support active/active round robin
multipathing using the same SATP rule, such as VMW_SATP_DEFAULT_AA or VMW_SATP_ALUA,
then its multipathing is also affected.
If the other storage uses a different SATP of its own, then the VMW_SATP_DEFAULT_AA SATP
mapping can safely be changed to VMW_PSP_RR to take advantage of round-robin multipathing.
Check the SATP-PSP relationship of a given device for ESX 4.0 by using the esxcli nmp device
list or esxcli nmp device list -d <device id> command.
For example, if the HP 3PAR StoreServ Storage and storage X are connected to the same host
using VMW_SATP_DEFAULT_AA, and if storage X does not have its own SATP, then it might cause
a problem if storage X does not support round-robin multipathing. If the HP 3PAR StoreServ Storage
and storage Y are sharing the same host, and if storage Y has its own SATP VMW_SATP_Y and
HP uses VMW_SATP_DEFAULT_AA, then there will be no conflict, and the change can be made.
ESX/ESXi 4.1 GA - 4.1 MUx
HP 3PAR custom SATP rule for use with persona 6/Generic-legacy (Active-Active array port
presentation):
# esxcli nmp satp addrule -s "VMW_SATP_DEFAULT_AA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_off" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE Rule"
To remove the above active-active custom SATP rule:
# esxcli nmp satp deleterule -s "VMW_SATP_DEFAULT_AA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_off" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE Rule"
HP 3PAR custom SATP rule for use with persona 11/VMware (ALUA compliant array port
presentation):
# esxcli nmp satp addrule -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
To remove the above ALUA custom SATP rule:
# esxcli nmp satp deleterule -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
ESXi 5.x and ESXi 6.0
HP 3PAR custom SATP rule for use with persona 6/Generic-legacy (Active-Active array port
presentation):
# esxcli storage nmp satp rule add -s "VMW_SATP_DEFAULT_AA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_off" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE Rule"
To remove the above Active-Active custom SATP rule:
# esxcli storage nmp satp rule remove -s "VMW_SATP_DEFAULT_AA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_off" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE Rule"
HP 3PAR custom SATP rule for use with persona 11/VMware (ALUA compliant array port
presentation):
# esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
To remove the above ALUA custom SATP rule:
# esxcli storage nmp satp rule remove -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
SATP Info Commands
Default SATP Rules and Their Current Default PSP
List the default SATP rules and their current default PSP (path policy) by using the commands in
the following examples:
ESXi 5.x and ESXi 6.0 example
# esxcli storage nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA        VMW_PSP_MRU    Supports non-specific arrays that use the ALUA protocol
VMW_SATP_MSA         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AP  VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_SVC         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EQL         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_INV         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EVA         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_ALUA_CX     VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_SYMM        VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_CX          VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_LSI         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AA  VMW_PSP_FIXED  Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices
ESX/ESXi 4.x example
# esxcli nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA_CX     VMW_PSP_FIXED  Supports EMC CX that use the ALUA protocol
VMW_SATP_SVC         VMW_PSP_FIXED  Supports IBM SVC
VMW_SATP_MSA         VMW_PSP_MRU    Supports HP MSA
VMW_SATP_EQL         VMW_PSP_FIXED  Supports EqualLogic arrays
VMW_SATP_INV         VMW_PSP_FIXED  Supports EMC Invista
VMW_SATP_SYMM        VMW_PSP_FIXED  Supports EMC Symmetrix
VMW_SATP_LSI         VMW_PSP_MRU    Supports LSI and other arrays compatible with the SIS 6.10 in non-AVT mode
VMW_SATP_EVA         VMW_PSP_FIXED  Supports HP EVA
VMW_SATP_DEFAULT_AP  VMW_PSP_FIXED  Supports non-specific active/passive arrays
VMW_SATP_CX          VMW_PSP_MRU    Supports EMC CX that do not use the ALUA protocol
VMW_SATP_ALUA        VMW_PSP_RR     Supports non-specific arrays that use the ALUA protocol
VMW_SATP_DEFAULT_AA  VMW_PSP_RR     Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices
SATP Custom Rules and Associated Defined Parameters
To list SATP custom rules and associated defined parameters, use the commands in the following
examples:
ESXi 5.x and ESXi 6.0 example
For persona 11:
# esxcli storage nmp satp rule list | grep -i 3par
VMW_SATP_ALUA        3PARdata  VV  user  tpgs_on   VMW_PSP_RR  iops=1  HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule
For persona 6:
# esxcli storage nmp satp rule list | grep -i 3par
VMW_SATP_DEFAULT_AA  3PARdata  VV  user  tpgs_off  VMW_PSP_RR  iops=1  HP 3PAR Custom iSCSI/FC/FCoE Rule
ESX/ESXi 4.x example
# esxcli nmp satp listrules | grep -i 3par
VMW_SATP_ALUA  3PARdata  VV  tpgs_on  HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule
Show Device Information
To show device information, use the commands in the following examples:
ESXi 5.x and ESXi 6.0 example
# esxcli storage nmp device list
naa.50002ac0000a0124
Device Display Name: 3PARdata iSCSI Disk (naa.50002ac0000a0124)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on;explicit_support=on;
explicit_allow=on;alua_followover=on;{TPG_id=256,TPG_state=AO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1,bytes=10485760,useANO=0;lastPathIndex=1:
NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba3:C0:T1:L73, vmhba2:C0:T0:L73, vmhba2:C0:T1:L73, vmhba3:C0:T0:L73
ESX/ESXi 4.x example
The command is the same with ESX/ESXi 4.x. The output shown is for ESX 4.0:
# esxcli nmp device list
naa.50002ac000b40125
Device Display Name: 3PARdata Fibre Channel Disk (naa.50002ac000b40125)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config:
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=3:
NumIOsPending=0,numBytesPending=0}
Working Paths: vmhba5:C0:T0:L25, vmhba5:C0:T1:L25, vmhba4:C0:T0:L25, vmhba4:C0:T1:L25
With ESX 4.1, the iops will be 1 for the device list output shown above.
Script Alternative for Path Policy Changes on Storage Devices without a Host Reboot
If a reboot of the ESX/ESXi host to effect path policy changes through SATP on a large number of
existing, previously presented storage devices is not desirable, the path policy changes can be
made on a batch of LUNs by scripting ESX CLI commands.
Create a script that uses the following commands (a consolidated sketch appears after the steps):
1. List all the HP 3PAR devices present on the host:
ESXi 5.x and ESXi 6.0
# esxcli storage nmp device list | grep -i naa.50002ac | grep -v Device
naa.50002ac0005800ac
naa.50002ac003b800ac
naa.50002ac0039300ac
ESX/ESXi 4.x
# esxcli nmp device list | grep -i naa.50002ac | grep -v Device
naa.50002ac0005800ac
naa.50002ac003b800ac
naa.50002ac0039300ac
2.  Change the I/O path policy to round robin for each device identified in the previous output:
ESXi 5.x and ESXi 6.0
# esxcli storage nmp device set -d naa.50002ac0005800ac -P VMW_PSP_RR
ESX/ESXi 4.x
# esxcli nmp device setpolicy -d naa.50002ac0005800ac -P VMW_PSP_RR
3.  Verify that the change has been made:
ESXi 5.x and ESXi 6.0
# esxcli storage nmp device list -d naa.50002ac0005800ac
ESX/ESXi 4.x
# esxcli nmp device list -d naa.50002ac0005800ac
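The three steps can be combined into one loop. A minimal sketch for ESXi 5.x or ESXi 6.0,
assuming all HP 3PAR device IDs begin with naa.50002ac:
for dev in $(esxcli storage nmp device list | grep -io 'naa.50002ac[0-9a-f]*' | sort -u); do
    # set round robin on each HP 3PAR device, then confirm the policy took effect
    esxcli storage nmp device set -d $dev -P VMW_PSP_RR
    esxcli storage nmp device list -d $dev | grep "Path Selection Policy:"
done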
NOTE: If I/O is active to a LUN and an attempt is made to modify the path policy, a failure can
occur:
error during the configuration of the host: sysinfoException;
Status=Busy: Message=Unable to Set
If this problem occurs during an attempt to change the path policy, reduce the I/Os to that LUN
and then make the change.
Performance Considerations for Multiple Host Configurations
The information in this section should be considered when using multiple ESX hosts (or other hosts
in conjunction with ESX hosts) that are connected in a fan-in configuration to a pair of HP 3PAR
StoreServ Storage ports.
ESX/ESXi Handling SCSI Queue Full and Busy Messages from the HP 3PAR StoreServ
Storage Array
VMware ESX Release ESX 4.x, ESXi 5.0 and 5.0 Updates, ESXi 5.5 and 5.5 Updates, and ESXi
6.0
With ESX 4.0 GA, ESX 4.1 (with all ESX 4.x updates), ESXi 5.0 (with all updates), ESXi 5.5 (with
all updates), and ESXi 6.0, an algorithm was added that allows ESX to respond to Queue Full and
Busy SCSI messages from the storage array. The Queue Full or Busy response by ESX is to slow
I/O for a period of time, which helps to prevent overdriving the HP 3PAR StoreServ Storage ports.
This feature should be enabled as part of an ESX - HP 3PAR StoreServ Storage deployment.
The Queue Full and Busy LUN-throttling algorithm is disabled by default. To enable the algorithm,
complete the following procedure:
1. From the VI/vSphere client, select the ESX host. In the Configuration tab, select Advanced
Settings for software, and then select Disk.
2. Scroll to find and then adjust the following HP-recommended settings:
QFullSampleSize = 32
QFullThreshold = 4
With the algorithm enabled, no additional I/O throttling scheme is necessary. For additional
information regarding the ESX Queue Full/Busy response algorithm, refer to KB 1008113 at the
VMware Knowledge Base website:
http://www.vmware.com
This link will take you outside of the Hewlett-Packard website. HP does not control and is not
responsible for information outside of HP.com
VMware ESXi Release 5.1
The Advanced Settings parameters QFullSampleSize and QFullThreshold are required to enable
the adaptive queue-depth algorithm.
In ESXi releases earlier than ESXi 5.1, these parameters are set globally; that is, they are set on
all devices seen by the ESXi host. In VMware ESXi 5.1, however, these parameters are set in a
more granular fashion, on a per-device basis.
VMware patch ESXi510-201212001, dated 12/20/2012 (KB 2035775), restores the ability to
set the values of these parameters globally. See the VMware Knowledge Base website:
http://www.vmware.com
To set the parameters globally, install the patch and follow the instructions in “VMware ESX Release
ESX 4.x, ESXi 5.0 and 5.0 Updates, ESXi 5.5 and 5.5 Updates, and ESXi 6.0” (page 39).
Also, use the esxcli command to set these values on a per-device basis. If both options (the
esxcli and the advanced parameters) are used, the per-device values take precedence.
Set QFullSampleSize and QFullThreshold on a per-device level by using the esxcli command:
# esxcli storage core device set --device device_name -q Q -s S
The settings do not require a reboot to take effect and are persistent across reboots.
Retrieve the values for a device by using the corresponding list command:
# esxcli storage core device list
The command supports an optional --device parameter:
# esxcli storage core device list --device device_name
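For example, a hedged sketch applying the HP-recommended values to a single device; the device
name is hypothetical, and the long-form option names are assumed to correspond to -s and -q:
# esxcli storage core device set --device naa.50002ac0005800ac --queue-full-sample-size 32 --queue-full-threshold 4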
ESX/ESXi 4.1, ESXi 5.x, and ESXi 6.0 Additional Feature Considerations
New features related to storage I/O control and integration with storage arrays were introduced
starting with ESX/ESXi 4.1. HP recommends using SIOC and vStorage APIs for Array Integration
(VAAI) with ESX/ESXi 4.1, ESXi 5.x, and ESXi 6.0 HP 3PAR StoreServ Storage configurations.
NOTE: This section about VAAI and new features applies to all connectivity types: FC, FCoE,
and iSCSI.
Storage I/O Control
The SIOC feature allows a new level of monitoring and control of I/O from individual VMs to an
HP 3PAR StoreServ Storage array at the datastore level and across ESX/ESXi hosts in a VMware
cluster.
For more information about the SIOC feature and considerations for its deployment, refer to the
VMware technical white paper, Storage I/O Control Technical Overview and Considerations for
Deployment at the VMware website:
http://www.vmware.com/files/pdf/techpaper/VMW-vSphere41-SIOC.pdf
VAAI (vStorage APIs for Array Integration)
In partnership with VMware, HP has developed an ESX/ESXi 4.1 plug-in that enables a new set
of SCSI commands to be used by ESX/ESXi 4.1 in conjunction with HP 3PAR StoreServ Storage.
VMware refers to this newly incorporated set of SCSI commands as the "primitives".
ESX extensions that make use of these primitives are collectively referred to as vStorage APIs for
Array Integration (VAAI). The VMware primitives allow an ESX/ESXi host to send VM operations
to storage hardware at a meta level instead of at the traditional data level. This reduces operational
latency and traffic on the FC fabric/iSCSI network. Some of these primitives allow the storage
hardware to participate in block allocation and de-allocation functions for VMs. These primitives
are also known as hardware “offloads”.
A brief description of the "primitives":
•   Full Copy (XCOPY) enables the storage array to make full copies of data within the array
    without having the ESX host read and write the data. This offloads some data copy processes
    to the storage array.
•   Block Zeroing (WRITE-SAME) enables the storage array to zero out a large number of blocks
    within the array without having the ESX host write the zeros as data, and helps expedite the
    provisioning of VMs. This offloads some of the file space zeroing functions to the storage
    array.
•   Hardware Assisted Locking (ATS) provides an alternative to SCSI reservations as a means to
    protect the metadata for VMFS cluster file systems and helps improve the scalability of large
    ESX host farms sharing a datastore.
HP 3PAR VAAI Plug-in 1.1.1 for ESX 4.1
Support for VMware VAAI functionality is available by installing the HP 3PAR VAAI Plug-in 1.1.1
on ESX/ESXi 4.1 with HP 3PAR OS version 2.3.1 MU2 (minimum).
For more information about VMware VAAI, the HP 3PAR VAAI Plug-in for ESX/ESXi 4.1 installation
package, and the HP 3PAR VAAI Plug-in 1.1.1 for VMware vSphere 4.1 User's Guide, see the
HP Software Depot:
https://h20392.www2.hp.com/portal/swdepot/
NOTE: The HP 3PAR VAAI plug-in 1.1.1 has a limitation in which VAAI Full Copy (XCOPY) does
not function, and data copy processes are not offloaded to the array, as of HP 3PAR OS 3.1.2
when using 16-byte volume WWNs.
HP 3PAR VAAI Plug-in 2.2.0 for ESXi 5.x and ESXi 6.0
Install the HP 3PAR VAAI Plug-in 2.2.0 on ESXi 5.x if the HP 3PAR StoreServ Storage is running
HP 3PAR OS 2.3.1 MU2 or a later version of HP 3PAR OS 2.3.1, to take advantage of the storage
array primitives or hardware offloads (mainly XCOPY, WRITE-SAME, and ATS).
Do not install the HP 3PAR VAAI Plug-in 2.2.0 on an ESXi 5.x host if it is connected to an HP 3PAR
StoreServ Storage running HP 3PAR OS 3.1.1 or later. The VAAI primitives are handled by the
default T10 VMware plug-in and do not require the HP 3PAR VAAI plug-in.
The following table summarizes the HP 3PAR VAAI Plug-in installation requirements.
Table 2 HP 3PAR VAAI Plug-in Installation Requirements

Features/Primitives: ATS, XCOPY, WRITE_SAME

VMware ESX          HP 3PAR OS 2.3.1 MU2 or later        HP 3PAR OS 3.1.1 or later
ESX 4.1             Requires HP 3PAR VAAI 1.1.1 plug-in  Requires HP 3PAR VAAI 1.1.1 plug-in
ESXi 5.x, ESXi 6.0  Requires HP 3PAR VAAI 2.2.0 plug-in  Supported. Does not require HP 3PAR VAAI
                                                         plug-in (supported by Standard T10 ESX
                                                         plug-in).
NOTE: The HP 3PAR VAAI Plug-in 2.2.0 is required if the ESXi 5.x or ESXi 6.0 server is connected
to two or more arrays that are running a mix of HP 3PAR OS 2.3.1.x and HP 3PAR OS 3.1.x. For
LUNs on HP 3PAR OS 3.1.x, the default VMware T10 plug-in will be effective, and for storage
LUNs on HP 3PAR OS 2.3.x, the HP 3PAR VAAI Plug-in 2.2.0 will be effective.
For more information, see the HP 3PAR VAAI Plug-in Software for VMware vSphere User's Guide
(HP part number QL226-96072).
To download the HP 3PAR VAAI Plug-in software, see the HP Software Depot:
https://h20392.www2.hp.com/portal/swdepot/
UNMAP (Space Reclaim) Storage Hardware Support for ESXi 5.x or ESXi 6.0
HP 3PAR OS 3.1.1 or later supports the UNMAP storage primitive for space reclaim, which is
supported starting with ESXi 5.0 Update 1 (and in ESXi 6.0) with the default VMware T10 VAAI
plug-in. Installation of the HP 3PAR VAAI plug-in is not required.
NOTE: To avoid possible issues described in VMware KB 2007427 and KB 2014849, automatic
VAAI Thin Provisioning Block Space Reclamation (UNMAP) should be disabled on ESXi 5.0 GA.
Refer to the KB articles at the VMware Knowledge Base website:
http://kb.vmware.com
ESXi 5.0 Update 1 and later includes an updated version of vmkfstools that provides an option
[-y] for sending the UNMAP command regardless of the ESXi host’s global setting.
Use the [-y] option:
# cd /vmfs/volumes/<volume-name>
# vmkfstools -y <percentage of deleted blocks to reclaim>
NOTE: The vmkfstools -y option does not work in ESXi 5.0 GA.
ESXi 5.5 introduces a new command in the esxcli namespace that allows deleted blocks to be
reclaimed on thin-provisioned LUNs that support the VAAI UNMAP primitive.
# esxcli storage vmfs unmap -l <datastore name>
The vmkfstools -y command is deprecated in ESXi 5.5. See VMware KB 2057513 for more
information.
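For example, assuming a datastore named Datastore01 (the name is hypothetical):
# esxcli storage vmfs unmap -l Datastore01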
UNMAP also frees up space if files are deleted on UNMAP-supported VMs such as Red Hat
Enterprise 6, as long as the VM resides on an RDM LUN on a TPVV storage volume, for example,
for RDM volumes on a Red Hat VM using the ext4 file system and mounted using the discard option:
# mount -t ext4 -o discard /dev/sda2 /mnt
This causes the Red Hat 6 VM to issue the UNMAP command and release space back to the array
for any deletions in that ext4 file system.
Out-of-Space Condition for ESX 4.1, ESXi 5.x, or ESXi 6.0
The out-of-space condition, also known as "VM STUN", is implemented in ESX 4.1, ESXi 5.x, and
ESXi 6.0, and is supported as of HP 3PAR OS 2.3.1 MU2. This OS feature does not depend on
the VAAI plug-in and applies to TPVV volume types.
When the TPVV cannot allocate additional storage space or cannot grow because the storage
system is out of disk space, it sends a check condition with a "DATA PROTECT" sense key error
and the additional sense "SPACE ALLOCATION FAILED WRITE PROTECT". As a result, ESX pauses
the VM and displays an 'Out of Space' message to the user in the Summary tab of the VM on
vSphere, with the options of Retry or Cancel. In the paused-VM condition, read requests and rewrites
to the allocated LUN blocks are allowed, but writing to new space is not allowed. Ping, telnet,
and ssh requests to the VM are not honored. The storage administrator must add additional disk
space or use Storage vMotion to migrate other, unaffected VMs from the LUN. After additional
disk space is added, use the Retry option on the warning message to return the VM to the read-write
state. If the Cancel option is selected, the VM reboots.
In the following example, an HP 3PAR StoreServ Storage TPVV was created with a warning limit
of 60%, as shown in the showvv -alert command.
# showvv -alert
                                                             -----------Alerts------------
                         --(MB)-- -Snp(%VSize)- -Usr(%VSize)- Adm ----Snp----- ----Usr-----
 Id Name     Prov Type  VSize Used Wrn Lim  Used Wrn Lim  Fail Fail Wrn Lim  Fail Wrn Lim
612 nospace1 tpvv base 102400  0.0  --  --  61.1  60  --    --   --  --  --    --  --   y
When the warning limit is reached, the HP 3PAR StoreServ Storage sends a soft threshold error
asc/q: 0x38/0x7 and ESX continues to write.
InServ debug log:
1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check
condition) Host:sqa-dl380g5-14-esx5 (WWN 2101001B32A4BA98) LUN:22 LUN
WWN:50002ac00264011c VV:0 CDB:280000AB082000000800 (Read10) Skey:0x06 (Unit attention)
asc/q:0x38/07 (Thin provisioning soft threshold reached) VVstat:0x00 (TE_PASS
-- Success) after 0.000s (Abort source unknown) toterr:74882, lunerr:2
# showalert
Id:           193
State:        New
Message Code: 0x0270001
Time:         2011-07-13 16:12:15 PDT
Severity:     Informational
Type:         TP VV allocation size warning
Message:      Thin provisioned VV nospace1 has reached allocation
              warning of 60G (60% of 100G)
When the HP 3PAR StoreServ Storage runs out of disk space, a hard permanent error asc/q:
0x27/0x7 is sent. Use showspace, showvv -r, and showalert to see the warning and space
usage. ESX responds by stunning the VM.
InServ debug log:
1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check
condition) Host:sqa-dl380g5-14-esx5 (WWN 2101001B32A4BA98) LUN:22 LUN
WWN:50002ac00264011c VV:612 CDB:2A00005D6CC800040000 (Write10) Skey:0x07 (Data protect)
asc/q:0x27/07 (Space allocation failed write protect) VVstat:0x41 (VV_ADM_NO_R5
-- No space left on snap data volume) after 0.302s (Abort source unknown)
toterr:74886, lunerr:3
The following figure shows the VM warning displayed on the vSphere with the Retry and Cancel
options.
Figure 3 VM Message — Retry and Cancel options
Additional New Primitives Beginning with ESXi 5.x
HP 3PAR OS 3.1.1 or later supports an additional new primitive, called TP LUN Reporting, in
which ESXi 5.x is notified that a given LUN is a thin-provisioned LUN by the enabled TPE bit of
the READ CAPACITY (16) response; this enables the host to use features such as sending the
UNMAP command to these LUNs. The TPE bit is enabled for TPVVs and for R/W snapshots of a
TPVV base.
The Quota Exceeded Behavior present in ESXi 5.x is accomplished through the "Thin Provisioning
Soft Threshold Reached" check condition, providing alerts and warnings.
VAAI and New Feature Support Table
The following table summarizes the VAAI plug-in requirements and the support for the new
primitives.
Table 3 VAAI and New Feature Support

Features/Primitives      ESX 4.1,              ESX 4.1,              ESXi 5.x, ESXi 6.0,   ESXi 5.x, ESXi 6.0,
                         HP 3PAR OS 2.3.1      HP 3PAR OS 3.1.1      HP 3PAR OS 2.3.1      HP 3PAR OS 3.1.1
                         MU2 or later          or later              MU2 or later          or later

ATS, XCOPY,              Needs HP 3PAR VAAI    Needs HP 3PAR VAAI    Needs HP 3PAR VAAI    Supported. Does not
WRITE_SAME               1.1.1 plug-in         1.1.1 plug-in         2.2.0 plug-in         require HP 3PAR VAAI
                                                                                           plug-in (supported by
                                                                                           Standard T10 ESX
                                                                                           plug-in).

UNMAP                    Not supported by      Not supported by      Not supported by      Supported. Does not
                         ESX 4.1 or later      ESX 4.1 or later      HP 3PAR OS 2.3.1      require HP 3PAR VAAI
                                                                                           plug-in.

Out-of-space condition   Supported. HP 3PAR    Supported. HP 3PAR    Supported. HP 3PAR    Supported. Does not
(also known as "VM       VAAI 1.1.1 plug-in    VAAI 1.1.1 plug-in    VAAI 2.2.0 plug-in    require HP 3PAR VAAI
STUN")                   recommended, but      recommended, but      recommended, but      plug-in.
                         not required for      not required for      not required for
                         this specific         this specific         this specific
                         feature.              feature.              feature.

Quota Exceeded           Not supported by      Not supported by      Not supported by      Supported.
Behavior                 ESX 4.1 or later      ESX 4.1 or later      HP 3PAR OS 2.3.1

TP LUN Reporting         Not supported by      Not supported by      Not supported by      Supported.
                         ESX 4.1 or later      ESX 4.1 or later      HP 3PAR OS 2.3.1
VAAI Plug-in Verification
With ESXi 5.x or ESXi 6.0:
NOTE: VAAI Plug-in 2.2.0 must be installed if an ESXi 5.x or ESXi 6.0 server is connected to
two or more arrays that are running a mix of HP 3PAR OS 2.3.x and HP 3PAR OS 3.1.x.
For LUNs:
•   HP 3PAR OS 3.1.x: The default VMware T10 plug-in is effective.
•   HP 3PAR OS 2.3.x: The HP 3PAR VAAI Plug-in 2.2.0 is effective.
Verify that VAAI 2.2.0 is installed on ESXi 5.x or ESXi 6.0 and enabled for the 3PARdata vendor
(only content applicable to the 3PARdata vendor is shown in the output):
•   Show that the plug-in has been installed:
    # esxcli storage core plugin list
    Plugin name        Plugin class
    -----------------  ------------
    3PAR_vaaip_InServ  VAAI
    The vSphere client shows that Hardware Acceleration is supported on the HP 3PAR devices.
•   Show the version of the plug-in:
    # esxcli software vib list | grep -i 3par
    Name                          Version  Vendor  Acceptance Level  Install Date
    ----------------------------  -------  ------  ----------------  ------------
    vmware-esx-3PAR_vaaip_InServ  2.2-2    3PAR    VMwareCertified
•   Show that the plug-in is active in the claim rule, which runs for each of the devices discovered.
•   Show that VAAI is enabled for the 3PARdata device:
# esxcfg-scsidevs -l
naa.50002ac0002200d7
Device Type: Direct-Access
Size: 51200 MB
Display Name: 3PARdata Fibre Channel Disk (naa.50002ac0002200d7)
Multipath Plugin: NMP
Console Device: /vmfs/devices/disks/naa.50002ac0002200d7
Devfs Path: /vmfs/devices/disks/naa.50002ac0002200d7
Vendor: 3PARdata Model: VV
Revis: 0000
SCSI Level: 5 Is Pseudo: false Status: on
Is RDM Capable: true Is Removable: false
Is Local: false Is SSD: false
Other Names:
vml.020022000050002ac0002200d7565620202020
VAAI Status: supported
ESXi 5.x or ESXi 6.0 with HP 3PAR OS 3.1.1 uses the native T10 plug-in and should not show
any HP 3PAR plug-in:
# esxcli storage core plugin list
Plugin name  Plugin class
-----------  ------------
NMP          MP
With ESX 4.1:
Verify that the VAAI plug-in is installed and enabled on devices:
•   Show the version of the installed VAAI plug-in:
    # esxupdate --vib-view query | grep 3par
    cross_3par-vaaip-inserv_410.1.1-230815    installed
•   Show that the claim rule is in effect for the HP 3PAR devices discovered:
•   Show that VAAI is supported on the device:
# esxcfg-scsidevs -l
naa.50002ac003da00eb
Device Type: Direct-Access
Size: 512000 MB
Display Name: 3PARdata iSCSI Disk (naa.50002ac003da00eb)
Multipath Plugin: NMP
Console Device: /dev/sdx
Devfs Path: /vmfs/devices/disks/naa.50002ac003da00eb
Vendor: 3PARdata Model: VV
Revis: 3110
SCSI Level: 5 Is Pseudo: false Status: on
Is RDM Capable: true Is Removable: false
Is Local: false
Other Names:
vml.020001000050002ac003da00eb565620202020
VAAI Status: supported
To take advantage of the storage primitives on ESX 4.1, ESXi 5.x, or ESXi 6.0, Hardware
Acceleration must be enabled on the HP 3PAR LUNs.
Use the esxcfg-advcfg command to check that the options are set to 1 (enabled):
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
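Each command prints the current value of its option; a value of 1 means the primitive is enabled.
If an option reports 0, it can be enabled with the -s flag, as in this hedged sketch:
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -s 1 /VMFS3/HardwareAcceleratedLocking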
Figure 4 Hardware Acceleration for HP 3PAR devices
6 Configuring the Host as an FCoE Initiator Connecting to
an FC target or an FCoE Target
All contents of the FC sections of this guide also apply to FCoE connectivity. See the following
sections:
•   “Installing Virtual Machine Guest OS” (page 28)
•   “Multipath Failover Considerations and I/O Load Balancing” (page 30)
•   “Performance Considerations for Multiple Host Configurations” (page 39)
•   “ESX/ESXi 4.1, ESXi 5.x, and ESXi 6.0 Additional Feature Considerations” (page 40)
This chapter describes the procedures for setting up an ESX software FCoE configuration with an
HP 3PAR StoreServ Storage. These instructions cover both end-to-end FCoE and FCoE initiator to
FC target. The instructions in this chapter should be used in conjunction with the VMware vSphere
Storage guide available at the VMware website:
http://www.vmware.com
Configuring the FCoE Switch
Connect the ESX (FCoE Initiator) host ports and HP 3PAR StoreServ Storage server (FCoE target)
ports to an FCoE-enabled switch.
NOTE: FCoE switch VLANs and routing setup and configuration is beyond the scope of this
document. For instructions on setting up VLANs and routing, refer to the manufacturer's guide for
the switch.
Using system BIOS to configure FCoE
1.  Enter the setup menu. The combination of keys to press to enter setup might be different
    depending on the host being configured. The example below is for an HP ProLiant:
    Figure 5 Configuring FCoE
2.  In the System Options pane, select NIC Personality Options.
    Figure 6 NIC Personality Options
3.  In the PCI Slot 2 pane, select FCoE for both Port 1 and Port 2.
    Figure 7 Configuring the PCI Slots
4.  PCI Slot 2 Port 1 and Port 2 now display FCoE.
    Figure 8 PCI Slot 1 and Slot 2 Configured for FCoE
5.  Save the changes and exit the BIOS.
    Figure 9 Exiting the BIOS Utility
Configuring an HP 3PAR StoreServ Storage Port for an FCoE Host
Connection
Setting up an FCoE initiator to an FC target does not require any special configuration on the
HP 3PAR StoreServ Storage. The initiator coming from the host adapters through the FCoE Forwarder
switch is treated as just another FC device connecting to the HP 3PAR StoreServ Storage ports. The
same guidelines described in “Configuring the HP 3PAR StoreServ Storage for FC” (page 8) and
“Configuring the Host for an FC Connection” (page 27) must be followed when a server with a
host CNA card configured with FCoE is connected to HP 3PAR StoreServ Storage ports.
When setting up an FCoE initiator to an FCoE target, the StoreServ ports must be configured for
FCoE. See “Configuring the HP 3PAR StoreServ Storage for FCoE” (page 21) for notes on how to
configure FCoE ports on the StoreServ.
NOTE: For specific configurations that support FCoE CNAs and forwarder switches, refer to the
appropriate HP 3PAR OS release version at HP SPOCK:
http://www.hp.com/storage/spock
Configuring Initiator FCoE to FC Target
If an FCoE to FC configuration is being set up, the figure below summarizes the general procedure
to configure a CNA and FCoE Forwarder Switch.
Figure 10 Initiator FCoE to FC Target
NOTE: For complete and detailed instructions for configuring a server with a given Converged
Network Adapter, see the CNA manufacturer documentation.
The FCoE switch or FCoE forwarder must be able to convert FCoE traffic to FC and to trunk this
traffic to the fabric where the HP 3PAR StoreServ Storage target ports are connected.
1.  Install the CNA card in the server like any other PCIe card - see the server vendor documentation
    for specific instructions.
2.  Install the CNA card driver according to the CNA card installation instructions (this assumes
    the server is already running a supported OS).
3.  Physically connect the server CNA card ports to the FCoE Forwarder switch and then configure
    the FCoE Forwarder switch ports - see the switch vendor documentation for specific instructions.
4.  Configure the HP 3PAR StoreServ Storage ports according to the guidelines in section
    “Configuring the HP 3PAR StoreServ Storage Port Running HP 3PAR OS 3.2.x or 3.1.x ”
    (page 8), and then connect the HP 3PAR StoreServ Storage ports either to the FCoE Forwarder
    FC switch ports or to the FC fabric connected to the FCoE Forwarder.
5.  Create FC zones for the host initiator’s ports and the HP 3PAR StoreServ Storage target port.
    When the initiators are logged in to the HP 3PAR StoreServ Storage target ports, create a
    host definition and provision storage to the host.
NOTE: It is not possible to connect a server with a CNA directly to the HP 3PAR StoreServ
Storage. An FCoE Forwarder switch must be used.
Configuring Initiator FCoE to FCoE Target
The following figure summarizes the general procedure for an FCoE to FCoE configuration. When
setting up FCoE initiator to FCoE target, the HP 3PAR StoreServ Storage ports must be configured
for FCoE. See “Configuring the HP 3PAR StoreServ Storage for FCoE” (page 21), for notes on how
to configure FCoE ports on the StoreServ.
Figure 11 Initiator FCoE to Target FCoE
1.  Install the CNA card in the server like any other PCIe card - see the server vendor documentation
    for specific instructions.
2.  Install the CNA card driver following the CNA card installation instructions (this assumes the
    server is already running a supported OS).
3.  Physically connect the server CNA card ports to the FCoE fabric.
4.  Configure the HP 3PAR StoreServ Storage ports according to the guidelines in “Configuring
    the HP 3PAR StoreServ Storage for FCoE” (page 21) and connect the HP 3PAR StoreServ
    Storage ports to the FCoE fabric.
5.  Create VLANs for the host initiator’s ports and the HP 3PAR StoreServ Storage target port.
    Once the initiators have logged in to the HP 3PAR StoreServ Storage target ports, create a
    host definition and provision storage to the host.
NOTE: FCoE switch VLANs and routing setup and configuration are beyond the scope of this
document. For instructions on setting up VLANs and routing, refer to the manufacturer's guide for
the switch.
7 Configuring the Host for an iSCSI Connection
This chapter describes the procedures for setting up an ESX iSCSI configuration with an HP 3PAR
StoreServ Storage. Use the instructions in this chapter in conjunction with the VMware vSphere
Storage guide available at the VMware website:
http://www.vmware.com
NOTE: This link will take you outside of the Hewlett-Packard website. HP does not control and
is not responsible for information outside of HP.com
Setting Up the Switch and iSCSI Initiator
Set up and configure the host Network Interface Card (NIC) or converged network adapter (CNA)
as an initiator port that will be used by the iSCSI initiator software to connect to the HP 3PAR
StoreServ Storage iSCSI target ports.
•   Connect the ESX host iSCSI initiator ports and the HP 3PAR StoreServ Storage iSCSI target
    ports to the switches.
•   Configure the iSCSI initiator ports and HP 3PAR StoreServ Storage iSCSI target ports.
•   When using VLANs, verify that the target ports and initiator ports are in the same VLANs.
•   Verify the connectivity between the iSCSI initiators and the HP 3PAR StoreServ Storage iSCSI
    targets by using the vmkping command on the ESX host, as shown in the sketch below.
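For example, a quick connectivity check from the ESX host; the target IP address and VMkernel
interface name are hypothetical, and the -I option for selecting a specific VMkernel interface is
available in ESXi 5.1 and later:
# vmkping 192.168.1.100
# vmkping -I vmk1 192.168.1.100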
NOTE:
•   Setting up the switch for VLAN and routing configuration is beyond the scope of this document.
    For instructions about setting up VLANs and routing, refer to the manufacturer's guide for the
    switch.
•   For instructions on configuring the jumbo frame setting on the switch, refer to the manufacturer's
    guide for the switch.
Installing Software iSCSI on VMware ESX
Software iSCSI drivers for VMware supported NICs are included as part of the ESX OS installation
package supplied by VMware. Get updates and patches for the software iSCSI drivers from
VMware support.
An example of an ESX iSCSI Software initiator configuration with two servers is shown below:
Figure 12 iSCSI Software Initiator Configuration
NOTE: When multiple teamed NICs are configured, all HP 3PAR StoreServ Storage iSCSI ports
and ESX iSCSI NICs must be in the same VLAN of the IP switch.
Installing Virtual Machine Guest Operating System
The VMware ESX documentation lists recommended VM GOS and their installation and setup as
VMs. For information about setting up the VM configuration, refer to the VMware ESX documentation
at the VMware website:
http://kb.vmware.com
This link will take you outside of the Hewlett-Packard website. HP does not control and is not
responsible for information outside of HP.com.
CAUTION: In VMware KB 51306, VMware identifies a problem with RHEL 5 (GA), RHEL 4 U4,
RHEL 4 U3, SLES 10 (GA), and SLES 9 SP3 guest operating systems. Their file systems might
become read-only in the event of busy I/O retry or path failover of the ESX host’s SAN or iSCSI
storage. Refer to KB 51306 at the VMware Knowledge Base website:
http://kb.vmware.com
HP does not recommend and does not support the use of RHEL 5 (GA), RHEL 4 U4, RHEL 4 U3,
SLES 10 (GA), and SLES 9 SP3 as guest operating systems for VMs on VMware ESX hosts attached
to HP 3PAR StoreServ Storage systems.
NOTE:
• VMware and HP recommend the LSI Logic adapter emulation for Windows 2003 Servers. The
LSI Logic adapter is also the default option for Windows 2003 when creating a new VM. HP
testing has noted a high incidence of Windows 2003 VM failures during an ESX multipath
failover/failback event when the BUS Logic adapter is used with Windows 2003 VMs.
• HP testing indicates that the SCSI timeout value for VM GOSs should be 60 seconds to
successfully ride out path failovers at the ESX layer. Most guest operating systems supported
by VMware have a default SCSI timeout value of 60 seconds, but this value should be checked
and verified for each GOS installation. Change the SCSI timeout value in Red Hat 4.x guest
operating systems from the default value of 30 seconds to 60 seconds.
The following command can be used to set the SCSI timeout to 60 seconds on all SCSI devices
presented to a Red Hat 4.x VM:
find /sys -name timeout | grep "host.*target.*timeout" | xargs -n 1 echo "echo 60 >"|sh
Add this line in the /etc/rc.local file of the Red Hat 4.x guest OS for the timeout change
to be maintained with a VM reboot.
Example of a modified /etc/rc.local file:
# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
find /sys -name timeout | grep "host.*target.*timeout" | xargs -n 1 echo "echo 60 >"|sh
touch /var/lock/subsys/local
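To verify that the change took effect after a reboot, a quick check such as the following sketch
can be used inside the guest; it prints the current timeout value for each SCSI device, each of
which should read 60:
# find /sys -name timeout | grep "host.*target.*timeout" | xargs cat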
Creating a VMkernel Port
The following procedure describes how to set up a VMkernel port. For detailed instructions, refer
to the VMware vSphere Installation and Setup guide at the VMware website:
http://www.vmware.com
1. Log in to the VI/vSphere client and select the server from the inventory panel. The hardware
configuration page for this server appears.
2. Click the Configuration tab and click Networking.
3. Click the Add Networking link. The Add Network wizard appears as shown below:
Figure 13 Add Network Wizard
4. Select VMkernel and click Next to connect the VMkernel, which runs services for iSCSI storage,
to the physical network. The Network Access page appears.
5. Select Create a Virtual Switch and select the NICs that will be used for iSCSI. (In this example,
two NICs are selected to configure active/active teamed NICs that will connect to the HP
3PAR storage array.)
6. Click Next.
7. In ESX 4.x, configure Active/Active NIC teaming by bringing up all of the NIC adapters being
used as "Active Adapters" in the vSwitch Properties. For each ESX host, use the vSphere client
Configuration tab→Networking→Properties, click the Edit button, and then highlight
and use the Move Up button to bring each of the NIC adapters for NIC teaming from
the Standby Adapters or Unused Adapters section to the Active Adapters section.
The dialog box shows that this was completed for NIC adapters vmnic1 and vmnic2.
Figure 14 vSwitch1 Properties
8. Click OK to complete.
NOTE: HP recommends an Active/Active NIC Teaming configuration for best failover
performance with ESX 4.x. Each host Software iSCSI or Dependent Hardware iSCSI port
should be configured as a separate VMkernel port and IP address for ESXi 5.x or ESXi 6.0.
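As a sketch of the ESXi 5.x/6.0 per-port configuration, each VMkernel port can be bound to the
software iSCSI adapter with esxcli; the adapter name vmhba33 and the interfaces vmk1 and vmk2
are placeholders for your environment:
# esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
# esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2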
The VMkernel – Network Access pane appears.
Figure 15 VMkernel — Network Access
9. Click Next, enter a network label, and then click Next.
10. Specify the IP settings for the iSCSI network and then click Next.
11. Review the information and then click Finish.
Configuring a Service Console Connection for the iSCSI Storage
The following procedure describes how to configure the Service Console connection for the iSCSI
storage.
1. From the Configuration tab, click Networking, and then click Properties for the vSwitch associated
with the VMkernel port that was just created for the iSCSI network. The vSwitch1 Properties
dialog box appears as shown:
Figure 16 iSCSI Network vSwitch1 Properties
2. Click Add.
3. Select the radio button for VMkernel to add support for host management traffic:
Figure 17 Service Console – Network Access
4. Click Next.
5. Enter the network label and IP address for the service console used to communicate with the
iSCSI software initiator. The IP address must be in the same subnet as the iSCSI network.
6. Click Next. A window appears showing the changes or additions that were made:
Figure 18 Ready to Complete
7. Click Finish.
8. Close all dialog boxes associated with the network configuration.
9. Check the Configuration display:
Figure 19 VMware ESX Server—Configuration Tab
The new network configuration is displayed with the addition of the iSCSI network.
Ping the previously defined HP 3PAR StoreServ Storage ports from the COS.
Configuring the VMware Software iSCSI Initiator
The following procedure shows how to configure the VMware iSCSI initiator.
1. With the Configuration tab selected, select Security Profile from the Software menu box and
then Firewall Properties as shown:
Figure 20 Firewall Properties
2. Open the ports that will be used for the iSCSI connection, and then click OK.
3. The iSCSI software initiator must be enabled before the ESX host can use it. Click the Storage
Adapters option in the Hardware menu box:
Figure 21 Storage Adapters
4. In the ESX host, select the Configuration tab. Then select iSCSI Software Adapter under Storage
Adapters.
5. Click the Properties tab.
6. Select the General tab.
7. Click Configure...
8. Select the Enabled check box for the status.
9. Click OK.
10. Click the Dynamic Discovery tab. Dynamic discovery enables the Send Targets discovery
method, where the initiator sends the request to discover and log into the targets:
Figure 22 Send Targets
11. Click Add....
12. Enter the IP address of one of the previously defined HP 3PAR StoreServ Storage iSCSI ports.
13. Click OK.
14. Add additional HP 3PAR StoreServ Storage iSCSI ports if they exist and have been defined
on the system:
Figure 23 Adding Additional iSCSI Ports
15. When all of the HP 3PAR StoreServ Storage iSCSI ports have been added to the Dynamic
Discovery tab, close this window.
16. Reboot the ESX host. If VMs are active, either shut them down or suspend them.
The ESX host and HP 3PAR StoreServ Storage should now be configured for use. When using
the showhost command on the HP 3PAR StoreServ Storage, the new iSCSI connections
should now show as present.
# showhost
Id Name ----------------WWN/iSCSI_Name---------------- Port
--      iqn.1998-01.com.vmware:hpdl380-01-11a38a59     0:1:2
--      iqn.1998-01.com.vmware:hpdl380-01-11a38a59     1:1:2
As new LUNs are exported to the ESX iSCSI host, a re-scan must be performed on the iSCSI
software adapter, in the same way that FC LUNs are re-scanned:
Figure 24 Re-scanning for New Storage Devices
Click Rescan, select the Scan for New Storage Devices check box, then click OK to re-scan for
new LUNs exported to the ESX iSCSI host.
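On ESXi 5.x or ESXi 6.0, the same re-scan can also be triggered from the command line; a sketch,
where vmhba33 is a placeholder for the iSCSI software adapter name:
# esxcli storage core adapter rescan --adapter vmhba33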
Setting Up and Configuring Challenge-Handshake Authentication Protocol
Enabling Host Challenge-Handshake Authentication Protocol (CHAP) is an option set at the ESX
system administrator's discretion. Both host CHAP (initiator) and mutual CHAP (bidirectional,
initiator-target) are supported. With Dependent Hardware iSCSI, only host CHAP can be configured
through the CNA adapter BIOS. The following example outlines the procedures for host (initiator)
CHAP enablement with software iSCSI.
See “Independent Hardware iSCSI” (page 64) for CHAP enablement with Independent Hardware
iSCSI.
1. Use the HP 3PAR CLI showhost command to verify that a host definition was created on
the HP 3PAR StoreServ Storage for the ESX host that will have CHAP enabled.
# showhost
Id Name ----------------WWN/iSCSI_Name---------------- Port
 0 ESX1 iqn.1998-01.com.vmware:hpdl380-01-11a38a59     0:1:2
        iqn.1998-01.com.vmware:hpdl380-01-11a38a59     1:1:2
2. On the HP 3PAR StoreServ Storage, use the HP 3PAR CLI sethost command with the
initchap parameter to set the CHAP secret for the ESX host.
The following example uses the CHAP secret (CHAP password) host_secret3 for the ESX
host. The CHAP secret must be at least 12 characters long.
# sethost initchap -f host_secret3 ESX1
NOTE: If mutual CHAP on ESX is configured, configure the target CHAP on the HP 3PAR
StoreServ Storage as well as the initiator CHAP. Set the target CHAP secret by using the
HP 3PAR CLI sethost command and the targetchap parameter.
# sethost targetchap -f secret3_host ESX1
NOTE: The initiator CHAP secret and the target CHAP secret cannot be the same.
3. Use the HP 3PAR CLI showhost -chap command to verify that the specified CHAP secret
was set for the host definition.
For initiator CHAP:
# showhost -chap
Id Name -Initiator_CHAP_Name- -Target_CHAP_Name-
 0 ESX1 ESX1                  --
For mutual CHAP:
# showhost -chap
Id Name -Initiator_CHAP_Name- -Target_CHAP_Name-
 0 ESX1 ESX1                  s331
Configuring CHAP on ESX/ESXi Host
1. On the ESX host vSphere client, select the Configuration tab, and then click Storage Adapters
under the Hardware pane. Select the iSCSI Software Adapter, select the Properties link, and
click CHAP. Click to select Use initiator name to set the CHAP name to the iSCSI adapter name.
Figure 25 CHAP Credentials
2. Enter the Mutual CHAP Secret.
NOTE: Use the same CHAP secret on both the ESX host and the HP 3PAR StoreServ Storage.
3. When configuring mutual CHAP on ESX, specify the incoming CHAP credentials. Use the HP
3PAR StoreServ Storage name as the Mutual CHAP Name.
NOTE: Use the HP 3PAR CLI showsys command to get the Mutual CHAP Name (name of
the HP 3PAR StoreServ Storage).
# showsys
                                     ---------------(MB)----------------
   ID -Name- ---Model---- -Serial- Nodes Master TotalCap AllocCap FreeCap FailedCap
99800 s331   HP_3PAR 7200  1699800     2      1  8355840   849920 7504896      1024
4. Click OK, and then close the Properties window. A warning appears indicating that either a
restart of the ESX host or a storage adapter re-scan is required.
NOTE: To perform this step manually, restart the host, or remove and then add the iSCSI sessions
by using either the esxcli iscsi session command or the vicfg-iscsi command,
depending on the ESX version. An example is shown below.
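On ESXi 5.x or ESXi 6.0, a sketch of the manual session refresh might look like the following,
where vmhba37 is a placeholder for the iSCSI adapter name:
# esxcli iscsi session remove --adapter vmhba37
# esxcli iscsi session add --adapter vmhba37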
Hardware iSCSI Support
Configure a supported CNA in either of the following modes:
• Dependent iSCSI — the IP address is obtained from the host NIC connections.
• Independent iSCSI — the IP address is entered into the CNA card BIOS.
Independent iSCSI configurations can be set up to boot from SAN; iSCSI targets are entered
into the card through the CNA BIOS.
For general information about supported models, see HP SPOCK:
http://www.hp.com/storage/spock
Independent Hardware iSCSI
This example uses the CN1100E CNA:
1. After installing the CN1100E, boot the ESX host. On boot up, the following text appears:
Use <Ctrl> <P> to change the CNA personality from FCoE to iSCSI if
necessary.
2. To set a static IP address, press Ctrl+S to enter the utility when the following text appears:
Emulex 10Gb iSCSI Initiator BIOS..
Press <Ctrl> <S> for iSCSISelect(TM) Utility
Figure 26 iSCSI Utility
3. Select a controller and then press Enter.
4. On the Controller Configuration screen, select Network Configuration and then press Enter.
5. On the Network Configuration screen, select Configure Static IP Address and then press Enter.
The screen for setting a static IP address appears.
Figure 27 Setting a Static IP Address
6. Type the IP address, subnet mask, and default gateway, and then click Save to return to the
Controller Configuration menu.
Adding an iSCSI Target
1. If this is a boot-from-SAN configuration, select Controller Properties from the Controller
Configuration menu. On the properties screen, verify that boot support is enabled. If it is not,
scroll to Boot Support and enable it, then save and exit this screen.
2. On the Controller Configuration menu, select iSCSI Target Configuration.
3. Select Add New iSCSI Target and then press Enter.
4. Fill in the information for the first iSCSI target. Make sure Boot Target is set to Yes.
Figure 28 Adding an iSCSI Target
5. Click Ping to verify connectivity.
6. After a successful ping, click Save/Login.
7. When both controllers are configured, use the showiscsisession command to display the
iSCSI sessions on the HP 3PAR StoreServ Storage. If everything is configured correctly, the
output should be similar to the following example:
[email protected]:S99814# showiscsisession
N:S:P --IPAddr---- TPGT TSIH Conns ----------------iSCSI_Name---------------- -------StartTime-------
0:2:1 10.101.0.100   21   15     1 iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1   2012-09-24 09:57:58 PDT
1:2:1 10.101.1.100  121   15     1 iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1   2012-09-24 09:57:58 PDT
[email protected]:S99814# showhost -d Esx50Sys1
1 Esx50Sys1 VMware iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1 0:2:1 10.101.0.100
1 Esx50Sys1 VMware iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1 1:2:1 10.101.1.100
8. Configure CHAP as an authentication method. From the Add/Ping iSCSI Target screen,
select Authentication Method, and choose One-Way CHAP from the list as shown below:
Figure 29 One-Way CHAP
When the CHAP Configuration screen appears, type the Target CHAP name (the initiator IQN
name) and Target secret as shown below:
Figure 30 CHAP Configuration for One-Way CHAP
To use mutual CHAP, in the Authentication Method setting on the Add/Ping iSCSI Target screen,
select Mutual CHAP. The CHAP Configuration screen appears as shown:
Figure 31 CHAP Configuration for Mutual CHAP
Fill in the Target CHAP Name (the initiator IQN name), the Target Secret, the Initiator CHAP
Name (the DNS name of the storage), and an Initiator Secret, and then click OK.
To remove CHAP authentication, in the Authentication Method setting on the Add/Ping iSCSI
Target screen, select None.
9. When setting up CHAP authentication before rebooting the host system, make sure to set the
matching CHAP parameters for the host on the HP 3PAR StoreServ Storage.
• If one-way CHAP was selected, enter the matching CHAP secret as follows:
[email protected]:S99814# sethost initchap -f aaaaaabbbbbb EsxHost1
[email protected]:S99814# showhost -chap
• If mutual CHAP was selected, enter the mutual CHAP secret as follows:
[email protected]:S99814# sethost initchap -f aaaaaabbbbbb EsxHost1
[email protected]:S99814# sethost targetchap -f bbbbbbcccccc EsxHost1
[email protected]:S99814# showhost -chap
Id Name     -Initiator_CHAP_Name- -Target_CHAP_Name-
 1 EsxHost1 EsxHost1              S814
[email protected]:S99814#
After entering the CHAP secret, exit the BIOS and reboot the host.
Dependent Hardware iSCSI
1. Install the CNA.
2. Install the OS to the local disk (Dependent Hardware iSCSI does not support boot-from-SAN).
3. Boot the system and, if not already completed, enter the CNA BIOS and change the Personality
to iSCSI. The example below, from the CN1100E BIOS, appears after pressing Ctrl+P:
Figure 32 CN1100E BIOS
4. Make sure that the CNA is not configured for Independent Hardware iSCSI.
NOTE: This procedure is specific to the CN1100E. Different vendors employ different methods
of implementing Dependent Hardware iSCSI. See vendor documentation for specific procedures.
a. On the CN1100E, press Ctrl+S to enter the BIOS.
b. Select Controller Configuration.
c. Select the controller port.
d. Select Erase Configuration, as shown in “Erase Configuration” (page 68):
Figure 33 Erase Configuration
5. When the ESXi host is booted, in vCenter, select the host, and then click the Configuration
tab.
6. Click Network Adapters and verify that the CNA is present:
Figure 34 vCenter Configuration Tab
7. On the Configuration tab, click Storage Adapters, and right-click the CNA port you want to
configure as in “vCenter Storage Adapters Menu” (page 69):
Figure 35 vCenter Storage Adapters Menu
8. Select Properties, and then click Configure as in “General Properties” (page 70):
Figure 36 General Properties
9. Enter the IP address, Subnet Mask, and Default Gateway, and then click OK.
10. Click the Dynamic Discovery tab, and then click Add as in “Dynamic Discovery” (page 70):
Figure 37 Dynamic Discovery
11. Enter the IP address of the HP 3PAR StoreServ Storage, and then click OK.
12. Repeat for each HP 3PAR StoreServ Storage port that this initiator port will access.
13. Click the Static Discovery tab to verify that each added HP 3PAR StoreServ Storage port
appears, as in this example:
Figure 38 Static Discovery
14. Click Close.
15. Click Yes to re-scan the host bus adapter.
16. Use the showiscsisession command on the HP 3PAR CLI to verify connectivity.
# showiscsisession
N:S:P --IPAddr--- TPGT TSIH Conns ----------------iSCSI_Name----------------- -------StartTime-------
0:5:2 10.2.10.182   52   15     1 iqn.1990-07.com.emulex:d4-c9-ef-75-6b-91    2014-01-16 13:48:24 PST
1:5:2 10.2.10.182  152   15     1 iqn.1990-07.com.emulex:d4-c9-ef-75-6b-91    2014-01-16 13:48:23 PST
iSCSI Failover Considerations and Multipath Load Balancing
NOTE: For information about multipathing and configuring the Round Robin policy for all
connectivity types (FC, FCoE, and iSCSI), see “Multipath Failover Considerations and I/O Load
Balancing” (page 30).
Performance Considerations for Multiple Host Configurations
The information in this section should be considered when using multiple ESX hosts (or other hosts
in conjunction with ESX hosts) that are connected in a fan-in configuration to a pair of HP 3PAR
StoreServ Storage ports.
NOTE: When running a W2K8 VM cluster with RDM-shared LUNs, individually change these
specific RDM LUNs from the Round Robin policy to the Fixed Path policy, as in the sketch below.
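A sketch of such a per-LUN change on ESXi 5.x or ESXi 6.0 follows; the device identifier
naa.50002ac0000c011c is a placeholder for the RDM LUN in question:
# esxcli storage nmp device set --device naa.50002ac0000c011c --psp VMW_PSP_FIXED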
ESX/ESXi Additional Feature Considerations
To install the VAAI plug-in and to take advantage of new ESX/ESXi and HP 3PAR StoreServ Storage
features, see “ESX/ESXi 4.1, ESXi 5.x, and ESXi 6.0 Additional Feature Considerations” (page 40).
8 Allocating Storage for Access by the ESX Host
This chapter describes the basic procedures required for creating and exporting VVs so they can
be used by the VMware ESX host. For complete details on creating and managing storage on the
HP 3PAR StoreServ Storage, see the appropriate HP 3PAR documentation.
Creating Storage on the HP 3PAR StoreServ Storage
This section describes the general recommendations and restrictions for exporting storage from the
HP 3PAR StoreServ Storage to an ESX host.
For additional information, refer to the HP 3PAR Command Line Interface Administrator's Manual.
For a comprehensive description of HP 3PAR OS commands, refer to the HP 3PAR Command Line
Interface Reference at the HP Storage Information Library:
http://www.hp.com/go/storage/docs
In addition, for a detailed discussion about using TPVVs and strategies for creating VLUNs, refer
to 3PAR Utility Storage with VMware vSphere at HP.com:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-4023ENW.pdf
Creating Thinly Deduplicated Virtual Volumes
The HP 3PAR Thin Deduplication feature is supported on HP 3PAR OS 3.2.1 MU1 and later. To
create thinly deduplicated virtual volumes (TDVVs), a Thin Provisioning license is required.
HP 3PAR Thin Deduplication allows for creating TDVVs from SSD (solid-state drive) CPGs. A TDVV
has the same characteristics as a TPVV, with the additional capability of removing duplicated data
before it is written to the volume. TDVVs are managed like any other TPVV. A TDVV must be
associated with CPGs created from SSDs.
For more information about HP 3PAR Thin Deduplication, refer to the following references at the
HP Storage Information Library:
http://www.hp.com/go/storage/docs:
• HP 3PAR StoreServ Storage Concepts Guide
• HP 3PAR Command Line Interface Administrator's Manual
• HP 3PAR Command Line Interface Reference
Creating Virtual Volumes
VVs are the only data layer visible to hosts. After devising a plan for allocating space for the ESX
host, create the VVs on the HP 3PAR StoreServ Storage.
Create volumes that are provisioned from one or more Common Provisioning Groups (CPGs).
Volumes can be fully provisioned, thinly provisioned, or thinly deduplicated. Optionally,
specify a CPG for snapshot space for provisioned volumes.
Using the HP 3PAR Management Console:
1. From the menu bar, select Actions→Provisioning→Virtual Volume→Create Virtual Volume.
2. Use the Create Virtual Volume wizard to create a base volume.
3. Select one of the following options from the list:
• Fully Provisioned
• Thinly Provisioned
• Thinly Deduped (supported from HP 3PAR OS 3.2.1 MU1)
Using the HP 3PAR CLI:
Create a fully provisioned or thinly provisioned virtual volume:
# createvv [options] <usr_CPG> <VV_name> [.<index>] <size>[g|G|t|T]
For example:
# createvv -cnt 5 testcpg TESTLUNS 5g
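To create a thinly deduplicated virtual volume instead, a sketch such as the following can be used,
assuming HP 3PAR OS 3.2.1 MU1 or later and an SSD CPG named SSD_CPG (a placeholder name):
# createvv -tdvv SSD_CPG TDVVLUN 100g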
For complete details on creating volumes for the HP 3PAR OS version that is being used on the
HP 3PAR StoreServ Storage, refer to the HP 3PAR Management Console User's Guide and the
HP 3PAR Command Line Interface Reference documents at the HP Storage Information Library:
NOTE: The commands and options available for creating a virtual volume might vary for earlier
versions of the HP 3PAR OS.
Exporting LUNs to an ESX Host
This section explains how to export VVs created on the HP 3PAR StoreServ Storage as VLUNs for
the ESX host.
When exporting VLUNs to the VMware ESX host, be aware of the following:
• New VLUNs that are exported while the host is running might not be registered until a bus
re-scan is initiated. This might be performed from the VI/vSphere client Management Interface.
Some versions of ESX will automatically scan for newly exported LUNs.
• Disks can be added to a VM with the VM powered up. To remove a disk, the VM might not
need to be powered off. Refer to the VMware documentation for feature support.
• The maximum number of LUNs on a single ESX HBA port is 256, and there are 256 total
LUNs on the ESX host. Internal devices, such as local hard drives and CD drives, are counted
as a LUN in the ESX host LUN count.
• VLUNs can be created with any LUN number in the range from 0 to 255 (a VMware ESX
limitation).
• iSCSI LUNs and FC LUNs are treated as any other LUN on the ESX host. No special
requirements or procedures are needed to use iSCSI LUNs. HP does not recommend or support
the same storage LUN being exported on different protocol interfaces, such as exporting to
an FC interface on one host and an iSCSI interface on another host, because the timing and
error recovery of the protocols would be different.
• The ESX 4.x limitation for the largest LUN that can be utilized by the ESX host is 2047 GB.
For ESXi 5.x or ESXi 6.0, the maximum LUN size is 16 TB (16384 GB).
• Sparse LUN numbering (that is, LUN numbers can be skipped) is supported by the VMware ESX
host. A LUN 0 is not required.
For failover support, VVs should be exported down multiple paths to the host simultaneously. To
facilitate this task, create a host definition on the HP 3PAR StoreServ Storage that includes the
WWNs of multiple HBA ports on the host and export the VLUNs to that host definition.
Provisioning several VMs to a smaller number of large LUNs, versus a single VM per single LUN,
provides better overall results. Further examination and explanation of this recommendation is
outlined in the HP 3PAR Utility Storage with VMware vSphere document available at HP.com:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-4023ENW.pdf
Concerning TPVVs: ESX VMFS-3 and VMFS-5 do not write data to the entire volume at
initialization, so TPVVs can be used without any configuration changes to VMFS. A further
examination of this subject, with recommendations and limitations, is provided in the HP document
HP 3PAR Utility Storage with VMware vSphere.
Creating a VLUN for Export
Creation of a VLUN template enables export of a VV as a VLUN to one or more ESX hosts.
There are four types of VLUN templates:
• port presents — created when only the node:slot:port is specified. The VLUN
is visible to any initiator on the specified port.
• host set — created when a host set is specified. The VLUN is visible to the initiators of
any host that is a member of the set.
• host sees — created when the hostname is specified. The VLUN is visible to the initiators
with any of the host’s WWNs.
• matched set — created when both hostname and node:slot:port are specified.
The VLUN is visible to initiators with the host’s WWNs only on the specified port.
Export the LUNs either through the HP 3PAR Management Console or the HP 3PAR CLI.
Using the HP 3PAR Management Console
1. From the Menu bar, select Actions→Provisioning→VLUN→Export Volume.
2. Use the Export Virtual Volume dialog box to create a VLUN template.
Using the HP 3PAR CLI
Create a port presents VLUN template:
# createvlun [options] <VV_name | VV_set> <LUN> <node:slot:port>
Create a host sees or host set VLUN template:
# createvlun [options] <VV_name | VV_set> <LUN> <host_name/set>
Create a matched set VLUN template:
# createvlun [options] <VV_name | VV_set> <LUN> <node:slot:port>/<host_name>
Create a host set VLUN template:
# createvlun [options] <VV_name | VV_set> <LUN> <host_set>
For example:
# createvlun -cnt 5 TESTLUNs.0 0 hostname/hostdefinition
or:
# createvlun -cnt 5 TESTVLUN.0 0 set:hostsetdefinition
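To verify the resulting exports, the showvlun command lists VLUN templates and active VLUNs;
for details on its filtering options, refer to the HP 3PAR Command Line Interface Reference:
# showvlun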
For complete details on exporting volumes and available options for the HP 3PAR OS version used
on the HP 3PAR StoreServ Storage, refer to the HP 3PAR Management Console User's Guide and
the HP 3PAR Command Line Interface Reference documents at the HP Storage Information Library:
www.hp.com/go/storage/docs
NOTE: The commands and options available for creating a virtual volume might vary for earlier
versions of the HP 3PAR OS.
Discovering LUNs on VMware ESX Hosts
This section provides tips for discovering LUNs used by the ESX host.
Once the host is built, the preferred method for configuring and managing the use of the ESX host
is through a VI/vSphere client Management Interface and VMware vCenter Server.
New VLUNs exported while the host is running will not be registered until a bus re-scan is initiated.
This can be done automatically by ESX 4.x, ESXi 5.x, or ESXi 6.0 hosts managed by the vSphere
client or vCenter Server from the VI/vSphere client Management Interface. If the recommended
failover support is used, view all LUNs and their respective paths by using the menu from the ESX
(Configuration tab→Storage Adapter).
Disks can be added to a VM with the VM powered up. However, to remove a disk, the VM must
be powered off. This is a limitation of ESX.
The maximum number of LUNs on a single ESX HBA port is 256, and there are 256 total LUNs
on the ESX host. Internal devices, such as local hard drives and CD drives, are counted as a LUN
in the ESX host LUN count.
Removing Volumes
After removing a VLUN exported from the HP 3PAR StoreServ Storage, perform an ESX host bus
adapter re-scan. ESX will update the disk inventory upon re-scan. This applies to FC, FCoE, and
iSCSI.
Figure 39 Re-scanning the ESX Host
Remove the disk/LUN from the VM inventory, detach from the ESX host, and then remove it from
the HP 3PAR StoreServ Storage by using the removevlun and removevv HP 3PAR CLI commands
or by using the HP 3PAR Management Console. If a LUN is not detached but is removed from the
HP 3PAR StoreServ Storage, it appears as a device in an error state, and is cleared after an ESX
host reboot.
Figure 40 Detaching a LUN
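As a sketch of this sequence, with the device ID, volume name, LUN number, and host definition
name as placeholders, the ESXi-side detach and the HP 3PAR CLI removal might look like this:
# esxcli storage core device set --state=off -d naa.50002ac0000c011c
# removevlun TESTLUNS.0 0 hostdefinition
# removevv TESTLUNS.0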
Host and Storage Usage
Eventlog and Host Log Messages
To see all HP 3PAR system debug messages, use the HP 3PAR CLI showeventlog -debug
command. On the ESX host, errors are reported in /var/log/vmkernel files. Use the HP 3PAR
CLI showalert command to see alerts and warnings posted from the HP 3PAR StoreServ Storage.
Examples of storage server posted messages:
• If the user has set the space allocation warning for a TPVV volume, the system event log shows
the Thin provisioning soft threshold reached message when it reaches that
limit.
Example:
1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check condition) Host:sqa-dl380g5-14-esx5
(WWN 2101001B32A4BA98) LUN:22 LUN WWN:50002ac00264011c VV:0 CDB:280000AB082000000800 (Read10) Skey:0x06 (Unit
attention) asc/q:0x38/07 (Thin provisioning soft threshold reached) VVstat:0x00 (TE_PASS -- Success)
after 0.000s (Abort source unknown) toterr:74882, lunerr:2
• If the user has set a hard limit on the HP 3PAR StoreServ Storage volume when using HP 3PAR
OS 2.3.1 MU2 or later, the HP 3PAR StoreServ Storage will post the write protect error
(ASC/Q: 0x27/0x7) when the limit is reached. This is considered a hard error, and the VM
is stunned in ESX 4.x and ESXi 5.x. Additional space must be added to the storage to clear
this error condition.
1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check condition)
Host:sqa-dl380g5-14-esx5 (WWN 2101001B32A4BA98) LUN:22 LUN WWN:50002ac00264011c VV:612 CDB:2A00005D6CC800040000
(Write10) Skey:0x07 (Data protect) asc/q:0x27/07 (Space allocation failed write protect) VVstat:0x41 (VV_ADM_NO_R5
-- No space left on snap data volume) after 0.302s (Abort source unknown) toterr:74886, lunerr:3
For ESX 4.x, ESXi 5.x, or ESXi 6.0, an ESX host managed by the vSphere client or vCenter Server
scans for LUNs at 5-minute intervals by issuing REPORT LUN commands and discovers any new
LUNs.
On ESXi 5.x, the following vmkernel log message is not harmful and can be ignored:
34:41.371Z cpu0:7701)WARNING: ScsiDeviceIO: 6227: The device naa.50002ac00026011b does not permit the system to change the sitpua bit to 1.
Each ESX host has its own limits, such as volume size and virtual disk size. An ESX host might
not always be able to fully utilize the maximum volume size of 16 TB from the storage array.
9 Booting the VMware ESX Host from the HP 3PAR StoreServ
Storage
This chapter provides a general overview of the procedures required to boot the VMware ESX
operating system from the SAN.
In a boot-from-SAN environment, each ESX host operating system is installed on the HP 3PAR
StoreServ Storage, instead of on an internal disk. In this situation, create a separate VV for each
ESX host to be used for the boot image.
General tasks in this process:
• Perform the required zoning.
• Create a virtual volume and export it as a VLUN to the ESX host.
• Boot the system and enter the HBA BIOS.
• Enable the HBA port to boot.
• Discover the LUN and designate it as bootable through the HBA BIOS.
For detailed information, see the VMware FC SAN Configuration Guide.
For information about setting a CN1100E CNA to boot the host from SAN, see “Hardware iSCSI
Support ” (page 64).
The VMware ESX host has an option that allows the VMware Base OS to be installed and booted
from a SAN or HP 3PAR StoreServ Storage virtual storage device. Choose this option during the
initial installation phase of the VMware Server Installation. See the VMware documentation for
further information regarding 'SANboot'.
HP makes the following general recommendations for preparing the host HBAs for a SAN boot
deployment:
NOTE: The NVRAM settings on HBAs can be changed by any server where they are installed.
These settings persist in the HBA even after it is removed. To get the correct settings for this
configuration, return all NVRAM settings to their default values.
1. After installation of the HBAs, reset all of the HBA NVRAM settings to their default values.
NOTE: Each host adapter port is reported as a host bus adapter, and the HBA settings should
be set to default.
2. Enter the host adapter setup program during server boot by pressing the combination of keys
indicated by the host adapter. Change and save all HBA settings to their default values.
NOTE: When using a McDATA fabric, set the HBA topology to 'point to point’. There might
be other vendor HBAs not listed here with different setup entry key combinations.
3. Reboot the host computer.
10 Using VVOLs (VMware Virtual Volumes) with HP 3PAR
StoreServ Storage
VMware introduced the VVOLs feature in VMware vSphere 6.0. For the specific HP 3PAR OS
versions and associated components required for support of VVOLs with HP 3PAR StoreServ
Storage, refer to documentation at HP SPOCK:
http://www.hp.com/storage/spock
VMware VVOLs are the VMware implementation of software-defined storage. VMware VVOLs
enable vSphere to provision array volumes on which VMs can be created. This transfers the task
of provisioning and managing storage volumes for VMs from the storage array administrator to
the VMware vSphere VM administrator.
With traditional storage volumes, the storage administrator creates storage volumes manually with
the necessary capabilities to meet the VMware administrator’s requirements. With VVOLs, when
creating individual VMs, the array advertises its capabilities and allows the vSphere vCenter
interface to create VVOLs as needed with the requested capabilities.
Multiple VVOLs are created for each VM, including the following:
• configuration VVOLs
• virtual disk VVOLs (VMDK VVOLs)
• swap VVOLs
• live-memory snapshot VVOLs (if needed)
For more information about VMware VVOLs and HP 3PAR storage, see the following documentation
resources:
• HP 3PAR StoreServ Storage Concepts Guide
• HP 3PAR Command Line Interface Administrator's Manual
• VMware vSphere 6.0 Documentation Center
To implement VMware VVOLs on HP 3PAR StoreServ Storage:
1. Configure and verify PE (Protocol Endpoint) LUN connection.
2. Start the HP 3PAR VP (VASA Provider) on the array.
3. Configure the Storage Container.
4. Allocate CPG (Common Provisioning Groups) for VVOLs on the array.
5. Register connection of VMware vCenter to VASA 2.0 web service / VASA Provider (VP).
VMware VVOL Protocol Endpoint and HP 3PAR
In the VMware VVOL schema, the PE provides an in-band pathway for access to the objects. From
a SCSI protocol implementation perspective, the PE has the same characteristics as a traditional
volume, but it is flagged as a protocol endpoint instead of a traditional volume. With HP 3PAR,
the PE presents itself to the hosts when connected to the array, and the HP 3PAR host definition is
assigned the “VMware” host personality (persona 11). When presented, the PE appears as a disk
on the ESXi host with LUN number 256 and a VMware naa number that reflects the array node
WWN. The ESXi hosts acquire the PE LUN during LUN discovery at host boot.
Verify discovery of the PE LUN by using the esxcli storage core device list --pe-only
command on the ESXi host.
Figure 41 Example of a PE LUN identified on a host
Also, the PE LUN (LUN 256) appears in the Storage Devices list in vCenter at this point. Multipath
attributes are applied to the PE LUN, whether set by SATP/PSP rule or applied manually. The
VVOLs subsequently have the Multipath attributes of the PE LUN.
Figure 42 vCenter Storage Devices list
The PE LUN appears in the vCenter as a storage device, because the ESXi host passes this
information to vCenter. More information about the PE LUN and management of VVOLs occurs
only after a communication link is set up between the vCenter and the storage array management
interface.
NOTE: The PE LUN and VMware VVOL feature support require ESXi 6.0 host HBA/CNA drivers
that support VVOLs. When using host HBA/CNA drivers that do not support VVOLs, the PE LUN
might not be properly acquired and identified, and an error message related to the PE LUN might
appear in the ESXi 6.0 vmkernel logs. However, the error does not interfere with non-VVOL
operation. For supported ESXi 6.0 drivers that support VVOLs, see HP SPOCK:
http://www.hp.com/storage/spock
The HP 3PAR VASA Provider
To use the vSphere environment to manage VVOLs, enable the VASA 2.0 protocol on the storage
array, and then establish an IP connection for communication between the vCenter and the array.
The HP 3PAR VASA Provider enables communication between the VMware vCenter and the HP
3PAR storage array management interface by using the VMware VASA 2.0 Protocol.
To enable the VASA 2.0 protocol, start the VASA Provider web service on the storage array by
using the startvasa command.
IMPORTANT: Starting the VASA Provider requires super user privileges.
Figure 43 Example of starting the VASA Provider
VVOL Administrator User ID and Storage Container Setup
Creation and management of VMware VVOLs on the HP 3PAR array occurs through communication
between the VMware vCenter and the HP 3PAR VASA Provider on the array. This requires IP
network connectivity between the vCenter and the HP 3PAR array management network.
To provide secure communication, vSphere components use SSL to communicate with the VASA
Provider on the HP 3PAR array. The vSphere environment and the VASA Provider (at the request
of the vSphere environment) dynamically generate these SSL certificates.
Before registering the HP 3PAR VASA Provider with vCenter, complete the following procedure:
1. Synchronize the clocks on all systems: ESXi hosts, vCenter Server, and storage array.
If synchronization requires making adjustments to the clocks on the ESXi host and vCenter
Server after they were set up, the VMCA certificates assigned to them might be incorrect.
CAUTION: HP recommends configuring all systems to use a common NTP server for date
and time synchronization. Failure to do so can cause failures while registering the VASA
Provider with vCenter, attempting to mount a VVOL Storage Container as a Datastore, or
creating a VM.
2. Reset the VASA Provider SSL certificate.
When adjusting the date on the array, also regenerate the certificate for the VASA Provider
by using the setvasa -reset command.
IMPORTANT: Only use the setvasa -reset command when the VASA Provider has not
been registered with the vSphere environment.
NOTE: Register the HP 3PAR VASA Provider with one vCenter Server at a time.
Before registering the VASA Provider with a new vCenter Server, unregister the old vCenter Server,
and then reset the VASA Provider SSL certificate by using the HP 3PAR CLI setvasa -reset
command.
When registered, the HP 3PAR VASA Provider exposes a single VVOL storage container, which
appears as a VVOL Datastore in vSphere. The storage container is in the same domain as the user
that registers the VASA Provider. Before registering the VASA Provider with vCenter, complete the
following procedure:
1. Determine whether the VVOL storage container should be separated into its own HP 3PAR
virtual domain. If so, create a virtual domain by using the createdomain command (for
example, createdomain vVoldomain). The virtual domain name appears in vSphere as
the VVOL storage container name.
2. Determine the HP 3PAR administrative user for registering the VASA Provider. In order for the
vSphere environment to create VVOLs on the array, provide this user information to the VMware
administrator for registration of the VASA Provider. If a virtual domain is used, this user must
be assigned to the same domain as determined in step 1.
3. Grant this HP 3PAR administrative user at least the edit role, by using the createuser
vVoluser vVoldomain edit command.
4. Associate the HP 3PAR administrative user with the HP 3PAR domain that hosts the storage
container.
5. Define additional CPGs, with optional growth limits, for VVOL usage. For more information,
see “Defining CPGs for VVOLs” (page 82). The virtual domain, or "root domain" (depending
on which is used by the VASA Provider), must have at least one CPG from which VVOLs can
be provisioned.
A consolidated command sketch follows this list.
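Consolidated, the array-side commands from these steps can be sketched as follows, using the
example domain and user names from above:
# createdomain vVoldomain
# createuser vVoluser vVoldomain edit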
Defining CPGs for VVOLs
Common provisioning groups (CPGs) are used to expose HP 3PAR capabilities to the vSphere
environment. It is not necessary to create specific CPGs for VVOL support, but the storage
administrator can use CPGs to constrain VVOL usage on the HP 3PAR array. The storage
administrator can control usage of array storage space in the following ways:
• VASA Provider registered in a virtual domain—The vCenter environment can access only the
capabilities exposed by the CPGs associated with that domain.
• CPGs have growth limits defined—The vSphere administrator is constrained by these growth
limits when creating new VMs.
CAUTION: To guarantee limited growth, all CPGs on the array, or all CPGs in the VVOL
domain on the array, must define a total growth limit (see setcpg -sdgl and the sketch
after this list). If any CPG does not define a growth limit, VVOLs for that CPG can consume
all available space for that particular storage drive device type.
• CPGs with a special VVOL_ name prefix—These CPGs limit which CPGs can be used
when provisioning VMs that have no specific VMware storage profile requirements. However,
the vSphere administrator can define a storage profile that uses CPGs beyond those with just
the VVOL_ name prefix. For more information, see “Default HP 3PAR CPGs for VVOLs”
(page 83).
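As a sketch of setting such a growth limit on an existing CPG (the CPG name VVOL_SSDr5 and
the 2 TB limit are placeholders for your environment):
# setcpg -sdgl 2t VVOL_SSDr5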
NOTE: At least one CPG must be defined on the array and used in the associated VVOL domain.
When no CPGs exist, registration of the VASA Provider fails and reports a Rescan Error on
the vCenter Storage Provider page.
Default HP 3PAR CPGs for VVOLs
The vSphere administrator can choose array capability constraints when provisioning VMs in VVOL
storage containers. These capability constraints are known as storage profiles. With HP 3PAR,
one of the constraints the vSphere administrator can select is the CPG. If the vSphere administrator
chooses not to create a storage profile or the storage profile does not have constraints for a CPG,
the VASA Provider automatically selects a default CPG for the VVOLs of that VM.
The VASA Provider uses the following algorithm to select CPGs for newly created VVOLs:
1. If there are any CPG names that begin with vvol_, all other CPGs are eliminated from
consideration for provisioning.
2. If the space available to any of the CPGs is less than 10% of the total space originally
provisioned for that CPG, that CPG is eliminated from consideration for provisioning, unless
all CPGs are under the same space pressure.
3. Of the remaining CPGs under consideration, a balance of performance and availability is
used to decide from which CPG to provision the VVOLs. The order of CPGs selected depends on
the CPG configuration drive type (NL, FC, SSD) and RAID (r0, r1, r5, r6) settings.
The VASA Provider chooses the first CPG to match the preferred drive/RAID order as follows:
1. FCr5
2. FCr6
3. FCr1
4. NLr6
5. NLr1
6. NLr5
7. SSDr5
8. SSDr6
9. SSDr1
10. FCr0
11. NLr0
12. SSDr0
Registering the HP 3PAR VASA Provider with vCenter
The storage administrator must provide the VASA Provider URL and the HP 3PAR user/administrator
credentials to the VMware administrator for registration inside vCenter. To discover the URL that
needs to be provided to the VMware administrator, use the HP 3PAR CLI showvasa command.
To provide the VASA Provider SSL certificate thumbprint, which allows the vSphere administrator
to validate the identity of the VASA Provider on the array, use the showvasa -cert command.
Figure 44 Example of the showvasa and showvasa -cert commands
CAUTION: Once the VASA Provider is registered with vCenter, coordinate any change of the
vCenter-registered VASA Provider user name or password with the VMware administrator. The
VMware administrator must immediately re-register the VASA Provider in vCenter, or the services
offered by the VASA Provider will immediately cease.
NOTE:
• If registration fails, indicating that VMCA certificate registration failed, verify that the vCenter
Server, ESXi, and HP 3PAR array clocks are in sync. Additionally, use the setvasa -reset
command and retry.
• If registration fails, indicating that the VASA Provider URL is invalid, verify that the VASA Provider
URL used during registration exactly matches the URL displayed by the showvasa
command.
To register the Storage Provider (HP 3PAR VASA Provider) with vCenter:
1. From vCenter, navigate to Home <vCenter>→Manage→Storage Providers.
2. Click + to add a registration.
3. Fill in the requested information.
• Name—User-defined name to represent the array
• URL—URL copied from the showvasa output on the array
• Username—Username for array login
• Password—User password for array login
Figure 45 Register storage provider on vCenter (HP 3PAR VASA Provider on vCenter)
Figure 46 Storage provider registered with vCenter
4. Add a VVOL datastore (HP 3PAR storage container) by using the following procedure:
a. Navigate to Hosts→datastores→<datacenter>+Actions.
b. Select Storage, and then select New Datastore from the Actions list. The New Datastore
window appears.
Figure 47 Add a VVOL datastore
c. In the 1 Location dialog box, enter the location information for the new VVOL datastore,
and then click Next.
Figure 48 Location of new VVOL datastore
d. In the 2 Type dialog box, select VVOL as the Type, and then click Next.
Figure 49 Select the datastore type
e. In the 3 Name and container selection dialog box, enter a Datastore name, select a
container, and then click Next.
Figure 50 Name and container selection
f. In the 4 Select hosts accessibility dialog box, select which hosts can access the VVOL
Storage Container, and then click Next.
g. In the 5 Ready to complete dialog box, verify the settings, and then click Finish to make
the new VVOL datastore available to vCenter.
Figure 51 Verify and finish
IMPORTANT: Avoid using storage administration commands to manage or manipulate VVOLs
on the array.
For more information about the following topics, see the VMware documentation:
• Creating VMware virtual machines
• Managing VVOLs
• Creating storage policies for the VVOL datastore
For more information about viewing VVOLs as they exist on HP 3PAR StoreServ Storage, such as
with the showvVolvm or showvVolvm -vv commands, refer to the HP 3PAR Command Line
Interface Reference.
For licensing information for VVOLs implementation with HP 3PAR StoreServ Storage, refer to the
HP technical white-paper: Implementing VMware Virtual Volumes on HP 3PAR StoreServ (publication
4AA5-6907ENW).
11 Support and Other Resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Specify the type of support you are requesting:
• For HP 3PAR StoreServ 7200, 7200c, 7400, 7400c, 7440c, 7450, and 7450c Storage
systems, request support for: StoreServ 7000 Storage
• For HP 3PAR StoreServ 10000 Storage systems, request support for: 3PAR or 3PAR Storage
HP 3PAR documentation
• Supported hardware and software platforms: the Single Point of Connectivity Knowledge for
HP Storage Products (SPOCK) website (http://www.hp.com/storage/spock)
• Locating HP 3PAR documents: the HP Storage Information Library
(http://www.hp.com/go/storage/docs/). By default, HP 3PAR Storage is selected under
Products and Solutions.
• Customer Self Repair procedures (media): the HP Customer Self Repair Services Media Library
(http://h20464.www2.hp.com/index.html). Under Product category, select Storage. Under
Product family, select 3PAR Storage Systems for HP 3PAR E-Class, F-Class, S-Class, and T-Class
Storage Systems, or 3PAR StoreServ Storage for HP 3PAR StoreServ 10000 and 7000 Storage
Systems.
HP 3PAR storage system software
• Storage concepts and terminology: HP 3PAR StoreServ Storage Concepts Guide
• Using the HP 3PAR Management Console (GUI) to configure and administer HP 3PAR storage
systems: HP 3PAR Management Console User's Guide
• Using the HP 3PAR CLI to configure and administer storage systems: HP 3PAR Command Line
Interface Administrator's Manual
• CLI commands: HP 3PAR Command Line Interface Reference
• Analyzing system performance: HP 3PAR System Reporter Software User's Guide
• Installing and maintaining the Host Explorer agent in order to manage host configuration and
connectivity information: HP 3PAR Host Explorer User's Guide
• Creating applications compliant with the Common Information Model (CIM) to manage HP
3PAR storage systems: HP 3PAR CIM API Programming Reference
• Migrating data from one HP 3PAR storage system to another: HP 3PAR-to-3PAR Storage Peer
Motion Guide
• Configuring the Secure Service Custodian server in order to monitor and control HP 3PAR
storage systems: HP 3PAR Secure Service Custodian Configuration Utility Reference
• Using the CLI to configure and manage HP 3PAR Remote Copy: HP 3PAR Remote Copy
Software User's Guide
• Updating HP 3PAR operating systems: HP 3PAR Upgrade Pre-Planning Guide
• Identifying storage system components, troubleshooting information, and detailed alert
information: HP 3PAR F-Class, T-Class, and StoreServ 10000 Storage Troubleshooting Guide
• Installing, configuring, and maintaining the HP 3PAR Policy Server: HP 3PAR Policy Server
Installation and Setup Guide; HP 3PAR Policy Server Administration Guide
Planning for HP 3PAR storage system setup
Hardware specifications, installation considerations, power requirements, networking options,
and cabling information for HP 3PAR storage systems:
• HP 3PAR 7200, 7400, and 7450 storage systems: HP 3PAR StoreServ 7000 Storage Site
Planning Manual; HP 3PAR StoreServ 7450 Storage Site Planning Manual
• HP 3PAR 10000 storage systems: HP 3PAR StoreServ 10000 Storage Physical Planning
Manual; HP 3PAR StoreServ 10000 Storage Third-Party Rack Physical Planning Manual
Installing and maintaining HP 3PAR 7200, 7400, and 7450 storage systems
• Installing 7200, 7400, and 7450 storage systems and initializing the Service Processor: HP
3PAR StoreServ 7000 Storage Installation Guide; HP 3PAR StoreServ 7450 Storage Installation
Guide; HP 3PAR StoreServ 7000 Storage SmartStart Software User's Guide
• Maintaining, servicing, and upgrading 7200, 7400, and 7450 storage systems: HP 3PAR
StoreServ 7000 Storage Service Guide; HP 3PAR StoreServ 7450 Storage Service Guide
• Troubleshooting 7200, 7400, and 7450 storage systems: HP 3PAR StoreServ 7000 Storage
Troubleshooting Guide; HP 3PAR StoreServ 7450 Storage Troubleshooting Guide
• Maintaining the Service Processor: HP 3PAR Service Processor Software User Guide; HP 3PAR
Service Processor Onsite Customer Care (SPOCC) User's Guide
HP 3PAR host application solutions
• Backing up Oracle databases and using backups for disaster recovery: HP 3PAR Recovery
Manager Software for Oracle User's Guide
• Backing up Exchange databases and using backups for disaster recovery: HP 3PAR Recovery
Manager Software for Microsoft Exchange 2007 and 2010 User's Guide
• Backing up SQL databases and using backups for disaster recovery: HP 3PAR Recovery
Manager Software for Microsoft SQL Server User's Guide
• Backing up VMware databases and using backups for disaster recovery: HP 3PAR Management
Plug-in and Recovery Manager Software for VMware vSphere User's Guide
• Installing and using the HP 3PAR VSS (Volume Shadow Copy Service) Provider software for
Microsoft Windows: HP 3PAR VSS Provider Software for Microsoft Windows User's Guide
• Best practices for setting up the Storage Replication Adapter for VMware vCenter: HP 3PAR
Storage Replication Adapter for VMware vCenter Site Recovery Manager Implementation Guide
• Troubleshooting the Storage Replication Adapter for VMware vCenter Site Recovery Manager:
HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager
Troubleshooting Guide
• Installing and using vSphere Storage APIs for Array Integration (VAAI) plug-in software for
VMware vSphere: HP 3PAR VAAI Plug-in Software for VMware vSphere User's Guide
Typographic conventions
Table 4 Document conventions
• Bold text: keys that you press; text you typed into a GUI element, such as a text box; GUI
elements that you click or select, such as menu items, buttons, and so on
• Monospace text: file and directory names; system output; code; commands, their arguments,
and argument values
• <Monospace text in angle brackets>: code variables; command variables
• Bold monospace text: commands you enter into a command line interface; system output
emphasized for scannability
WARNING! Indicates that failure to follow directions could result in bodily harm or death, or in
irreversible damage to data or to the operating system.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
NOTE: Provides additional information.
Required: Indicates that a procedure must be followed as directed in order to achieve a functional
and supported implementation based on testing at HP.
HP 3PAR branding information
• The server previously referred to as the "InServ" is now referred to as the "HP 3PAR StoreServ
Storage system."
• The operating system previously referred to as the "InForm OS" is now referred to as the "HP
3PAR OS."
• The user interface previously referred to as the "InForm Management Console (IMC)" is now
referred to as the "HP 3PAR Management Console."
• All products previously referred to as “3PAR” products are now referred to as "HP 3PAR"
products.
12 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
([email protected]). Include the document title and part number, version number, or the URL
when submitting your feedback.
Index
B
BladeSystem
c-Class, 8
C
c-Class BladeSystem, 8
challenge-handshake authentication protocol
configuring, 62
configuring on host for iSCSI, 63
setting up, 62
CHAP see challenge-handshake authentication protocol
common provisioning groups see CPGs
configuring
for FC, 8
for FCoE, 21
conventions
text symbols, 91
CPGs
using by storage administrator, 82
creating
TDVV, 72
VVs, 72
D
data duplication, 72
deduplication, 72
deploying
HP Virtual Connect Direct-Attach Fibre Channel storage,
8
documentation
providing feedback on, 92
drivers
Brocade, 8
HBA, 8
E
exporting
LUNs to a Windows Server host, 7
F
fabric
setting up for FC, 11
zoning for FC, 11
failover, 8
iSCSI considerations, 71
Microsoft Failover Clustering, 7
multipath considerations, 30
FC
coexistence with other HP array families, 12
configuring, 8
configuring round robin multipathing, 31
configuring the host, 27
creating host definition, 10
guidelines for FC switch vendors, 13
handling SCSI queue full and busy messages, 39
HP 3PAR Persistent Ports, 15
HP 3PAR Priority Optimization, 14
installing the drivers, 27
installing the HBA, 27
installing VM guest OS, 28
multipath considerations and I/O load balancing, 30
performance considerations for multiple host configurations, 39
persistent port setup, 15
setting up fabric, 11
setting up ports, 9
target port limits, 14
target port specifications, 14
zoning fabric, 11
FCoE, 26
configuring, 21
creating the host definition, 23
HP 3PAR Persistent Ports, 25
HP 3PAR Priority Optimization, 25
setting up FCoE initiator, 21
setting up FCoE switch, 21
setting up FCoE target port, 21
target port limits, 25
target port specifications, 25
using system BIOS to configure FCoE, 47
features
for FC ESX/ESXi 4.1, ESXi 5.x, and ESXi 6.0, 40
HP 3PAR VAAI Plug-in 1.1.1 for ESX/ESXi 4.1, 41
HP 3PAR VAAI Plug-in 2.2.0 for ESXi 5.x and ESXi 6.0, 41
HP 3PAR VASA Provider, 81
out-of-space condition, 42
storage I/O control, 40
summary, 44
TP LUN Reporting, 44
UNMAP storage primitive for space reclaim, 41
VAAI, 40
VVOLs, 79
Fibre Channel see FC
Fibre Channel over Ethernet see FCoE
firmware
Brocade, 8
G
GOSs, 28
installing, 53
Guest OS see GOSs
H
Hardware iSCSI, 64
dependent, 67
independent, 64
host
booting, 78
configuring as an FC connection, 27
configuring as an FCoE initiator connecting to an FC target or FCoE target, 47
configuring CHAP, 63
configuring FCoE switch, 47
configuring for an iSCSI connection, 52
configuring initiator FCoE to FC target, 50
configuring initiator FCoE to FCoE target, 51
creating definition for FC, 10
creating definition for FCoE, 23
creating definition for iSCSI, 17
multiple host configurations for FC, 39
setting up the iSCSI initiator, 52
setting up the switch, 52
HP 3PAR OS
configuring, 8
upgrading, 7
HP 3PAR Persistent Ports
connectivity guidelines for FC, 15
for FC, 15
for FCoE, 25
for iSCSI, 20
setting up for FC, 15
HP 3PAR Priority Optimization
for FC, 14
for FCoE, 25
for iSCSI, 19
HP 3PAR VASA Provider
enable, 81
registering with vCenter, 84
HP Virtual Connect Direct-Attach Fibre Channel
storage, 8
I
I/O path policy
round robin, 31
iSCSI
additional features, 71
configuring, 16
configuring a service console connection for iSCSI storage, 57
configuring CHAP on host, 63
configuring host for an iSCSI connection, 52
configuring the VMware SW iSCSI initiator, 59
considerations for multiple host configurations, 71
creating a VMkernel port, 54
creating iSCSI host definition, 17
failover considerations, 71
HP 3PAR Persistent Ports, 20
HP 3PAR Priority Optimization, 19
installing SW iSCSI on VMware ESX, 52
installing VM GOS, 53
multipath load balancing, 71
setting up ports, 16
target port limits, 19
target port specifications, 19
L
LUNs
discovering on host, 75
exporting to host, 73
marked as offline after an HP 3PAR OS upgrade, 7
M
multipathing
configuring ESX/ESXi, 33
configuring round robin, 31
FC, 30
FCoE, 30
iSCSI, 30
O
out-of-space condition
for ESX 4.1, ESXi 5.x, or ESXi 6.0, 42
P
PE, 79
PE LUN, 11, 18
ports
configuring for an FCoE host connection, 49
FC target port limits, 14
FC target port specifications, 14
FCoE target port limits, 25
FCoE target port specifications, 25
HP 3PAR Persistent Ports for FC, 15
HP 3PAR Persistent Ports for FCoE, 25
HP 3PAR Persistent Ports for iSCSI, 20
iSCSI target port limits, 19
iSCSI target port specifications, 19
setting up for an iSCSI connection, 16
setting up for FC, 9
primitives
summary, 44
TP LUN Reporting, 44
VAAI, 40
protocol endpoint see PE
Protocol Endpoint LUN see PE LUN
S
SCSI commands support
primitives, 40, 44
using ESX/ESXi 4.1 plug-in, 40
software iSCSI
configuring software iSCSI initiator, 59
installing on VMware ESX, 52
storage
allocating for access by the ESX host, 72
creating, 72
creating TDVV, 72
creating VVs, 72
storage I/O control, 40
symbols in text, 91
T
TDVV
creating, 72
text symbols, 91
thinly deduplicated virtual volumes see TDVV
thinly provisioned virtual volume see TPVV
TPVV
creating, 72
exporting, 72
U
UNMAP storage primitive for space reclaim support
using ESXi 5.0 Update 1 with default VMware T10 VAAI plug-in and ESXi 6.0, 41
upgrading
HP 3PAR OS, 7
to HP 3PAR OS 3.1.1, 7
to HP 3PAR OS 3.1.2, 7
utilities
BCU, 8
V
VAAI, 40
virtual
LUNs, 73
see also VLUNs
virtual disk VVOLs see VMDK VVOLs
virtual machines see VMs
virtual volume see VV
virtual volumes see VVs
VLUNs, 73
creating, 72
creating for export, 74
VM
installing GOS, 53
VM STUN
see out-of-space condition, 42
VMkernel port
creating, 54
VMs
installing, 28
VMware virtual volumes see VVOLs
volumes
removing, 75
vStorage APIs for Array Integration see VAAI
using HP 3PAR VAAI Plug-in 1.1.1 for ESX/ESXi 4.1, 41
using HP 3PAR VAAI Plug-in 2.2.0 for ESXi 5.x and ESXi 6.0, 41
verifying HP 3PAR VAAI Plug-in 2.2.0, 44
VV
fully provisioned, 72
thinly deduplicated, 72
thinly provisioned, 72
VVOLs
configuration VVOLs, 79
default CPGs, 83
defining CPGs, 82
live-memory snapshot VVOLs, 79
protocol endpoint, 79
setup, 81
swap VVOLs, 79
using, 79
virtual disk VVOLs (VMDK VVOLs), 79
VVs, 72