HP StorageWorks 6400/8400 Enterprise
Virtual Array User Guide
Abstract
This document describes the components and operation of the HP StorageWorks 6400/8400 Enterprise Virtual
Array.
Part Number: 5697–0300
Third edition: May 2010
Legal and notice information
© Copyright 2009-2010 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website: http://www.hp.com/go/storagewarranty
Acknowledgements
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
Java™ is a US trademark of Sun Microsystems, Inc.
UNIX® is a registered trademark of The Open Group.
Contents
1 EVA6400/8400 hardware
2 Enterprise Virtual Array startup
3 Configuring application servers
4 EVA6400/8400 operation
5 Customer replaceable units
6 Support and other resources
A Regulatory notices and specifications
   Declaration of conformity for products marked with the FCC logo, United States only
B Error messages
C Controller fault management
D Non-standard rack specifications
E Single Path Implementation
Glossary
Index
1 EVA6400/8400 hardware
The EVA6400/8400 contains the following hardware components:
• HSV controllers—Contain power supplies, cache batteries, fans, and an operator control panel (OCP)
• Fibre Channel disk enclosures—Contain disk drives, power supplies, fans, midplanes, and I/O modules
• Fibre Channel Arbitrated Loop cables—Provide connectivity between the HSV controllers and the Fibre Channel disk enclosures
• Rack—Several free-standing racks are available
M6412A disk enclosures
The M6412A disk enclosure contains the disk drives used for data storage; a storage system contains multiple disk enclosures. The major components of the enclosure are:
• 12-bay enclosure
• Dual-loop, Fibre Channel drive enclosure I/O modules
• Copper Fibre Channel cables
• Fibre Channel disk drives and drive blanks
• Power supplies
• Fan modules
Enclosure layout
The disk drives mount in bays in the front of the enclosure. The bays are numbered sequentially from top to bottom and left to right. A drive is referred to by its bay number (see Figure 1). Status indicators are located at the right of each disk. Figure 2 shows the front view and Figure 3 shows the rear view of the disk enclosure.
Figure 1 Disk drive bay numbering

Figure 2 Disk enclosure front view without bezel ears
1. Rack-mounting thumbscrew
2. Disk drive release
3. Drive LEDs
4. UID push button
5. Enclosure status LEDs
Figure 3 Disk enclosure rear view
1. Power supply 1
2. Power supply 1 status LED
3. Fan 1
4. Enclosure product number and serial number
5. Fan 1 status LED
6. I/O module A
7. I/O module B
8. Rear UID push button
9. Enclosure status LEDs
10. Fan 2
11. Power push button
12. Power supply 2
I/O modules
Two I/O modules provide the interface between the disk enclosure and the host controllers (see Figure 4). Each controller connects to both I/O modules in the disk enclosure.
Each I/O module has two ports that can transmit and receive data for bidirectional operation. Activating a port requires connecting a Fibre Channel cable to the port. The port function depends upon the loop.
Figure 4 I/O module detail
1. Double 7–segment display: enclosure ID
2. 4 Gb I/O ports
3. Port 1 (P1), Port 2 (P2) status LEDs
4. Manufacturing diagnostic port
5. I/O module status LEDs
I/O module status indicators
There are five status indicators on the I/O module (see Figure 4). Table 1 describes the port status LEDs, and Table 2 describes the I/O module status LEDs.
Table 1 Port status LEDs
Green (left):
• Solid green—Active link
• Flashing green—Locate, remotely asserted by application client
Amber (right):
• Solid amber—Module fault, no synchronization
• Flashing amber—Module fault
Table 2 I/O module status LEDs
Locate (blue):
• Flashing blue—Remotely asserted by application client
Module health indicator (green):
• Flashing green—I/O module powering up
• Solid green—Normal operation
• Off—Firmware malfunction
Fault indicator (amber):
• Flashing amber—Warning condition (not visible when solid amber is showing)
• Solid amber—Replace FRU
• Off—Normal operation
Fiber optic Fibre Channel cables
The Enterprise Virtual Array uses orange, 50-µm, multimode fiber optic cables to connect to the SAN, or to the host where there is a direct connection to the host. The fiber optic cable assembly consists of two 2-m fiber optic strands with small form-factor connectors on each end (see Figure 5).
To ensure optimum operation, the fiber optic cable components require protection from contamination and mechanical hazards. Failure to provide this protection can cause degraded operation. Observe the following precautions when using fiber optic cables.
• To avoid breaking the fiber within the cable:
• Do not kink the cable
• Do not use a cable bend-radius of less than 30 mm (1.18 in)
• To avoid deforming, or possibly breaking the fiber within the cable, do not place heavy objects on the cable.
• To avoid contaminating the optical connectors:
• Do not touch the connectors
• Never leave the connectors exposed to the air
• Install a dust cover on each transceiver and fiber cable connector when they are disconnected
If an open connector is exposed to dust, or if there is any doubt about the cleanliness of the connector, clean the connector before reconnecting it.
Figure 5 Fiber optic Fibre Channel cable
Copper Fibre Channel cables
The Enterprise Virtual Array uses copper Fibre Channel cables to interconnect disk shelves. The cables are available in 0.6-meter (1.97 ft) and 2.0-meter (6.56 ft) lengths. Copper cables provide performance comparable to fiber optic cables. Copper cable connectors differ from fiber optic small form-factor connectors (see Figure 6).
Figure 6 Copper Fibre Channel cable
Fibre Channel disk drives
The Fibre Channel disk drives are hot-pluggable and include the following features:
• Dual-ported 4 Gbps Fibre Channel controller interface that allows up to 96 disk drives to be supported per array controller enclosure
• Compact, direct-connect design for maximum storage density and increased reliability and signal integrity
• Both online high-performance disk drives and FATA disk drives supported in a variety of capacities and spindle speeds
• Better vibration damping for improved performance
Up to 12 disk drives can be installed in a drive enclosure.
Disk drive status indicators
Two status indicators display drive operational status. Figure 7 identifies the disk drive status indicators and Table 3 describes them.
Figure 7 Disk status indicators
1. Bi-color (amber/blue)
2. Green
Table 3 Disk status indicator LED descriptions
Bi-color (top):
• Slow flashing blue (0.5 Hz)—Used to locate drive.
• Solid amber—Drive fault.
Green (bottom):
• Flashing—Drive is spinning up or down and is not ready.
• Solid—Drive is ready to perform I/O operations.
• Flickering—Indicates drive activity.
Disk drive blank
To maintain proper airflow within the disk enclosure, a disk drive or a disk drive blank must be installed in each drive bay.
Controller enclosures
This section describes the major features, purpose, and function of the HSV400 and HSV450 controllers. Each Enterprise Virtual Array has a pair of these controllers. Figure 8 shows the HSV400 controller rear view, Figure 9 shows the HSV450 controller rear view, and Figure 10 shows the front of the HSV400 and HSV450.
NOTE:
Some controller enclosure modules have a cache battery located behind the OCP.
Figure 8 HSV400 controller rear view
1. Serial port
2. Unit ID
3. Controller health
4. Fault indicator
5. Power
6. DPI ports
7. Mirror ports
8. Fiber ports
9. Power supply 1
10. Power supply 2
Figure 9 HSV450 controller rear view
1. Serial port
2. Unit ID
3. Controller health
4. Fault indicator
5. Power
6. DPI ports
7. Mirror ports
8. Fiber ports
9. Power supply 1
10. Power supply 2
Figure 10 Controller front view
1. Battery 1
2. Battery 2
3. Blower 1
4. Blower 2
5. Operator Control Panel (OCP)
6. Status indicators
7. Unit ID
Operator control panel
The operator control panel (OCP) provides a direct interface to each controller. From the OCP you can display storage system status and configuration information, shut down the storage system, and manage the password.
The OCP includes a 40-character LCD alphanumeric display, six push-buttons, and five status indicators (see Figure 11).
HP Command View EVA is the tool you will typically use to display storage system status and configuration information or perform the tasks available from the OCP. However, if HP Command
View EVA is not available, the OCP can be used to perform these tasks.
Figure 11 Controller OCP
1. Status indicators (see Table 4) and UID button
2. 40-character alphanumeric display
3. Left, right, top, and bottom push-buttons
4. Esc
5. Enter
Status indicators
The status indicators display the operational status of the controller. The function of each indicator is described in Table 4. During initial setup, the status indicators might not be fully operational.
The following sections define the alphanumeric display modes, including the possible displays, the valid status indicator displays, and the pushbutton functions.
Table 4 Controller status indicators
Fault: When the indicator is solid amber, there was a boot failure. When it flashes, the controller is inoperative. Check either HP Command View EVA or the LCD Fault Management displays for a definition of the problem and recommended corrective action.
Controller: When the indicator is flashing green slowly, the controller is booting up. When the indicator turns solid green, boot is successful and the controller is operating normally.
Physical link to hosts established: When this indicator is green, there is at least one physical link between the storage system and hosts that is active and functioning normally. When this indicator is amber, there are no links between the storage system and hosts that are active and functioning normally.
Virtual disks presented to hosts: When this indicator is green, all virtual disks that are presented to hosts are healthy and functioning normally. When this indicator is amber, at least one virtual disk is not functioning normally, indicating a problem with a virtual disk on the array. When this indicator is off, there are no virtual disks presented to hosts.
Battery: When this indicator is green, the battery is working properly. When this indicator is amber, there is a battery failure.
Unit ID: Press to turn on (solid blue); press again to turn it off. This LED mimics the function of the UID on the back of the controller. It also comes on in response to a Locate command issued by HP Command View EVA.
Each port on the rear of the controller has an associated status indicator located directly above it. Table 5 lists each port and its status descriptions.
Table 5 Controller port status indicators
Fibre Channel host ports:
• Green—Normal operation
• Amber—No signal detected
• Off—No SFP detected (1) or the Direct Connect OCP setting is incorrect
Fibre Channel device ports:
• Green—Normal operation
• Amber—No signal detected or the controller has failed the port
• Off—No SFP detected (1)
Fibre Channel cache mirror ports:
• Green—Normal operation
• Amber—No signal detected or the controller has failed the port
• Off—No SFP detected (1)
(1) On copper Fibre Channel cables, the SFP is integrated into the cable connector.
Navigation buttons
The operation of the navigation buttons is determined by the current display and location in the menu structure. Table 6 defines the basic push-button functions when navigating the menus and options. To simplify presentation and to avoid confusion, the push-button reference names, regardless of labels, are left, right, top, and bottom.
Table 6 Navigation button functions
Bottom: Moves down through the available menus and options.
Top: Moves up through the available menus and options.
Right: Selects the displayed menu or option.
Left: Returns to the previous menu.
Esc: Used for “No” selections and to return to the default display.
Enter: Used for “Yes” selections and to progress through menu items.
Alphanumeric display
The alphanumeric display uses two LCD rows, each capable of displaying up to 20 alphanumeric characters. By default, the alphanumeric display alternates between displaying the Storage System
Name and the World Wide Name. An active (flashing) display, an error condition message, or a user entry (pressing a push-button) overrides the default display. When none of these conditions exist, the default display returns after approximately 10 seconds.
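The display-selection behavior described above can be modeled as a small selection function. This is only an illustrative sketch of the logic as written, not HP firmware; the names ocp_display and tick are invented for the example.

```python
from typing import Optional

def ocp_display(override: Optional[str], tick: int,
                system_name: str, wwn: str) -> str:
    """Pick what the OCP shows: an active (flashing) display, an error
    condition message, or a user entry overrides the default; otherwise
    the display alternates between the Storage System Name and the
    World Wide Name. (Per the text above, the default returns roughly
    10 seconds after an override clears.)"""
    if override is not None:
        return override
    # Default behavior: alternate between the two identifiers.
    return system_name if tick % 2 == 0 else wwn
```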
Power supplies
Two power supplies provide the necessary operating voltages to all controller enclosure components.
If one power supply fails, the remaining supply is capable of operating the enclosure.
Figure 12 Power supply
1. Power supply
2. AC input connector
3. Latch
4. Status indicator (solid green—normal operation; solid amber—failure or no power)
5. Handle
Blower module
Fan modules provide the cooling necessary to maintain the proper operating temperature within the controller enclosure. If one fan fails, the remaining fan is capable of cooling the enclosure.
Figure 13 Blower module pulled out
1. Blower 1
2. Blower 2

Table 7 Fan status indicators
Status indicator (green, on left):
• Solid green—Normal operation.
• Blinking—Maintenance in progress.
• Off—Amber is on or blinking, or the enclosure is powered down.
Fault indicator (amber, on right):
• On—Fan failure. Green will be off. (Green and amber are not on simultaneously except for a few seconds after power-up.)
Battery module
Batteries provide backup power to maintain the contents of the controller cache when AC power is lost and the storage system has not been shut down properly. When fully charged, the batteries can sustain the cache contents for up to 96 hours. Three batteries are used on the EVA8400 and two batteries are used on the EVA6400. Figure 14 illustrates the location of the cache batteries and the battery status indicators. See Table 8 for additional information on the status indicators.
Figure 14 Battery module
1. Status indicator
2. Fault indicator
3. Battery 0
4. Battery 1
The table below describes the battery status indicators. When a battery is first installed, the fault indicator goes on (solid) for approximately 30 seconds while the system discovers the new battery.
Then, the battery status indicators display the battery status as described in the table below.
Table 8 Battery status indicators
• Status on, fault off—Normal operation. A maintenance charge process keeps the battery fully charged.
• Status flashing, fault off—Battery is undergoing a full charging process. This is the indication you typically see after installing a new battery.
• Status off, fault on—Battery fault. The battery has failed and should be replaced.
• Status off, fault flashing—The battery has experienced an over temperature fault.
• Status flashing (fast), fault flashing (fast)—Battery code is being updated. When a new battery is installed, it may be necessary for the controllers to update the code on the battery to the correct version. Both indicators flash rapidly for approximately 30 seconds.
• Status flashing, fault flashing—Battery is undergoing a scheduled battery load test, during which the battery is discharged and then recharged to ensure it is working properly. During the discharge cycle, you will see this display. The load test occurs infrequently and takes several hours.
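The two-LED battery combinations lend themselves to a simple lookup. The following sketch is illustrative only; the state names and the decode helper are hypothetical conventions for this example, not HP firmware:

```python
# Illustrative decode of the battery module LED pair described in Table 8.
# The state strings ("on", "flashing", ...) and decode() are invented names.
BATTERY_LEDS = {
    ("on", "off"): "Normal operation; maintenance charge keeps the battery full",
    ("flashing", "off"): "Full charging process (typical after installing a new battery)",
    ("off", "on"): "Battery fault; replace the battery",
    ("off", "flashing"): "Over-temperature fault",
    ("flashing-fast", "flashing-fast"): "Battery code update (about 30 seconds)",
    ("flashing", "flashing"): "Scheduled battery load test (discharge cycle)",
}

def decode(status_led: str, fault_led: str) -> str:
    """Return the condition for a (status, fault) LED pair."""
    return BATTERY_LEDS.get(
        (status_led, fault_led),
        "Unknown combination; check HP Command View EVA",
    )
```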
HSV controller cabling
All data cables and power cables attach to the rear of the controller. Adjacent to each data connector is a two-colored link status indicator. Table 5 identifies the status conditions presented by these indicators.
NOTE:
These indicators do not indicate whether there is communication on the link, only whether the link can transmit and receive data.
The data connections are the interfaces to the disk drive enclosures or loop switches (depending on your configuration), the other controller, and the fabric. Fiber optic cables link the controllers to the fabric and, if an expansion cabinet is part of the configuration, link the expansion cabinet drive enclosures to the loops in the main cabinet. Copper cables are used between the controllers (mirror port) and between the controllers and the drive enclosures or loop switches.
Storage system racks
All storage system components are mounted in a rack. Each configuration includes one enclosure holding both controllers (the controller pair), the disk enclosures, and the FC cables connecting the controllers to the disk enclosures. Each controller pair and all the associated drive enclosures form a single storage system.
The rack provides the capability for mounting 483 mm (19 in) wide controller and drive enclosures.
NOTE:
Racks and rack-mountable components are typically described using “U” measurements. “U” measurements are used to designate panel or enclosure heights. One “U” is a standard 44.45 mm (1.75 in).
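As a quick illustration of the arithmetic, a short sketch converting “U” heights to millimeters (1U = 44.45 mm / 1.75 in per the EIA-310 rack standard; the helper name is invented for the example):

```python
# Rack-unit arithmetic sketch; u_to_mm is an illustrative helper, not an HP tool.
U_MM = 44.45  # millimeters per rack unit (EIA-310 standard)

def u_to_mm(units: int) -> float:
    """Return the panel-opening height in millimeters for a given U count."""
    return units * U_MM

# A 42U rack provides 42 * 44.45 mm = 1866.9 mm of mounting space.
# The overall cabinet (frame, casters, top) is taller, e.g. about
# 200 cm for the 42U rack described in this guide.
```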
The racks provide the following:
• Unique frame and rail design—Allows fast assembly, easy mounting, and outstanding structural integrity.
• Thermal integrity—Front-to-back natural convection cooling is greatly enhanced by the innovative multi-angled design of the front door.
• Security provisions—The front and rear doors are lockable, which prevents unauthorized entry.
• Flexibility—Provides easy access to hardware components for operation monitoring.
• Custom expandability—Several options allow for quick and easy expansion of the racks to create a custom solution.
Rack configurations
Each system configuration contains several disk enclosures included in the storage system. See Figure 15 for a typical EVA6400/8400 rack configuration. The standard rack is the 42U HP 10000 G2 Series rack. The EVA6400/8400 is also supported with 22U, 36U, 42U 5642, and 47U racks. The 42U 5642 is a field-installed option, and the 47U rack must be assembled onsite because the cabinet height creates shipping difficulties.
For more information on HP rack offerings for the EVA6400/8400, see http://h18004.www1.hp.com/products/servers/proliantstorage/racks/index.html.
Figure 15 Storage system hardware components – back view
Power distribution–Modular PDUs
NOTE:
This section describes the most common power distribution system for EVA6400/8400s. For information about other options, see the HP power distribution units website: http://h18004.www1.hp.com/products/servers/proliantstorage/power-protection/pdu.html
AC power is distributed to the rack through a dual Power Distribution Unit (PDU) assembly mounted at the bottom rear of the rack. The characteristics of the fully-redundant rack power configuration are as follows:
• Each PDU is connected to a separate circuit breaker-protected, 30-A AC site power source (100–127 VAC or 220–240 VAC ±10%, 50 or 60 Hz ±5%). The most common compatible 60-Hz and 50-Hz wall receptacles are:
• NEMA L6-30R receptacle, 3-wire, 30-A, 60-Hz
• NEMA L5-30R receptacle, 3-wire, 30-A, 60-Hz
• IEC 309 receptacle, 3-wire, 30-A, 50-Hz
• The standard power configuration for any Enterprise Virtual Array rack is the fully redundant configuration. Implementing this configuration requires:
• Two separate circuit breaker-protected, 30-A site power sources with a compatible wall receptacle.
• One dual PDU assembly. Each PDU connects to a different wall receptacle.
• Four to eight (depending on the rack) Power Distribution Modules (PDM) per rack. PDMs are split evenly on both sides of the rack. Each set of PDMs connects to a different PDU.
• Eight PDMs for 42U, 47U, and 42U 5642 racks
• Six PDMs for 36U racks
• Four PDMs for 22U racks
• The drive enclosure power supplies on the left (PS 1) connect to the PDMs on the left with a gray, 66 cm (26 in) power cord.
• The drive enclosure power supplies on the right (PS 2) connect to the PDMs on the right with a black, 66 cm (26 in) power cord.
• Each controller has a left and right power supply. The left power supply of each controller should be connected to the left PDMs and the right power supply to the right PDMs.
NOTE:
Drive enclosures, when purchased separately, include one 50 cm black cable and one 50 cm gray cable.
The configuration provides complete power redundancy and eliminates all single points of failure for both the AC and DC power distribution.
CAUTION:
Operating the array with a single PDU will result in the following conditions:
• No redundancy
• Louder controllers and disk enclosures due to increased fan speed
• HP Command View EVA will continuously display a warning condition, making issue monitoring a labor-intensive task
Although the array is capable of doing so, HP strongly recommends that an array operating with a single PDU should not:
• Be put into production
• Remain in this state for more than 24 hours
PDUs
Each Enterprise Virtual Array rack has either a 50- or 60-Hz dual PDU mounted at the bottom rear of the rack. The PDUs are mounted back-to-back, with plugs facing toward the front (see Figure 16) and breaker switches facing the back (see Figure 17).
• The standard 50-Hz PDU cable has an IEC 309, 3-wire, 30-A, 50-Hz connector.
• The standard 60-Hz PDU cable has a NEMA L6-30P, 3-wire, 30-A, 60-Hz connector.
If these connectors are not compatible with the site power distribution, you must replace the PDU power cord cable connector. One option is the NEMA L5-30R receptacle, 3-wire, 30-A, 60-Hz connector.
Each of the two PDU power cables has an AC power source specific connector. The circuit breaker-controlled PDU outputs are routed to a group of four AC receptacles. The voltages are then routed to PDMs, sometimes called AC power strips, mounted on the two vertical rails in the rear of the rack.
Figure 16 Dual PDU—front view
1. PDU B
2. PDU A
3. AC receptacles
4. Power cord
5. Power receptacle schematic
Figure 17 Dual PDU—rear view
1. PDU B
2. PDU A
3. Main circuit breaker
4. Circuit breakers
PDU A
PDU A connects to AC PDM A1–A4.
A PDU A failure:
• Disables the power distribution circuit
• Removes power from the left side of the rack
• Disables disk enclosure PS 1
• Disables the left power supplies in the controllers
PDU B
PDU B connects to AC PDM B1–B4.
A PDU B failure:
• Disables the power distribution circuit
• Removes power from the right side of the rack
• Disables disk enclosure PS 2
• Disables the right power supplies in the controllers
PDMs
Depending on the rack, there can be up to eight PDMs mounted in the rear of the rack:
• The PDMs on the left vertical rail connect to PDU A
• The PDMs on the right vertical rail connect to PDU B
Each PDM has seven AC receptacles. The PDMs distribute the AC power from the PDUs to the enclosures. Two power sources exist for each controller pair and disk enclosure. If a PDU fails, the system will remain operational.
CAUTION:
The AC power distribution within a rack ensures a balanced load to each PDU and reduces the possibility of an overload condition. Changing the cabling to or from a PDM could cause an overload condition. HP supports only the AC power distributions defined in this user guide.
Figure 18 Rack PDM
1. Power receptacles
2. AC power connector
Rack AC power distribution
The power distribution in an Enterprise Virtual Array rack is the same for all variants. The site AC input voltage is routed to the dual PDU assembly mounted in the lower rear of the rack. Each PDU distributes AC to a maximum of four PDMs mounted on the left and right vertical rails (see Figure 19).
• PDMs A1 through A4 connect to receptacles A through D on PDU A. Power cords connect these
PDMs to the left power supplies on the disk enclosures and to the left power supplies on the controllers.
• PDMs B1 through B4 connect to receptacles A through D on PDU B. Power cords connect these
PDMs to the right power supplies on the disk enclosures and to the right power supplies on the controllers.
NOTE:
The locations of the PDUs and the PDMs are the same in all racks.
Figure 19 Rack AC power distribution
1. PDM 1
2. PDM 2
3. PDM 3
4. PDM 4
5. PDU 1
6. PDM 5
7. PDM 6
8. PDM 7
9. PDM 8
10. PDU 2
Rack System/E power distribution components
AC power is distributed to the Rack System/E rack through Power Distribution Units (PDU) mounted on the two vertical rails in the rear of the rack. Up to four PDUs can be mounted in the rack—two mounted on the right side of the cabinet and two mounted on the left side.
Each of the PDU power cables has an AC power source specific connector. The circuit breaker-controlled PDU outputs are routed to a group of ten AC receptacles. The storage system components plug directly into the PDUs.
Rack AC power distribution
The power distribution configuration in a Rack System/E rack depends on the number of storage systems installed in the rack. If one storage system is installed, only two PDUs are required. If multiple storage systems are installed, four PDUs are required.
The site AC input voltage is routed to each PDU mounted in the rack. Each PDU distributes AC through ten receptacles directly to the storage system components.
• PDUs 1 and 3 (optional) are mounted on the left side of the cabinet. Power cords connect these
PDUs to the number 1 disk enclosure power supplies and to the controllers.
• PDUs 2 and 4 (optional) are mounted on the right side of the cabinet. Power cords connect these
PDUs to the number 2 disk enclosure power supplies and to the controllers.
For additional information on power distribution support, see the following website: http://h18004.www1.hp.com/products/servers/proliantstorage/power-protection/pdu.html
Moving and stabilizing a rack
WARNING!
The physical size and weight of the rack requires a minimum of two people to move. If one person tries to move the rack, injury may occur.
To ensure stability of the rack, always push on the lower half of the rack. Be especially careful when moving the rack over any bump (e.g., door sills, ramp edges, carpet edges, or elevator openings).
When the rack is moved over a bump, there is a potential for it to tip over.
Moving the rack requires a clear, uncarpeted pathway that is at least 80 cm (31.5 in) wide for the
60.3 cm (23.7 in) wide, 42U rack. A vertical clearance of 203.2 cm (80 in) should ensure sufficient clearance for the 200 cm (78.7 in) high, 42U rack.
CAUTION:
Ensure that no vertical or horizontal restrictions exist that would prevent rack movement without damaging the rack.
Make sure that all four leveler feet are in the fully raised position. This process will ensure that the casters support the rack weight and the feet do not impede movement.
Each rack requires an area 600 mm (23.62 in) wide and 1000 mm (39.37 in) deep (see Figure 20).
Figure 20 Single rack configuration floor space requirements
1. Front door
2. Rear door
3. Rack width 600 mm
4. Service area width 813 mm
5. Rear service area depth 300 mm
6. Rack depth 1000 mm
7. Front service area depth 406 mm
8. Total rack depth 1706 mm
If the feet are not fully raised, complete the following procedure:
1. Raise one foot by turning the leveler foot hex nut counterclockwise until the weight of the rack is fully on the caster (see Figure 21).
2. Repeat step 1 for the other feet.

Figure 21 Raising a leveler foot
1. Hex nut
2. Leveler foot

3. Carefully move the rack to the installation area and position it to provide the necessary service areas (see Figure 20).
To stabilize the rack when it is in the final installation location:
1. Use a wrench to lower the foot by turning the leveler foot hex nut clockwise until the caster does not touch the floor. Repeat for the other feet.
2. After lowering the feet, check the rack to ensure it is stable and level.
3. Adjust the feet as necessary to ensure the rack is stable and level.
2 Enterprise Virtual Array startup
This chapter describes the procedures to install and configure the Enterprise Virtual Array. When these procedures are complete, you can begin using your storage system.
NOTE:
Installation of the Enterprise Virtual Array should be done only by an HP authorized service representative. The information in this chapter provides an overview of the steps involved in the installation and configuration of the storage system.
EVA8400 storage system connections
Figure 22 shows how the storage system is connected to other components of the storage solution.
• The HSV450 controllers connect via four host ports (FP1, FP2, FP3, and FP4) to the Fibre Channel fabrics. The hosts that will access the storage system are connected to the same fabrics.
• The HP Command View EVA management server also connects to the fabric.
• The controllers connect through two loop pairs to the drive enclosures. Each loop pair consists of two independent loops, each capable of managing all the disks should one loop fail.
Figure 22 EVA8400 configuration
1 Network interconnection
2 Management server
3 Non-host
4 Host A
5 Host B
6 Fabric 1
7 Fabric 2
8 Controller A
9 Controller B
10 Cache mirror ports
11 Drive enclosure 1
12 Drive enclosure 2
13 Drive enclosure 3
EVA6400 storage system connections
Figure 23 shows a typical EVA6400 SAN topology:
• The HSV400 controllers connect via four host ports (FP1, FP2, FP3, and FP4) to the Fibre Channel fabrics. The hosts that will access the storage system are connected to the same fabrics.
• The HP Command View EVA management server also connects to both fabrics.
• The controllers connect through one loop pair to the drive enclosures. The loop pair consists of two independent loops, each capable of managing all the disks should one loop fail.
Figure 23 EVA6400 configuration
1 Network interconnection
2 Management server
3 Non-host
4 Host A
5 Host B
6 Fabric 1
7 Fabric 2
8 Controller A
9 Controller B
10 Cache mirror ports
11 Drive enclosure 1
12 Drive enclosure 2
Direct connect
NOTE:
Direct connect is currently supported on Microsoft Windows only.
Direct connect provides a lower cost solution for smaller configurations. When using direct connect, the storage system controllers are connected directly to the host(s), not to SAN Fibre Channel switches.
Make sure the following requirements are met when configuring your environment for direct connect:
• A management server running HP Command View EVA must be connected to one port on each EVA controller. The management host must use dual HBAs for redundancy.
• To provide redundancy, it is recommended that dual HBAs be used for each additional host connected to the storage system. Using this configuration, up to four hosts (including the management host) can be connected to an EVA6400/8400.
• The Host Port Configuration must be set to Direct Connect using the OCP.
• HP Continuous Access EVA cannot be used with direct connect configurations.
• The HSV controller firmware cannot differentiate between an empty host port and a failed host port in a direct connect configuration. As a result, the Connection state dialog box on the Controller
Properties window displays Connection failed for an empty host port. To fix this problem, insert an optical loop-back connector into the empty host port; the Connection state will display Connected.
For more information about optical loop-back connectors, contact your HP-authorized service provider.
iSCSI connection configurations
The EVA6400/8400 supports iSCSI attach configurations using the HP MPX100. Both fabric connect and direct connect are supported for iSCSI configurations. For complete information on iSCSI configurations, go to the following website: http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html
NOTE:
An iSCSI connection configuration supports mixed direct connect and fabric connect.
Fabric connect iSCSI
Fabric connect provides an iSCSI solution for EVA Fibre Channel configurations in which all EVA ports must remain on the FC fabric, or in which the EVA is also used for HP Continuous Access EVA.
Make sure the following requirements are met when configuring your MPX100 environment for fabric connect:
• A maximum of two MPX100s per storage system are supported.
• Each storage system port can connect to a maximum of two MPX100 FC ports.
• Each MPX100 FC port can connect to a maximum of one storage system port.
• In a single MPX100 configuration, if both MPX100 FC ports are used, each port must be connected to one storage system controller.
• In a dual MPX100 configuration, at least one FC port from each MPX100 must be connected to one storage system controller.
• The Host Port Configuration must be set to Fabric Connect using the OCP.
• HP Continuous Access EVA is supported on the same storage system connected in MPX100 fabric connect configurations.
Direct connect iSCSI
Direct connect provides a lower cost solution for configurations that want to dedicate controller ports to iSCSI I/O. When using direct connect, the storage system controllers are connected directly to the
MPX100(s), not to SAN Fibre Channel switches.
Make sure the following requirements are met when configuring your MPX100 environment for direct connect:
• A maximum of two MPX100s per storage system are supported.
• In a single MPX100 configuration, if both MPX100 FC ports are used, each port must be connected to one storage system controller.
• In a dual MPX100 configuration, at least one FC port from each MPX100 must be connected to one storage system controller.
• The Host Port Configuration must be set to Direct Connect using the OCP.
• HP Continuous Access EVA cannot be used with direct connect configurations.
• EVAs cannot be connected directly to each other to create an HP Continuous Access EVA configuration. However, hosts can be directly connected to the EVA in an HP Continuous Access configuration. At least one port from each array in an HP Continuous Access EVA configuration must be connected to a fabric for remote array connectivity.
Procedures for getting started
Step                                                            Responsibility
1. Gather information and identify all related storage          Customer
   documentation.
2. Contact an authorized service representative for             Customer
   hardware configuration information.
3. Enter the World Wide Name (WWN) into the OCP.                HP Service Engineer
4. Configure HP Command View EVA.                               HP Service Engineer
5. Prepare the hosts.                                           Customer
6. Configure the system through HP Command View EVA.            HP Service Engineer
7. Make virtual disks available to their hosts. See the         HP Service Engineer
   storage system software documentation for each host's
   operating system.
Gathering information
The following items should be available when installing and configuring an Enterprise Virtual Array.
They provide information necessary to set up the storage system successfully.
• HP StorageWorks 6400/8400 Enterprise Virtual Array World Wide Name label (shipped with the storage system)
• HP StorageWorks Enterprise Virtual Array Release Notes
Locate these items and keep them handy. You will need them for the procedures in this manual.
Host information
Make a list of information for each host computer that will be accessing the storage system. You will need the following information for each host:
• The LAN name of the host
• A list of World Wide Names of the FC adapters, also called host bus adapters, through which the host will connect to the fabric that provides access to the storage system, or to the storage system directly if using direct connect.
• Operating system type
• Available LUN numbers
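The worldwide names gathered above may appear in different formats depending on the tool that reports them. As a sketch (the helper name is illustrative, not part of any HP tool), a colon-separated WWN can be normalized to the plain 16-digit uppercase form used on the WWN labels:

```shell
# normalize_wwn is a hypothetical helper: it strips colon separators and
# upcases the hex digits of a Fibre Channel worldwide name.
normalize_wwn() {
  echo "$1" | tr -d ':' | tr 'a-f' 'A-F'
}

# On Linux hosts, port WWNs are typically readable from sysfs, e.g.:
#   cat /sys/class/fc_host/host*/port_name
normalize_wwn '50:06:0b:00:00:3b:47:8a'
```

This is only a convenience sketch; the authoritative WWN for the controller pair is the one printed on the labels attached to the rack.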
Setting up a controller pair using the OCP
NOTE:
This procedure should be performed by an HP authorized service representative.
Two pieces of data must be entered during initial setup using the controller OCP:
• World Wide Name (WWN)—Required to complete setup.
• Storage system password — Optional. A password provides security allowing only specific instances of HP Command View EVA to access the storage system.
The OCP on either controller can be used to input the WWN and password data. For more information
about the OCP, see “Operator Control Panel” on page 19.
Table 9 lists the push-button functions when entering the WWN, WWN checksum, and password data.
Table 9 Push button functions

Button   Function
▲        Selects a character by scrolling up through the character list one character at a time.
►        Moves forward one character. If you accept an incorrect character, you can move through all 16 characters, one character at a time, until you display the incorrect character. You can then change the character.
▼        Selects a character by scrolling down through the character list one character at a time.
◄        Moves backward one character.
ESC      Returns to the default display.
ENTER    Accepts all the characters entered.
Entering the WWN
Fibre Channel protocol requires that each controller pair have a unique WWN. This 16-character alphanumeric name identifies the controller pair on the storage system. Two WWN labels attached to the rack identify the storage system WWN and checksum. See Figure 24.
NOTE:
• The WWN is unique to a controller pair and cannot be used for any other controller pair or device anywhere on the network.
• This is the only WWN applicable to any controller installed in a specific physical location, even a replacement controller.
• Once a WWN is assigned to a controller, you cannot change the WWN while the controller is part of the same storage system.
Figure 24 Location of the World Wide Name labels
1. World Wide Name labels
Complete the following procedure to assign the WWN to each pair of controllers.
1. Turn off the power switches on both controllers.
2. Apply power to the rack.
3. Turn on the power switches on both controllers.
NOTE:
Notifications of the startup test steps that have been executed are displayed while the controller is booting. It may take up to two minutes for the steps to display. The default WWN entry display has a 0 in each of the 16 positions.
4. Press ▲ or ▼ until the first character of the WWN is displayed. Press ► to accept this character and select the next.
5. Repeat step 4 to enter the remaining characters.
6. Press Enter to accept the WWN and select the checksum entry mode.
Entering the WWN checksum
The second part of the WWN entry procedure is to enter the two-character checksum, as follows.
1. Verify that the initial WWN checksum displays 0 in both positions.
2. Press ▲ or ▼ until the first checksum character is displayed. Press ► to accept this character and select the second character.
3. Press ▲ or ▼ until the second character is displayed. Press Enter to accept the checksum and exit.
4. Verify that the default display is automatically selected. This indicates that the checksum is valid.
NOTE:
If you enter an incorrect WWN or checksum, the system will reject the data and you must repeat the procedure.
Entering the storage system password
The storage system password feature enables you to restrict management access to the storage system.
The password must meet the following requirements:
• 8 to 16 characters in length
• Can include upper or lower case letters
• Can include numbers 0 - 9
• Can include the following characters: ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ ] ^ _ ` { | }
• Cannot include the following characters: space ~ \
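The rules above can be checked before entering a password at the OCP. The following sketch is illustrative only (the function name is hypothetical, and it tests just the length rule and the forbidden characters, not the full allowed-character set):

```shell
# check_eva_password is a hypothetical helper: it enforces the 8-16
# character length rule and rejects space, ~ and backslash per the
# list above. It does not validate the complete allowed-character set.
check_eva_password() {
  pw=$1
  len=${#pw}
  if [ "$len" -lt 8 ] || [ "$len" -gt 16 ]; then
    echo "invalid: must be 8 to 16 characters"
    return 1
  fi
  case $pw in
    *' '*|*'~'*|*'\'*)
      echo "invalid: space, ~ and backslash are not allowed"
      return 1 ;;
  esac
  echo "ok"
}

check_eva_password 'Evapass#1'    # → ok
```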
Complete the following procedure to enter the password:
1. Select a unique password of 8 to 16 characters.
2. With the default menu displayed, press ▼ three times to display System Password.
3. Press ► to display Change Password?
4. Press Enter for yes.
   The default password, AAAAAAAA~~~~~~~~, is displayed.
5. Press ▲ or ▼ to select the desired character.
6. Press ► to accept this character and select the next character.
7. Repeat steps 5 and 6 to enter the remaining password characters.
8. Press Enter to enter the password and return to the default display.
Installing HP Command View EVA
HP Command View EVA is installed on a management server. Installation may be skipped if the latest version of HP Command View EVA is running. Verify the latest version at the HP website: http://h18006.www1.hp.com/products/storage/software/cmdvieweva/index.html
See the HP StorageWorks HP Command View EVA Installation Guide for more information.
Installing optional EVA software licenses
If you purchased optional EVA software, it will be necessary to install the license. Optional software available for the Enterprise Virtual Array includes HP Business Copy EVA and HP Continuous Access
EVA. Installation instructions are included with the license.
3 Configuring application servers
Overview
This chapter provides general connectivity information for all supported operating systems. Where applicable, an OS-specific section is included to provide more information.
Clustering
Clustering is connecting two or more computers together so that they behave like a single computer.
Clustering may also be used for parallel processing, load balancing, and fault tolerance.
See the Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.com/storage/spock) for the clustering software supported on each operating system.
NOTE:
For OpenVMS, you must make the Console LUN ID and OS unit IDs unique throughout the entire
SAN, not just the controller subsystem.
Multipathing
Multipathing software provides a multiple-path environment for your operating system. See the following website for more information: http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html
See the Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.com/storage/spock) for the multipathing software supported on each operating system.
Installing Fibre Channel adapters
For all operating systems, supported Fibre Channel adapters (FCAs) must be installed in the host server in order to communicate with the EVA.
NOTE:
Traditionally, the adapter that connects the host server to the fabric is called a host bus adapter (HBA).
The server HBA used with the EVA6400/8400 is called a Fibre Channel adapter (FCA). You might also see the adapter called a Fibre Channel host bus adapter (Fibre Channel HBA) in other related documents.
Follow the hardware installation rules and conventions for your server type. The FCA is shipped with its own documentation for installation. See that documentation for complete instructions. You need the following items to begin:
• FCA boards and the manufacturer’s installation instructions
• Server hardware manual for instructions on installing adapters
• Tools to service your server
The FCA board plugs into a compatible I/O slot (PCI, PCI-X, PCI-E) in the host system. For instructions on plugging in boards, see the hardware manual.
You can download the latest FCA firmware from the following website: http://www.hp.com/support/downloads. Enter HBA in the Search Products box and then select your product. See the Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.com/storage/spock) for supported FCAs by operating system.
Testing connections to the EVA
After installing the FCAs, you can create and test connections between the host server and the EVA.
For all operating systems, you must:
• Add hosts
• Create and present virtual disks
• Verify virtual disks from the hosts
The following sections provide information that applies to all operating systems. For OS-specific details, see the applicable operating system section.
Adding hosts
To add hosts using HP Command View EVA:
1. Retrieve and note the worldwide names (WWNs) for each FCA on your host. You need this information to select the host FCAs in HP Command View EVA.
2. Use HP Command View EVA to add the host and each FCA installed in the host system.
NOTE:
To add hosts using HP Command View EVA, you must add each FCA installed in the host. Select
Add Host to add the first adapter. To add subsequent adapters, select Add Port. Ensure that you add a port for each active FCA.
3. Select the applicable operating system for the host mode.
Table 10 Select the host mode for the applicable operating system

Operating system      Host mode selection
HP-UX                 HP-UX
IBM AIX               IBM AIX
Linux                 Linux
Mac OS X              Linux
OpenVMS               OVMS
Sun Solaris           Sun Solaris
VMware                VMware
Windows               Microsoft Windows
Windows Server 2008   Microsoft Windows 2008
Citrix Xen Server     Linux
4. Check the Host folder in the Navigation pane of HP Command View EVA to verify that the host FCAs are added.
NOTE:
More information about HP Command View EVA is available at the following website: http://www.hp.com/support/manuals. Click Storage Software under Storage, and then select HP StorageWorks Command View EVA software under Storage Device Management Software.
Creating and presenting virtual disks
To create and present virtual disks to the host server:
1. From HP Command View EVA, create a virtual disk on the EVA6400/8400.
2. Specify values for the following parameters:
   • Virtual disk name
   • Vraid level
   • Size
3. Present the virtual disk to the host you added.
4. If applicable (OpenVMS), select a LUN number if you chose a specific LUN on the Virtual Disk Properties window.
Verifying virtual disk access from the host
To verify that the host can access the newly presented virtual disks, restart the host or scan the bus.
If you are unable to access the virtual disk:
• Verify that all cabling to the switch, EVA, and host is properly connected.
• Verify all firmware levels. For more information, see the Enterprise Virtual Array QuickSpecs and associated release notes.
• Ensure that you are running a supported version of the host operating system. For more information, see the HP StorageWorks Enterprise Virtual Array Compatibility Reference.
• Ensure that the correct host is selected as the operating system for the virtual disk in HP Command
View EVA.
• Ensure that the host WWN number is set correctly (to the host you selected).
• Verify the FCA switch settings.
• Verify that the virtual disk is presented to the host.
• Verify zoning.
Configuring virtual disks from the host
After you create the virtual disks on the EVA6400/8400 and rescan or restart the host, follow the host-specific conventions for configuring these new disk resources. For instructions, see the documentation included with your server.
HP-UX
Scanning the bus
To scan the FCA bus and display information about the EVA6400/8400 devices:
1. Enter the # ioscan -fnCdisk command to start the rescan.
   All new virtual disks become visible to the host.
2. Assign device special files to the new virtual disks using the insf command:
   # insf -e
NOTE:
Uppercase E reassigns device special files to all devices. Lowercase e assigns device special files only to the new devices—in this case, the virtual disks.
The following is a sample output from an ioscan command:

# ioscan -fnCdisk
Class     I  H/W Path                  Driver    S/W State  H/W Type   Description
==================================================================================
ba        3  0/6                       lba       CLAIMED    BUS_NEXUS  Local PCI Bus Adapter (782)
fc        2  0/6/0/0                   td        CLAIMED    INTERFACE  HP Tachyon XL2 FC Mass Stor Adap
                                                           /dev/td2
fcp       0  0/6/0/0.39                fcp       CLAIMED    INTERFACE  FCP Domain
ext_bus   4  0/6/0/0.39.13.0.0         fcparray  CLAIMED    INTERFACE  FCP Array Interface
target    5  0/6/0/0.39.13.0.0.0       tgt       CLAIMED    DEVICE
ctl       4  0/6/0/0.39.13.0.0.0.0     sctl      CLAIMED    DEVICE     HP HSV400
                                                           /dev/rscsi/c4t0d0
disk     22  0/6/0/0.39.13.0.0.0.1     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c4t0d1   /dev/rdsk/c4t0d1
ext_bus   5  0/6/0/0.39.13.255.0       fcpdev    CLAIMED    INTERFACE  FCP Device Interface
target    8  0/6/0/0.39.13.255.0.0     tgt       CLAIMED    DEVICE
ctl      20  0/6/0/0.39.13.255.0.0.0   sctl      CLAIMED    DEVICE     HP HSV400
                                                           /dev/rscsi/c5t0d0
ext_bus  10  0/6/0/0.39.28.0.0         fcparray  CLAIMED    INTERFACE  FCP Array Interface
target    9  0/6/0/0.39.28.0.0.0       tgt       CLAIMED    DEVICE
ctl      40  0/6/0/0.39.28.0.0.0.0     sctl      CLAIMED    DEVICE     HP HSV400
                                                           /dev/rscsi/c10t0d0
disk     46  0/6/0/0.39.28.0.0.0.2     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c10t0d2  /dev/rdsk/c10t0d2
disk     47  0/6/0/0.39.28.0.0.0.3     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c10t0d3  /dev/rdsk/c10t0d3
disk     48  0/6/0/0.39.28.0.0.0.4     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c10t0d4  /dev/rdsk/c10t0d4
disk     49  0/6/0/0.39.28.0.0.0.5     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c10t0d5  /dev/rdsk/c10t0d5
disk     50  0/6/0/0.39.28.0.0.0.6     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c10t0d6  /dev/rdsk/c10t0d6
disk     51  0/6/0/0.39.28.0.0.0.7     sdisk     CLAIMED    DEVICE     HP HSV400
                                                           /dev/dsk/c10t0d7  /dev/rdsk/c10t0d7
Creating volume groups on a virtual disk using vgcreate
You can create a volume group on a virtual disk by issuing a vgcreate command. This builds the virtual group block data, allowing HP-UX to access the virtual disk. See the pvcreate, vgcreate, and lvcreate man pages for more information about creating disks and file systems. Use the following procedure to create a volume group on a virtual disk:
NOTE:
Italicized text is for example only.
1. To create the physical volume on a virtual disk, enter a command similar to the following:
   # pvcreate -f /dev/rdsk/c32t0d1
2. To create the volume group directory for a virtual disk, enter a command similar to the following:
   # mkdir /dev/vg01
3. To create the volume group node for a virtual disk, enter a command similar to the following:
   # mknod /dev/vg01/group c 64 0x010000
   The designation 64 is the major number that equates to the 64-bit mode. The 0x01 is the minor number in hex, which must be unique for each volume group.
4. To create the volume group for a virtual disk, enter a command similar to the following:
   # vgcreate -f /dev/vg01 /dev/dsk/c32t0d1
5. To create the logical volume for a virtual disk, enter a command similar to the following:
   # lvcreate -L1000 /dev/vg01/lvol1
   In this example, a 1-GB logical volume (lvol1) is created.
6. Create a file system for the new logical volume by creating a file system directory name and inserting a mount tab entry into /etc/fstab.
7. Run the mkfs command on the new logical volume. The new file system is ready to mount.
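The minor number in step 3 must be unique for each volume group. One convention is sketched below; the vgNN-to-0xNN0000 mapping generalizes the vg01/0x010000 example above and is an assumption of this sketch, not an HP rule:

```shell
# vg_minor is a hypothetical helper: it prints the mknod minor number
# for the Nth volume group, following the 0x010000-for-vg01 pattern.
vg_minor() {
  printf '0x%02x0000\n' "$1"
}

vg_minor 1    # minor for /dev/vg01/group → 0x010000
vg_minor 2    # minor for /dev/vg02/group → 0x020000
```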
IBM AIX
Accessing IBM AIX utilities
You can access IBM AIX utilities such as the Object Data Manager (ODM) on the following website: http://www.hp.com/support/downloads
In the Search products box, enter MPIO, and then click AIX MPIO PCMA for HP Arrays. Select IBM
AIX, and then select your software storage product.
Adding hosts
To determine the active FCAs on the IBM AIX host, enter:
# lsdev -Cc adapter |grep fcs
Output similar to the following appears:

fcs0  Available 1H-08  FC Adapter
fcs1  Available 1V-08  FC Adapter

To display the details of an adapter, enter:

# lscfg -vl fcs0
fcs0  U0.1-P1-I5/Q1  FC Adapter
      Part Number.................80P4543
      EC Level....................A
      Serial Number...............1F4280A419
      Manufacturer................001F
      Feature Code/Marketing ID...280B
      FRU Number..................80P4544
      Device Specific.(ZM)........3
      Network Address.............10000000C940F529
      ROS Level and ID............02881914
      Device Specific.(Z0)........1001206D
      Device Specific.(Z1)........00000000
      Device Specific.(Z2)........00000000
      Device Specific.(Z3)........03000909
      Device Specific.(Z4)........FF801315
      Device Specific.(Z5)........02881914
      Device Specific.(Z6)........06831914
      Device Specific.(Z7)........07831914
      Device Specific.(Z8)........20000000C940F529
      Device Specific.(Z9)........TS1.90A4
      Device Specific.(ZA)........T1D1.90A4
      Device Specific.(ZB)........T2D1.90A4
      Device Specific.(YL)........U0.1-P1-I5/Q1
Creating and presenting virtual disks
When creating and presenting virtual disks to an IBM AIX host, be sure to:
1. Set the OS unit ID to 0.
2. Set Preferred path/mode to No Preference.
3. Select a LUN number if you chose a specific LUN on the Virtual Disk Properties window.
Verifying virtual disks from the host
To scan the IBM AIX bus, enter:
# cfgmgr -v
The -v switch (verbose output) requests a full output.
To list all EVA devices, use the lsdev command (for example, # lsdev -Cc disk). Output similar to the following is displayed:

hdisk1  Available 1V-08-01  HP HSV400 Enterprise Virtual Array
hdisk2  Available 1V-08-01  HP HSV400 Enterprise Virtual Array
hdisk3  Available 1V-08-01  HP HSV400 Enterprise Virtual Array
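To pick out just the hdisk names from output like the above, a simple awk filter works. The sample text below is illustrative; on a live AIX host, pipe the real command output instead:

```shell
# Filter EVA (HSV400) entries out of lsdev-style output and print only
# the hdisk device names. The sample variable is made-up example data.
sample='hdisk1 Available 1V-08-01 HP HSV400 Enterprise Virtual Array
hdisk2 Available 1V-08-01 HP HSV400 Enterprise Virtual Array
hdisk9 Available 1V-08-01 Other Vendor Disk'

printf '%s\n' "$sample" | awk '/HSV400/ {print $1}'
```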
Linux
Driver failover mode
If you use the INSTALL command without command options, the driver’s failover mode depends on whether a QLogic driver is already loaded in memory (listed in the output of the lsmod command).
Possible driver failover mode scenarios include:
• If an hp_qla2x00src driver RPM is already installed, the new driver RPM uses the failover of the previous driver package.
• If there is no QLogic driver module (qla2xxx module) loaded, the driver defaults to failover mode.
This is also true if an inbox driver is loaded that does not list output in the /proc/scsi/qla2xxx directory.
• If there is a driver loaded in memory that lists the driver version in /proc/scsi/qla2xxx but no driver RPM has been installed, then the driver RPM loads the driver in the failover mode that the driver in memory currently uses.
Installing a QLogic driver
NOTE:
The HP Emulex driver kit performs in a similar manner; use ./INSTALL -h to list all supported arguments.
1. Download the appropriate driver kit for your distribution. The driver kit file is in the format hp_qla2x00-yyyy-mm-dd.tar.gz.
2. Copy the driver kit to the target system.
3. Uncompress and untar the driver kit using the following command:
   # tar zxvf hp_qla2x00-yyyy-mm-dd.tar.gz
4. Change directory to the hp_qla2x00-yyyy-mm-dd directory.
5. Execute the INSTALL command.
The INSTALL command syntax varies depending on your configuration.
If a previous driver kit is installed, you can invoke the INSTALL command without any arguments.
To use the currently loaded configuration:
# ./INSTALL
To force the installation to failover mode, use the -f flag:
# ./INSTALL -f
To force the installation to single-path mode, use the -s flag:
# ./INSTALL -s
To list all supported arguments, use the -h flag:
# ./INSTALL -h
6. The INSTALL script installs the appropriate driver RPM for your configuration, as well as the appropriate fibreutils RPM. Once the INSTALL script is finished, you will either have to reload the QLogic driver modules (qla2xxx, qla2300, qla2400, qla2xxx_conf) or reboot your server.
The commands to reload the driver are:
# /opt/hp/src/hp_qla2x00src/unload.sh
# modprobe qla2xxx_conf
# modprobe qla2xxx
# modprobe qla2300
# modprobe qla2400
The command to reboot the server is:
# reboot
CAUTION:
If the boot device is attached to the SAN, you must reboot the host.
7. To verify which RPM versions are installed, use the rpm command with the -q option. For example:
   # rpm -q hp_qla2x00src
   # rpm -q fibreutils
Upgrading Linux components
If you have any installed components from a previous solution kit or driver kit, such as the qla2x00
RPM, invoke the INSTALL script with no arguments, as shown in the following example:
# ./INSTALL
To manually upgrade the components, select one of the following kernel distributions:
• For 2.4 kernel based distributions, use version 7.xx.
• For 2.6 kernel based distributions, use version 8.xx.
Depending on the kernel version you are running, upgrade the driver RPM manually as follows:
• For the hp_qla2x00src RPM:
# rpm -Uvh hp_qla2x00src-version-revision.linux.rpm
• For fibreutils RPM, you have two options:
• To upgrade the driver:
# rpm -Uvh fibreutils-version-revision.linux.architecture.rpm
• To remove the existing driver, and install a new driver:
# rpm -e fibreutils
# rpm -ivh fibreutils-version-revision.linux.architecture.rpm
Upgrading qla2x00 RPMs
If you have a qla2x00 RPM from HP installed on your system, use the INSTALL script to upgrade from qla2x00 RPMs. The INSTALL script removes the old qla2x00 RPM and installs the new hp_qla2x00src while keeping the driver settings from the previous installation. The script takes no arguments. Use the following command to run the INSTALL script:
# ./INSTALL
NOTE:
If you are going to use the failover functionality of the QLA driver, uninstall Secure Path and reboot before you attempt to upgrade the driver. Failing to do so can cause a kernel panic.
Detecting third-party storage
The preinstallation portion of the RPM contains code to check for non-HP storage. This check prevents the RPM from overwriting any settings that another vendor may be using. You can skip the detection process by setting the environment variable HPQLA2X00FORCE to y by issuing the following commands:
# HPQLA2X00FORCE=y
# export HPQLA2X00FORCE
You can also use the -F option of the INSTALL script by entering the following command:
# ./INSTALL -F
Compiling the driver for multiple kernels
If your system has multiple kernels installed on it, you can compile the driver for all the installed kernels by setting the INSTALLALLKERNELS environment variable to y and exporting it by issuing the following commands:
# INSTALLALLKERNELS=y
# export INSTALLALLKERNELS
You can also use the -a option of the INSTALL script as follows:
# ./INSTALL -a
Uninstalling the Linux components
To uninstall the components, use the INSTALL script with the -u option as shown in the following example:
# ./INSTALL -u
To manually uninstall all components, or to uninstall just one of the components, use one or all of the following commands:
# rpm -e fibreutils
# rpm -e hp_qla2x00
# rpm -e hp_qla2x00src
Using the source RPM
In some cases, you may have to build a binary hp_qla2x00 RPM from the source RPM and use that manual binary build in place of the scripted hp_qla2x00src RPM. You need to do this if your production servers do not have the kernel sources and gcc installed.
If you need to build a binary RPM to install, you will need a development machine with the same kernel as your targeted production servers. You can then install the resulting binary RPM on your production servers.
NOTE:
The binary RPM that you build works only for the kernel and configuration that you build on (and possibly some errata kernels). Ensure that you use the 7.xx version of the hp_qla2x00 source RPM for 2.4 kernel-based distributions and the 8.xx version of the hp_qla2x00 source RPM for 2.6
kernel-based distributions.
Use the following procedure to create the binary RPM from the source RPM:
1. Select one of the following options:
   • Enter the # ./INSTALL -S command. The binary RPM creation is complete; you do not need to perform the remaining steps.
   • Install the source RPM by issuing the # rpm -ivh hp_qla2x00-version-revision.src.rpm command, and then continue with step 2.
2. Select one of the following directories:
   • For Red Hat distributions, use the /usr/src/redhat/SPECS directory.
   • For SUSE distributions, use the /usr/src/packages/SPECS directory.
3. Build the RPM by using the # rpmbuild -bb hp_qla2x00.spec command.
NOTE:
In some of the older Linux distributions, the RPM command contains the RPM build functionality.
At the end of the command output, the following message appears:
"Wrote: ...rpm".
This line identifies the location of the binary RPM.
4. Copy the binary RPM to the production servers and install it using the following command:
   # rpm -ivh hp_qla2x00-version-revision.architecture.rpm
Verifying virtual disks from the host
To ensure that the LUN is recognized after a virtual disk is presented to the host, do one of the following:
• Reboot the host.
• Enter the /opt/hp/hp_fibreutils/hp_rescan -a command.
To verify that the host can access the virtual disks, enter the # more /proc/scsi/scsi command.
The output lists all SCSI devices detected by the server. An EVA6400/8400 LUN entry looks similar to the following:
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: HSV400            Rev:
  Type:   Direct-Access                     ANSI SCSI revision: 02
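A quick way to confirm how many EVA LUNs the server sees is to count the HSV400 model lines. The sample below is illustrative; on a live host, read /proc/scsi/scsi itself:

```shell
# Count EVA6400/8400 LUN entries in /proc/scsi/scsi-style output.
# The sample variable stands in for the real file contents.
sample='Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: HSV400     Rev:
  Type:   Direct-Access              ANSI SCSI revision: 02'

printf '%s\n' "$sample" | grep -c 'Model: HSV400'
```

On a live system the equivalent would be grep -c 'Model: HSV400' /proc/scsi/scsi, assuming the legacy SCSI proc interface is enabled in the kernel.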
OpenVMS
Updating the AlphaServer console code, Integrity Server console code, and Fibre Channel FCA firmware
The firmware update procedure varies for the different server types. To update firmware, follow the procedure described in the Installation instructions that accompany the firmware images.
Verifying the Fibre Channel adapter software installation
A supported FCA should already be installed in the host server. The procedure to verify that the console recognizes the installed FCA varies for the different server types. Follow the procedure described in the Installation instructions that accompany the firmware images.
Console LUN ID and OS unit ID
HP Command View EVA software contains a box for the Console LUN ID on the Initialized Storage
System Properties window.
It is important that you set the Console LUN ID to a number other than zero. If the Console LUN ID is not set or is set to zero, the OpenVMS host will not recognize the controller pair. The Console LUN ID for a controller pair must be unique within the SAN. Table 11 shows an example of a Console LUN ID.
You can set the OS unit ID on the Virtual Disk Properties window. The default setting is 0, which disables the ID field. To enable the ID field, you must specify a value between 1 and 32767, ensuring that the number you enter is unique within the SAN. An OS unit ID greater than 9999 cannot be served by MSCP.
CAUTION:
It is possible to enter a duplicate Console LUN ID or OS unit ID. Ensure that the Console LUN ID and OS unit ID you enter are not already in use. A duplicate Console LUN ID or OS unit ID can allow the OpenVMS host to corrupt data due to confusion about LUN identity. It can also prevent the host from recognizing the controllers.
Table 11 Comparing console LUN ID to OS unit ID

ID type                       System display
Console LUN ID set to 100     $1$GGA100:
OS unit ID set to 50          $1$DGA50:
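As a quick illustration of the naming convention in Table 11, the device names can be derived from the two IDs in a small shell sketch. The $1$ allocation-class prefix and the GGA/DGA device classes are taken from the table; the script itself is illustrative, not part of the product.

```shell
# Derive the OpenVMS device names shown in Table 11 from the two IDs.
console_lun_id=100
os_unit_id=50

# GGA = console LUN device class, DGA = Fibre Channel virtual disk class.
console_dev="\$1\$GGA${console_lun_id}:"
virtual_dev="\$1\$DGA${os_unit_id}:"

echo "$console_dev"
echo "$virtual_dev"
```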
Adding OpenVMS hosts
To obtain WWNs on AlphaServers, do one of the following:
• Enter the show device fg/full command at the OpenVMS prompt.
• Use the WWIDMGR -SHOW PORT command at the SRM console.
HP StorageWorks 6400/8400 Enterprise Virtual Array User Guide 53
To obtain WWNs on Integrity servers, do one of the following:
• Enter the show device fg/full command at the OpenVMS prompt.
• Use the following procedure from the server console:
1. From the EFI Boot Manager, select EFI Shell.
2. In the EFI Shell, enter the drivers command at the Shell> prompt.
   A list of EFI drivers loaded in the system is displayed.
3. In the listing, find the line for the FCA for which you want to get the WWN information.
   For a QLogic HBA, look for HP 4 Gb Fibre Channel Driver or HP 2 Gb Fibre Channel Driver as the driver name. For example:
   D
   R           T    D
               Y  C I
               P  F A
   V  VERSION  E  G G #D #C DRIVER NAME                         IMAGE NAME
   == ======== =  = = == == =================================== ===================
   22 00000105 B  X X  1  1 HP 4 Gb Fibre Channel Driver        PciROM:0F:01:01:002
4. Note the driver handle in the first column (22 in the example).
5. Using the driver handle, enter the drvcfg driver_handle command to find the Device Handle (Ctrl). For example:
   Shell> drvcfg 22
   Configurable Components
   Drv[22] Ctrl[25] Lang[eng]
6. Using the driver and device handles, enter the drvcfg -s driver_handle device_handle command to invoke the EFI Driver Configuration utility. For example:
   Shell> drvcfg -s 22 25
7. From the Fibre Channel Driver Configuration Utility list, select item 8 (Info) to find the WWN for that particular port.
   Output similar to the following appears:
   Adapter Path: Acpi(PNP0002,0300)/Pci(01|01)
   Adapter WWPN: 50060B00003B478A
   Adapter WWNN: 50060B00003B478B
   Adapter S/N:  3B478A
Scanning the bus
Enter the following command to scan the bus for the OpenVMS virtual disk:
$ MC SYSMAN IO AUTO/LOG
A listing of LUNs detected by the scan process is displayed. Verify that the new LUNs appear on the list.
NOTE:
The EVA6400/8400 console LUN can be seen without any virtual disks presented. The LUN appears as $1$GGAx (where x represents the console LUN ID on the controller).
After the system scans the fabric for devices, you can verify the devices with the SHOW DEVICE command:
$ SHOW DEVICE NAME-OF-VIRTUAL-DISK/FULL
For example, to display device information on a virtual disk named $1$DGA50, enter $ SHOW DEVICE $1$DGA50:/FULL.
The following output is displayed:
Disk $1$DGA50: (BRCK18), device type HSV210, is online, file-oriented device,
shareable, device has multiple I/O paths, served to cluster via MSCP Server,
error logging is enabled.

Error count                    2    Operations completed            4107
Owner process                 ""    Owner UIC                   [SYSTEM]
Owner process ID        00000000    Dev Prot         S:RWPL,O:RWPL,G:R,W
Reference count                0    Default buffer size              512
Current preferred CPU Id       0    Fastpath                           1
WWID  01000010:6005-08B4-0010-70C7-0001-2000-2E3E-0000
Host name              "BRCK18"    Host type, avail   AlphaServer DS10 466 MHz, yes
Alternate host name     "VMS24"    Alt. type, avail   HP rx3600 (1.59GHz/9.0MB), yes
Allocation class               1

I/O paths to device            9

Path PGA0.5000-1FE1-0027-0A38 (BRCK18), primary path.
  Error count                  0    Operations completed             145
Path PGA0.5000-1FE1-0027-0A3A (BRCK18).
  Error count                  0    Operations completed             338
Path PGA0.5000-1FE1-0027-0A3E (BRCK18).
  Error count                  0    Operations completed             276
Path PGA0.5000-1FE1-0027-0A3C (BRCK18).
  Error count                  0    Operations completed             282
Path PGB0.5000-1FE1-0027-0A39 (BRCK18).
  Error count                  0    Operations completed             683
Path PGB0.5000-1FE1-0027-0A3B (BRCK18).
  Error count                  0    Operations completed             704
Path PGB0.5000-1FE1-0027-0A3D (BRCK18).
  Error count                  0    Operations completed             853
Path PGB0.5000-1FE1-0027-0A3F (BRCK18), current path.
  Error count                  2    Operations completed             826
Path MSCP (VMS24).
  Error count                  0    Operations completed               0
You can also use the SHOW DEVICE DG command to display a list of all Fibre Channel disks presented to the OpenVMS host.
NOTE:
Restarting the host system shows any newly presented virtual disks because a hardware scan is performed as part of the startup.
If you are unable to access the virtual disk, do the following:
• Check the switch zoning database.
• Use HP Command View EVA to verify the host presentations.
• Check the SRM console firmware on AlphaServers.
• Ensure that the correct host is selected for this virtual disk and that a unique OS unit ID is used in HP Command View EVA.
Configuring virtual disks from the OpenVMS host
To set up disk resources under OpenVMS, initialize and mount the virtual disk resource as follows:
1. Enter the following command to initialize the virtual disk:
   $ INITIALIZE name-of-virtual-disk volume-label
2. Enter the following command to mount the disk:
   $ MOUNT/SYSTEM name-of-virtual-disk volume-label
NOTE:
The /SYSTEM switch is used for a single stand-alone system, or in clusters if you want to mount the disk only to select nodes. You can use the /CLUSTER switch for OpenVMS clusters. However, if you encounter problems in a large cluster environment, HP recommends that you enter a MOUNT/SYSTEM command on each cluster node.
3. View the virtual disk’s information with the SHOW DEVICE command. For example, enter the following command sequence to configure a virtual disk named data1 in a stand-alone environment:
$ INIT $1$DGA1: data1
$ MOUNT/SYSTEM $1$DGA1: data1
$ SHOW DEV $1$DGA1: /FULL
Setting preferred paths
You can set or change the preferred path used for a virtual disk by using the SET DEVICE /PATH command. For example:
$ SET DEVICE $1$DGA83: /PATH=PGA0.5000-1FE1-0007-9772/SWITCH
This allows you to control which path each virtual disk uses.
You can use the SHOW DEV/FULL command to display the path identifiers.
For additional information on using OpenVMS commands, see the OpenVMS help file:
$ HELP TOPIC
For example, the following command displays help information for the MOUNT command:
$ HELP MOUNT
Sun Solaris
NOTE:
The information in this section applies to both SPARC and x86 versions of the Sun Solaris operating system.
Loading the operating system and software
Follow the manufacturer’s instructions for loading the operating system (OS) and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
Configuring FCAs with the Sun SAN driver stack
Sun-branded FCAs are supported only with the Sun SAN driver stack. The Sun SAN driver stack is also compatible with current Emulex FCAs and QLogic FCAs. Support information is available on the
Sun website: http://www.sun.com/io_technologies/index.html
To determine which non-Sun branded FCAs HP supports with the Sun SAN driver stack, see the latest
MPxIO application notes or contact your HP representative.
Update instructions depend on the version of your OS:
• For Solaris 9, install the latest Sun StorEdge SAN software with associated patches. To automate the installation, use the Sun-supplied install script available at: http://www.sun.com/download/
  1. Under Systems Administration, select Storage Management.
  2. Under Browse Products, select StorageTek SAN 4.4.
  3. Reboot the host after the required software/patches have been installed. No further activity is required after adding new LUNs once the array ports have been configured with the cfgadm -c command. For example, for two FCAs:
     cfgadm -c configure c3
     cfgadm -c configure c4
  4. Increase retry counts and reduce I/O time by adding the following entries to the /etc/system file:
     set ssd:ssd_retry_count=0xa
     set ssd:ssd_io_time=0x1e
  5. Reboot the system to load the newly added parameters.
• For Solaris 10, use the Sun Update Manager to install the latest patches (see http://www.sun.com/service/sunupdate/). Reboot the host once the required software/patches have been installed. No further activity is required after adding new LUNs, because controller and LUN recognition are automatic for Solaris 10.
  1. For Solaris 10 x86/64, ensure patch 138889-03 or later is installed. For SPARC, ensure patch 138888-03 or later is installed.
  2. Increase the retry counts by adding the following line to the /kernel/drv/sd.conf file:
     sd-config-list="HP HSV","retries-timeout:10";
  3. Reduce the I/O timeout value to 30 seconds by adding the following line to the /etc/system file:
     set sd:sd_io_time=0x1e
  4. Reboot the system to load the newly added parameters.
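The two Solaris 10 file edits above can be scripted so they are safe to run more than once. A minimal sketch, demonstrated on a temporary file rather than the live /kernel/drv/sd.conf and /etc/system; on a real host, back up both files first.

```shell
# Append a configuration line to a file only if it is not already there.
add_line() {
  grep -qF -- "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# Demonstrate idempotence on a temporary stand-in for sd.conf.
sd_conf=$(mktemp)
add_line "$sd_conf" 'sd-config-list="HP HSV","retries-timeout:10";'
add_line "$sd_conf" 'sd-config-list="HP HSV","retries-timeout:10";'  # second call adds nothing
wc -l < "$sd_conf"
```

Running the helper twice leaves exactly one copy of the tuning entry in the file.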
Configuring Emulex FCAs with the lpfc driver
To configure Emulex FCAs with the lpfc driver:
1. Ensure that you have the latest supported version of the lpfc driver (see http://www.hp.com/storage/spock). You must sign up for an HP Passport to enable access. For more information on how to use SPOCK, see the Getting Started Guide (http://h20272.www2.hp.com/Pages/spock_overview/introduction.html).
2. Edit the following parameters in the /kernel/drv/lpfc.conf driver configuration file to set up the FCAs for a SAN infrastructure:
   topology=2;
   scan-down=0;
   nodev-tmo=60;
   linkdown-tmo=60;
3. If using a single FCA and no multipathing, edit the following parameter to reduce the risk of data loss in case of a controller reboot:
   nodev-tmo=120;
4. If using Veritas Volume Manager (VxVM) DMP for multipathing (single or multiple FCAs), edit the following parameter to ensure proper VxVM behavior:
   no-device-delay=0;
5. In a fabric topology, use persistent bindings to bind a SCSI target ID to the world wide port name (WWPN) of an array port. This ensures that the SCSI target IDs remain the same when the system reboots. Set persistent bindings by editing the configuration file or by using the lputil utility.
   NOTE:
   HP recommends that you assign target IDs in sequence, and that the EVA has the same target ID on each host in the SAN.
   The following example for an EVA6400/8400 illustrates the binding of targets 20 and 21 (lpfc instance 2) to WWPNs 50001fe100270938 and 50001fe100270939, and the binding of targets 30 and 31 (lpfc instance 0) to WWPNs 50001fe10027093a and 50001fe10027093b:
   fcp-bind-WWPN="50001fe100270938:lpfc2t20",
   "50001fe100270939:lpfc2t21",
   "50001fe10027093a:lpfc0t30",
   "50001fe10027093b:lpfc0t31";
NOTE:
Replace the WWPNs in the example with the WWPNs of your array ports.
6. For each LUN that will be accessed, add an entry to the /kernel/drv/sd.conf file. For example, if you want to access LUNs 1 and 2 through all four paths, add the following entries to the end of the file:
   name="sd" parent="lpfc" target=20 lun=1;
   name="sd" parent="lpfc" target=21 lun=1;
   name="sd" parent="lpfc" target=30 lun=1;
   name="sd" parent="lpfc" target=31 lun=1;
   name="sd" parent="lpfc" target=20 lun=2;
   name="sd" parent="lpfc" target=21 lun=2;
   name="sd" parent="lpfc" target=30 lun=2;
   name="sd" parent="lpfc" target=31 lun=2;
7. Reboot the server to implement the changes to the configuration files.
8. If LUNs have been preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after configuring the file.
NOTE:
The lpfc driver is not supported for Sun StorEdge Traffic Manager/Sun Storage Multipathing.
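The per-LUN sd.conf entries shown in step 6 can be generated rather than typed by hand. A minimal sketch using the targets and LUNs from the example above; substitute your own target IDs and LUN range.

```shell
# Emit one sd.conf entry per LUN per bound target, in the same order as
# the step 6 example (all four targets for LUN 1, then all four for LUN 2).
entries=$(
  for lun in 1 2; do
    for target in 20 21 30 31; do
      printf 'name="sd" parent="lpfc" target=%s lun=%s;\n' "$target" "$lun"
    done
  done
)
printf '%s\n' "$entries"
```

Redirect the output with `>>` onto the end of /kernel/drv/sd.conf after reviewing it.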
Configuring QLogic FCAs with the qla2300 driver
See the latest Enterprise Virtual Array release notes or contact your HP representative to determine which QLogic FCAs and which driver version HP supports with the qla2300 driver. To configure
QLogic FCAs with the qla2300 driver:
1. Ensure that you have the latest supported version of the qla2300 driver (see http://www.hp.com/storage/spock).
2. You must sign up for an HP Passport to enable access. For more information on how to use SPOCK, see the Getting Started Guide (http://h20272.www2.hp.com/Pages/spock_overview/introduction.html).
3. Edit the following parameters in the /kernel/drv/qla2300.conf driver configuration file to set up the FCAs for a SAN infrastructure (HBA0 is used in the example, but the parameter edits apply to all HBAs):
   NOTE:
   If you are using a Sun-branded QLogic FCA, the configuration file is /kernel/drv/qlc.conf.
   hba0-connection-options=1;
   hba0-link-down-timeout=60;
   hba0-persistent-binding-configuration=1;
NOTE:
If you are using Solaris 10, editing the persistent binding parameter is not required.
4. If using a single FCA and no multipathing, edit the following parameters to reduce the risk of data loss in case of a controller reboot:
   hba0-login-retry-count=60;
   hba0-port-down-retry-count=60;
   hba0-port-down-retry-delay=2;
   The hba0-port-down-retry-delay parameter is not supported with the 4.13.01 driver; the time between retries is fixed at approximately 2 seconds.
5. In a fabric topology, use persistent bindings to bind a SCSI target ID to the world wide port name (WWPN) of an array port. This ensures that the SCSI target IDs remain the same when the system reboots. Set persistent bindings by editing the configuration file or by using the SANsurfer utility.
   NOTE:
   Persistent binding is not required for QLogic FCAs if you are using Solaris 10.
   The following example for an EVA6400/8400 illustrates the binding of targets 20 and 21 (hba instance 0) to WWPNs 50001fe100270938 and 50001fe100270939, and the binding of targets 30 and 31 (hba instance 1) to WWPNs 50001fe10027093a and 50001fe10027093b:
   hba0-SCSI-target-id-20-fibre-channel-port-name="50001fe100270938";
   hba0-SCSI-target-id-21-fibre-channel-port-name="50001fe100270939";
   hba1-SCSI-target-id-30-fibre-channel-port-name="50001fe10027093a";
   hba1-SCSI-target-id-31-fibre-channel-port-name="50001fe10027093b";
NOTE:
Replace the WWPNs in the example with the WWPNs of your array ports.
6. If the qla2300 driver is version 4.13.01 or earlier, for each LUN that users will access, add an entry to the /kernel/drv/sd.conf file:
   name="sd" class="scsi" target=20 lun=1;
   name="sd" class="scsi" target=21 lun=1;
   name="sd" class="scsi" target=30 lun=1;
   name="sd" class="scsi" target=31 lun=1;
   If LUNs are preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after changing the configuration file.
7. If the qla2300 driver is version 4.15 or later, verify that the following or a similar entry is present in the /kernel/drv/sd.conf file:
   name="sd" parent="qla2300" target=2048;
   To perform LUN rediscovery after configuring the LUNs, use the following command:
   /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300 -s
8. Reboot the server to implement the changes to the configuration files.
NOTE:
The qla2300 driver is not supported for Sun StorEdge Traffic Manager/Sun Storage Multipathing.
To configure a QLogic FCA using the Sun SAN driver stack, see “Configuring FCAs with the Sun SAN driver stack” on page 57.
Fabric setup and zoning
To set up the fabric and zoning:
1. Verify that the Fibre Channel cable is connected and firmly inserted at the array ports, host ports, and SAN switch.
2. Through the Telnet connection to the switch or switch utilities, verify that the WWNs of the EVA ports and FCAs are present and online.
3. Create a zone consisting of the WWNs of the EVA ports and FCAs, and then add the zone to the active switch configuration.
4. Enable and then save the new active switch configuration.
NOTE:
The steps required to configure the switch vary among vendors. For more information, see the HP StorageWorks SAN Design Reference Guide, available for download on the HP website: http://www.hp.com/go/sandesign.
Sun StorEdge Traffic Manager (MPxIO)/Sun Storage Multipathing
Sun StorEdge Traffic Manager (MPxIO)/Sun Storage Multipathing can be used for FCAs configured with the Sun SAN driver depending on the operating system version, architecture (SPARC/x86), and
patch level installed. For configuration details, see the HP StorageWorks MPxIO application notes, available on the HP support website: http://www.hp.com/support/manuals. In the Search products box, enter MPxIO, and then click the search symbol. Select the application notes from the search results.
NOTE:
MPxIO is included in the SPARC and x86 Sun SAN driver. A separate installation of MPxIO is not required.
Configuring with Veritas Volume Manager
The Dynamic Multipathing (DMP) feature of Veritas Volume Manager (VxVM) can be used for all FCAs and all drivers. EVA disk arrays are certified for VxVM support. When you install FCAs, ensure that the driver parameters are set correctly. Failure to do so can result in a loss of path failover in DMP.
The DMP feature requires an Array Support Library (ASL) and an Array Policy Module (APM). The ASL/APM enables Asymmetric Logical Unit Access (ALUA). LUNs are accessed through the primary controller. After enablement, use the vxdisk list <device> command to determine the primary and secondary paths. For VxVM 4.1 (MP1 or later), you must download the ASL/APM from the Symantec/Veritas support site for installation on the host. This download and installation is not required for VxVM 5.0 or later.
To download and install the ASL/APM from the Symantec/Veritas support website:
1. Go to http://support.veritas.com.
2. Enter Storage Foundation for UNIX/Linux in the Product Lookup box.
3. Enter EVA in the Enter key words or phrase box, and then click the search symbol.
4. To further narrow the search, select Solaris in the Platform box.
5. Read the TechNotes and follow the instructions to download and install the ASL/APM.
6. Run vxdctl enable to notify VxVM of the changes.
7. Verify the configuration of VxVM as shown in Example 1 (the output may be slightly different depending on your VxVM version and the array configuration).
Example 1. Verifying the VxVM configuration
# vxddladm listsupport all | grep HP
libvxhpevale.so     HP              HSV300, HSV400, HSV450

# vxddladm listsupport libname=libvxhpevale.so
ATTR_NAME                ATTR_VALUE
=======================================================================
LIBNAME                  libvxhpevale.so
VID                      HP
PID                      HSV300, HSV400, HSV450
ARRAY_TYPE               A/A-A-HP
ARRAY_NAME               EVA4400, EVA6400, EVA8400

# vxdmpadm listapm all | grep HP
dmphpalua       dmphpalua       1       A/A-A-HP        Active

# vxdmpadm listapm dmphpalua
Filename:       dmphpalua
APM name:       dmphpalua
APM version:    1
Feature:        VxVM
VxVM version:   41
Array Types Supported:  A/A-A-HP
Depending Array Types:  A/A-A
State:          Active

# vxdmpadm listenclosure all
ENCLR_NAME    ENCLR_TYPE    ENCLR_SNO           STATUS       ARRAY_TYPE
============================================================================
Disk          Disk          DISKS               CONNECTED    Disk
EVA84000      EVA8400       50001FE1002709E0    CONNECTED    A/A-A-HP
By default, the EVA I/O policy is set to Round-Robin. For VxVM 4.1 MP1, only one path is used for the I/Os with this policy. Therefore, HP recommends that you change the I/O policy to Adaptive in order to use all paths to the LUN on the primary controller. Example 2 shows the commands you can use to check and change the I/O policy.
Example 2. Setting the iopolicy
# vxdmpadm getattr arrayname EVA8400 iopolicy
ENCLR_NAME DEFAULT CURRENT
============================================
EVA84000 Round-Robin Round-Robin
# vxdmpadm setattr arrayname EVA8400 iopolicy=adaptive
# vxdmpadm getattr arrayname EVA8400 iopolicy
ENCLR_NAME DEFAULT CURRENT
============================================
EVA84000 Round-Robin Adaptive
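As a sketch of how the enclosure listing can be parsed before changing the policy, the following runs awk against a canned copy of the vxdmpadm listenclosure output from Example 1. The column position of ARRAY_TYPE is an assumption to verify against the live output on your host.

```shell
# Canned copy of the two data rows from 'vxdmpadm listenclosure all'.
sample='Disk        Disk       DISKS             CONNECTED    Disk
EVA84000    EVA8400    50001FE1002709E0  CONNECTED    A/A-A-HP'

# Select enclosure names whose array type is A/A-A-HP (fifth column).
enclosures=$(printf '%s\n' "$sample" | awk '$5 == "A/A-A-HP" {print $1}')
echo "$enclosures"

# On a live host you would then run, per enclosure (verify on your system):
#   vxdmpadm setattr enclosure EVA84000 iopolicy=adaptive
```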
Configuring virtual disks from the host
The procedure used to configure the LUN path to the array depends on the FCA driver. For more
information, see “ Installing Fibre Channel adapters ” on page 43.
To identify the WWLUN ID assigned to the virtual disk and/or the LUN assigned by the storage administrator:
• Sun SAN driver, with MPxIO enabled:
  • You can use the luxadm probe command to display the array/node WWN and the associated device files.
  • The WWLUN ID is part of the device file name. For example:
    /dev/rdsk/c5t600508B4001030E40000500000B20000d0s2
  • If you use luxadm display, the LUN is displayed after the device address. For example:
    50001fe1002709e9, 5
• Sun SAN driver, without MPxIO:
  • The EVA WWPN is part of the file name (which helps you to identify the controller). For example:
    /dev/rdsk/c3t50001FE1002709E8d5s2
    /dev/rdsk/c3t50001FE1002709ECd5s2
    /dev/rdsk/c4t50001FE1002709E9d5s2
    /dev/rdsk/c4t50001FE1002709EDd5s2
If you use luxadm probe, the array/node WWN and the associated device files are displayed.
  • You can retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however, it is cumbersome and hard to read. For example:
    09 e8 20 04 00 00 00 00 00 00 35 30 30 30 31 46
    45 31 30 30 32 37 30 39 45 30 35 30 30 30 31 46
    45 31 30 30 32 37 30 39 45 38 36 30 30 35 30 38
    42 34 30 30 31 30 33 30 45 34 30 30 30 30 35 30
    30 30 30 30 42 32 30 30 30 30 00 00 00 00 00 00
    .........50001FE1002709E050001FE1002709E8600508B4001030E40000500000B20000
  • The assigned LUN is part of the device file name. For example:
    /dev/rdsk/c3t50001FE1002709E8d5s2
    You can also retrieve the LUN with luxadm display. The LUN is displayed after the device address. For example:
    50001fe1002709e9, 5
• Emulex (lpfc)/QLogic (qla2300) drivers:
  • You can retrieve the WWPN by checking the assignment in the driver configuration file (the easiest method, because you then know the assigned target) or by using HBAnyware/SANSurfer.
  • You can retrieve the WWLUN ID by using HBAnyware/SANSurfer.
    You can also retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however, it is cumbersome and difficult to read. For example:
    09 e8 20 04 00 00 00 00 00 00 35 30 30 30 31 46
    45 31 30 30 32 37 30 39 45 30 35 30 30 30 31 46
    45 31 30 30 32 37 30 39 45 38 36 30 30 35 30 38
    42 34 30 30 31 30 33 30 45 34 30 30 30 30 35 30
    30 30 30 30 42 32 30 30 30 30 00 00 00 00 00 00
    .........50001FE1002709E050001FE1002709E8600508B4001030E40000500000B20000
  • The assigned LUN is part of the device file name. For example:
    /dev/dsk/c4t20d5s2
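The WWLUN ID embedded in an MPxIO device file name can also be recovered with ordinary string handling, which avoids reading the inquiry hex dump. A sketch using the example device file from above:

```shell
# Example MPxIO device file: the hex string between 'cNt' and 'dNsN'
# is the WWLUN ID.
dev=/dev/rdsk/c5t600508B4001030E40000500000B20000d0s2

# Strip the path, controller/target prefix, and the trailing dNsN suffix.
wwlun=$(echo "$dev" | sed -n 's|.*/c[0-9]*t\([0-9A-Fa-f]*\)d[0-9]*s[0-9]*|\1|p')
echo "$wwlun"
```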
Verifying virtual disks from the host
Verify that the host can access virtual disks by using the format command, as shown in Example 3.
Example 3. Format command
# format
Searching for disks...done
c2t50001FE1002709F8d1: configured with capacity of 1008.00MB
c2t50001FE1002709F8d2: configured with capacity of 1008.00MB
c2t50001FE1002709FCd1: configured with capacity of 1008.00MB
c2t50001FE1002709FCd2: configured with capacity of 1008.00MB
c3t50001FE1002709F9d1: configured with capacity of 1008.00MB
c3t50001FE1002709F9d2: configured with capacity of 1008.00MB
c3t50001FE1002709FDd1: configured with capacity of 1008.00MB
c3t50001FE1002709FDd2: configured with capacity of 1008.00MB
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /pci@1f,4000/scsi@3/sd@0,0
1. c2t50001FE1002709F8d1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709f8,1
2. c2t50001FE1002709F8d2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709f8,2
3. c2t50001FE1002709FCd1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709fc,1
4. c2t50001FE1002709FCd2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709fc,2
5. c3t50001FE1002709F9d1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709f9,1
6. c3t50001FE1002709F9d2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709f9,2
7. c3t50001FE1002709FDd1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709fd,1
8. c3t50001FE1002709FDd2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128>
/pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709fd,2
Specify disk (enter its number):
If you cannot access the virtual disks:
• Verify the zoning.
• For Sun Solaris, verify that the correct WWPNs for the EVA (lpfc, qla2300 driver) have been configured and that the target assignment is matched in /kernel/drv/sd.conf (lpfc and qla2300 4.13.01).
Labeling and partitioning the devices
Label and partition the new devices using the Sun format utility:
CAUTION:
When selecting disk devices, be careful to select the correct disk because using the label/partition commands on disks that have data can cause data loss.
1. Enter the format command at the root prompt to start the utility.
2. Verify that all new devices are displayed. If not, enter quit or press Ctrl+D to exit the format utility.
3. Record the character-type device file names (for example, c1t2d0) for all new disks. You will use this data to create the file systems or to use the file systems with the Solaris or Veritas Volume Manager.
4. When prompted to specify the disk, enter the number of the device to be labeled.
5. When prompted to label the disk, enter Y.
6. Because the virtual geometry of the presented volume varies with size, select autoconfigure as the disk type.
7. If you are not using Veritas Volume Manager, use the partition command to create or adjust the partitions.
8. For each new device, use the disk command to select another disk, and then repeat steps 4 through 7.
9. When you finish labeling the disks, enter quit or press Ctrl+D to exit the format utility.
For more information, see the System Administration Guide: Devices and File Systems for your operating system, available on the Sun website: http://docs.sun.com.
NOTE:
Some format commands are not applicable to the EVA storage systems.
VMware
Installing or upgrading VMware
For installation instructions, see the VMware installation guide for your server.
If you have already installed VMware, use the following procedure to patch or upgrade the system:
1. Extract the upgrade tarball (for example, esx-n.n.n-14182-upgrade.tar.gz) on the system.
2. Boot the system in Linux mode by selecting the Linux boot option from the boot menu selection window.
3. Extract the tar file and enter the following command: upgrade.pl
4. Reboot the system using the default boot option (esx).
Configuring the EVA6400/8400 with VMware host servers
To configure an EVA6400/8400 on a VMware ESX server:
1. Using HP Command View EVA, configure a host for one ESX server.
2. Verify that the Fibre Channel Adapters (FCAs) are populated in the world wide port name (WWPN) list. Edit the WWPN, if necessary.
3. Set the connection type to VMware.
4. To configure additional ports for the ESX server:
   a. Select a host (defined in Step 1).
   b. Select the Ports tab in the Host Properties window.
   c. Add additional ports for the ESX server.
5. Perform one of the following tasks to locate the WWPN:
   • From the service console, enter the wwpn.pl command. Output similar to the following is displayed:
     [root@gnome7 root]# wwpn.pl
     vmhba0: 210000e08b09402b (QLogic) 6:1:0
     vmhba1: 210000e08b0ace2d (QLogic) 6:2:0
     [root@gnome7 root]#
   • Check the SCSI device information section of /proc/scsi/qla2300/X, where X is a bus instance number. Output similar to the following is displayed:
     SCSI Device Information:
     scsi-qla0-adapter-node=200000e08b0b0638;
     scsi-qla0-adapter-port=210000e08b0b0638;
6. Repeat this procedure for each ESX server.
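The adapter-port lines shown in step 5 can be parsed for the WWPN rather than read by eye. A sketch run against a canned copy of the /proc/scsi/qla2300 output above:

```shell
# Canned copy of the SCSI device information section.
sample='scsi-qla0-adapter-node=200000e08b0b0638;
scsi-qla0-adapter-port=210000e08b0b0638;'

# Extract the hex WWPN from the adapter-port line.
wwpn=$(printf '%s\n' "$sample" | sed -n 's/.*adapter-port=\([0-9a-f]*\);/\1/p')
echo "$wwpn"
```

On a live ESX 2.5.x/3.x host, replace the canned sample with `cat /proc/scsi/qla2300/X`.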
Configuring an ESX server
This section provides information about configuring the ESX server.
Loading the FCA NVRAM
The FCA stores configuration information in the non-volatile RAM (NVRAM) cache. You must download the configuration for HP StorageWorks products.
Perform one of the following procedures to load the NVRAM:
• If you have an HP ProLiant blade server:
  1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads, to a virtual floppy. For instructions on creating and using a virtual floppy, see the HP Integrated Lights-Out user guide.
  2. Unzip the file.
  3. Follow the instructions in the readme file to load the NVRAM configuration onto each FCA.
• If you have a blade server other than a ProLiant blade server:
  1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads.
  2. Unzip the file.
  3. Follow the instructions in the readme file to load the NVRAM configuration onto each FCA.
Setting the multipathing policy
You can set the multipathing policy for each LUN or logical drive on the SAN to one of the following:
• Most recently used (MRU)
• Fixed
• Preferred
ESX 2.5.x commands
• The # vmkmultipath –s vmhba0:0:1 –p mru command sets vmhba0:0:1 with an MRU multipathing policy for all LUNs on the SAN.
• The # vmkmultipath -s vmhba1:0:1 -p fixed command sets vmhba1:0:1 with a Fixed multipathing policy.
• The # vmkmultipath -s vmhba1:0:1 -r vmhba2:0:1 -e vmhba2:0:1 command sets and enables vmhba2:0:1 with a Preferred multipathing policy.
ESX 3.x commands
• The # esxcfg-mpath --policy=mru --lun=vmhba0:0:1 command sets vmhba0:0:1 with an MRU multipathing policy.
• The # esxcfg-mpath --policy=fixed --lun=vmhba0:0:1 command sets vmhba1:0:1 with a Fixed multipathing policy.
• The # esxcfg-mpath --preferred --path=vmhba2:0:1 --lun=vmhba2:0:1 command sets vmhba2:0:1 with a Preferred multipathing policy.
ESX 4.x commands
• The # esxcli nmp device setpolicy --device naa.6001438002a56f220001100000710000 --psp VMW_PSP_MRU command sets device naa.6001438002a56f220001100000710000
with an MRU multipathing policy.
• The # esxcli nmp device setpolicy --device naa.6001438002a56f220001100000710000 --psp VMW_PSP_FIXED command sets device naa.6001438002a56f220001100000710000 with a Fixed multipathing policy.
• The # esxcli nmp fixed setpreferred --device naa.6001438002a56f220001100000710000 --path vmhba1:C0:T2:L1 command sets device naa.6001438002a56f220001100000710000 with a Preferred multipathing policy.
NOTE:
Each LUN can be accessed through both EVA storage controllers at the same time; however, each
LUN path is optimized through one controller. To optimize performance, if the LUN multipathing policy is Fixed, all servers must use a path to the same controller.
You can also set the multipathing policy from the VMware Management User Interface (MUI) by clicking the Failover Paths tab in the Storage Management section and then selecting the Edit… link for each LUN whose policy you want to modify.
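For many LUNs, the ESX 4.x setpolicy command can be generated per device instead of typed once per LUN. This sketch only emits the commands against a canned device list; the naa.6001438 IDs are taken from the examples above and will differ on your array, and it deliberately does not run esxcli itself.

```shell
# Canned list of EVA device IDs, one per line (substitute the output of
# 'esxcli nmp device list' filtered to your array's NAA prefix).
sample='naa.6001438002a56f220001100000710000
naa.6001438002a56f220001100000720000'

# Emit one setpolicy command per device for review before running.
cmds=$(printf '%s\n' "$sample" | while read -r dev; do
  echo "esxcli nmp device setpolicy --device $dev --psp VMW_PSP_MRU"
done)
printf '%s\n' "$cmds"
```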
Specifying DiskMaxLUN
The DiskMaxLUN setting specifies the highest-numbered LUN that can be scanned by the ESX server.
• For ESX 2.5.x, the default value is 8. If more than eight LUNs are presented, you must change the setting to an appropriate value. To set DiskMaxLUN, select Options > Advanced Settings in the MUI, and then enter the highest-numbered LUN.
• For ESX 3.x or ESX 4.x, the default value is set to the maximum of 256. To set DiskMaxLUN to a different value, in Virtual Infrastructure Client, select Configuration > Advanced Settings > Disk > Disk.MaxLUN, and then enter the new value.
Verifying connectivity
To verify proper configuration and connectivity to the SAN:
• For ESX 2.5.x, enter the # vmkmultipath -q command.
• For ESX 3.x, enter the # esxcfg-mpath -l command.
• For ESX 4.x, enter the # esxcfg-mpath -b command.
For each LUN, verify that the multipathing policy is set correctly and that each path is marked on. If any paths are marked dead or are not listed, check the cable connections and perform a rescan on the appropriate FCA. For example:
• For ESX 2.5.x, enter the # cos-rescan.sh vmhba0 command.
• For ESX 3.x or ESX 4.x, enter the # esxcfg-rescan vmhba0 command.
If paths or LUNs are still missing, see the VMware or HP StorageWorks documentation for troubleshooting information.
Verifying virtual disks from the host
To verify that the host can access the virtual disks, enter the # more /proc/scsi/scsi command.
The output lists all SCSI devices detected by the server. An EVA6400/8400 LUN entry looks similar to the following:
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: HSV400          Rev:
  Type:   Direct-Access                   ANSI SCSI revision: 02
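As a quick sanity check, the /proc/scsi/scsi listing can also be filtered programmatically. The following sketch is not from the guide; it assumes the three-line stanza layout shown above, and the sample device strings are illustrative:

```python
import re

# Each /proc/scsi/scsi stanza spans three lines: address, vendor/model, type.
STANZA = re.compile(
    r"Host:\s*(?P<host>\S+)\s+Channel:\s*(?P<channel>\d+)\s+"
    r"Id:\s*(?P<id>\d+)\s+Lun:\s*(?P<lun>\d+)\s*\n"
    r"\s*Vendor:\s*(?P<vendor>\S+)\s+Model:\s*(?P<model>\S+)[^\n]*\n"
    r"\s*Type:\s*(?P<type>\S+)",
    re.IGNORECASE,
)

def hp_luns(text):
    """Return the stanzas whose Vendor field is HP."""
    return [m.groupdict() for m in STANZA.finditer(text)
            if m.group("vendor").upper() == "HP"]

sample = """\
Attached devices:
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: HSV400         Rev: 0950
  Type:   Direct-Access                  ANSI SCSI revision: 02
"""
for lun in hp_luns(sample):
    print(lun["host"], lun["vendor"], lun["model"], "LUN", lun["lun"])
```

Filtering on the Vendor column keeps the check independent of the controller model string reported by your array.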
Windows
Verifying virtual disk access from the host
With Windows, you must rescan for new virtual disks to be accessible. After you rescan, you must select the disk type, and then initialize (assign disk signature), partition, format, and assign drive letters or mount points according to standard Windows conventions.
Setting the Pending Timeout value for large cluster configurations
For clusters, if disk resource counts are greater than 8, HP recommends that you increase the Pending Timeout value for each disk resource from 180 seconds to 360 seconds. Changing the Pending Timeout value ensures continuous operation of disk resources across the SAN.
To set the Pending Timeout value:
1. Open Microsoft Cluster Administrator.
2. Select a Disk Group resource in the left pane.
3. Right-click a Disk Resource in the right pane and select Properties.
4. Click the Advanced tab.
5. Change the Pending Timeout value to 360.
6. Click OK.
7. Repeat steps 3-6 for each disk resource.
4 EVA6400/8400 operation
Best practices
For useful information on managing and configuring your storage system, see the HP StorageWorks
Enterprise Virtual Array configuration best practices white paper available from http://h18006.www1.hp.com/storage/arraywhitepapers.html
Operating tips and information
Reserving adequate free space
To ensure efficient storage system operation, a certain amount of unallocated capacity, or free space, should be reserved in each disk group. The recommended amount of free space is influenced by your system configuration. For guidance on how much free space to reserve, see the HP StorageWorks Enterprise Virtual Array configuration best practices white paper.
Using FATA disk drives
FATA drives are designed for lower duty cycle applications such as near-online data replication for backup. These drives should not be used as a replacement for the EVA's high-performance, standard duty cycle Fibre Channel drives. Doing so could shorten the life of the drive.
For useful information on managing and configuring your storage system, see the HP StorageWorks Enterprise Virtual Array configuration best practices white paper available from http://h18006.www1.hp.com/storage/arraywhitepapers.html.
Using solid state disk drives
The following requirements apply to solid state disk (SSD) drives:
• Supported only in the EVA4400 and EVA6400/8400, with controller software version 09500000 or later
• SSD drives must be in a separate disk group
• The SSD disk group supports a minimum of 6 and a maximum of 8 drives per array
• SSD drives can only be configured with Vraid5
• Supported with HP Business Copy EVA
• Snapclones between an SSD disk group and a non-SSD disk group are not supported
• Not supported with HP Continuous Access EVA
• Dynamic Capacity Management extend and shrink features are not supported
Use of these devices in unsupported configurations can lead to unpredictable results, including unstable array operation or data loss.
Maximum LUN size
Table 12 lists the maximum LUN size for each supported operating system.
Table 12 Maximum LUN size

Operating system: Maximum LUN size
HP OpenVMS 7.3-2, 8.2, and 8.3 with Alpha servers: 1 TB
HP OpenVMS 8.2-1, 8.3, and 8.3-1H1 with Integrity servers: 1 TB
HP-UX 11.11: 2 TB
HP-UX 11.23: 2 TB
HP-UX 11.31: 16 ZB
IBM AIX 5.2: 1 TB (AIX 5.2ML06 or earlier); 2 ZB (AIX 5.2ML07 or later)
IBM AIX 5.3: 1 TB (AIX 5.3ML02 or earlier); 2 ZB (AIX 5.3ML03 or later)
IBM AIX 6.1: 2 ZB
Mac OS X 10.x: Less than 32 TB
Mac OS X 11.x: Less than 32 TB
Microsoft Windows Server 2003: 256 TB
Microsoft Windows Server 2008: 256 TB
Novell NetWare 6.5 SPx: 2 TB (minus 512 bytes)
Red Hat Linux 3, 4, and 5: Maximum supported block device of 16 TB as of RH5.1
SUSE Linux Enterprise Server 8, 9, and 10: Maximum block device size of 16 TB for 32-bit systems and 8 EiB for 64-bit systems
Sun Solaris 8, 9, and 10: 2 TB
VMware ESX 3.0.x and 3.5: 2 TB
Citrix Xen: Maximum supported block device of 16 TB
Managing unused ports
When you have unused ports on an EVA, perform the following steps:
1. Place a loopback plug on all unused ports.
2. Change the mode on unused ports from fabric to direct connect.
Failback preference setting for HSV controllers
Table 13 describes the failback preference behavior for the controllers.
Table 13 Failback preference behavior

No preference
• At initial presentation: The units are alternately brought online to Controller A or to Controller B.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are alternately brought online to Controller A or to Controller B.
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.

Path A - Failover Only
• At initial presentation: The units are brought online to Controller A.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller A.
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.

Path B - Failover Only
• At initial presentation: The units are brought online to Controller B.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B.
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.

Path A - Failover/Failback
• At initial presentation: The units are brought online to Controller A.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller A.
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller. After controller restoration, the units that are online to Controller B and set to Path A are brought online to Controller A. This is a one-time occurrence. If the host then moves the LUN using SCSI commands, the LUN will remain where moved.

Path B - Failover/Failback
• At initial presentation: The units are brought online to Controller B.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B.
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller. After controller restoration, the units that are online to Controller A and set to Path B are brought online to Controller B. This is a one-time occurrence. If the host then moves the LUN using SCSI commands, the LUN will remain where moved.
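The rules in Table 13 can be condensed into a small decision function. The sketch below is an illustrative model only, not controller code; the one-time failback move and host-initiated SCSI moves are simplified:

```python
def online_controller(setting, event, cached_on=None, surviving=None, alternate="A"):
    """Which controller a unit comes online to, per Table 13.

    setting: "No preference", "Path A - Failover Only",
             "Path B - Failover/Failback", and so on.
    event:   "initial", "resynch", "failover", or "failback".
    """
    preferred = "A" if "Path A" in setting else "B" if "Path B" in setting else None
    if event == "initial":
        # No preference alternates units across controllers.
        return preferred or alternate
    if event == "resynch":
        # Cached data for the LUN pins it to that controller.
        return cached_on or preferred or alternate
    if event == "failover":
        return surviving
    if event == "failback":
        # Only the Failover/Failback settings move units back after restoration.
        return preferred if "Failover/Failback" in setting else surviving
    raise ValueError(f"unknown event: {event}")
```

For example, `online_controller("Path B - Failover/Failback", "failback", surviving="A")` returns "B", matching the last row of the table.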
Table 14 describes the failback default behavior and supported settings when ALUA-compliant multipath software is running with each operating system. Recommended settings may vary depending on your configuration or environment.
Table 14 Failback settings by operating system

HP-UX
• Default behavior: Host follows the unit (1)
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback

IBM AIX
• Default behavior: Host follows the unit (1)
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback

Linux
• Default behavior: Host follows the unit (1)
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback

Novell NetWare
• Default behavior: Failback performed on the host
• Supported settings: No Preference; Path A/B – Failover Only

OpenVMS
• Default behavior: Host follows the unit
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended)

Sun Solaris
• Default behavior: Host follows the unit (1)
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback

Tru64 UNIX
• Default behavior: Host follows the unit
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended)

VMware
• Default behavior: Host follows the unit (1)
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback

Windows
• Default behavior: Failback performed on the host
• Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback

(1) If preference has been configured to ensure a more balanced controller configuration, the Path A/B – Failover/Failback setting is required to maintain the configuration after a single controller reboot.
Changing virtual disk failover/failback setting
Changing the failover/failback setting of a virtual disk may impact which controller presents the disk.
Table 15 identifies the presentation behavior that results when the failover/failback setting for a virtual disk is changed.
NOTE:
If the new setting causes the presentation of the virtual disk to move to a new controller, any snapshots or snapclones associated with the virtual disk will also be moved.
Table 15 Impact on virtual disk presentation when changing failover/failback setting

No Preference: None. The disk maintains its original presentation.
Path A Failover: If the disk is currently presented on controller B, it is moved to controller A. If the disk is on controller A, it remains there.
Path B Failover: If the disk is currently presented on controller A, it is moved to controller B. If the disk is on controller B, it remains there.
Path A Failover/Failback: If the disk is currently presented on controller B, it is moved to controller A. If the disk is on controller A, it remains there.
Path B Failover/Failback: If the disk is currently presented on controller A, it is moved to controller B. If the disk is on controller B, it remains there.
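Table 15 reduces to one rule: a Path A or Path B setting pins the presentation to that controller, and No Preference leaves it unchanged. A minimal sketch, for illustration only:

```python
def presenting_controller(new_setting, current):
    """Controller presenting the disk after the setting change (per Table 15).

    current is "A" or "B"; new_setting is one of the table's five values.
    """
    if "Path A" in new_setting:
        return "A"
    if "Path B" in new_setting:
        return "B"
    return current  # "No Preference" keeps the original presentation
```

Remember that when the result differs from the current controller, any snapshots or snapclones associated with the virtual disk move with it.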
Implicit LUN transition
Implicit LUN transition automatically transfers management of a virtual disk to the array controller that receives the most read requests for that virtual disk. This improves performance by reducing the overhead incurred when servicing read I/Os on the non-managing controller. Implicit LUN transition is enabled in VCS 4.x and all versions of XCS.
When creating a virtual disk, one controller is selected to manage the virtual disk. Only this managing controller can issue I/Os to a virtual disk in response to a host read or write request. If a read I/O request arrives on the non-managing controller, the read request must be transferred to the managing controller for servicing. The managing controller issues the I/O request, caches the read data, and mirrors that data to the cache on the non-managing controller, which then transfers the read data to the host. Because this type of transaction, called a proxy read, requires additional overhead, it provides less than optimal performance. (There is little impact on a write request because all writes are mirrored in both controllers’ caches for fault protection.)
With implicit LUN transition, when the array detects that a majority of read requests for a virtual disk are proxy reads, the array transitions management of the virtual disk to the non-managing controller.
This improves performance because the controller receiving most of the read requests becomes the managing controller, reducing proxy read overhead for subsequent I/Os.
Implicit LUN transition is disabled for all members of an HP Continuous Access EVA DR group. Because
HP Continuous Access EVA requires that all members of a DR group be managed by the same controller, it would be necessary to move all members of the DR group if excessive proxy reads were detected on any virtual disk in the group. This would impact performance and create a proxy read situation for the other virtual disks in the DR group. Not implementing implicit LUN transition on a DR group may cause a virtual disk in the DR group to have excessive proxy reads.
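The proxy-read mechanism described above can be illustrated with a toy model. The sliding window and majority threshold below are assumptions chosen for illustration; the controller's actual transition heuristic is internal to XCS:

```python
from collections import deque

class VirtualDisk:
    """Toy model of implicit LUN transition driven by proxy reads."""

    def __init__(self, managing="A", window=10):
        self.managing = managing
        self.recent_proxy = deque(maxlen=window)  # True = proxy read

    def read(self, arriving_controller):
        # A read arriving on the non-managing controller is a proxy read.
        self.recent_proxy.append(arriving_controller != self.managing)
        full = len(self.recent_proxy) == self.recent_proxy.maxlen
        if full and sum(self.recent_proxy) > self.recent_proxy.maxlen / 2:
            # Majority of recent reads were proxied: transition management
            # to the controller that is receiving the reads.
            self.managing = arriving_controller
            self.recent_proxy.clear()

vd = VirtualDisk(managing="A", window=10)
for _ in range(10):
    vd.read("B")      # all reads arrive on controller B
print(vd.managing)    # prints "B": management has transitioned
```

Once management transitions, subsequent reads on controller B are serviced locally, which is exactly the overhead reduction the feature provides.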
Storage system shutdown and startup
The storage system is shut down using HP Command View EVA. The shutdown process performs the following functions in the indicated order:
1. Flushes cache
2. Removes power from the controllers
3. Disables cache battery power
4. Removes power from the drive enclosures
5. Disconnects the system from HP Command View EVA
NOTE:
The storage system may take a long time to complete the necessary cache flush during controller shutdown when snapshots are being used. The delay may be particularly long if multiple child snapshots are used, or if there has been a large amount of write activity to the snapshot source virtual disk.
Shutting down the storage system
To shut down the storage system, perform the following steps:
1. Start HP Command View EVA.
2. Select the appropriate storage system in the Navigation pane.
   The Initialized Storage System Properties window for the selected storage system opens.
3. Click Shut down.
   The Shutdown Options window opens.
4. Under System Shutdown, click Power Down. If you want to delay the initiation of the shutdown, enter the number of minutes in the Shutdown delay field.
   The controllers complete an orderly shutdown and then power off. The disk enclosures then power off. Wait for the shutdown to complete.
5. If your management server is an SMA and you are not using it to manage other storage arrays, shut down the SMA. From the SMA user interface, click Settings > Maintenance > Shutdown.
Starting the storage system
To start a storage system, perform the following steps:
1. Verify that each fabric Fibre Channel switch to which the HSV controllers are connected is powered up and fully booted. The power indicator on each switch should be on.
   If you must power up the SAN switches, wait for them to complete their power-on boot process before proceeding. This may take several minutes.
2. Power on the circuit breakers on both EVA rack PDUs, which powers on the controller enclosures and disk enclosures. Verify that all enclosures are operating properly. The status indicator and the power indicator should be on (green).
3. Wait three minutes and then verify that all disk drives are ready. The drive ready indicator and the drive online indicator should be on (green).
4. Verify that the Operator Control Panel (OCP) display on each controller displays the storage system name and the EVA WWN.
5. Start HP Command View EVA and verify connection to the storage system. If the storage system is not visible, click HSV Storage Network in the Navigation pane, and then click Discover in the Content pane to discover the array.
   NOTE:
   If the storage system is still not visible, reboot the management server to re-establish the communication link.
6. Check the storage system status using HP Command View EVA to ensure everything is operating properly. If any status indicator is not normal, check the log files or contact your HP-authorized service provider for assistance.
Saving storage system configuration data
As part of an overall data protection strategy, storage system configuration data should be saved during initial installation, and whenever major configuration changes are made to the storage system.
This includes adding or removing disk drives, creating or deleting disk groups, and adding or deleting virtual disks. The saved configuration data can save substantial time should it ever become necessary to re-initialize the storage system. The configuration data is saved to a series of files stored in a location other than on the storage system.
This procedure can be performed from the management server where HP Command View EVA is installed, or any host that can run HP Storage System Scripting Utility (SSSU) to communicate with HP
Command View EVA.
NOTE:
For more information about using HP SSSU, see the HP StorageWorks Storage System Scripting Utility
Reference. See “Related information” on page 93.
1. Double-click the HP SSSU desktop icon to run the application. When prompted, enter Manager (management server name or IP address), User name, and Password.
2. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
3. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
   The storage system name is case sensitive. If there are spaces between the letters in the name, quotes must enclose the name: for example, SELECT SYSTEM “Large EVA”.
4. Enter CAPTURE CONFIGURATION, specifying the full path and filename of the output files for the configuration data.
   The configuration data is stored in a series of from one to five files, which are SSSU scripts. The file names begin with the name you select, with the restore step appended. For example, if you specify a file name of LargeEVA.txt, the resulting configuration files would be LargeEVA_Step1A.txt, LargeEVA_Step1B.txt, and so on.
   The contents of the configuration files can be viewed with a text editor.
NOTE:
If the storage system contains disk drives of different capacities, the HP SSSU procedures used do not guarantee that disk drives of the same capacity will be exclusively added to the same disk group. If you need to restore an array configuration that contains disks of different sizes and types, you must manually recreate these disk groups. The controller software and the CAPTURE CONFIGURATION command are not designed to automatically restore this type of configuration. For more information, see the HP StorageWorks Storage System Scripting Utility Reference.
Example 4. Saving configuration data using HP SSSU on a Windows host
To save the storage system configuration:
1. Double-click the HP SSSU desktop icon to run the application. When prompted, enter Manager (management server name or IP address), User name, and Password.
2. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
3. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
4. Enter CAPTURE CONFIGURATION pathname\filename, where pathname identifies the location where the configuration files will be saved, and filename is the name used as the prefix for the configuration files: for example, CAPTURE CONFIGURATION c:\EVAConfig\LargeEVA
5. Enter EXIT to close the command window.
Example 5. Restoring configuration data using HP SSSU on a Windows host
To restore the storage system configuration:
1. Double-click the HP SSSU desktop icon to run the application.
2. Enter FILE pathname\filename, where pathname identifies the location where the configuration files were saved and filename is the name of the first configuration file: for example, FILE c:\EVAConfig\LargeEVA_Step1A.txt
3. Repeat the preceding step for each configuration file.
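The interactive sessions in Examples 4 and 5 can also be scripted. The sketch below is an assumption-laden illustration, not HP-supplied tooling: it assumes an sssu binary on the PATH that accepts a FILE command as an argument, and the manager name, credentials, and paths are placeholders.

```python
from pathlib import Path
import subprocess

# Generate an SSSU command script for the capture (placeholder
# manager name, credentials, and output path).
script = "\n".join([
    'SELECT MANAGER mgmtserver USERNAME=admin PASSWORD=secret',
    'LS SYSTEM',
    'SELECT SYSTEM "Large EVA"',
    r'CAPTURE CONFIGURATION c:\EVAConfig\LargeEVA',
    'EXIT',
])
Path("capture_config.txt").write_text(script + "\n")

# Running the capture requires a reachable management server, so the
# call is shown but not executed here:
# subprocess.run(["sssu", "FILE capture_config.txt"], check=True)

# Restore works the same way: feed each generated step file back to
# SSSU, in order, for example:
# subprocess.run(["sssu", r"FILE c:\EVAConfig\LargeEVA_Step1A.txt"], check=True)
```

Keeping the capture script under version control alongside the generated step files gives you a reviewable record of the array configuration over time.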
Adding disk drives to the storage system
As your storage requirements grow, you may be adding disk drives to your storage system. Adding new disk drives is the easiest way to increase the storage capacity of the storage system. Disk drives can be added online without impacting storage system operation.
Creating disk groups
The new disks you add will typically be used to create new disk groups. Although you cannot select which disks will be part of a disk group, you can control this by building the disk groups sequentially.
Add the disk drives required for the first disk group, and then create a disk group using these disk drives. Now add the disk drives for the second disk group, and then create that disk group. This process gives you control over which disk drives are included in each disk group.
NOTE:
Standard and FATA disk drives must be in separate disk groups. Disk drives of different capacities and spindle speeds can be included in the same disk group, but you may want to consider placing them in separate disk groups.
Handling fiber optic cables
This section provides protection and cleaning methods for fiber optic connectors.
Contamination of the fiber optic connectors on either a transceiver or a cable connector can impede the transmission of data. Therefore, protecting the connector tips against contamination or damage is imperative. The tips can be contaminated by touching them, by dust, or by debris. They can be damaged when dropped. To protect the connectors against contamination or damage, use the dust
covers or dust caps provided by the manufacturer. These covers are removed during installation, and are installed whenever the transceivers or cables are disconnected. Cleaning the connectors should remove contamination.
The transceiver dust caps protect the transceivers from contamination. Do not discard the dust covers.
CAUTION:
To avoid damage to the connectors, always install the dust covers or dust caps whenever a transceiver or a fiber cable is disconnected. Remove the dust covers or dust caps from transceivers or fiber cable connectors only when they are connected. Do not discard the dust covers.
To minimize the risk of contamination or damage, do the following:
• Dust covers — Remove and set aside the dust covers and dust caps when installing an I/O module, a transceiver or a cable. Install the dust covers when disconnecting a transceiver or cable.
• When to clean — If a connector may be contaminated, or if a connector has not been protected by a dust cover for an extended period of time, clean it.
• How to clean:
1. Wipe the connector with a lint-free tissue soaked with 100% isopropyl alcohol.
2. Wipe the connector with a dry, lint-free tissue.
3. Dry the connector with moisture-free compressed air.
One of the many sources for cleaning equipment specifically designed for fiber optic connectors is:
Alcoa Fujikura Ltd.
1-888-385-4587 (North America)
011-1-770-956-7200 (International)
Using the OCP
Displaying the OCP menu tree
The Storage System Menu Tree lets you select information to be displayed, configuration settings to change, or procedures to implement. To enter the menu tree, press any navigation push-button when the default display is active.
The menu tree is organized into the following major menus:
• System Info—displays information and configuration settings.
• Fault Management—displays fault information. Information about the Fault Management menu is included in
• Shutdown Options—initiates the procedure for shutting down the system in a logical, sequential manner. Using the shutdown procedures maintains data integrity and avoids the possibility of losing or corrupting data.
• System Password—create a system password to ensure that only authorized personnel can manage the storage system using HP Command View EVA.
To enter and navigate the storage system menu tree:
1. Press any push-button while the default display is in view. System Information becomes the active display.
2. Press to sequence down through the menus.
   Press to sequence up through the menus.
   Press to select the displayed menu.
   Press to return to the previous menu.
NOTE:
To exit any menu, press Esc or wait ten seconds for the OCP display to return to the default display.
Table 16 identifies all the menu options available within the OCP display.
CAUTION:
Many of the configuration settings available through the OCP impact the operating characteristics of the storage system. You should not change any setting unless you understand how it will impact system operation. For more information on the OCP settings, contact your HP-authorized service representative.
Table 16 Menu options within the OCP display

System Information
• Versions
• Host Port Config (Sets Fabric or Direct Connect)
• Device Port Config (Enables/disables device ports)
• I/O Module Config (Enables/disables auto-bypass)
• Loop Recovery Config (Enables/disables recoveries)
• Unbypass Devices
• UUID Unique Half
• Debug Flags
• Print Flags
• Mastership Status (Displays controller role — master or slave)

Fault Management
• Last Fault
• Detail View

Shutdown Options
• Restart
• Power Off
• Uninitialize System

System Password
• Change Password
• Clear Password
• Current Password (Set or not)
Displaying system information
NOTE:
The purpose of this information is to assist the HP-authorized service representative when servicing your system.
The system information displays show the system configuration, including the XCS version, the OCP firmware and application programming interface (API) versions, and the enclosure address bus programmable integrated circuit (PIC) configuration. You can only view, not change, this information.
Displaying versions system information
When you press , the active display is Versions. From the Versions display you can determine the:
• OCP firmware version
• Controller version
• XCS version
NOTE:
The terms PPC, Sprite, Glue, SDC, CBIC, and Atlantis are for development purposes and have no significance for normal operation.
NOTE:
When viewing the software or firmware version information, pressing displays the Versions Menu tree.
To display System Information:
1. The default display alternates between the Storage System Name display and the World Wide Name display.
   Press any push-button to display the Storage System Menu Tree.
2. Press until the desired Versions Menu option appears, and then press or to move to submenu items.
Shutting down the system
CAUTION:
To power off the system for more than 96 hours, use HP Command View EVA.
You can use the Shutdown System function to implement the shutdown methods listed below. These shutdown methods are explained in Table 17.
• Shutting down the controller (see “Shutting the controller down”).
• Restarting the system (see “Restarting the system”).
• Uninitializing the system (see “Uninitializing the system”).
To ensure that you do not mistakenly activate a shutdown procedure, the default state is always NO, indicating do not implement this procedure. As a safeguard, implementing any shutdown method requires you to complete at least two actions.
Table 17 Shutdown methods

Restart System?
Implementing this procedure establishes communications between the storage system and HP Command View EVA. This procedure is used to restore the controller to an operational state where it can communicate with HP Command View EVA.

Power off system?
Implementing this procedure initiates the sequential removal of controller power. This ensures no data is lost. The reasons for implementing this procedure include replacing a drive enclosure.

Uninitialize?
Implementing this procedure will cause the loss of all data. For a detailed discussion of this procedure, see “Uninitializing the system”.
Shutting the controller down
Use the following procedure to access the Shutdown System display and execute a shutdown procedure.
NOTE:
HP Command View EVA is the preferred method for shutting down the controller. Shut down the controller from the OCP only if HP Command View EVA cannot communicate with the controller.
Shutting down the controller from the OCP removes power only from the controller on which the procedure is performed. To restore power, toggle the controller’s power.
CAUTION:
If you decide NOT to power off while working in the Power Off menu, Power Off System NO must be displayed before you press Esc. This reduces the risk of accidentally powering down.
1. Press three times to scroll to the Shutdown Options menu.
2. Press to display Restart.
3. Press to scroll to Power Off.
4. Press to select Power Off.
5. Power off system is displayed. Press Enter to power off the system.
Restarting the system
To restore the controller to an operational state, use the following procedure to restart the system.
1. Press three times to scroll to the Shutdown Options menu.
2. Press to select Restart.
3. Press to display Restart system?.
4. Press Enter to go to Startup.
   No user input is required. The system will automatically initiate the startup procedure and proceed to load the Storage System Name and World Wide Name information from the operational controller.
Uninitializing the system
Uninitializing the system is another way to shut down the system. This action causes the loss of all storage system data. Because HP Command View EVA cannot communicate with the disk drive enclosures, the stored data cannot be accessed.
CAUTION:
Uninitializing the system destroys all user data. The WWN will remain in the controller unless both controllers are powered off. The password will be lost. If the controllers remain powered on until you create another storage system (initialize via GUI), you will not have to re-enter the WWN.
Use the following procedure to uninitialize the system.
1. Press three times to scroll to the Shutdown Options menu.
2. Press to display Restart.
3. Press twice to display Uninitialize System.
4. Press to display Uninitialize?
5. Select Yes and press Enter.
   The system displays Delete all data? Enter DELETE:_______
6. Press the arrow keys to navigate to the open field, type DELETE, and then press ENTER.
   The system uninitializes.
NOTE:
If you do not enter the word DELETE or if you press ESC, the system does not uninitialize.
The bottom OCP line displays Uninit cancelled.
Password options
The password entry options are:
• Entering a password during storage system initialization (see “Entering the storage system password”).
• Displaying the current password.
• Changing a password (see “Changing a password”).
• Removing password protection (see “Clearing a password”).
Changing a password
For security reasons, you may need to change a storage system password. The password must contain 8 to 16 characters.
Use the following procedure to change the password.
NOTE:
Changing a system password on the controller requires changing the password on any HP Command
View EVA with access to the storage system.
1. Select a unique password of 8 to 16 characters.
2. With the default menu displayed, press three times to display System Password.
3. Press to display Change Password?
4. Press Enter for yes.
   The default password, AAAAAAAA~~~~~~~~, is displayed.
5. Press or to select the desired character.
6. Press to accept this character and select the next character.
7. Repeat the process to enter the remaining password characters.
8. Press Enter to enter the password and return to the default display.
Clearing a password
Use the following procedure to remove storage system password protection.
NOTE:
Changing a system password on the controller requires changing the password on any HP Command View EVA with access to the storage system.
1. Press four times to scroll to the System Password menu.
2. Press to display Change Password?
3. Press to scroll to Clear Password.
4. Press to display Clear Password.
5. Press Enter to clear the password.
   The Password cleared message will be displayed.
5 Customer replaceable units
Customer self repair (CSR)
The tables in this section identify which hardware components are customer replaceable. Using HP Insight Remote Support or other diagnostic tools, a support specialist will work with you to diagnose and assess whether a replacement component is required to address a system problem. The specialist will also help you determine whether you can perform the replacement.
Parts only warranty service
Your HP Limited Warranty may include a parts only warranty service. Under the terms of parts only warranty service, HP will provide replacement parts free of charge.
For parts only warranty service, CSR part replacement is mandatory. If you request HP to replace these parts, you will be charged for travel and labor costs.
Best practices for replacing hardware components
The following information will help you replace the hardware components on your storage system successfully.
CAUTION:
Removing a component significantly changes the air flow within the enclosure. All components must be installed for the enclosure to cool properly. If a component fails, leave it in place in the enclosure until a new component is available to install.
Component replacement videos
To assist you in replacing the components, videos have been produced of the procedures. To view the videos, go to the following website and navigate to your product: http://www.hp.com/go/sml
Verifying component failure
• Consult HP technical support to verify that the hardware component has failed and that you are authorized to replace it yourself.
• Additional hardware failures can complicate component replacement. Check HP Command View EVA and/or HP Insight Remote Support as follows to detect any additional hardware problems:
• When you have confirmed that a component replacement is required, you may want to clear the Real Time Monitoring view. This makes it easier to identify additional hardware problems that may occur while waiting for the replacement part.
• Before installing the replacement part, check the Real Time Monitoring view for any new hardware problems. If additional hardware problems have occurred, contact HP support before replacing the component.
• See the HP Insight Remote Support documentation for additional information.
Identifying the spare part
Parts have a nine-character spare component number on their label (Figure 25). For some spare parts, the part number will be available in HP Command View EVA. Alternatively, the HP call center will assist in identifying the correct spare part number.
1. Spare component number
Figure 25 Typical product label
Replaceable parts
This product contains the replaceable parts listed in Table 13 and Table 19. Parts that are available for customer self repair (CSR) are indicated as follows:
✓ Mandatory CSR where geography permits. Order the part directly from HP and repair the product yourself. On-site or return-to-depot repair is not provided under warranty.
• Optional CSR. You can order the part directly from HP and repair the product yourself, or you can request that HP repair the product. If you request repair from HP, you may be charged for the repair depending on the product warranty.
-- No CSR. The replaceable part is not available for self repair. For assistance, contact an HP-authorized service provider.
Table 13 Controller enclosure replacement parts

Description                                      Spare part number (non-RoHS/RoHS)    CSR status
10 port controller, 4GB total cache (HSV400)     512730–001                           •
12 port controller, 7GB total cache (HSV450)     512731–001                           •
12 port controller, 11GB total cache (HSV450)    512732–001                           •
Array battery                                    512735–001
Array power supply                               489883–001                           •
Array fan module                                 483017–001                           •
OCP module                                       508563–001                           •
Memory board: cache line flush 10 port           512733–001                           --
Memory board: cache line flush 12 port           512734–001                           --

Table 19 M6412-A disk enclosure replaceable parts

Description                                                    Spare part number (non-RoHS/RoHS)    CSR status
4Gb FC disk shelf midplane                                     461492–005                           •
4Gb FC disk shelf backplane                                    461493–005                           •
SPS-BD Front UID                                               399053–001                           •
SPS-BD Power UID with cable                                    399054–001                           •
SPS-BD Front UID Interconnect PCA with cable                   399055–001                           •
4Gb FC disk shelf I/O module                                   461494–005                           •
FC disk shelf fan module                                       468715–001
FC disk shelf power supply                                     405914–001
Disk drive 146GB, 15K, EVA M6412-A Enclosure, Fibre Channel    454410–001
Disk drive 300GB, 15K, EVA M6412-A Enclosure, Fibre Channel    454411–001
Disk drive 400GB, 15K, EVA M6412-A Enclosure, Fibre Channel    466277–001
Disk drive 450GB, 15K, EVA M6412-A Enclosure, Fibre Channel    454412–001
Disk drive 1TB, 7.2K, EVA M6412-A Enclosure, FATA              454414–001
Disk drive 72GB, EVA M6412-A Enclosure, SSD                    515189–001
SPS-CABLE ASSY, 4Gb COPPER, FC, 2.0m                           432374–001
SPS-CABLE ASSY, 4Gb COPPER, FC, 0.6m                           432375–001
SPS-CABLE ASSY, 4Gb COPPER, FC, 0.41m                          496917–001
For more information about CSR, contact your local service provider. For North America, see the CSR website: http://www.hp.com/go/selfrepair
To determine the warranty service provided for this product, see the warranty information website: http://www.hp.com/go/storagewarranty
To order a replacement part, contact an HP-authorized service provider or see the HP Parts Store online: http://www.hp.com/buy/parts
Replacing the failed component
CAUTION:
Components can be damaged by electrostatic discharge. Use proper anti-static protection.
• Always transport and store CRUs in an ESD protective enclosure.
• Do not remove the CRU from the ESD protective enclosure until you are ready to install it.
• Always use ESD precautions, such as a wrist strap, heel straps on conductive flooring, and an
ESD protective smock when handling ESD sensitive equipment.
• Avoid touching the CRU connector pins, leads, or circuitry.
• Do not place ESD generating material such as paper or non anti-static (pink) plastic in an ESD protective enclosure with ESD sensitive equipment.
• HP recommends waiting until periods of low storage system activity to replace a component.
• When replacing components at the rear of the rack, cabling may obstruct access to the component.
Carefully move any cables out of the way to avoid loosening any connections. In particular, avoid cable damage that may be caused by:
• Kinking or bending.
• Disconnecting cables without capping. If uncapped, cable performance may be impaired by contact with dust, metal or other surfaces.
• Placing removed cables on the floor or other surfaces, where they may be walked on or otherwise compressed.
Replacement instructions
Printed instructions are shipped with the replacement part. Instructions for all replaceable components are also included on the documentation CD that ships with the EVA6400/8400 and posted on the web. For the latest information, HP recommends that you obtain the instructions from the web.
Go to the following website: http://www.hp.com/support/manuals. Under Storage, select Disk Storage Systems, then select HP StorageWorks 6400/8400 Enterprise Virtual Arrays under EVA Disk Arrays. The manuals page for the EVA6400/8400 appears. Scroll to the Service and maintenance information section, where the following replacement instructions are posted:
• HP StorageWorks controller enclosure replacement instructions
• HP StorageWorks cache battery replacement instructions
• HP StorageWorks controller blower replacement instructions
• HP StorageWorks power supply replacement instructions
• HP StorageWorks operator control panel replacement instructions
• HP StorageWorks disk enclosure backplane replacement instructions
• HP StorageWorks disk enclosure fan module replacement instructions
• HP StorageWorks disk enclosure front UID interconnect board (with cable) replacement instructions
• HP StorageWorks disk enclosure front UID replacement instructions
• HP StorageWorks disk enclosure I/O module replacement instructions
• HP StorageWorks disk enclosure midplane replacement instructions
• HP StorageWorks disk enclosure power supply replacement instructions
6 Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website: http://www.hp.com/support
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions, firmware updates, and other product resources.
Documentation feedback
HP welcomes your feedback.
To make comments and suggestions about product documentation, please send a message to [email protected]. All submissions become the property of HP.
Related information
Documents
You can find the documents referenced in this guide on the Manuals page of the Business Support Center website: http://www.hp.com/support/manuals
In the Storage section, click Disk Storage Systems or Storage Software and then select your product.
HP websites
For additional information, see the following HP websites:
• HP: http://www.hp.com
• HP Storage: http://www.hp.com/go/storage
• HP Partner Locator: http://www.hp.com/service_locator
• HP Software Downloads: http://www.hp.com/support/downloads
• HP storage whitepapers: http://www.hp.com/storage/whitepapers
• HP Single Point of Connectivity Knowledge (SPOCK): http://www.hp.com/storage/spock
• HP StorageWorks SAN manuals: http://www.hp.com/go/sdgmanuals
Typographic conventions
Table 20 Document conventions

Convention                                  Element
Blue text                                   Cross-reference links and e-mail addresses
Blue, underlined text: http://www.hp.com    Website addresses
Bold text                                   Keys that are pressed; text typed into a GUI element, such as a box; GUI elements that are clicked or selected, such as menu and list items, buttons, tabs, and check boxes
Italic text                                 Text emphasis
Monospace text                              File and directory names; system output; code; commands, their arguments, and argument values
Monospace, italic text                      Code variables; command variables
Monospace, bold text                        Emphasized monospace text
. . .                                       Indication that the example continues
WARNING!                                    An alert that calls attention to important information that if not understood or followed can result in personal injury.
CAUTION:                                    An alert that calls attention to important information that if not understood or followed can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT:                                  An alert that calls attention to essential information.
NOTE:                                       An alert that calls attention to additional or supplementary information.
TIP:                                        An alert that calls attention to helpful hints and shortcuts.
Rack stability
Rack stability protects personnel and equipment.
WARNING!
To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time. Racks can become unstable if more than one component is extended.
Customer self repair
HP customer self repair (CSR) programs allow you to repair your StorageWorks product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience.
Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider, or see the CSR website: http://www.hp.com/go/selfrepair
A Regulatory notices and specifications
This appendix includes regulatory notices and product specifications for the HP StorageWorks Enterprise
Virtual Array family.
Regulatory notices
Federal Communications Commission (FCC) notice
Part 15 of the Federal Communications Commission (FCC) Rules and Regulations has established
Radio Frequency (RF) emission limits to provide an interference-free radio frequency spectrum. Many electronic devices, including computers, generate RF energy incidental to their intended function and are, therefore, covered by these rules. These rules place computers and related peripheral devices into two classes, A and B, depending upon their intended installation. Class A devices are those that may reasonably be expected to be installed in a business or commercial environment. Class B devices are those that may reasonably be expected to be installed in a residential environment (for example, personal computers). The FCC requires devices in both classes to bear a label indicating the interference potential of the device as well as additional operating instructions for the user.
The rating label on the device shows the classification (A or B) of the equipment. Class B devices have an FCC logo or FCC ID on the label. Class A devices do not have an FCC logo or FCC ID on the label. After the class of the device is determined, see the corresponding statement in the following sections.
FCC Class A certification
This equipment generates, uses, and may emit radio frequency energy. The equipment has been type tested and found to comply with the limits for a Class A digital device pursuant to Part 15 of the FCC rules, which are designed to provide reasonable protection against such radio frequency interference.
Operation of this equipment in a residential area may cause interference, in which case the user will be required, at the user's own expense, to take whatever measures are necessary to correct the interference.
Any modifications to this device—unless approved by the manufacturer—can void the user’s authority to operate this equipment under Part 15 of the FCC rules.
NOTE:
Additional information on the need to interconnect the device with shielded (data) cables or the need for special devices, such as ferrite beads on cables, is required if such means of interference suppression was used in the qualification test for the device. This information will vary from device to device and needs to be obtained from the HP EMC group.
Class A equipment
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at personal expense.
Class B equipment
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected.
• Consult the dealer or an experienced radio or television technician for help.
Declaration of conformity for products marked with the FCC logo, United States only
This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions:
(1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.
For questions regarding your product, see http://thenew.hp.com.
For questions regarding this FCC declaration, contact:
• Hewlett-Packard Company Product Regulations Manager, 3000 Hanover St., Palo Alto, CA 94304
• Or call 1-650-857-1501
To identify this product, see the part, series, or model number found on the product.
Modifications
The FCC requires the user to be notified that any changes or modifications made to this device that are not expressly approved by Hewlett-Packard Company may void the user's authority to operate the equipment.
Cables
Connections to this device must be made with shielded cables with metallic RFI/EMI connector hoods in order to maintain compliance with FCC Rules and Regulations.
Laser device
All Hewlett-Packard systems equipped with a laser device comply with safety standards, including
International Electrotechnical Commission (IEC) 825. With specific regard to the laser, the equipment complies with laser product performance standards set by government agencies as a Class 1 laser product. The product does not emit hazardous light; the beam is totally enclosed during all modes of customer operation and maintenance.
Laser safety warnings
Heed the following warning:
WARNING!
To reduce the risk of exposure to hazardous radiation:
• Do not try to open the laser device enclosure. There are no user-serviceable components inside.
• Do not operate controls, make adjustments, or perform procedures to the laser device other than those specified herein.
• Allow only HP authorized service technicians to repair the laser device.
Compliance with CDRH regulations
The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the
United States.
Certification and classification information
This product contains a laser internal to the Optical Link Module (OLM) for connection to the Fibre communications port.
In the USA, the OLM is certified as a Class 1 laser product conforming to the requirements contained in the Department of Health and Human Services (DHHS) regulation 21 CFR, Subchapter J. The certification is indicated by a label on the plastic OLM housing.
Outside the USA, the OLM is certified as a Class 1 laser product conforming to the requirements contained in IEC 825-1:1993 and EN 60825-1:1994, including Amendment 11:1996.
The OLM includes the following certifications:
• UL Recognized Component (USA)
• CSA Certified Component (Canada)
• TUV Certified Component (European Union)
• CB Certificate (Worldwide)
HP StorageWorks 6400/8400 Enterprise Virtual Array User Guide 99
Canadian notice (Avis Canadien)
Class A equipment
This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment
Regulations.
Cet appareil numérique de la classe A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Class B equipment
This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment
Regulations.
Cet appareil numérique de la classe B respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
European union notice
Products with the CE Marking comply with both the EMC Directive (2004/108/EC) and the Low
Voltage Directive (2006/95/EC) issued by the Commission of the European Community.
Compliance with these directives implies conformity to the following European Norms (the equivalent international standards are in parentheses):
• EN55022 (CISPR 22) - Electromagnetic Interference
• EN55024 (IEC61000-4-2, 3, 4, 5, 6, 8, 11) - Electromagnetic Immunity
• EN61000-3-2 (IEC61000-3-2) - Power Line Harmonics
• EN61000-3-3 (IEC61000-3-3) - Power Line Flicker
• EN60950 (IEC950) - Product Safety
Notice for France
DECLARATION D'INSTALLATION ET DE MISE EN EXPLOITATION d'un matériel de traitement de l'information (ATI), classé A en fonction des niveaux de perturbations radioélectriques émis, définis dans la norme européenne EN 55022 concernant la Compatibilité Electromagnétique.
WEEE Recycling Notices
English notice
Disposal of waste equipment by users in private household in the European Union
This symbol on the product or on its packaging indicates that this product must not be disposed of with your other household waste. Instead, it is your responsibility to dispose of your waste equipment by handing it over to a designated collection point for recycling of waste electrical and electronic equipment. The separate collection and recycling of your waste equipment at the time of disposal will help to conserve natural resources and ensure that it is recycled in a manner that protects human health and the environment. For more information about where you can drop off your waste equipment
for recycling, please contact your local city office, your household waste disposal service, or the shop where you purchased the product.
Dutch notice
Verwijdering van afgedankte apparatuur door privé-gebruikers in de Europese Unie
Dit symbool op het product of de verpakking geeft aan dat dit product niet mag worden gedeponeerd bij het normale huishoudelijke afval. U bent zelf verantwoordelijk voor het inleveren van uw afgedankte apparatuur bij een inzamelingspunt voor het recyclen van oude elektrische en elektronische apparatuur. Door uw oude apparatuur apart aan te bieden en te recyclen, kunnen natuurlijke bronnen worden behouden en kan het materiaal worden hergebruikt op een manier waarmee de volksgezondheid en het milieu worden beschermd. Neem contact op met uw gemeente, het afvalinzamelingsbedrijf of de winkel waar u het product hebt gekocht voor meer informatie over inzamelingspunten waar u oude apparatuur kunt aanbieden voor recycling.
Czech notice
Likvidace zařízení soukromými domácími uživateli v Evropské unii
Tento symbol na produktu nebo balení označuje výrobek, který nesmí být vyhozen spolu s ostatním domácím odpadem. Povinností uživatele je předat takto označený odpad na předem určené sběrné místo pro recyklaci elektrických a elektronických zařízení. Okamžité třídění a recyklace odpadu pomůže uchovat přírodní prostředí a zajistí takový způsob recyklace, který ochrání zdraví a životní prostředí člověka. Další informace o možnostech odevzdání odpadu k recyklaci získáte na příslušném obecním nebo městském úřadě, od firmy zabývající se sběrem a svozem odpadu nebo v obchodě, kde jste produkt zakoupili.
Estonian notice
Seadmete jäätmete kõrvaldamine eramajapidamistes Euroopa Liidus
See tootel või selle pakendil olev sümbol näitab, et kõnealust toodet ei tohi koos teiste majapidamisjäätmetega kõrvaldada. Teie kohus on oma seadmete jäätmed kõrvaldada, viies need elektri- ja elektroonikaseadmete jäätmete ringlussevõtmiseks selleks ettenähtud kogumispunkti. Seadmete jäätmete eraldi kogumine ja ringlussevõtmine kõrvaldamise ajal aitab kaitsta loodusvarasid ning tagada, et ringlussevõtmine toimub viisil, mis kaitseb inimeste tervist ning keskkonda. Lisateabe saamiseks selle kohta, kuhu oma seadmete jäätmed ringlussevõtmiseks viia, võtke palun ühendust oma kohaliku linnakantselei, majapidamisjäätmete kõrvaldamise teenistuse või kauplusega, kust Te toote ostsite.
Finnish notice
Laitteiden hävittäminen kotitalouksissa Euroopan unionin alueella
Jos tuotteessa tai sen pakkauksessa on tämä merkki, tuotetta ei saa hävittää kotitalousjätteiden mukana. Tällöin hävitettävä laite on toimitettava sähkölaitteiden ja elektronisten laitteiden kierrätyspisteeseen. Hävitettävien laitteiden erillinen käsittely ja kierrätys auttavat säästämään
luonnonvaroja ja varmistamaan, että laite kierrätetään tavalla, joka estää terveyshaitat ja suojelee luontoa. Lisätietoja paikoista, joihin hävitettävät laitteet voi toimittaa kierrätettäväksi, saa ottamalla yhteyttä jätehuoltoon tai liikkeeseen, josta tuote on ostettu.
French notice
Élimination des appareils mis au rebut par les ménages dans l'Union européenne
Le symbole apposé sur ce produit ou sur son emballage indique que ce produit ne doit pas être jeté avec les déchets ménagers ordinaires. Il est de votre responsabilité de mettre au rebut vos appareils en les déposant dans les centres de collecte publique désignés pour le recyclage des équipements
électriques et électroniques. La collecte et le recyclage de vos appareils mis au rebut indépendamment du reste des déchets contribue à la préservation des ressources naturelles et garantit que ces appareils seront recyclés dans le respect de la santé humaine et de l'environnement. Pour obtenir plus d'informations sur les centres de collecte et de recyclage des appareils mis au rebut, veuillez contacter les autorités locales de votre région, les services de collecte des ordures ménagères ou le magasin dans lequel vous avez acheté ce produit.
German notice
Entsorgung von Altgeräten aus privaten Haushalten in der EU
Das Symbol auf dem Produkt oder seiner Verpackung weist darauf hin, dass das Produkt nicht
über den normalen Hausmüll entsorgt werden darf. Benutzer sind verpflichtet, die Altgeräte an einer
Rücknahmestelle für Elektro- und Elektronik-Altgeräte abzugeben. Die getrennte Sammlung und ordnungsgemäße Entsorgung Ihrer Altgeräte trägt zur Erhaltung der natürlichen Ressourcen bei und garantiert eine Wiederverwertung, die die Gesundheit des Menschen und die Umwelt schützt.
Informationen dazu, wo Sie Rücknahmestellen für Ihre Altgeräte finden, erhalten Sie bei Ihrer
Stadtverwaltung, den örtlichen Müllentsorgungsbetrieben oder im Geschäft, in dem Sie das Gerät erworben haben.
Greek notice
Hungarian notice
Készülékek magánháztartásban történő selejtezése az Európai Unió területén
A készüléken, illetve a készülék csomagolásán látható azonos szimbólum annak jelzésére szolgál, hogy a készülék a selejtezés során az egyéb háztartási hulladéktól eltérő módon kezelendő. A vásárló a hulladékká vált készüléket köteles a kijelölt gyűjtőhelyre szállítani az elektromos és elektronikai készülékek újrahasznosítása céljából. A hulladékká vált készülékek selejtezéskori begyűjtése és
újrahasznosítása hozzájárul a természeti erőforrások megőrzéséhez, valamint biztosítja a selejtezett termékek környezetre és emberi egészségre nézve biztonságos feldolgozását. A begyűjtés pontos helyéről bővebb tájékoztatást a lakhelye szerint illetékes önkormányzattól, az illetékes szemételtakarító vállalattól, illetve a terméket elárusító helyen kaphat.
Italian notice
Smaltimento delle apparecchiature da parte di privati nel territorio dell’Unione Europea
Questo simbolo presente sul prodotto o sulla sua confezione indica che il prodotto non può essere smaltito insieme ai rifiuti domestici. È responsabilità dell'utente smaltire le apparecchiature consegnandole presso un punto di raccolta designato al riciclo e allo smaltimento di apparecchiature elettriche ed elettroniche. La raccolta differenziata e il corretto riciclo delle apparecchiature da smaltire permette di proteggere la salute degli individui e l'ecosistema. Per ulteriori informazioni relative ai punti di raccolta delle apparecchiature, contattare l'ente locale per lo smaltimento dei rifiuti, oppure il negozio presso il quale è stato acquistato il prodotto.
Korean Communication Committee notice
Latvian notice
Nolietotu iekārtu iznīcināšanas noteikumi lietotājiem Eiropas Savienības privātajās mājsaimniecībās
Šāds simbols uz izstrādājuma vai uz tā iesaiņojuma norāda, ka šo izstrādājumu nedrīkst izmest kopā ar citiem sadzīves atkritumiem. Jūs atbildat par to, lai nolietotās iekārtas tiktu nodotas speciāli iekārtotos punktos, kas paredzēti izmantoto elektrisko un elektronisko iekārtu savākšanai otrreizējai pārstrādei. Atsevišķa nolietoto iekārtu savākšana un otrreizējā pārstrāde palīdzēs saglabāt dabas resursus un garantēs, ka šīs iekārtas tiks otrreizēji pārstrādātas tādā veidā, lai pasargātu vidi un cilvēku veselību. Lai uzzinātu, kur nolietotās iekārtas var izmest otrreizējai pārstrādei, jāvēršas savas dzīves vietas pašvaldībā, sadzīves atkritumu savākšanas dienestā vai veikalā, kurā izstrādājums tika nopirkts.
Lithuanian notice
Vartotojų iš privačių namų ūkių įrangos atliekų šalinimas Europos Sąjungoje
Šis simbolis ant gaminio arba jo pakuotės rodo, kad šio gaminio šalinti kartu su kitomis namų ūkio atliekomis negalima. Šalintinas įrangos atliekas privalote pristatyti į specialią surinkimo vietą elektros ir elektroninės įrangos atliekoms perdirbti. Atskirai surenkamos ir perdirbamos šalintinos įrangos atliekos padės saugoti gamtinius išteklius ir užtikrinti, kad jos bus perdirbtos tokiu būdu, kuris nekenkia žmonių sveikatai ir aplinkai. Jeigu norite sužinoti daugiau apie tai, kur galima pristatyti perdirbtinas įrangos atliekas, kreipkitės į savo seniūniją, namų ūkio atliekų šalinimo tarnybą arba parduotuvę, kurioje įsigijote gaminį.
Polish notice
Pozbywanie się zużytego sprzętu przez użytkowników w prywatnych gospodarstwach domowych w Unii Europejskiej
Ten symbol na produkcie lub jego opakowaniu oznacza, że produktu nie wolno wyrzucać do zwykłych pojemników na śmieci. Obowiązkiem użytkownika jest przekazanie zużytego sprzętu do wyznaczonego punktu zbiórki w celu recyklingu odpadów powstałych ze sprzętu elektrycznego i elektronicznego. Osobna zbiórka oraz recykling zużytego sprzętu pomogą w ochronie zasobów naturalnych i zapewnią ponowne wprowadzenie go do obiegu w sposób chroniący zdrowie człowieka i środowisko. Aby uzyskać więcej informacji o tym, gdzie można przekazać zużyty sprzęt do recyklingu, należy się skontaktować z urzędem miasta, zakładem gospodarki odpadami lub sklepem, w którym zakupiono produkt.
Portuguese notice
Descarte de Lixo Elétrico na Comunidade Européia
Este símbolo encontrado no produto ou na embalagem indica que o produto não deve ser descartado no lixo doméstico comum. É responsabilidade do cliente descartar o material usado (lixo elétrico), encaminhando-o para um ponto de coleta para reciclagem. A coleta e a reciclagem seletivas desse tipo de lixo ajudarão a conservar as reservas naturais; sendo assim, a reciclagem será feita de uma forma segura, protegendo o ambiente e a saúde das pessoas. Para obter mais informações sobre locais que reciclam esse tipo de material, entre em contato com o escritório da HP em sua cidade, com o serviço de coleta de lixo ou com a loja em que o produto foi adquirido.
Slovakian notice
Likvidácia vyradených zariadení v domácnostiach v Európskej únii
Symbol na výrobku alebo jeho balení označuje, že daný výrobok sa nesmie likvidovať s domovým odpadom. Povinnosťou spotrebiteľa je odovzdať vyradené zariadenie v zbernom mieste, ktoré je určené na recykláciu vyradených elektrických a elektronických zariadení. Separovaný zber a recyklácia vyradených zariadení prispieva k ochrane prírodných zdrojov a zabezpečuje, že recyklácia sa vykonáva spôsobom chrániacim ľudské zdravie a životné prostredie. Informácie o zberných miestach na recykláciu vyradených zariadení vám poskytne miestne zastupiteľstvo, spoločnosť zabezpečujúca odvoz domového odpadu alebo obchod, v ktorom ste si výrobok zakúpili.
Slovenian notice
Odstranjevanje odslužene opreme uporabnikov v zasebnih gospodinjstvih v Evropski uniji
Ta znak na izdelku ali njegovi embalaži pomeni, da izdelka ne smete odvreči med gospodinjske odpadke. Nasprotno, odsluženo opremo morate predati na zbirališče, pooblaščeno za recikliranje odslužene električne in elektronske opreme. Ločeno zbiranje in recikliranje odslužene opreme prispeva k ohranjanju naravnih virov in zagotavlja recikliranje te opreme na zdravju in okolju neškodljiv način.
Za podrobnejše informacije o tem, kam lahko odpeljete odsluženo opremo na recikliranje, se obrnite na pristojni organ, komunalno službo ali trgovino, kjer ste izdelek kupili.
Spanish notice
Eliminación de residuos de equipos eléctricos y electrónicos por parte de usuarios particulares en la
Unión Europea
Este símbolo en el producto o en su envase indica que no debe eliminarse junto con los desperdicios generales de la casa. Es responsabilidad del usuario eliminar los residuos de este tipo depositándolos en un "punto limpio" para el reciclado de residuos eléctricos y electrónicos. La recogida y el reciclado selectivos de los residuos de aparatos eléctricos en el momento de su eliminación contribuirá a conservar los recursos naturales y a garantizar el reciclado de estos residuos de forma que se proteja el medio ambiente y la salud. Para obtener más información sobre los puntos de recogida de residuos eléctricos y electrónicos para reciclado, póngase en contacto con su ayuntamiento, con el servicio de eliminación de residuos domésticos o con el establecimiento en el que adquirió el producto.
Swedish notice
Bortskaffande av avfallsprodukter från användare i privathushåll inom Europeiska Unionen
Om den här symbolen visas på produkten eller förpackningen betyder det att produkten inte får slängas på samma ställe som hushållssopor. I stället är det ditt ansvar att bortskaffa avfallet genom att överlämna det till ett uppsamlingsställe avsett för återvinning av avfall från elektriska och elektroniska produkter. Separat insamling och återvinning av avfallet hjälper till att spara på våra naturresurser och gör att avfallet återvinns på ett sätt som skyddar människors hälsa och miljön. Kontakta ditt lokala kommunkontor, din närmsta återvinningsstation för hushållsavfall eller affären där du köpte produkten för att få mer information om var du kan lämna ditt avfall för återvinning.
Germany noise declaration
Sound pressure level Lp = 70 dB(A), measured at the operator position (Am Arbeitsplatz) during normal operation (Normaler Betrieb), in accordance with ISO 7779:1999 (type test/Typprüfung).
Japanese notice
Harmonics conformance (Japan)
Taiwanese notice
Japanese power cord notice
Country-specific certifications
HP tests electronic products for compliance with country-specific regulatory requirements, as an individual item or as part of an assembly. The product label (see Figure 26) specifies the regulations with which the product complies.
NOTE:
Components without an individual product certification label are qualified as part of the next higher assembly (for example, enclosure, rack, or tower).
Figure 26 Typical enclosure certification label
NOTE:
The certification symbols on the label depend upon the certification level. For example, the FCC Class A certification symbol is not the same as the FCC Class B certification symbol.
B Error messages
This list of error messages is ordered by status code value, from 0 through 100.
Table 21 Error Messages
0: Successful Status
Meaning: The SCMI command completed successfully.
How to correct: No corrective action required.

1: Object Already Exists
Meaning: The object or relationship already exists.
How to correct: Delete the associated object and try the operation again. Several situations can cause this message:
• Presenting a LUN to a host: Delete the current association or specify a different LUN number.
• Storage cell initialize: Remove or erase disk volumes before the storage cell can be successfully created.
• Adding a port WWN to a host: Specify a different port WWN.
• Adding a disk to a disk group: Delete the specified disk volume before creating a new disk volume.

2: Supplied Buffer Too Small
Meaning: The command or response buffer is not large enough to hold the specified number of items. This can be caused by a user or program error.
How to correct: Report the error to product support.

3: Object Already Assigned
Meaning: The handle is already assigned to an existing object. This can be caused by a user or program error.
How to correct: Report the error to product support.

4: Insufficient Available Data Storage
Meaning: There is insufficient storage available to perform the request.
How to correct: Reclaim some logical space or add physical hardware.

5: Internal Error
Meaning: An unexpected condition was encountered while processing a request.
How to correct: Report the error to product support.

6: Invalid status for logical disk
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
7: Invalid Class
Meaning: The supplied class code is of an unknown type. This can be caused by a user or program error.
How to correct: Report the error to product support.

8: Invalid Function
Meaning: The function code specified with the class code is of an unknown type.
How to correct: Report the error to product support.

9: Invalid Logical Disk Block State
Meaning: The specified command supplied unrecognized values. This can indicate a user or program error.
How to correct: Report the error to product support.

10: Invalid Loop Configuration
Meaning: The specified request supplied an invalid loop configuration.
How to correct: Verify the hardware configuration and retry the request.

11: Invalid parameter
Meaning: There are insufficient resources to fulfill the request, the requested value is not supported, or the parameters supplied are invalid. This can indicate a user or program error.
How to correct: Report the error to product support.

12: Invalid Parameter handle
Meaning: The supplied handle is invalid. This can indicate a user error, a program error, or a storage cell in an uninitialized state.
How to correct: In the following cases, the storage cell is in an uninitialized state, but no action is required (the messages are informational):
• Storage cell discard
• Storage cell look up object count
• Storage cell look up object
In the following cases, the message can occur because the operation is not allowed when the storage cell is in an uninitialized state. If you see these messages, initialize the storage cell and retry the operation:
• Storage cell set device addition policy
• Storage cell set name
• Storage cell set time
• Storage cell set volume replacement delay
• Storage cell free command lock
• Storage cell set console lun id
13: Invalid Parameter Id
Meaning: The supplied identifier is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.

14: Invalid Quorum Configuration
Meaning: Quorum disks from multiple storage systems are present.
How to correct: Report the error to product support.

15: Invalid Target Handle
Meaning: The supplied target handle is invalid. Case 1: This can indicate a user or program error. Case 2 (Volume set requested usage): The operation could not be completed because the disk has never belonged to a disk group and therefore cannot be added to a disk group.
How to correct: Case 1: Report the error to product support. Case 2: To add additional capacity to the disk group, use the management software to add disks by count or capacity.
16: Invalid Target Id
Meaning: The supplied target identifier is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.

17: Invalid Time
Meaning: The time value specified is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.

18: Media is Inaccessible
Meaning: The operation could not be completed because one or more of the disk media was inaccessible.
How to correct: Report the error to product support.

19: No Fibre Channel Port
Meaning: The Fibre Channel port specified is not valid. This can indicate a user or program error.
How to correct: Report the error to product support.

20: No Image
Meaning: There is no firmware image stored for the specified image number.
How to correct: Report the error to product support.

21: No Permission
Meaning: The disk device is not in a state to allow the specified operation.
How to correct: The disk device must be in either maintenance mode or in a reserved state for the specified operation to proceed.

22: Storage system not initialized
Meaning: The operation requires a storage cell to exist.
How to correct: Create a storage cell and retry the operation.

23: Not a Loop Port
Meaning: The Fibre Channel port specified is either not a loop port or is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.

24: Not a Participating Controller
Meaning: The controller must be participating in the storage cell to perform the operation.
How to correct: Verify that the controller is a participating member of the storage cell.
25: Objects in your system are in use, and their state prevents the operation you wish to perform.
Several states can cause this message:
• Case 1: The operation cannot be performed because an association exists with a related object, or the object is in an in-progress state. How to correct: Either delete the associated object or resolve the in-progress state.
• Case 2 (Derived unit create): The supplied virtual disk handle is already an attribute of another derived unit. This may indicate a programming error. How to correct: Report the error to product support.
• Case 3 (Derived unit discard): One or more LUNs based on this virtual disk are presented to EVA hosts. How to correct: Unpresent the LUNs before deleting this virtual disk.
• Case 4 (Logical disk clear data lost): The virtual disk is in the non-mirrored delay window. How to correct: Resolve the delay before performing the operation.
• Case 5 (LDAD discard): One or more virtual disks still exist, the disk group may still be recovering its capacity, or this is the last remaining disk group. How to correct: Delete any remaining virtual disks or wait for the used capacity to reach zero before deleting the disk group. If this is the last remaining disk group, uninitialize the storage cell to remove it.
• Case 6 (LDAD resolve condition): The disk group contains a disk volume that is in a data-lost state. This condition cannot be resolved. How to correct: Report the error to product support.
• Case 7 (Physical store erase volume): The disk is part of a disk group and cannot be erased. How to correct: Place the disk in a reserved state before erasing it.
• Case 8 (Storage cell discard): The storage cell contains one or more virtual disks or LUN presentations. How to correct: Delete the virtual disks or LUN presentations before uninitializing the storage cell.
• Case 9 (Storage cell client discard): The EVA host contains one or more LUN presentations. How to correct: Delete the LUN presentations before deleting the EVA host.
• Case 10 (SCVD discard): The virtual disk contains one or more derived units and cannot be discarded. This may indicate a programming error. How to correct: Report the error to product support.
• Case 11 (SCVD set capacity): The capacity cannot be modified because the virtual disk has a dependency on either a snapshot or a snapclone. How to correct: Resolve the situation before attempting the operation again.
• Case 12 (SCVD set disk cache policy): The virtual disk cache policy cannot be modified while the virtual disk is presented and enabled. How to correct: Resolve the situation before attempting the operation again.
• Case 13 (SCVD set logical disk): The logical disk attribute is already set, or the supplied logical disk is already a member of another virtual disk. This may indicate a programming error. How to correct: Report the error to product support.
• Case 14 (VOLUME set requested usage): The disk volume is already a member of a disk group or is in the process of being removed from a disk group. How to correct: Select another disk, or remove the disk from its current disk group before making it a member of a different disk group.
• Case 15 (GROUP discard): The Continuous Access group cannot be discarded because one or more virtual disk members exist. How to correct: Remove the virtual disks from the group and retry the operation.
26: Parameter Object Does Not Exist
Meaning: The operation cannot be performed because the object does not exist. This can indicate a user or program error. For VOLUME set requested usage, the request cannot be performed because the disk group does not exist.
How to correct: Report the error to product support.

27: Target Object Does Not Exist
• Case 1: The operation cannot be performed because the object does not exist. This can indicate a user or program error. How to correct: Report the error to product support.
• Case 2 (DERIVED UNIT discard): The operation cannot be performed because the virtual disk, snapshot, or snapclone does not exist or is still being created. How to correct: Retry the request at a later time.
• Case 3 (VOLUME set requested usage): The operation cannot be performed because the target disk volume does not exist. This can indicate a user or program error. How to correct: Report the error to product support.
• Case 4 (GROUP get name): The operation cannot be performed because the Continuous Access group does not exist. This can indicate a user or program error. How to correct: Report the error to product support.

28: Timeout
Meaning: A timeout occurred while processing the request.
How to correct: Verify the hardware connections and that communication to the device is successful.

29: Unknown Id
Meaning: The supplied storage cell identifier is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.

30: Unknown Parameter Handle
Meaning: The supplied parameter handle is unknown. This can indicate a user or program error.
How to correct: Report the error to product support.

31: Unrecoverable Media Error
Meaning: The operation could not be completed because one or more of the disk media had an unrecoverable error.
How to correct: Report the error to product support.

32: Invalid State
Meaning: This error is no longer supported.
How to correct: Report the error to product support.

33: Transport Error
Meaning: A SCMI transport error has occurred.
How to correct: Verify the hardware connections, communication to the device, and that the management software is operating successfully.
34: Volume is Missing
Meaning: The operation could not be completed because the drive volume is in a missing state.
How to correct: Resolve the condition and retry the request. Report the error to product support.

35: Invalid Cursor
Meaning: The supplied cursor or sequence number is invalid. This may indicate a user or program error.
How to correct: Report the error to product support.

36: Invalid Target for the Operation
Meaning: The specified target logical disk already has an existing data sharing relationship. This can indicate a user or program error.
How to correct: Report the error to product support.

37: No More Events
Meaning: There are no more events to retrieve. (This message is informational only.)
How to correct: No action required.

38: Lock Busy
Meaning: The command lock is busy and being held by another process.
How to correct: Retry the request at a later time.

39: Time Not Set
Meaning: The storage system time is not set. The storage system time is set automatically by the management software.
How to correct: Report the error to product support.

40: Not a Supported Version
Meaning: The requested operation is not supported by this firmware version. This can indicate a user or program error.
How to correct: Report the error to product support.

41: No Logical Disk for Vdisk
Meaning: The specified SCVD does not have a logical disk associated with it. This can indicate a user or program error.
How to correct: Report the error to product support.

42: Logical disk Presented
Meaning: The virtual disk specified is already presented to the client and the requested operation is not allowed.
How to correct: Delete the associated presentation(s) and retry the request.

43: Operation Denied On Slave
Meaning: The request is not allowed on the slave controller. This can indicate a user or program error.
How to correct: Report the error to product support.

44: Not licensed for data replication
Meaning: This error is no longer supported.
How to correct: Report the error to product support.

45: Not DR group member
Meaning: The operation cannot be performed because the virtual disk is not a member of a Continuous Access group.
How to correct: Configure the virtual disk to be a member of a Continuous Access group and retry the request.

46: Invalid DR mode
Meaning: The operation cannot be performed because the Continuous Access group is not in the required mode.
How to correct: Configure the Continuous Access group correctly and retry the request.

47: The target DR member is in full copy, operation rejected
Meaning: The operation cannot be performed because at least one of the virtual disk members is in a copying state.
How to correct: Wait for the copying state to complete and retry the request.
48: Security credentials needed. Please update your system's ID and password in the Storage System Access menu.
Meaning: The management software is unable to log in to the storage system. The storage system password has been configured.
How to correct: Use the management software to save the specified password so communication can proceed.

49: Security credentials supplied were invalid. Please update your system's ID and password in the Storage System Access menu.
Meaning: The management software is unable to log in to the device. The storage system password may have been reconfigured or removed.
How to correct: Use the management software to set the password to match the device so communication can proceed.

50: Security credentials supplied were invalid. Please update your system's ID and password in the Storage System Access menu.
Meaning: The management software is already logged in to the device. (This message is informational only.)
How to correct: No action required.

51: Storage system connection down
Meaning: The Continuous Access group is not functioning.
How to correct: Verify that devices are powered on and that device hardware connections are functioning correctly.

52: DR group empty
Meaning: No virtual disks are members of the Continuous Access group.
How to correct: Add one or more virtual disks as members and retry the request.

53: Incompatible attribute
Meaning: The request cannot be performed because one or more of the attributes specified is incompatible.
How to correct: Retry the request with valid attributes for the operation.

54: Vdisk is a DR group member
Meaning: The requested operation cannot be performed on a virtual disk that is already a member of a data replication group.
How to correct: Remove the virtual disk from the data replication group and retry the request.

55: Vdisk is a DR log unit
Meaning: The requested operation cannot be performed on a virtual disk that is a log unit.
How to correct: No action required.

56: Cache batteries failed or missing.
Meaning: The battery system is missing or discharged.
How to correct: Report the error to product support.

57: Vdisk is not presented
Meaning: The virtual disk member is not presented to a client.
How to correct: The virtual disk member must be presented to a client before this operation can be performed.

58: Other controller failed
Meaning: Invalid status for logical disk. This error is no longer supported.
How to correct: Report the error to product support.
59: Maximum Number of Objects Exceeded.
• Case 1: The maximum number of items allowed has been reached. How to correct: If this operation is still desired, delete one or more of the items and retry the operation.
• Case 2: The maximum number of EVA hosts has been reached. How to correct: If this operation is still desired, delete one or more of the EVA hosts and retry the operation.
• Case 3: The maximum number of port WWNs has been reached. How to correct: If this operation is still desired, delete one or more of the port WWNs and retry the operation.

60: Max size exceeded
• Case 1: The maximum number of items already exists on the destination storage cell. How to correct: If this operation is still desired, delete one or more of the items on the destination storage cell and retry the operation.
• Case 2: The size specified exceeds the maximum size allowed. How to correct: Use a smaller size and retry the operation.
• Case 3: The presented user space exceeds the maximum size allowed. How to correct: No action required.
• Case 4: The presented user space exceeds the maximum size allowed. How to correct: No action required.
• Case 5: The size specified exceeds the maximum size allowed. How to correct: Use a smaller size and retry the operation.
• Case 6: The maximum number of EVA hosts already exists on the destination storage cell. How to correct: If this operation is still desired, delete one or more of the EVA hosts and retry the operation.
• Case 7: The maximum number of EVA hosts already exists on the destination storage cell. How to correct: If this operation is still desired, delete one or more of the virtual disks on the destination storage cell and retry the operation.
• Case 8: The maximum number of Continuous Access groups already exists. How to correct: If this operation is still desired, delete one or more of the groups and retry the operation.

61: Password mismatch. Please update your system's password in the Storage System Access menu. Continued attempts to access this storage system with an incorrect password will disable management of this storage system.
Meaning: The login password entered on the controllers does not match.
How to correct: Reconfigure one of the storage system controller passwords, then use the management software to set the password to match the device so communication can proceed.

62: DR group is merging
Meaning: The operation cannot be performed because the Continuous Access connection is currently merging.
How to correct: Wait for the merge operation to complete and retry the request.

63: DR group is logging
Meaning: The operation cannot be performed because the Continuous Access connection is currently logging.
How to correct: Wait for the logging operation to complete and retry the request.

64: Connection is suspended
Meaning: The operation cannot be performed because the Continuous Access connection is currently suspended.
How to correct: Resolve the suspended mode and retry the request.
65: Bad image header
Meaning: The firmware image file has a header checksum error.
How to correct: Retrieve a valid firmware image file and retry the request.

66: Bad image
Meaning: The firmware image file has a checksum error.
How to correct: Retrieve a valid firmware image file and retry the request.

67: Image too large
Meaning: The firmware image file is too large.
How to correct: Retrieve a valid firmware image file and retry the request.

Invalid status for logical disk: This error is no longer supported.
How to correct: Report the error to product support.

70: Image incompatible with system configuration. Version conflict in upgrade or downgrade not allowed.
Meaning: The firmware image file is incompatible with the current firmware.
How to correct: Retrieve a valid firmware image file and retry the request.

71: Bad image segment
Meaning: The firmware image download process has failed because of a corrupted image segment.
How to correct: Verify that the firmware image is not corrupted and retry the firmware download process.

72: Image already loaded
Meaning: The firmware version already exists on the device.
How to correct: No action required.

73: Image Write Error
Meaning: The firmware image download process has failed because of a failed write operation.
How to correct: Verify that the firmware image is not corrupted and retry the firmware download process.

74: Logical Disk Sharing
• Case 1: The operation cannot be performed because the virtual disk or snapshot is part of a snapshot group. How to correct: No action required.
• Case 2: The operation may be prevented because a snapclone or snapshot operation is in progress. If a snapclone operation is in progress, the parent virtual disk should be discarded automatically after the operation completes. If the parent virtual disk has snapshots, you must delete the snapshots before the parent virtual disk can be deleted. How to correct: No action required.
• Case 3: The operation cannot be performed because either the previous snapclone operation is still in progress, or the virtual disk is already part of a snapshot group. How to correct: If a snapclone operation is in progress, wait until it has completed and retry the operation. Otherwise, the operation cannot be performed on this virtual disk.
• Case 4: A capacity change is not allowed on a virtual disk or snapshot that is part of a snapshot group. How to correct: No action required.
• Case 5: The operation cannot be performed because the virtual disk or snapshot is part of a snapshot group. How to correct: No action required.

75: Bad Image Size
Meaning: The firmware image file is not the correct size.
How to correct: Retrieve a valid firmware image file and retry the request.
76: The controller is temporarily busy and it cannot process the request. Retry the request later.
Meaning: The controller is currently processing a firmware download.
How to correct: Retry the request once the firmware download process is complete.

77: Volume Failure Predicted
Meaning: The disk volume specified is in a predictive failed state.
How to correct: Report the error to product support.

78: Invalid object condition for this command.
Meaning: The current condition or state is preventing the request from completing successfully.
How to correct: Resolve the condition and retry the request.

79: Snapshot (or snapclone) deletion in progress. The requested operation is currently not allowed. Please try again later.
Meaning: The current condition of the snapshot, snapclone, or parent virtual disk is preventing the request from completing successfully.
How to correct: Wait for the operation to complete and retry the request.

80: Invalid Volume Usage
• Case 1: The disk volume is already part of a disk group. How to correct: Resolve the condition by setting the usage to a reserved state and retry the request.
• Case 2: The disk volume usage cannot be modified because only the minimum number of disks exists in the disk group. How to correct: Report the error to product support.

81: Minimum Volumes In Disk Group
Meaning: The disk volume usage cannot be modified because only the minimum number of disks exists in the disk group.
How to correct: Resolve the condition by adding additional disks and retry the request.

82: Shutdown In Progress
Meaning: The controller is currently shutting down.
How to correct: No action required.

83: Controller API Not Ready, Try Again Later
Meaning: The device is not ready to process the request.
How to correct: Retry the request at a later time.

84: Is Snapshot
Meaning: This is a snapshot virtual disk and cannot be a member of a Continuous Access group.
How to correct: No action required.

85: Cannot add or remove DR group member. Mirror cache must be active for this Vdisk. Check controller cache condition.
Meaning: An incompatible mirror policy of the virtual disk is preventing it from becoming a member of a Continuous Access group.
How to correct: Modify the mirror policy and retry the request.
86: Command View EVA has detected this array as inoperative. Contact HP Service for assistance.
• Case 1: A virtual disk is in an inoperative state and the request cannot be processed. How to correct: Report the error to product support.
• Case 2: The snapclone cannot be associated with a virtual disk that is in an inoperative state. How to correct: Report the error to product support.
• Case 3: The snapshot cannot be associated with a virtual disk that is in an inoperative state. How to correct: Report the error to product support.

87: Disk group inoperative or disks in group less than minimum.
Meaning: The disk group is in an inoperative state and cannot process the request.
How to correct: Report the error to product support.

88: Storage system inoperative
Meaning: The storage system is inoperative and cannot process the request.
How to correct: Report the error to product support.

89: Failsafe Locked
Meaning: The request cannot be performed because the Continuous Access group is in a failsafe locked state.
How to correct: Resolve the condition and retry the request.

90: Data Flush Incomplete
Meaning: The disk cache data needs to be flushed before the condition can be resolved.
How to correct: Retry the request later.

91: Redundancy Mirrored Inoperative
Meaning: The disk group is in a redundancy mirrored inoperative state and the request cannot be completed.
How to correct: Report the error to product support.

92: Duplicate LUN
Meaning: The LUN number is already in use by another client of the storage system.
How to correct: Select another LUN number and retry the request.

93: Other remote controller failed
Meaning: While the request was being performed, the remote storage system controller failed.
How to correct: Resolve the condition and retry the request. Report the error to product support.

94: Unknown remote Vdisk
Meaning: The remote storage system specified does not exist.
How to correct: Correctly select the remote storage system and retry the request.

95: Unknown remote DR group
Meaning: The remote Continuous Access group specified does not exist.
How to correct: Correctly select the remote Continuous Access group and retry the request.

96: PLDMC failed
Meaning: The disk metadata could not be updated.
How to correct: Resolve the condition and retry the request. Report the error to product support.
97: Storage system could not be locked. System busy. Try command again.
Meaning: Another process has already taken the SCMI lock on the storage system.
How to correct: Retry the request later.

98: Error on remote storage system.
Meaning: While the request was being performed, an error occurred on the remote storage system.
How to correct: Resolve the condition and retry the request.

99: The DR operation can only be completed when the source-destination connection is down. If you are doing a destination DR deletion, make sure the connection link to the source DR system is down or do a failover operation to make this system the source.
Meaning: The request failed because the operation cannot be performed on a Continuous Access connection that is up.
How to correct: Resolve the condition and retry the request.

100: Login required - password changed.
Meaning: The management software is unable to log in to the device because the password has changed. The storage system password may have been reconfigured or removed.
How to correct: Use the management software to set the password to match the device so communication can proceed.
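Most corrective actions in Table 21 fall into a few broad buckets: informational codes that need no action, busy or timeout conditions that warrant a retry, and everything else, which is reported to product support. A monitoring script could triage SCMI status code values along those lines. The sketch below is illustrative only: the code-to-name entries are copied from a few rows of Table 21, but the function name, category sets, and message format are assumptions, not part of any HP API.

```python
# Illustrative triage of SCMI status code values per Table 21.
# Category membership below reflects the "How to correct" column for a
# handful of codes; it is a sketch, not an exhaustive or official mapping.

INFORMATIONAL = {0, 37, 50, 72}             # e.g. Successful Status, No More Events
RETRY_LATER = {28, 38, 62, 63, 76, 83, 97}  # busy, merging, or timeout conditions

# A few representative code names from Table 21.
SCMI_STATUS = {
    0: "Successful Status",
    5: "Internal Error",
    38: "Lock Busy",
    97: "Storage system could not be locked",
}

def triage(code: int) -> str:
    """Return a coarse recommended action for an SCMI status code value."""
    name = SCMI_STATUS.get(code, "Unknown status")
    if code in INFORMATIONAL:
        return f"{code} ({name}): no action required"
    if code in RETRY_LATER:
        return f"{code} ({name}): retry the request later"
    return f"{code} ({name}): report to product support"

print(triage(0))   # 0 (Successful Status): no action required
print(triage(38))  # 38 (Lock Busy): retry the request later
print(triage(5))   # 5 (Internal Error): report to product support
```

Extending the dictionary with the remaining rows of Table 21 would let such a script annotate raw status codes in collected logs.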
C Controller fault management
This appendix describes how the controller displays events and termination event information.
Termination event information is displayed on the LCD. HP Command View EVA enables you to view controller events. This appendix also discusses how to identify and correct problems.
Once you create a storage system, an error condition message has priority over other controller displays.
HP Command View EVA provides detailed descriptions of the storage system error conditions, or faults. The Fault Management displays provide similar information on the LCD, but not in as much detail. Whenever possible, see HP Command View EVA for fault information.
Using HP Command View EVA
HP Command View EVA provides detailed information about each event affecting system operation in either a Termination Event display or an Event display. These displays are similar, but not identical.
GUI termination event display
A problem that generates the Termination Event display prevents the system from performing a specific function or process. You can use the information in this display (see Figure 27) to diagnose and correct the problem.
NOTE:
The major differences between the Termination Event display and the Event display are:
• The Termination Event display includes a Code Flag field; it does not include the EIP Type field.
• The Event display includes an EIP type field; it does not include a Code Flag field.
• The Event display includes a Corrective Action Code field.
Date Time SWCID Evt No Code Flag Description
Figure 27 GUI termination event display
The fields in the Termination Event display include:
• Date—The date the event occurred.
• Time—The time the event occurred.
• SWCID—Software Identification Code. A hexadecimal number in the range 0–FF that identifies the controller software component reporting the event.
• Evt No—Event Number. A hexadecimal number in the range 0–FF that is the software component identification number.
• Code Flag—An internal code that includes a combination of other flags.
• Description—The condition that generated the event. This field may contain information about an individual field’s content and validity.
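As an illustration of these fields, the sketch below splits a whitespace-separated event line into the fields named above. The line layout and the sample values are assumptions for illustration, not the controller's actual output format.

```python
# Hypothetical sketch: splitting a Termination Event display line into the
# fields described above. The whitespace-separated layout and the sample
# line are assumptions, not the actual on-screen format.

def parse_termination_event(line):
    """Split 'DATE TIME SWCID EVTNO CODEFLAG DESCRIPTION...' into a dict."""
    date, time, swcid, evt_no, code_flag, description = line.split(None, 5)
    return {
        "date": date,
        "time": time,
        "swcid": int(swcid, 16),      # 0x00-0xFF software component code
        "evt_no": int(evt_no, 16),    # 0x00-0xFF event number
        "code_flag": code_flag,       # internal combination of flags
        "description": description,
    }

event = parse_termination_event(
    "2010-05-01 13:45:02 1A 3F 0004 Mirror port fault detected")
print(event["swcid"], event["evt_no"])  # 26 63
```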
GUI event display
A problem that generates the Event display reduces the system capabilities. You can use the information in this display (see Figure 28) to diagnose and correct problems.
NOTE:
The major differences between the Event Display and the Termination Event display are:
• The Event display includes an EIP type field; it does not include a Code Flag field.
• The Event display includes a Corrective Action Code (CAC) field.
• The Termination Event display includes a Code Flag field; it does not include the EIP Type field.
Date Time SWCID Evt No CAC EIP Type Description
Figure 28 Typical HP Command View EVA Event display
The Event display provides the following information:
• Date—The date the event occurred.
• Time—The time the event occurred.
• SWCID—Software Identification Code. A number in the range 1–256 that identifies the internal firmware module affected.
• Evt No—Event Number. A hexadecimal number in the range 0–FF that is the software component identification number.
• CAC—Corrective Action Code. A specific action to correct the problem.
• EIP Type—Event Information Packet Type. A hexadecimal character that defines the event information format.
• Description—The problem that generated the event.
Fault management displays
When you do not have access to the GUI, you can display and analyze termination codes (TCs) on the OCP LCD display. You can then use the event text code document, as described in the section titled “Interpreting Fault Management Information” to determine and implement corrective action. You can also provide this information to the authorized service representative should you require additional support. This lets the service representative identify the tools and components required to correct the condition in the shortest possible time.
When the fault management display is active, you can either display the last fault or display detailed information about the last 32 faults reported.
Displaying Last Fault Information
Complete the following procedure to display Last Fault information:
1. When the Fault Management display is active, press to select the Last Fault menu.
2. Press to display the last fault information.
The first line of the TC display contains the eight-character TC error code and the two-character IDX (index) code. The IDX is a reference to the location in the TC array that contains this error. The second line of the TC display identifies the affected parameter with a two-character parameter number (0–30), the eight-character parameter code affected, and the parameter code number.
3. Press to return to the Last Fault menu.
Displaying Detailed Information
The Detail View menu lets you examine detailed fault information stored in the Last Termination Event Array (LTEA). This array stores information for the last 32 termination events.
Complete the following procedure to display the LTEA information about any of the last 32 termination events:
1. When the Fault Management display is active (flashing), press to select the Detail View menu.
The LTEA selection menu is active (LTEA 0 is displayed).
2. Press or to increment to a specific error.
3. Press to observe data about the selected error.
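Conceptually, the LTEA behaves like a bounded, most-recent-first buffer of 32 entries. The sketch below models that behavior in Python; the assumption that LTEA 0 holds the most recent event, and the internal layout, are illustrative only and not the firmware's actual implementation.

```python
from collections import deque

# Illustrative model of the Last Termination Event Array (LTEA): a bounded
# buffer that keeps only the 32 most recent termination events. Treating
# LTEA 0 as the most recent entry is an assumption for this sketch.

class LTEA:
    SIZE = 32

    def __init__(self):
        self._events = deque(maxlen=self.SIZE)  # oldest entries fall off

    def record(self, termination_code):
        self._events.appendleft(termination_code)

    def detail_view(self, index):
        """Return the event at LTEA <index>, 0 being the most recent."""
        return self._events[index]

ltea = LTEA()
for tc in range(40):          # 40 terminations; only the last 32 are kept
    ltea.record(tc)
print(ltea.detail_view(0))    # 39 (most recent)
print(len(ltea._events))      # 32
```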
Interpreting fault management information
Each version of HP Command View EVA includes an ASCII text file that defines all the codes that the authorized service representative can view either on the GUI or on the OCP.
IMPORTANT:
This information is for the exclusive use of the authorized service representative.
The file name identifies the controller model, file type, XCS baselevel build string, and XCS version. For example, the file name hsv210_event_w010605_5020.txt provides the following information:
• hsv210_—The EVA controller model number
• event_—The type of information in the file
• w010605_—The base level build string (the file creation date):
• 01—The creation year
• 06—The creation month
• 05—The creation date
• 5020—The XCS version
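The naming convention can be unpacked programmatically. The sketch below assumes the underscore-separated parts described above; the parsing routine and its output shape are illustrative, not an HP tool.

```python
import re

# Sketch: pulling the controller model, file type, build date, and XCS
# version out of an event text file name such as
# hsv210_event_w010605_5020.txt. The regex assumes the naming convention
# described above; it is not an official HP utility.

def parse_event_filename(name):
    m = re.fullmatch(r"(\w+?)_(\w+?)_w(\d{2})(\d{2})(\d{2})_(\d+)\.txt", name)
    if not m:
        raise ValueError("unrecognized event file name: " + name)
    model, ftype, yy, mm, dd, xcs = m.groups()
    return {
        "controller_model": model,                 # e.g. hsv210
        "file_type": ftype,                        # e.g. event
        "build_date": (2000 + int(yy), int(mm), int(dd)),
        "xcs_version": xcs,                        # e.g. 5020
    }

info = parse_event_filename("hsv210_event_w010605_5020.txt")
print(info["build_date"])  # (2001, 6, 5)
```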
Table 22 describes the types of information available in this file.
Table 22 Controller event text description file

Event Code: This hexadecimal code identifies the reported event type.

Termination Code (TC): This hexadecimal code specifies the condition that generated the termination code. It might also define either a system or user initiated corrective action.

Coupled Crash Control Codes: This single digit, decimal character defines the requirement for the other controller to initiate a coupled crash control. 0—Other controller SHOULD NOT complete a coupled crash. 1—Other controller SHOULD complete a coupled crash.

Dump/Restart Control Codes: This single decimal character (0, 1, 3) defines the requirement to: 0—Perform a crash dump and then restart the controller. 1—DO NOT perform a crash dump; just restart the controller. 3—DO NOT perform a crash dump; DO NOT restart the controller.

Corrective Action Codes (CAC): These hexadecimal codes supplement the Termination Code information to identify the faulty element and the recommended corrective action.

Software Component ID Codes (SWCID): These decimal codes identify software associated with the event.

Event Information Packets (EIP): These codes specify the packet organization for specific type events.
D Non-standard rack specifications
This appendix provides information on the requirements for installing the EVA6400/8400 in a non-standard rack. All requirements must be met to ensure proper operation of the storage system.
Rack specifications
Internal component envelope
EVA component mounting brackets require space behind the vertical mounting rails. This space must accommodate the width of the mounting rails and any mounting hardware, such as screws, clip nuts, and so on.
Figure 29 shows the dimensions required for the mounting space for the EVA product line. It does not show required space for additional HP components such as servers.
Figure 29 Mounting space dimensions
EIA310-D standards
The rack must meet the Electronic Industries Association (EIA) Standard 310-D, Cabinets, Racks, and Associated Equipment. The standard defines rack mount spacing and component dimensions specified in U units.
Copies of the standard are available for purchase at http://www.eia.org/ .
EVA cabinet measures and tolerances
EVA component rack mount brackets are designed to fit cabinets with mounting rails set at depths from 27.5 inches to 29.6 inches, inside rails to inside rails.
Weights, dimensions and component CG measurements
Cabinet CG dimensions are reported as measured from the inside bottom of the cabinet (Z), the leading edge of the vertical mounting rails (Y), and the centerline of the cabinet mounting space (X).
Component CG measurements are measured from the bottom of the U space the component is to occupy (Z), the mounting surface of the mounting flanges (Y), and the centerline of the component
(X).
Table 23 lists the CG dimensions for the EVA components.
Determining the CG of a configuration may be necessary for safety considerations. CG calculations do not include cables, PDUs, and other peripheral components, so allow some margin of safety when estimating the configuration CG.
Estimating the configuration CG requires the CG of the cabinet the product will be installed in. Use the following formula:

d(system CG) = Σ(d(component) × W) / ΣW

where d(component) is the distance of interest and W is the component weight.
The distance of a component is its CG's distance from the inside base of the cabinet. For example, if a loaded disk enclosure is to be installed into the cabinet with its bottom at 10U, the distance for the enclosure would be (10 × 1.75) + 2.7 inches.
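The calculation above can be sketched as follows. The weights and CG heights echo Table 23, but the layout is a hypothetical example, not a validated configuration.

```python
# A minimal sketch of the weighted-average CG estimate described above.
# Component weights (lb), U positions, and CG heights (in) echo Table 23;
# the configuration itself is illustrative only.

U_INCHES = 1.75

def system_cg(cabinet_weight, cabinet_cg, components):
    """components: list of (weight_lb, bottom_u, cg_height_in) tuples.
    Returns the combined CG height above the inside base, in inches."""
    moment = cabinet_weight * cabinet_cg
    weight = cabinet_weight
    for w, bottom_u, cg_in in components:
        d = bottom_u * U_INCHES + cg_in   # distance of this component's CG
        moment += w * d
        weight += w
    return moment / weight

# Cabinet (233 lb, CG at 14.21 in) plus one loaded drive enclosure (74 lb)
# installed with its bottom at 10U and its CG 2.7 in above its bottom:
print(round(system_cg(233, 14.21, [(74, 10, 2.7)]), 2))  # 15.65
```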
Table 23 Component data

Component                       U height   Weight (lb)   X (in)    Y (in)    Z (in)
HP 10K cabinet CG               -          233           -0.108    25.75     14.21
Filler panel, 3U                3          1.4           0         2.625     0
Fully loaded drive enclosure    3          74            -0.288    2.7       7.95
Filler panel, 1U                1          0.47          0         0.875     0
XL Controller Pair              4          120           -0.094    2.53      10.64

1U = 1.75 inches
Airflow and Recirculation
Component Airflow Requirements
Component airflow must be directed from the front of the cabinet to the rear. Components vented to discharge airflow from the sides must discharge to the rear of the cabinet.
Rack Airflow Requirements
The following requirements must be met to ensure adequate airflow and to prevent damage to the equipment:
• If the rack includes closing front and rear doors, allow 830 square inches (5,350 sq cm) of holes evenly distributed from top to bottom to permit adequate airflow (equivalent to the required 64 percent open area for ventilation).
• For side vented components, the clearance between the installed rack component and the side panels of the rack must be a minimum of 2.75 inches (7 cm).
• Always use blanking panels to fill all empty front panel U-spaces in the rack. This ensures proper airflow. Using a rack without blanking panels results in improper cooling that can lead to thermal damage.
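As a rough check of the door-ventilation rule above, the sketch below tests a door against both the 830 square inch and 64 percent open-area figures. Reading the rule as requiring both is an interpretation, and the door dimensions are invented for illustration.

```python
# Sketch of the door-ventilation check: a closing door needs 830 sq in of
# evenly distributed holes, described as 64 percent open area. Treating
# these as two separate conditions is an assumption of this sketch.

REQUIRED_OPEN_IN2 = 830.0
REQUIRED_OPEN_FRACTION = 0.64

def door_airflow_ok(door_area_in2, hole_area_in2):
    """True when the perforated area meets both the absolute and
    fractional open-area figures."""
    return (hole_area_in2 >= REQUIRED_OPEN_IN2 and
            hole_area_in2 / door_area_in2 >= REQUIRED_OPEN_FRACTION)

# A hypothetical 22 in x 70 in door (1540 sq in) with 1000 sq in of holes:
print(door_airflow_ok(1540, 1000))  # True
print(door_airflow_ok(1540, 800))   # False (under 830 sq in)
```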
Configuration Standards
EVA configurations are designed considering cable length, configuration CG, serviceability, accessibility, and easy expansion of the system. If at all possible, configure non-HP cabinets in a like manner.
Environmental and operating specifications
This section identifies the product environmental and operating specifications.
NOTE:
Further testing is required to update the information in Tables 45-47. Once testing is complete, these tables will be updated in a future release.
UPS Selection
This section provides information that can be used when selecting a UPS for use with the EVA. The four HP UPS products listed in Table 24 are available for use with the EVA and are included in this comparison. Table 25 identifies the amount of time each UPS can sustain power under varying loads and with various UPS ERM (Extended Runtime Module) options. The loads imposed on the UPS for different disk enclosure configurations are listed in Table 26 and Table 27.
NOTE:
The specified power requirements reflect fully loaded enclosures (14 disks).
Table 24 HP UPS models and capacities

UPS Model   Capacity (in watts)
R1500       1340
R3000       2700
R5500       4500
R12000      12000

Table 25 UPS operating time limits

                             Minutes of operation
UPS Model   Load (percent)   With standby battery   With 1 ERM   With 2 ERMs
R1500       100              5                      11           18
            80               7                      15           24
            50               14                     28           41
            20               43                     69           101
R3000       100              5                      24           46
            80               6                      31           60
            50               13                     61           106
            20               34                     169          303
R5500       100              5                      20           -
            80               6.5                    30           -
            50               12                     45           -
            20               40                     120          -
R12000      100              7                      23           49
            80               9                      32           63
            50               19                     57           161
            20               59                     146          290

Table 26 EVA8400 UPS loading

                         % of UPS capacity
Enclosures   Watts       R5500    R12000
12           4920        -        41.0
11           4414        98.1     36.8
10           4037        89.7     33.6
9            3660        81.3     30.5
8            3284        73.0     27.4
7            2907        64.6     24.2
6            2530        56.2     21.1
5            2153        47.9     17.9
4            1777        39.5     14.8
3            1400        31.1     11.7
2            1023        22.7     8.5
1            647         14.4     5.4

Table 27 EVA6400 UPS loading

                         % of UPS capacity
Enclosures   Watts       R3000    R5500    R12000
8            3214        -        71.4     26.8
7            2837        -        63.0     23.6
6            2460        91.1     54.6     20.5
5            2083        77.2     46.2     17.3
4            1707        63.2     37.9     14.2
3            1330        49.3     29.5     11.1
2            953         35.3     21.2     7.9
1            577         21.4     12.8     4.8
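The percent-of-capacity columns in Tables 26 and 27 follow directly from the load in watts divided by the UPS capacities in Table 24. A minimal sketch:

```python
# Sketch reproducing the percent-of-capacity figures in Tables 26 and 27:
# load in watts divided by the UPS capacity from Table 24.

UPS_CAPACITY_W = {"R1500": 1340, "R3000": 2700, "R5500": 4500, "R12000": 12000}

def ups_load_percent(load_watts, model):
    """Percent of the named UPS model's capacity used by the given load."""
    return round(100.0 * load_watts / UPS_CAPACITY_W[model], 1)

# Six EVA6400 enclosures draw 2460 W (Table 27):
print(ups_load_percent(2460, "R3000"))   # 91.1
print(ups_load_percent(2460, "R12000"))  # 20.5
```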
Shock and vibration specifications
Table 28 lists the product operating shock and vibration specifications. This information applies to products weighing 45 kg (100 lb) or less.
NOTE:
HP StorageWorks EVA products are designed and tested to withstand the operational shock and vibration limits specified in Table 28. Transmission of site vibrations through non-HP racks exceeding these limits could cause operational failures of the system components.
Table 28 Operating Shock/Vibration
• Shock test with half sine pulses of 10 G magnitude and 10 ms duration applied in all three axes (both positive and negative directions).
• Sine sweep vibration from 5 Hz to 500 Hz to 5 Hz at 0.1 G peak, with 0.020 inch displacement limitation below 10 Hz. Sweep rate of 1 octave/minute. Test performed in all three axes.
• Random vibration at 0.25 G rms level with uniform spectrum in the frequency range of 10 Hz to 500 Hz. Test performed for two minutes each in all three axes.
• Drives and other items exercised and monitored running an appropriate exerciser (UIOX, P-Suite, etc.) with the appropriate operating system and hardware.
E Single Path Implementation
This appendix provides guidance for connecting servers with a single path host bus adapter (HBA) to the Enterprise Virtual Array (EVA) storage system with no multi-path software installed. A single path HBA is defined as an HBA that has a single path to its LUNs. These LUNs are not shared by any other HBA in the server or in the SAN.
The failure scenarios demonstrate behavior when recommended configurations are employed, as well as expected failover behavior if guidelines are not met. To implement single adapter servers into a multi-path EVA environment, configurations should follow these recommendations.
NOTE:
The purpose of single HBA configurations for non-mission-critical storage access is to control costs.
This appendix describes the configurations, limitations, and failover characteristics of single HBA servers under different operating systems. Much of the description herein is based on a single HBA configuration resulting in a single path to the device, but such is not the case with OpenVMS and Tru64 UNIX.
HP OpenVMS and Tru64 UNIX have native multi-path features by default. With OpenVMS and Tru64 UNIX, a single HBA configuration results in two paths to the device by virtue of having connections to both EVA controllers. Single HBA configurations are not single path configurations with these operating systems.
In addition, cluster configurations of both OpenVMS and Tru64 UNIX provide enhanced availability and security. To achieve availability within cluster configurations, each member should be configured with its own HBA(s) and connectivity to shared LUNs. Cluster configurations are not discussed further in this appendix because the enhanced availability requires additional server hardware and HBAs, which is contrary to controlling configuration costs for non-mission-critical applications.
For further information on cluster configurations and attributes, see the appropriate operating system guides and the SAN design guide.
NOTE:
HP continually makes additions to its storage solution product line. For more information about the
HP Fibre Channel product line, the latest drivers, and technical tips, and to view other documentation, see the HP website at http://www.hp.com/country/us/eng/prodserv/storage.html
High-level solution overview
EVA was designed for highly dynamic enterprise environments requiring high data availability, fault tolerance, and high performance; thus, the EVA controller runs only in multi-path failover mode.
Multi-path failover mode ensures the proper level of fault tolerance for the enterprise with mission-critical application environments. However, this appendix addresses the need for non-mission-critical applications to gain access to the EVA system running mission-critical production applications.
The non-mission-critical applications gain access to the EVA from a single path HBA server without running a multi-path driver. When a single path HBA server uses the supported configurations, a fault in the single path HBA server does not result in a fault in the other servers.
Benefits at a glance
The EVA is a high-performance array controller utilizing the benefits of virtualization. Virtualization within the storage system is ideal for environments needing high performance, high data availability, fault tolerance, efficient storage management, data replication, and cluster support. However, enterprise-level data centers incorporate non-mission-critical applications as well as applications that require high availability.
Single-path capability adds flexibility to budget allocation. There is a per-path savings as the additional cost of HBAs and multi-path software is removed from non-mission-critical application requirements.
These servers can still gain access to the EVA by using single path HBAs without multi-path software.
This reduces the costs at the server and infrastructure level.
Installation requirements
• The host must be placed in a zone with any EVA worldwide IDs (WWIDs) that access storage devices presented by the hierarchical storage virtualization (HSV) controllers to the single path HBA host. The preferred method is to use HBA and HSV WWIDs in the zone configurations.
• On HP-UX, Solaris, Microsoft Windows Server 2003 (32-bit), Novell NetWare, Linux, and IBM AIX operating systems, the zones consist of the single path HBA systems and one HSV controller port.
• On OpenVMS and Tru64 UNIX operating systems, the zones consist of the single HBA systems and two HSV controller ports. This results in a configuration with two paths per device, or multiple paths.
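The zoning rules above reduce to a count of HSV controller ports per zone: one for most operating systems, two for OpenVMS and Tru64 UNIX. The sketch below checks that count; the WWN strings and helper names are invented placeholders, not a real zoning tool.

```python
# Sketch of the zoning rule above: a single path HBA host's zone holds one
# HSV controller port, except OpenVMS and Tru64 UNIX hosts, whose zones
# hold two. All WWNs below are made-up placeholders.

MULTIPATH_BY_DEFAULT = {"openvms", "tru64"}

def expected_controller_ports(host_os):
    """Number of HSV controller ports to include in the host's zone."""
    return 2 if host_os.lower() in MULTIPATH_BY_DEFAULT else 1

def zone_ok(host_os, zone_members, hsv_ports):
    """zone_members / hsv_ports: sets of WWNs; the zone must contain
    exactly the expected number of HSV controller ports."""
    return len(zone_members & hsv_ports) == expected_controller_ports(host_os)

hsv = {"50:00:1f:e1:00:00:00:01", "50:00:1f:e1:00:00:00:02"}
zone = {"10:00:00:00:c9:aa:bb:cc", "50:00:1f:e1:00:00:00:01"}
print(zone_ok("windows", zone, hsv))  # True  (one controller port zoned)
print(zone_ok("openvms", zone, hsv))  # False (OpenVMS needs two)
```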
Recommended mitigations
EVA is designed for the mission-critical enterprise environment. When used with multi-path software, high data availability and fault tolerance are achieved. In single path HBA server configurations, neither multi-path software nor redundant I/O paths are present. Server-based operating systems are not designed to inherently recover from unexpected failure events in the I/O path (for example, loss of connectivity between the server and the data storage). It is expected that most operating systems will experience undesirable behavior when configured in non-high-availability configurations.
Because of the risks of using servers with a single path HBA, HP recommends the following actions:
• Use servers with a single path HBA only for workloads that are not mission-critical or highly available.
• Perform frequent backups of the single path server and its storage.
Supported configurations
All examples detail a small homogeneous Storage Area Network (SAN) for ease of explanation. Mixing dual and single path HBA systems in a heterogeneous SAN is supported. In addition to this document, see and adhere to the SAN Design Reference Guide for heterogeneous SANs, located at: http://h18006.www1.hp.com/products/storageworks/san/documentation.html
General configuration components
All configurations require the following components:
• Enterprise VCS software
• HBAs
• Fibre Channel switches
Connecting a single path HBA server to a switch in a fabric zone
Each host must attach to one switch (fabric) using standard Fibre Channel cables. Each host has its single path HBA connected through switches on a SAN to one port of an EVA.
Because a single path HBA server has no software to manage the connection and ensure that only one controller port is visible to the HBA, the fabric containing the single path HBA server, SAN switch, and EVA controller must be zoned. Configuring the single path by switch zoning and the LUNs by Selective Storage Presentation (SSP) allows multiple single path HBAs to reside in the same server. A single path HBA server with the OpenVMS or Tru64 UNIX operating system should be zoned with two EVA controllers. See the HP StorageWorks SAN Design Reference Guide at the following HP website for additional information about zoning: http://h18006.www1.hp.com/products/storageworks/san/documentation.html
To connect a single path HBA server to a SAN switch:
1.
Plug one end of the Fibre Channel cable into the HBA on the server.
2.
Plug the other end of the cable into the switch.
Figure 30 and Figure 31 represent configurations containing both a single path HBA server and a dual HBA server, as well as a SAN appliance, connected to redundant SAN switches and EVA controllers. Whereas the dual HBA server has multi-path software that manages the two HBAs and their connections to the switch (with the exception of OpenVMS and Tru64 UNIX servers), the single path HBA has no software to perform this function. The dashed line in each figure represents the fabric zone that must be established for the single path HBA server. Note that in Figure 31, servers with the OpenVMS or Tru64 UNIX operating system should be zoned with two controllers.
Figure 30 Single path HBA server without OpenVMS or Tru64 UNIX
1 Network interconnection
2 Single HBA server
3 Dual HBA server
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Fabric zone
8 Controller A
9 Controller B
Figure 31 Single path HBA server with OpenVMS or Tru64 UNIX
1 Network interconnection
2 Single HBA server
3 Dual HBA server
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Fabric zone
8 Controller A
9 Controller B
HP-UX configuration
Requirements
• Proper switch zoning must be used to ensure each single path HBA has an exclusive path to its LUNs.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• Single path HBA server cannot share LUNs with any other HBAs.
• In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA host.
• Host 2 is a multiple HBA host with multi-pathing software.
See Figure 32.
Risks
• Disabled jobs hang and cannot unmount disks.
• Path or controller failure may result in loss of data accessibility and loss of host data that has not been written to storage.
NOTE:
For additional risks, see Table 29 on page 153.
Limitations
• HP Continuous Access EVA is not supported with single-path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is not supported.
Figure 32 HP-UX configuration
1 Network interconnection
2 Host 1
3 Host 2
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
Windows Server (32-bit) configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• Single path HBA server cannot share LUNs with any other HBAs.
• In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA host.
• Host 2 is a multiple HBA host with multi-pathing software.
See Figure 33.
Risks
• Single path failure will result in loss of connection with the storage system.
• Single path failure may cause the server to reboot.
• Controller shutdown puts controller in a failed state that results in loss of data accessibility and loss of host data that has not been written to storage.
NOTE:
For additional risks, see Table 30 on page 154.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is not supported on single path HBA servers.
Figure 33 Windows Server (32-bit) configuration
1 Network interconnection
2 Host 1
3 Host 2
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
Windows Server (64-bit) configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• Single path HBA server cannot share LUNs with any other HBAs.
HBA configuration
• Hosts 1 and 2 are single path HBA hosts.
• Host 3 is a multiple HBA host with multi-pathing software.
See Figure 34.
NOTE:
Single path HBA servers running the Windows Server 2003 (x64) operating system will support multiple single path HBAs in the same server. This is accomplished through a combination of switch zoning and controller level SSP. Any single path HBA server will support up to four single path HBAs.
Risks
• Single path failure will result in loss of connection with the storage system.
• Single path failure may cause the server to reboot.
• Controller shutdown puts controller in a failed state that results in loss of data accessibility and loss of host data that has not been written to storage.
NOTE:
For additional risks, see Table 30 on page 154.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is not supported on single path HBA servers.
Figure 34 Windows Server (64-bit) configuration
1 Network interconnection
2 Management server
3 Host 1
4 Host 2
5 Host 3
6 SAN switch 1
7 SAN switch 2
8 Controller A
9 Controller B
SUN Solaris configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• Single path HBA server cannot share LUNs with any other HBAs.
• In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA host.
• Host 2 is a multiple HBA host with multi-pathing software.
See Figure 35.
Risks
• Single path failure may result in loss of data accessibility and loss of host data that has not been written to storage.
• Controller shutdown results in loss of data accessibility and loss of host data that has not been written to storage.
NOTE:
For additional risks, see Table 31 on page 154.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is not supported.
Figure 35 SUN Solaris configuration
1 Network interconnection
2 Host 1
3 Host 2
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
Tru64 UNIX configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each HBA has exclusive access to its LUNs.
• All nodes with direct connection to a disk must have the same access paths available to them.
• Single HBA server can be in the same fabric as servers with multiple HBAs.
• In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single HBA host with Tru64 UNIX.
• Host 2 is a dual HBA host.
See Figure 36.
Risks
• For nonclustered nodes with a single HBA, a path failure from the HBA to the SAN switch will result in a loss of connection with storage devices.
• If a host crashes or experiences a power failure, or if the path is interrupted, data will be lost. Upon re-establishment of the path, a retransmit can be performed to recover whatever data may have been lost during the outage. The option to retransmit data after an interruption is application-dependent.
NOTE:
For additional risks, see Table 32 on page 155.
Figure 36 Tru64 UNIX configuration
1 Network interconnection
2 Host 1
3 Host 2
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
OpenVMS configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• All nodes with direct connection to a disk must have the same access paths available to them.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA host.
• Host 2 is a dual HBA host.
See Figure 37.
Risks
• For nonclustered nodes with a single path HBA, a path failure from the HBA to the SAN switch will result in a loss of connection with storage devices.
NOTE:
For additional risks, see Table 32 on page 155.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
Figure 37 OpenVMS configuration
1 Network interconnection
2 Host 1
3 Host 2
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
NetWare configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• Single path HBA server cannot share LUNs with any other HBAs.
HBA configuration
• Host 1 is a single path HBA host with NetWare.
• Host 2 is a dual HBA host with multi-pathing software.
See Figure 38.
Risks
• Single-path failure will result in a loss of connection with storage devices.
NOTE:
For additional risks, see Table 33 on page 156.
Limitations
• HP Continuous Access EVA is not supported with single-path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is not supported on single path HBA servers.
Figure 38 NetWare configuration
1 Network interconnection
2 Single HBA server
3 Dual HBA server
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
Linux (32-bit) configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• All nodes with direct connection to a disk must have the same access paths available to them.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA.
• Host 2 is a dual HBA host with multi-pathing software.
See Figure 39.
Risks
• Single path failure may result in data loss or disk corruption.
NOTE:
For additional risks, see Table 34 on page 156.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is supported on single path HBA servers.
Figure 39 Linux (32-bit) configuration
1 Network interconnection
2 Host 1
3 Host 2
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
Linux (64-bit) configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• All nodes with direct connection to a disk must have the same access paths available to them.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• When using snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
• Linux 64-bit servers can support up to 14 single or dual path HBAs per server. Switch zoning and SSP are required to isolate the LUNs presented to each HBA from each other.
HBA configuration
• Host 1 and 2 are single path HBA hosts.
• Host 3 is a dual HBA host with multi-pathing software.
See Figure 40.
Risks
• Single path failure may result in data loss or disk corruption.
NOTE:
For additional risks, see Table 34 on page 156.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is supported on single path HBA servers.
Figure 40 Linux (64-bit) configuration
1 Network interconnection
2 Host 3
3 Host 2
4 Host 1
5 Management server
6 SAN switch 1
7 SAN switch 2
8 Controller A
9 Controller B
IBM AIX configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• Single path HBA server cannot share LUNs with any other HBAs.
• When using snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA host.
• Host 2 is a dual HBA host with multi-pathing software.
See Figure 41.
Risks
• Single path failure may result in loss of data accessibility and loss of host data that has not been written to storage.
• Controller shutdown results in loss of data accessibility and loss of host data that has not been written to storage.
NOTE:
For additional risks, see Table 35 on page 157.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is not supported.
Figure 41 IBM AIX configuration
1 Network interconnection
2 Single HBA server
3 Dual HBA server
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
VMware configuration
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
• All nodes with direct connection to a disk must have the same access paths available to them.
• Single path HBA server can be in the same fabric as servers with multiple HBAs.
• When using snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
HBA configuration
• Host 1 is a single path HBA.
• Host 2 is a dual HBA host with multi-pathing software.
See Figure 42.
Risks
• Single path failure may result in data loss or disk corruption.
NOTE:
For additional risks, see Table 36 on page 158.
Limitations
• HP Continuous Access EVA is not supported with single path configurations.
• Single path HBA server is not part of a cluster.
• Booting from the SAN is supported on single path HBA servers.
Figure 42 VMware configuration
1 Network interconnection
2 Single HBA server
3 Dual HBA server
4 Management server
5 SAN switch 1
6 SAN switch 2
7 Controller A
8 Controller B
Failure scenarios
HP-UX
Table 29 HP-UX failure scenarios

Server failure (host power-cycled)
Extremely critical event on UNIX. Can cause loss of the system disk.

Switch failure (SAN switch disabled)
Short term: Data transfer stops. Possible I/O errors.
Long term: Job hangs, cannot umount disk, fsck fails, disk corrupted; disk must be remade with mkfs.

Controller failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Job hangs, cannot umount disk, fsck fails, disk corrupted; disk must be remade with mkfs.

Controller restart
Short term: Data transfer stops. Possible I/O errors.
Long term: Job hangs, cannot umount disk, fsck fails, disk corrupted; disk must be remade with mkfs.

Server path failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Job hangs, cannot umount disk, fsck fails, disk corrupted; disk must be remade with mkfs.

Storage path failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Job hangs; after cable replacement, I/O continues. Without cable replacement, the job must be aborted; the disk appears error free.
Windows Server
Table 30 Windows Server failure scenarios

Server failure (host power-cycled)
The OS runs chkdsk when rebooting. Data lost; data that finished copying survived.

Switch failure (SAN switch disabled)
Write delay; server hangs until I/O is cancelled or the server is cold rebooted.

Controller failure
Write delay; server hangs or reboots. One controller failed; the other controller and the shelves critical, shelves offline. Volume not accessible. Server cold reboot, data lost; check disk runs when rebooting.

Controller restart
Controller momentarily in a failed state; server keeps copying. All data copied, no interruption. Event log warning: error detected during paging operation.

Server path failure
Write delay; volume inaccessible. Host hangs and restarts.

Storage path failure
Write delay; volume disappears while the server is still running. When the cables are plugged back in, the controller recovers and the server finds the volume; data loss.
Sun Solaris
Table 31 Sun Solaris failure scenarios

Server failure (host power-cycled)
Check disk runs when rebooting. Data loss; data that finished copying survived.

Switch failure (SAN switch disabled)
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages on the console; no access to CDE. System reboot causes loss of data on the disk; disk must be remade with newfs.

Controller failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages on the console; no access to CDE. System reboot causes loss of data on the disk; disk must be remade with newfs.

Controller restart
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages on the console; no access to CDE. System reboot causes loss of data on the disk; disk must be remade with newfs.

Server path failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages on the console; no access to CDE. System reboot causes loss of data on the disk; disk must be remade with newfs.

Storage path failure
Short term: Job hangs, data lost.
Long term: Repeated error messages on the console; no access to CDE. System reboot causes loss of data on the disk; disk must be remade with newfs.
OpenVMS and Tru64 UNIX
Table 32 OpenVMS and Tru64 UNIX failure scenarios

Server failure (host power-cycled)
All I/O operations halted. Possible data loss from unfinished or unflushed writes. A file system check may be needed upon reboot.

Switch failure (SAN switch disabled)
OpenVMS: The OS reports the volume in a Mount Verify state until the MVTIMEOUT limit is exceeded, when it then marks the volume as Mount Verify Timeout. No data is lost or corrupted.
Tru64 UNIX: All I/O operations halted. I/O errors are returned to the applications. An I/O failure to the system disk can cause the system to panic. Possible data loss from unfinished or unflushed writes. A file system check may be needed upon reboot.

Controller failure
I/O fails over to the surviving path. No data is lost or corrupted.

Controller restart
OpenVMS: The OS reports the volume in a Mount Verify state until the MVTIMEOUT limit is exceeded, when it then marks the volume as Mount Verify Timeout. No data is lost or corrupted.
Tru64 UNIX: I/O is retried until the controller is back online. If the maximum retries are exceeded, I/O fails over to the surviving path. No data is lost or corrupted.

Server path failure
OpenVMS: The OS reports the volume in a Mount Verify state until the MVTIMEOUT limit is exceeded, when it then marks the volume as Mount Verify Timeout. No data is lost or corrupted.
Tru64 UNIX: All I/O operations halted. I/O errors are returned to the applications. An I/O failure to the system disk can cause the system to panic. Possible data loss from unfinished or unflushed writes. A file system check may be needed upon reboot.

Storage path failure
OpenVMS: The OS reports the volume in a Mount Verify state until the MVTIMEOUT limit is exceeded, when it then marks the volume as Mount Verify Timeout. No data is lost or corrupted.
Tru64 UNIX: I/O fails over to the surviving path. No data is lost or corrupted.
NetWare
Table 33 NetWare failure scenarios

Server failure (host power-cycled)
OS reboots. When mounting volumes, volume repair or NSS rebuild executes to clean up volumes. Data loss; data that finished writing survived.

Switch failure (SAN switch disabled)
I/O to the device stops, with I/O errors indicated on the server console. Applications using the lost connection halt. A server restart is recommended but may not be necessary. Volume repair or NSS rebuild runs when volumes are mounted.

Controller failure
I/O to the device stops, with I/O errors indicated on the server console. Applications using the lost connection halt. A server restart is recommended but may not be necessary. Volume repair or NSS rebuild runs when volumes are mounted.

Controller restart
I/O to the device stops, with I/O errors indicated on the server console. Applications using the lost connection halt. A server restart is recommended but may not be necessary. Volume repair or NSS rebuild runs when volumes are mounted.

Server path failure
I/O to the device stops, with I/O errors indicated on the server console. Applications using the lost connection halt. A server restart is recommended but may not be necessary. Volume repair or NSS rebuild runs when volumes are mounted.

Storage path failure
I/O to the device stops, with I/O errors indicated on the server console. Applications using the lost connection halt. A server restart is recommended but may not be necessary. Volume repair or NSS rebuild runs when volumes are mounted.
Linux
Table 34 Linux failure scenarios

Server failure (host power-cycled)
OS reboots and automatically checks disks. HSV disks must be manually checked unless auto mounted by the system.

Switch failure (SAN switch disabled)
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.

Controller failure
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The driver cannot be reloaded and the system must be rebooted; fsck should be run on any failed disks before remounting.

Controller restart
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The driver cannot be reloaded and the system must be rebooted; fsck should be run on any failed disks before remounting.

Server path failure
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.

Storage path failure
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.
IBM AIX
Table 35 IBM AIX failure scenarios

Server failure (host power-cycled)
Check disk runs when rebooting. Data loss; data that finished copying survived.

Switch failure (SAN switch disabled)
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages in errpt output. System reboot causes loss of data on the disk; the file system must be recreated with crfs.

Controller failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages in errpt output. System reboot causes loss of data on the disk; the file system must be recreated with crfs.

Controller restart
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages in errpt output. System reboot causes loss of data on the disk; the file system must be recreated with crfs.

Server path failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages in errpt output. System reboot causes loss of data on the disk; the file system must be recreated with crfs.

Storage path failure
Short term: Data transfer stops. Possible I/O errors.
Long term: Repeated error messages in errpt output. System reboot causes loss of data on the disk; the file system must be recreated with crfs.
VMware
Table 36 VMware failure scenarios

Server failure (host power-cycled)
OS reboots and automatically checks disks. HSV disks must be manually checked unless auto mounted by the system.

Switch failure (SAN switch disabled)
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.

Controller failure
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The driver cannot be reloaded and the system must be rebooted; fsck should be run on any failed disks before remounting.

Controller restart
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The driver cannot be reloaded and the system must be rebooted; fsck should be run on any failed disks before remounting.

Server path failure
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.

Storage path failure
Short term: I/O suspended; possible data loss.
Long term: I/O halts with I/O errors; data loss. The HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.
Glossary
This glossary defines terms used in this guide or related to this product and is not a comprehensive glossary of computer terms.
µm
A symbol for micrometer; one millionth of a meter. For example, 50 µm is equivalent to 0.000050 m.

3U
A unit of measurement representing three “U” spaces. “U” spacing is used to designate panel or enclosure heights. Three “U” spaces is equivalent to 5.25 inches (133 mm).
active member of a virtual disk family
An active member of a virtual disk family is a simulated disk drive created by the controllers as storage for one or more hosts. An active member of a virtual disk family is accessible by one or more hosts for normal storage. An active virtual disk member and its snapshot, if one exists, constitute a virtual disk family. An active member of a virtual disk family is the only necessary member of a virtual disk family.
See also virtual disk, virtual disk copy, and virtual disk family.

adapter
See FCA.
AL_PA
Arbitrated Loop Physical Address. A 1-byte value the arbitrated loop topology uses to identify the loop ports. This value becomes the last byte of the address identifier for each public port on the loop.
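Because the AL_PA is the last (least significant) byte of the 24-bit address identifier, it can be recovered with a byte mask. A small illustrative sketch; the sample address is invented:

```python
# Illustrative only: extract the AL_PA from a 24-bit Fibre Channel
# address identifier, where the AL_PA occupies the last byte.

def al_pa_of(address_identifier: int) -> int:
    """Return the AL_PA (low byte) of a 24-bit FC address identifier."""
    return address_identifier & 0xFF

addr = 0x0112EF  # hypothetical public loop port address
print(f"AL_PA = 0x{al_pa_of(addr):02X}")  # AL_PA = 0xEF
```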
ALUA
Asymmetric logical unit access. Operating systems that support asymmetric logical unit access work with the EVA’s active/active functionality to enable any virtual disk to be accessed through either of the array’s two controllers.
allocation policy
Storage system rules that govern how virtual disks are created. Allocate Completely and Allocate on Demand are the two rules used in creating virtual disks.
• Allocate Completely: The space a virtual disk requires on the physical disks is reserved, even if the virtual disk is not currently using the space.
• Allocate on Demand: The space a virtual disk requires on the physical disks is not reserved until needed.
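The difference between the two rules can be sketched as a toy model of reservation timing. This is a conceptual illustration only, not how the EVA controllers actually implement allocation:

```python
# Toy model: when is pool space reserved under each allocation policy?

class DiskPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.reserved_gb = 0

class VirtualDisk:
    def __init__(self, pool, size_gb, policy):
        self.pool, self.size_gb, self.policy = pool, size_gb, policy
        self.used_gb = 0
        if policy == "allocate_completely":
            pool.reserved_gb += size_gb   # full size reserved up front
    def write(self, gb):
        if self.policy == "allocate_on_demand":
            self.pool.reserved_gb += gb   # space reserved only when needed
        self.used_gb += gb

pool = DiskPool(capacity_gb=1000)
thick = VirtualDisk(pool, 200, "allocate_completely")
print(pool.reserved_gb)  # 200: full size reserved immediately
thin = VirtualDisk(pool, 300, "allocate_on_demand")
print(pool.reserved_gb)  # 200: nothing reserved yet for the thin disk
thin.write(50)
print(pool.reserved_gb)  # 250: reserved as data is written
```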
ambient temperature
The air temperature in the area where a system is installed. Also called intake temperature or room temperature.

ANSI
American National Standards Institute. A non-governmental organization that develops standards (such as SCSI I/O interface standards and Fibre Channel interface standards) used voluntarily by many manufacturers within the United States.

arbitrated loop
A Fibre Channel topology that links multiple ports (up to 126) together on a single shared simplex media. Transmissions can only occur between a single pair of nodes at any given time. Arbitration is the scheme that determines which node has control of the loop at any given moment.

arbitrated loop physical address
See AL_PA.

arbitrated loop topology
See arbitrated loop.

array
All the physical disk drives in a storage system that are known to and under the control of a controller pair.

array controller
See controller.

asynchronous
Events scheduled as the result of a signal requesting the event, or occurring without any specified time relation.

backplane
An electronic printed circuit board that distributes data, control, power, and other signals to element connectors.

bad block
A data block that contains a physical defect.

bad block replacement
A replacement routine that substitutes defect-free disk blocks for those found to have defects. This process takes place in the controller and is transparent to the host.

bail lock
Part of the power supply AC receptacle that engages the AC power cord connector to ensure that the cord cannot be accidentally disconnected.

baud
The maximum rate of signal state changes per second on a communication circuit. If each signal state change corresponds to a code bit, then the baud rate and the bit rate are the same. It is also possible for signal state changes to correspond to more than one code bit, so the baud rate may be lower than the code bit rate.
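The baud versus bit rate relationship above can be checked with a line of arithmetic. The rates and the two-bits-per-symbol line code below are assumed for illustration:

```python
# Worked example: bit rate = baud rate x code bits per signal state change.

baud_rate = 1_000_000          # signal state changes per second
bits_per_symbol = 2            # e.g. a four-level line code
bit_rate = baud_rate * bits_per_symbol
print(bit_rate)                # 2000000: bit rate exceeds the baud rate

# With one bit per state change the two rates are equal:
print(baud_rate * 1 == baud_rate)  # True
```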
bay
The physical location of an element, such as a drive, I/O module, EMU, or power supply in a drive enclosure. Each bay is numbered to define its location.

bidirectional
Also called Bi-Di. The movement of optical signals in opposite directions through a common fiber cable, such as the data flow path typically on a parallel printer port. A parallel port can provide two-way data flow for disk drives, scanning devices, FAX operations, and even parallel modems.

block
Also called a sector. The smallest collection of consecutive bytes addressable on a disk drive. In integrated storage elements, a block contains 512 bytes of data, error codes, flags, and the block address header.
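The 512-byte block size gives a simple round-up rule for how many blocks a given number of bytes occupies; the byte counts below are arbitrary examples:

```python
# Capacity arithmetic using the 512-byte block (sector) size defined above.

BLOCK_SIZE = 512  # bytes of data per block

def blocks_needed(nbytes: int) -> int:
    # Round up: a partial block still occupies a whole block.
    return -(-nbytes // BLOCK_SIZE)

print(blocks_needed(512))     # 1
print(blocks_needed(513))     # 2
print(blocks_needed(10_000))  # 20
```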
blower
A variable speed airflow device that pulls air into an enclosure or element. It usually pulls air in from the front and exhausts the heated air out the rear.

cabinet
An alternate term used for a rack.

cable assembly
A fiber optic cable that has connectors installed on one or both ends. General use of these cable assemblies includes the interconnection of multimode fiber optic cable assemblies with either LC or SC type connectors.
• When there is a connector on only one end of the cable, the cable assembly is referred to as a pigtail.
• When there is a connector on each end of the cable, the cable assembly is referred to as a jumper.

CAC
Corrective Action Code. An HP Command View EVA GUI display component that defines the action required to correct a problem.

cache
High-speed memory that sets aside data as an intermediate data buffer between a host and the storage media. The purpose of cache is to improve performance.

cache battery
A rechargeable unit mounted within a controller enclosure that supplies back-up power to the cache module in case of primary power shortage.

cache battery indicator
1. An orange light emitting diode (indicator) that illuminates on the controller operator control panel (OCP) to define the status of the HSV Controller cache batteries.
2. An amber status indicator that illuminates on a cache battery. When illuminated, it indicates that one or more cache battery cells have failed and the battery must be replaced with a new battery.

carrier
A drive-enclosure-compatible assembly containing a disk drive or other storage devices.

client
A software program that uses the services of another software program. The HP Command View EVA client is a standard internet browser.

clone
See virtual disk copy.

communication logical unit number (LUN)
See console LUN.

condition report
A three-element code generated by the EMU, where e.t. is the element type (a hexadecimal number), en. is the element number (a decimal number), and ec is the condition code (a decimal number).

console LUN
A SCSI-3 virtual object that makes a controller pair accessible by the host before any virtual disks are created. Also called a communication LUN.

console LUN ID
The ID that can be assigned when a host operating system requires a unique ID. The console LUN ID is assigned by the user, usually when the storage system is initialized.
See also console LUN.

controller
A hardware/firmware device that manages communications between host systems and other devices. Controllers typically differ by the type of interface to the host and provide functions beyond those the devices support.

controller enclosure
A unit that holds one or more controllers, power supplies, blowers, cache batteries, transceivers, and connectors.

controller event
A significant occurrence involving any storage system hardware or software component reported by the controller to HP Command View EVA.

controller fault indicator
An amber fault indicator that illuminates on the controller OCP to indicate when there is an HSV Controller fault.
controller pair
Two interconnected controller modules which together control the disk enclosures in the storage system.

corrective action code
See CAC.

CRITICAL Condition
A drive enclosure EMU condition that occurs when one or more drive enclosure elements have failed or are operating outside of their specifications. The failure of the element makes continued normal operation of at least some elements in the enclosure impossible. Some enclosure elements may be able to continue normal operations. Only an UNRECOVERABLE condition has precedence. This condition has precedence over NONCRITICAL errors and INFORMATION condition.

CRU
Customer Replaceable Unit. A storage system element that a user can replace without using special tools or techniques, or special training.

customer replaceable unit
See CRU.

data entry mode
The state in which controller information can be displayed or controller configuration data can be entered. On the Enterprise Storage System, the controller mode is active when the LCD on the HSV Controller OCP is flashing.

default disk group
The first disk group created at the time the system is initialized. The default disk group can contain the entire set of physical disks in the array or just a few of the disks.
See also disk group.

Detailed Fault View
An HSV Controller OCP display that permits a user to view detailed information about a controller fault.

device channel
A channel used to connect storage devices to a host I/O bus adapter or intelligent controller.
device ports
Controller pair device ports connected to the storage system’s physical disk drive array through the Fibre Channel drive enclosure. Also called device-side ports.

device-side ports
See device ports.

DIMM
Dual Inline Memory Module. A small circuit board holding memory chips.

dirty data
The write-back cached data that has not been written to storage media even though the host operation processing the data has completed.

disk drive
A carrier-mounted storage device supporting random access to fixed size blocks of data.

disk drive blank
A carrier that replaces a disk drive to control airflow within a drive enclosure whenever there is less than a full complement of storage devices.

disk enclosure
A unit that holds storage system devices such as disk drives, power supplies, blowers, I/O modules, transceivers, or EMUs.

drive enclosure event
A significant operational occurrence involving a hardware or software component in the drive enclosure. The drive enclosure EMU reports these events to the controller for processing.
disk failure protection
A method by which a controller pair reserves drive capacity to take over the functionality of a failed or failing physical disk. For each disk group, the controllers reserve space in the physical disk pool equivalent to the selected number of physical disk drives.

disk group
A physical disk drive set or pool in which a virtual disk is created. A disk group may contain all the physical disk drives in a controller pair array or a subset of the array.

disk migration state
A physical disk drive operating state. A physical disk drive can be in a stable or migration state:
• Stable: The state in which the physical disk drive has no failure nor is a failure predicted.
• Migration: The state in which the disk drive is failing, or failure is predicted to be imminent. Data is then moved off the disk onto other disk drives in the same disk group.

disk replacement delay
The time that elapses between a drive failure and when the controller starts searching for spare disk space. Drive replacement seldom starts immediately in case the “failure” was a glitch or temporary condition.

drive blank
See disk drive blank.

drive enclosure
See disk enclosure.

dual-loop
A configuration where each drive is connected to a pair of controllers through two loops. These two Fibre Channel loops constitute a loop pair.

dual power supply configuration
See .

dynamic capacity expansion
A storage system feature that provides the ability to increase the size of an existing virtual disk. Before using this feature, you must ensure that your operating system supports capacity expansion of a virtual disk (or LUN).
EIA
Electronic Industries Alliance. A standards organization specializing in the electrical and functional characteristics of interface equipment.

EIP
Event Information Packet. The event information packet is an HSV element hexadecimal character display that defines how an event was detected. Also called the EIP type.

electromagnetic interference
See EMI.

electrostatic discharge
See ESD.

element
1. In a drive enclosure, a device such as an EMU, power supply, disk, blower, or I/O module. The object can be controlled, interrogated, or described by the enclosure services process.
2. In the Open SAN Manager, a controllable object, such as the Enterprise storage system.

HP Command View EVA GUI
The GUI through which a user can control and monitor a storage system. HP Command View EVA can be installed on more than one storage management server in a fabric. Each installation is a management agent. The client for the agent is a standard browser.
EMI
Electromagnetic Interference. The impairment of a signal by an electromagnetic disturbance.

EMU
Environmental Monitoring Unit. An element which monitors the status of an enclosure, including the power, air temperature, and blower status. The EMU detects problems and displays and reports these conditions to a user and the controller. In some cases, the EMU implements corrective action.

enclosure
A unit used to hold various storage system devices such as disk drives, controllers, power supplies, blowers, an EMU, or I/O modules.

enclosure address bus
An Enterprise storage system bus that interconnects and identifies controller enclosures and disk drive enclosures by their physical location. Enclosures within a reporting group can exchange environmental data. This bus uses enclosure ID expansion cables to assign enclosure numbers to each enclosure. Communications over this bus do not involve the Fibre Channel drive enclosure bus and are, therefore, classified as out-of-band communications.

enclosure number (En)
One of the vertical rack-mounting positions where the enclosure is located. The positions are numbered sequentially in decimal numbers starting from the bottom of the cabinet. Each disk enclosure has its own enclosure number. A controller pair shares an enclosure number. If the system has an expansion rack, the enclosures in the expansion rack are numbered from 15 to 24, starting at the bottom.

enclosure services
Those services that establish the mechanical environmental, electrical environmental, and external indicators and controls for the proper operation and maintenance of devices within an enclosure, as described in the SES SCSI-3 Enclosure Services Command Set (SES), Rev 8b, American National Standard for Information Services.
Enclosure Services Interface
See ESI.

Enclosure Services Processor
See ESP.

Enterprise Virtual Array
The Enterprise Virtual Array is a product that consists of one or more storage systems. Each storage system consists of a pair of HSV controllers and the disk drives they manage. A storage system within the Enterprise Virtual Array can be formally referred to as an Enterprise storage system, or generically referred to as the storage system.

Enterprise Virtual Array rack
A unit that holds controller enclosures, disk drive enclosures, power distribution supplies, and enclosure address buses that, combined, comprise an Enterprise storage system solution. Also called the Enterprise storage system rack.

environmental monitoring unit
See EMU.

error code
The portion of an EMU condition report that defines a problem.
ESD
Electrostatic Discharge. The emission of a potentially harmful static electric voltage as a result of improper grounding.

ESI
Enclosure Services Interface. The SCSI-3 engineering services interface implementation developed for StorageWorks products. A bus that connects the EMU to the disk drives.

ESP
Enclosure Services Processor. An EMU that implements an enclosure’s services process.

event
Any significant change in the state of the Enterprise storage system hardware or software component reported by the controller to HP Command View EVA.

Event Information Packet
See EIP.

Event Number
See Evt No.

Evt No.
Event Number. A sequential number assigned to each Software Code Identification (SWCID) event. It is a decimal number in the range 0-255.

exabyte
A unit of storage capacity that is the equivalent of 2^60 bytes, or 1,152,921,504,606,846,976 bytes. One exabyte is equivalent to 1,024 petabytes.
fabric A Fibre Channel fabric or two or more interconnected Fibre Channels allowing data transmission.
fabric port A port which is capable of supporting an attached arbitrated loop. This port on a loop will have the AL_PA hexadecimal address 00 (loop ID 7E), giving the fabric the highest priority access to the loop. A loop port is the gateway to the fabric for the node ports on a loop.
failover The process that takes place when one controller assumes the workload of a failed companion controller. Failover continues until the failed controller is operational.
fan The variable speed airflow device that cools an enclosure or element by forcing ambient air into an enclosure or element and forcing heated air out the other side.
Fault Management Code See FMC.
Fibre Channel drive enclosure Fibre Channel Arbitrated Loop. The American National Standards Institute’s (ANSI) document that specifies arbitrated loop topology operation.
FC HBA Fibre Channel Host Bus Adapter. An interchangeable term for Fibre Channel adapter. See also FCA.
FCA Fibre Channel Adapter. An adapter used to connect the host server to the fabric. Also called a Host Bus Adapter (HBA) or a Fibre Channel Host Bus Adapter (FC HBA). See also FC HBA.
FCC Federal Communications Commission. The federal agency responsible for establishing standards and approving electronic devices within the United States.
FCP Fibre Channel Protocol. The mapping of SCSI-3 operations to Fibre Channel.
fiber The optical media used to implement Fibre Channel.
fiber optics The technology where light is transmitted through glass or plastic (optical) threads (fibers) for data communication or signaling purposes.
fiber optic cable A transmission medium designed to transmit digital signals in the form of pulses of light. Fiber optic cable is noted for its properties of electrical isolation and resistance to electrostatic contamination.
fibre The international spelling that refers to the Fibre Channel standards for optical media.
Fibre Channel A data transfer architecture designed for mass storage devices and other peripheral devices that require very high bandwidth.
Fibre Channel adapter See FCA.
Fibre Channel Loop An enclosure that provides twelve-port central interconnect for Fibre Channel Arbitrated Loops following the ANSI Fibre Channel drive enclosure standard.
field replaceable unit See FRU.
flush The act of writing dirty data from cache to a storage media.
FMC Fault Management Code. The HP Command View EVA display of the Enterprise Storage System error condition information.
form factor A storage industry dimensional standard for 3.5 inch (89 mm) and 5.25 inch (133 mm) high storage devices. Device heights are specified as low-profile (1 inch or 25.4 mm), half-height (1.6 inch or 41 mm), and full-height (5.25 inch or 133 mm).
FPGA Field Programmable Gate Array. A programmable device with an internal array of logic blocks surrounded by a ring of programmable I/O blocks connected together through a programmable interconnect.
frequency The number of cycles that occur in one second expressed in Hertz (Hz). Thus, 1 Hz is equivalent to one cycle per second.
FRU Field Replaceable Unit. A hardware element that can be replaced in the field. This type of replacement can require special training, tools, or techniques. Therefore, FRU procedures are usually performed only by an Authorized Service Representative.
Gb Gigabit. A measurement of the rate at which the transfer of bits of data occurs. Sometimes referred to as Gbps. Nominally, a Gb is a transfer rate of 1,000,000,000 (10^9) bits per second.
For Fibre Channel transceivers or FC loops the Gb transfer rates are:
• 1 Gb is a transmission rate of 1,062,500,000 bits per second.
• 2 Gb is a transmission rate of 2,125,000,000 bits per second.
GB Gigabyte. A unit of measurement defining either:
• A data transfer rate.
• A storage or memory capacity of 1,073,741,824 (2^30) bytes.
See also GBps.
Gbps Gigabits per second. A measurement of the rate at which the transfer of bits of data occurs. Nominally, a Gbps is a transfer rate of 1,000,000,000 (10^9) bits per second. See also Gb.
GBps Gigabytes per second. A measurement of the rate at which the transfer of bytes of data occurs. A GBps is a transfer rate of 1,000,000,000 (10^9) bytes per second. See also GB.
Giga (G) The notation to represent 10^9 or 1 billion (1,000,000,000).
gigabaud An encoded bit transmission rate of one billion (10^9) bits per second.
gigabit See Gb.
gigabit per second See Gbps.
graphical user interface See GUI.
GUI Graphical User Interface. Software that displays the status of a storage system and allows its user to control the storage system.
HBA Host Bus Adapter. See also FCA.
host A computer that runs user applications and uses (or can potentially use) one or more virtual disks created and presented by the controller pair.
Host Bus Adapter See HBA.
host computer See host.
host link indicator The HSV Controller display that indicates the status of the storage system Fibre Channel links.
host ports A connection point to one or more hosts through a Fibre Channel fabric. A host is a computer that runs user applications and that uses (or can potentially use) one or more of the virtual disks that are created and presented by the controller pair.
host-side ports See host ports.
hot-pluggable A method of element replacement whereby the complete system remains operational during element removal or insertion. Replacement does not interrupt data transfers to other elements.
hub A communications infrastructure device to which nodes on a multi-point bus or loop are physically connected. It is used to improve the manageability of physical cables.
I/O module Input/Output module. The enclosure element that is the Fibre Channel drive enclosure interface to the host or controller. I/O modules are bus speed specific, either 1 Gb or 2 Gb.
IDX A 2-digit decimal number portion of the HSV controller termination code display that defines one of 32 locations in the Termination Code array that contains information about a specific event. See also param and TC.
in-band communication The method of communication between the EMU and controller that utilizes the Fibre Channel drive enclosure bus.
INFORMATION condition A drive enclosure EMU condition report that may require action. This condition is for information only and does not indicate the failure of an element. All condition reports have precedence over an INFORMATION condition.
initialization A process that prepares a storage system for use. Specifically, the system binds controllers together as an operational pair and establishes preliminary data structures on the disk array. Initialization also sets up the first disk group, called the default disk group.
input/output module See I/O module.
intake temperature See ambient temperature.
interface A set of protocols used between components such as cables, connectors, and signal levels.
JBOD Just a Bunch of Disks. A number of disks connected to one or more controllers.
K Kilo. A scientific notation denoting a multiplier of one thousand (1,000).
KB Kilobyte. A unit of measurement defining either storage or memory capacity.
1. For storage, a KB is a capacity of 1,000 (10^3) bytes of data.
2. For memory, a KB is a capacity of 1,024 (2^10) bytes of data.
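The decimal-versus-binary split that runs through these size entries (KB, GB, petabyte, exabyte) can be checked with a few lines of arithmetic. This is an illustrative sketch, not part of the product:

```python
# Decimal (10^n) prefixes are used for transfer rates and storage capacity;
# binary (2^n) prefixes are used for memory capacity, as this glossary notes.
STORAGE_KB = 10**3   # 1,000 bytes
MEMORY_KB = 2**10    # 1,024 bytes
GB = 2**30           # 1,073,741,824 bytes
PB = 2**50           # 1,125,899,906,842,624 bytes
EB = 2**60           # 1,152,921,504,606,846,976 bytes

# One exabyte is 1,024 petabytes, matching the exabyte definition above.
print(EB == 1024 * PB)  # True
```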
LAN Local area network. A group of computers and associated devices that share a common communications line and typically share the resources of a single processor or server within a small geographic area.
laser A device that amplifies light waves and concentrates them in a narrow, very intense beam.
Last Fault View An HSV Controller display defining the last reported fault condition.
Last Termination Error Array See LTEA.
LCD Liquid Crystal Display. The indicator on a panel that is associated with an element. The LCD is usually located on the front of an element.
License Key A WWN-encoded sequence that is obtained from the license key fulfillment website.
light emitting diode Light Emitting Diode. A semiconductor diode, used in an electronic display, that emits light when a voltage is applied to it.
link A connection between ports on Fibre Channel devices. The link is a full duplex connection to a fabric or a simplex connection between loop devices.
logon Also called login, it is a procedure whereby a user or network connection is identified as being an authorized network user or participant.
loop See loop pair.
loop ID Seven-bit values numbered contiguously from 0 to 126 decimal that represent the 127 valid AL_PA values on a loop (not all 256 hexadecimal values are allowed as AL_PA values per Fibre Channel).
loop pair A Fibre Channel attachment between a controller and physical disk drives. Physical disk drives connect to controllers through paired Fibre Channel arbitrated loops. There are two loop pairs, designated loop pair 1 and loop pair 2. Each loop pair consists of two loops (called loop A and loop B) that operate independently during normal operation, but provide mutual backup in case one loop fails.
LTEA Last Termination Event Array. A two-digit HSV Controller number that identifies a specific event that terminated an operation. Valid numbers range from 00 to 31.
LUN Logical Unit Number. A SCSI convention used to identify elements. The host sees a virtual disk as a LUN. The LUN address a user assigns to a virtual disk for a particular host will be the LUN at which that host will see the virtual disk.
management agent The HP Command View EVA software that controls and monitors the Enterprise storage system. The software can exist on more than one management server in a fabric. Each installation is a management agent.
management agent event Significant occurrence to or within the management agent software, or an initialized storage cell controlled or monitored by the management agent.
Mb Megabit. A term defining a data transfer rate. See also Mbps.
MB Megabyte. A term defining either:
• A data transfer rate.
• A measure of either storage or memory capacity of 1,048,576 (2^20) bytes.
See also MBps.
Mbps Megabits per second. A measure of bandwidth or data transfers occurring at a rate of 1,000,000 (10^6) bits per second.
MBps Megabytes per second. A measure of bandwidth or data transfers occurring at a rate of 1,000,000 (10^6) bytes per second.
mean time between failures See MTBF.
Mega A notation denoting a multiplier of 1 million (1,000,000).
metadata Information that a controller pair writes on the disk array. This information is used to control and monitor the array and is not readable by the host.
micro meter See micron.
mirrored caching A process in which half of each controller’s write cache mirrors the companion controller’s write cache. The total memory available for cached write data is reduced by half, but the level of protection is greater.
mirroring The act of creating an exact copy or image of data.
MTBF Mean Time Between Failures. The average time from start of use to first failure in a large population of identical systems, components, or devices.
multi-mode fiber A fiber optic cable with a diameter large enough (50 microns or more) to allow multiple streams of light to travel different paths from the transmitter to the receiver. This transmission mode enables bidirectional transmissions.
Network Storage Controller See NSC.
NONCRITICAL Condition A drive enclosure EMU condition report that occurs when one or more elements inside the enclosure have failed or are operating outside of their specifications. The failure does not affect continued normal operation of the enclosure. All devices in the enclosure continue to operate according to their specifications. The ability of the devices to operate correctly may be reduced if additional failures occur. UNRECOVERABLE and CRITICAL errors have precedence over this condition. This condition has precedence over an INFORMATION condition. Early correction can prevent the loss of data.
node port A device port that can operate on the arbitrated loop topology.
non-OFC (Open Fibre Control) A laser transceiver whose lower-intensity output does not require special open Fibre Channel mechanisms for eye protection. The Enterprise storage system transceivers are non-OFC compatible.
NSC Network Storage Controller. The HSV Controllers used by the Enterprise storage system.
NVRAM Nonvolatile Random Access Memory. Memory whose contents are not lost when a system is turned off or if there is a power failure. This is achieved through the use of UPS batteries or implementation technology such as flash memory. NVRAM is commonly used to store important configuration parameters.
occupancy alarm level A percentage of the total disk group capacity in blocks. When the number of blocks in the disk group that contain user data reaches this level, an event code is generated. The alarm level is specified by the user.
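As a sketch of the occupancy rule just described, a threshold check might look like the following. The function name and signature are illustrative only; the real check runs inside the controller firmware:

```python
def occupancy_alarm_reached(used_blocks: int, total_blocks: int,
                            alarm_level_percent: float) -> bool:
    """Return True once user data occupies at least the user-set
    percentage of the disk group's total blocks (illustrative only)."""
    return used_blocks * 100 >= alarm_level_percent * total_blocks

# A disk group 85% full with an 85% alarm level would generate an event code.
print(occupancy_alarm_reached(850, 1000, 85.0))  # True
print(occupancy_alarm_reached(600, 1000, 85.0))  # False
```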
OCP Operator Control Panel. The element that displays the controller’s status using indicators and an LCD. Information selection and data entry are controlled by the OCP push-buttons.
online/near-online An online drive is a normal, high-performance drive, while a near-online drive is a lower-performance drive.
operator control panel See OCP.
OpenView Storage Management Server A centralized, appliance-based monitoring and management interface that supports multiple applications, operating systems, hardware platforms, storage systems, tape libraries and SAN-related interconnect devices. It is included and resides on the SANWorks Management Server, a single aggregation point for data management.
param That portion of the HSV controller termination code display that defines:
• The 2-character parameter identifier that is a decimal number in the 0 through 30 range.
• The 8-character parameter code that is a hexadecimal number.
See also IDX and TC.
password A security interlock whose purpose is to allow:
• A management agent to control only certain storage systems
• Only certain management agents to control a storage system
PDM Power Distribution Module. A thermal circuit breaker-equipped power strip that distributes power from a PDU to Enterprise Storage System elements.
PDU Power Distribution Unit. The rack device that distributes conditioned AC or DC power within a rack.
petabyte A unit of storage capacity that is the equivalent of 2^50 (1,125,899,906,842,624) bytes or 1,024 terabytes.
physical disk A disk drive mounted in a drive enclosure that communicates with a controller pair through the device-side Fibre Channel loops. A physical disk is hardware with embedded software, as opposed to a virtual disk, which is constructed by the controllers. Only the controllers can communicate directly with the physical disks. The physical disks, in aggregate, are called the array and constitute the storage pool from which the controllers create virtual disks.
physical disk array See array.
port A Fibre Channel connector on a Fibre Channel device.
port_name A 64-bit unique identifier assigned to each Fibre Channel port. The port_name is communicated during the login and port discovery processes.
power distribution module See PDM.
power distribution unit See PDU.
power supply An element that develops DC voltages for operating the storage system elements from either an AC or DC source.
preferred address An AL_PA which a node port attempts to acquire during loop initialization.
preferred path A preference for which controller of the controller pair manages the virtual disk. This preference is set by the user when creating the virtual disk. A host can change the preferred path of a virtual disk at any time. The primary purpose of preferring a path is load balancing.
protocol The conventions or rules for the format and timing of messages sent and received.
pushbutton A button that is engaged or disengaged when it is pressed.
quiesce The act of rendering bus activity inactive or dormant. For example, “quiesce the SCSI bus operations during a device warm-swap.”
rack A floor-standing structure primarily designed for, and capable of, holding and supporting storage system equipment. All racks provide for the mounting of panels per Electronic Industries Alliance (EIA) Standard RS310C.
rack-mounting unit A measurement for rack heights based upon a repeating hole pattern. It is expressed as “U” spacing or panel heights. Repeating hole patterns are spaced every 1.75 inches (44.45 mm) and based on EIA’s Standard RS310C. For example, a 3U unit is 5.25 inches (133.35 mm) high, and a 4U unit is 7.0 inches (177.79 mm) high.
read caching A cache method used to decrease subsystem response times to a read request by allowing the controller to satisfy the request from the cache memory rather than from the disk drives. Reading data from cache memory is faster than reading data from a disk. The read cache is specified as either On or Off for each virtual disk. The default state is On.
read ahead caching A cache management method used to decrease the subsystem response time to a read request by allowing the controller to satisfy the request from the cache memory rather than from the disk drives.
reconstruction The process of regenerating the contents of a failed member’s data. The reconstruction process writes the data to a spare set disk and incorporates the spare set disk into the mirrorset, striped mirrorset, or RAID set from which the failed member came.
red wine-colored A convention of applying the color of red wine to a CRU tab, lever, or handle to identify the unit as hot-pluggable.
redundancy
1. Element Redundancy—The degree to which logical or physical elements are protected by having another element that can take over in case of failure. For example, each loop of a device-side loop pair normally works independently but can take over for the other in case of failure.
2. Data Redundancy—The level to which user data is protected. Redundancy is directly proportional to cost in terms of storage usage; the greater the level of data protection, the more storage space is required.
redundant power configuration A capability of the Enterprise storage system racks and enclosures to allow continuous system operation by preventing single points of power failure.
• For a rack, two AC power sources and two power conditioning units distribute primary and redundant AC power to enclosure power supplies.
• For a controller or drive enclosure, two power supplies ensure that the DC power is available even when there is a failure of one supply, one AC source, or one power conditioning unit. Implementing the redundant power configuration provides protection against the loss or corruption of data.
reporting group An Enterprise Storage System controller pair and the associated disk drive enclosures. The Enterprise Storage System controller assigns a unique decimal reporting group number to each EMU on its loops. Each EMU collects disk drive environmental information from its own sub-enclosure and broadcasts the data over the enclosure address bus to all members of the reporting group. Information from enclosures in other reporting groups is ignored.
room temperature See ambient temperature.
SCSI
1. Small Computer System Interface. An American National Standards Institute (ANSI) interface which defines the physical and electrical parameters of a parallel I/O bus used to connect computers and a maximum of 16 bus elements.
2. The communication protocol used between a controller pair and the hosts. Specifically, the protocol is Fibre Channel drive enclosure or SCSI on Fibre Channel. SCSI is the higher command-level protocol and Fibre Channel is the low-level transmission protocol. The controllers have full support for SCSI-2; additionally, they support some elements of SCSI-3.
SCSI-3 The ANSI standard that defines the operation and function of Fibre Channel systems.
SCSI-3 Enclosure Services See SES.
selective presentation The process whereby a controller presents a virtual disk only to the host computer that is authorized to access it.
serial transmission A method of transmission in which each bit of information is sent sequentially on a single channel rather than simultaneously as in parallel transmission.
SES SCSI-3 Enclosures Services. Those services that establish the mechanical environment, electrical environment, and external indicators and controls for the proper operation and maintenance of devices within an enclosure.
small computer system interface See SCSI.
Snapclone A virtual disk that can be manipulated while the data is being copied. Only an Active member of a virtual disk family can be snapcloned. The Snapclone, like a snapshot, reflects the contents of the source virtual disk at a particular point in time. Unlike the snapshot, the Snapclone is an actual clone of the source virtual disk and immediately becomes an independent Active member of its own virtual disk family.
snapshot A temporary virtual disk (Vdisk) that reflects the contents of another virtual disk at a particular point in time. A snapshot operation is only done on an active virtual disk. Up to seven snapshots of an active virtual disk can exist at any point. The active disk and its snapshot constitute a virtual family.
SSN Storage System Name. An HP Command View EVA-assigned, unique 20-character name that identifies a specific storage system.
storage pool The aggregated blocks of available storage in the total physical disk array.
storage system The controllers, storage devices, enclosures, cables, and power supplies and their software.
Storage System Name See SSN.
switch An electro-mechanical device that initiates an action or completes a circuit.
TB Terabyte. A term defining either:
• A data transfer rate.
• A measure of either storage or memory capacity of 1,099,511,627,776 (2^40) bytes.
See also TBps.
TBps Terabytes per second. A data transfer rate of 1,000,000,000,000 (10^12) bytes per second. See also TB.
TC Termination Code. An Enterprise Storage System controller 8-character hexadecimal display that defines a problem causing controller operations to halt. See also IDX and param.
Termination Code See TC.
termination event An occurrence that causes the storage system to cease operation.
terminator Interconnected elements that form the ends of the transmission lines in the enclosure address bus.
topology An interconnection scheme that allows multiple Fibre Channel ports to communicate. Point-to-point, arbitrated loop, and switched fabric are all Fibre Channel topologies.
transceiver The device that converts electrical signals to optical signals at the point where the fiber cables connect to the FC elements such as hubs, controllers, or adapters.
uninitialized system A state in which the storage system is not ready for use. See also initialization.
UNRECOVERABLE Condition A drive enclosure EMU condition report that occurs when one or more elements inside the enclosure have failed and have disabled the enclosure. The enclosure may be incapable of recovering or bypassing the failure and will require repairs to correct the condition. This is the highest level condition; it has precedence over all other errors and requires immediate corrective action.
unwritten cached data Also called unflushed data. See also flush.
UPS Uninterruptible Power Supply. A battery-operated power supply guaranteed to provide power to an electrical device in the event of an unexpected interruption to the primary power supply. Uninterruptible power supplies are usually rated by the amount of voltage supplied and the length of time the voltage is supplied.
Vdisk Virtual Disk. A simulated disk drive created by the controllers as storage for one or more hosts. The virtual disk characteristics, chosen by the storage administrator, provide a specific combination of capacity, availability, performance, and accessibility. A controller pair simulates the characteristics of the virtual disk by deploying the disk group from which the virtual disk was created. The host computer sees the virtual disk as “real,” with the characteristics of an identical physical disk. See also virtual disk family and virtual disk snapshot.
virtual disk See Vdisk.
virtual disk copy A clone or exact replica of another virtual disk at a particular point in time. Only an active virtual disk can be copied. A copy immediately becomes the active disk of its own virtual disk family. See also virtual disk family and virtual disk snapshot.
virtual disk family A virtual disk and its snapshot, if a snapshot exists, constitute a family. The original virtual disk is called the active disk. When you first create a virtual disk family, the only member is the active disk. See also virtual disk copy and virtual disk snapshot.
virtual disk snapshot See snapshot.
Vraid0 A virtualization technique that provides no data protection. Data from the host is broken down into chunks and distributed on the disks comprising the disk group from which the virtual disk was created. Reading and writing to a Vraid0 virtual disk is very fast and makes the fullest use of the available storage, but there is no data protection (redundancy).
Vraid1 A virtualization technique that provides the highest level of data protection. All data blocks are mirrored, or written twice, on separate physical disks. For read requests, the block can be read from either disk, which can increase performance. Mirroring takes the most storage space because twice the storage capacity must be allocated for a given amount of data.
Vraid5 A virtualization technique that uses parity striping to provide moderate data protection. Parity is a data protection mechanism for a striped virtual disk. A striped virtual disk is one where the data to and from the host is broken down into chunks and distributed on the physical disks comprising the disk group in which the virtual disk was created. If the striped virtual disk has parity, another chunk (a parity chunk) is calculated from the set of data chunks and written to the physical disks. If one of the data chunks becomes corrupted, the data can be reconstructed from the parity chunk and the remaining data chunks.
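The parity mechanism Vraid5 relies on can be illustrated with XOR, the usual parity function for striped arrays. This is a generic sketch of the technique, not the controllers’ actual implementation:

```python
def xor_chunks(chunks):
    """XOR equal-sized chunks together; the same operation computes a
    parity chunk and rebuilds a single missing data chunk."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

data = [b"\x0f\xf0", b"\x33\x33", b"\xaa\x55"]
parity = xor_chunks(data)

# Lose one data chunk; XOR of the survivors plus parity recovers it.
recovered = xor_chunks([data[1], data[2], parity])
print(recovered == data[0])  # True
```

The same property explains why Vraid5 tolerates exactly one failed chunk per stripe: XOR can cancel out every contribution except the missing one.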
World Wide Name See WWN.
write back caching A controller process that notifies the host that the write operation is complete when the data is written to the cache. This occurs before transferring the data to the disk. Write back caching improves response time since the write operation completes as soon as the data reaches the cache. As soon as possible after caching the data, the controller then writes the data to the disk drives.
write caching A process when the host sends a write request to the controller, and the controller places the data in the controller cache module. As soon as possible, the controller transfers the data to the physical disk drives.
WWN World Wide Name. A unique Fibre Channel identifier consisting of a 16-character hexadecimal number. A WWN is required for each Fibre Channel communication port.
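A WWN’s shape, 16 hexadecimal characters, is easy to validate. The helper below is a hypothetical utility; the dash/colon stripping is an assumption about display formatting, not HP’s documented rule:

```python
import re

_WWN_PATTERN = re.compile(r"^[0-9A-Fa-f]{16}$")

def is_well_formed_wwn(wwn: str) -> bool:
    """True when the string is a 16-character hexadecimal WWN.
    Dash/colon separators are stripped first (assumed display convention)."""
    return bool(_WWN_PATTERN.match(wwn.replace("-", "").replace(":", "")))

print(is_well_formed_wwn("50001FE100270D60"))     # True
print(is_well_formed_wwn("5000-1FE1-0027-0D60"))  # True
print(is_well_formed_wwn("50001FE100270D6"))      # False (15 characters)
```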
Index
A
AC power,
adding
IBM AIX hosts,
OpenVMS hosts,
adding hosts,
API versions,
ASCII error codes definitions,
B
bad image header,
bad image segment,
bad image size,
bays locating,
numbering,
bidirectional operation of I/O modules,
C
cables
FCC compliance statement,
cabling controller,
CAC,
Cache batteries failed or missing,
cache battery assembly indicator,
Canadian compliance statement
Class A equipment,
Class B equipment,
CDRH compliance regulations,
Center for Devices and Radiological Health See CDRH
certification product labels,
changing passwords,
checksum,
cleaning fiber optic connectors,
clearing passwords,
code flag,
configuring EVA,
configuring the ESX server,
connection suspended,
connectivity verifying,
connectors power IEC 309 receptacle,
power NEMA L5-30R,
power NEMA L6-30R,
protecting,
contacting HP,
controller cabling,
connectors,
initial setup,
status indicators,
conventions document,
text symbols,
Corrective Action Code See CAC
country-specific certifications,
coupled crash control codes,
creating virtual disks,
creating volume groups,
customer self repair,
parts list,
D
detail view,
detail view menu,
disk drives defined,
reporting status,
disk enclosures bays,
front view,
rear view,
DiskMaxLUN,
disks labeling,
partitioning,
DMP,
document conventions,
related information,
DR group empty,
DR group logging,
DR group merging,
dump/restart control codes,
dust covers, using,
E
EIP,
enclosure certification label,
error codes, defined,
error messages,
event code, defined,
event GUI display,
Event Information Packet See EIP
event number,
F
fabric setup,
FATA drives,
fault management details,
display,
displays,
FC loops,
FCA configuring,
configuring QLogic,
configuring, Emulex,
FCC
Class A Equipment, compliance notice,
Class B Equipment, compliance notice,
Declaration of Conformity,
modifications,
FCC Class A certification,
Federal Communications Commission (FCC) notice,
fiber optics cleaning cable connectors,
protecting cable connectors,
file name for error code definitions,
firmware version display,
H
harmonics conformance
Japan,
help obtaining,
host bus adapters,
hosts adding IBM AIX hosts,
adding OpenVMS hosts,
HP technical support,
HP Command View EVA adding hosts with,
creating virtual disk with,
displaying events,
displaying termination events,
location of,
using,
HSV controller initial setup,
shutdown,
I
I/O modules bidirectional,
IDX code display,
image already loaded,
image incompatible with configuration,
image too large,
image write error,
implicit LUN transition,
incompatible attribute,
indicators battery status,
push buttons,
INITIALIZE LCD,
initializing the system defined,
installing VMware,
invalid parameter ID,
quorum configuration,
target handle,
time,
invalid target id, invalid cursor,
invalid state,
invalid status,
invalid target,
iopolicy setting,
iSCSI configurations,
L
labels enclosure certification,
product certification,
laser device regulatory compliance notice,
lasers radiation, warning,
last fault information,
Last Termination Event Array
See LTEA
LCD default display,
lock busy,
logical disk presented,
logical disk sharing,
lpfc driver,
LTEA,
LUN numbers,
M
management server,
maximum number of objects exceeded,
maximum size exceeded,
media inaccessible,
multipathing,
policy,
N
no FC port,
no image,
no logical disk for Vdisk,
no more events,
no permission,
non-standard rack, specifications,
not a loop port,
not participating controller,
O
object does not exist,
objects in use,
OCP fault management displays,
using,
operation rejected,
other controller failed,
P
parameter code,
parameter code number,
parts replaceable,
password clearing, entering, changing,
clearing,
entering,
removing,
password mismatch,
PDUs,
PIC,
power connectors
IEC 309 receptacle,
NEMA L5-30R,
NEMA L6-30R,
POWER OFF LCD,
powering off the system defined,
presenting virtual disks,
product certification,
protecting fiber optic connectors cleaning supplies,
dust covers,
how to clean,
proxy reads,
push buttons indicators, navigating with,
push-buttons definition,
Q
qla2300 driver,
R
rack non-standard specifications,
rack configurations,
rack stability warning,
regulatory compliance notices cables,
Class A,
Class B,
European Union,
Japan,
laser devices,
modifications,
Taiwan,
WEEE recycling notices,
regulatory notices,
related documentation,
RESTART LCD,
restarting the system,
restarting the system defined,
S
Secure Path,
security credentials invalid,
Security credentials needed,
setting password,
shutdown controllers,
restarting,
shutdown system,
shutting down the system,
slots
See disk enclosures, bays
Software Component ID Codes
See SWCID
Software Identification Code
See SWCID
software version display,
status, disk drives,
storage connection down,
storage not initialized,
storage system
initializing,
restarting,
shutting down,
storage system menu tree
fault management,
shut down system,
system information,
system password,
Storage System Name,
Subscriber's Choice, HP,
Sun SAN driver stack,
Sun StorEdge Traffic Manager,
SWCID,
symbols in text,
system information display,
firmware version,
software version,
versions,
system password,
system rack configurations,
T
TC,
TC display,
TC error code,
technical support
HP,
service locator website,
Termination Code
See TC
termination event GUI display,
text symbols,
time not set,
timeout,
transport error,
turning off power,
typographic conventions,
U
Uninitializing,
uninitializing the system,
universal disk drives,
unknown id,
unknown parameter handle,
unrecoverable media error,
upgrading VMware,
UPS, selecting,
using the OCP,
V
Vdisk DR group member,
Vdisk DR log unit,
Vdisk not presented,
verifying virtual disks,
Veritas Volume Manager,
version information
displaying, software,
controller,
firmware,
OCP,
software,
XCS,
version not supported,
vgcreate,
virtual disks
configuring,
presenting,
verifying,
VMware
installing,
upgrading,
volume groups,
volume is missing,
W
warning, rack stability,
warnings
lasers, radiation,
website
Sun documentation,
Symantec/Veritas,
websites
customer self repair,
HP,
HP Subscriber's Choice for Business,
WEEE recycling notices,
WWLUN ID, identifying,
WWN labels,
X
XCS version,
Z zoning,
Table of contents
- 13 M6412A disk enclosures
- 13 Enclosure layout
- 14 I/O modules
- 15 I/O module status indicators
- 16 Fiber optic Fibre Channel cables
- 16 Copper Fibre Channel cables
- 16 Fibre Channel disk drives
- 17 Disk drive status indicators
- 17 Disk drive blank
- 17 Controller enclosures
- 19 Operator control panel
- 20 Status indicators
- 21 Navigation buttons
- 21 Alphanumeric display
- 21 Power supplies
- 22 Blower module
- 23 Battery module
- 24 HSV controller cabling
- 24 Storage system racks
- 25 Rack configurations
- 25 Power distribution–Modular PDUs
- 27 PDUs
- 28 PDU A
- 28 PDU B
- 28 PDMs
- 29 Rack AC power distribution
- 30 Rack System/E power distribution components
- 31 Rack AC power distribution
- 31 Moving and stabilizing a rack
- 35 EVA8400 storage system connections
- 36 EVA6400 storage system connections
- 37 Direct connect
- 38 iSCSI connection configurations
- 38 Fabric connect iSCSI
- 38 Direct connect iSCSI
- 39 Procedures for getting started
- 39 Gathering information
- 39 Host information
- 40 Setting up a controller pair using the OCP
- 40 Entering the WWN
- 41 Entering the WWN checksum
- 42 Entering the storage system password
- 42 Installing HP Command View EVA
- 42 Installing optional EVA software licenses
- 43 Overview
- 43 Clustering
- 43 Multipathing
- 43 Installing Fibre Channel adapters
- 44 Testing connections to the EVA
- 44 Adding hosts
- 45 Creating and presenting virtual disks
- 45 Verifying virtual disk access from the host
- 46 Configuring virtual disks from the host
- 46 HP-UX
- 46 Scanning the bus
- 47 Creating volume groups on a virtual disk using vgcreate
- 47 IBM AIX
- 47 Accessing IBM AIX utilities
- 48 Adding hosts
- 48 Creating and presenting virtual disks
- 48 Verifying virtual disks from the host
- 49 Linux
- 49 Driver failover mode
- 49 Installing a Qlogic driver
- 50 Upgrading Linux components
- 50 Upgrading qla2x00 RPMs
- 51 Detecting third-party storage
- 51 Compiling the driver for multiple kernels
- 51 Uninstalling the Linux components
- 51 Using the source RPM
- 52 Verifying virtual disks from the host
- 53 OpenVMS
- 53 Updating the AlphaServer console code, Integrity Server console code, and Fibre Channel FCA firmware
- 53 Verifying the Fibre Channel adapter software installation
- 53 Console LUN ID and OS unit ID
- 53 Adding OpenVMS hosts
- 54 Scanning the bus
- 56 Configuring virtual disks from the OpenVMS host
- 56 Setting preferred paths
- 56 Sun Solaris
- 57 Loading the operating system and software
- 57 Configuring FCAs with the Sun SAN driver stack
- 57 Configuring Emulex FCAs with the lpfc driver
- 59 Configuring QLogic FCAs with the qla2300 driver
- 61 Fabric setup and zoning
- 61 Sun StorEdge Traffic Manager (MPxIO)/Sun Storage Multipathing
- 62 Configuring with Veritas Volume Manager
- 63 Configuring virtual disks from the host
- 65 Verifying virtual disks from the host
- 65 Labeling and partitioning the devices
- 66 VMware
- 66 Installing or upgrading VMware
- 66 Configuring the EVA6400/8400 with VMware host servers
- 67 Configuring an ESX server
- 67 Loading the FCA NVRAM
- 67 Setting the multipathing policy
- 68 Specifying DiskMaxLUN
- 69 Verifying connectivity
- 69 Verifying virtual disks from the host
- 69 Windows
- 69 Verifying virtual disk access from the host
- 69 Setting the Pending Timeout value for large cluster configurations
- 71 Best practices
- 71 Operating tips and information
- 71 Reserving adequate free space
- 71 Using FATA disk drives
- 71 Using solid state disk drives
- 72 Maximum LUN size
- 72 Managing unused ports
- 73 Failback preference setting for HSV controllers
- 75 Changing virtual disk failover/failback setting
- 76 Implicit LUN transition
- 76 Storage system shutdown and startup
- 77 Shutting down the storage system
- 77 Starting the storage system
- 78 Saving storage system configuration data
- 79 Adding disk drives to the storage system
- 79 Creating disk groups
- 79 Handling fiber optic cables
- 80 Using the OCP
- 80 Displaying the OCP menu tree
- 82 Displaying system information
- 82 Displaying versions system information
- 82 Shutting down the system
- 83 Shutting the controller down
- 83 Restarting the system
- 84 Uninitializing the system
- 84 Password options
- 85 Changing a password
- 85 Clearing a password
- 87 Customer self repair (CSR)
- 87 Parts only warranty service
- 87 Best practices for replacing hardware components
- 87 Component replacement videos
- 87 Verifying component failure
- 88 Identifying the spare part
- 88 Replaceable parts
- 90 Replacing the failed component
- 90 Replacement instructions
- 93 Contacting HP
- 93 Subscription service
- 93 Documentation feedback
- 93 Related information
- 93 Documents
- 93 HP websites
- 94 Typographic conventions
- 95 Rack stability
- 95 Customer self repair
- 97 Regulatory notices
- 97 Federal Communications Commission (FCC) notice
- 97 FCC Class A certification
- 98 Class A equipment
- 98 Class B equipment
- 98 Declaration of conformity for products marked with the FCC logo, United States only
- 98 Modifications
- 98 Cables
- 99 Laser device
- 99 Laser safety warnings
- 99 Compliance with CDRH regulations
- 99 Certification and classification information
- 100 Canadian notice (avis Canadien)
- 100 Class A equipment
- 100 Class B equipment
- 100 European union notice
- 100 Notice for France
- 100 WEEE Recycling Notices
- 100 English notice
- 101 Dutch notice
- 101 Czechoslovakian notice
- 101 Estonian notice
- 101 Finnish notice
- 102 French notice
- 102 German notice
- 102 Greek notice
- 102 Hungarian notice
- 103 Italian notice
- 103 Korean Communication Committee notice
- 103 Latvian notice
- 103 Lithuanian notice
- 104 Polish notice
- 104 Portuguese notice
- 104 Slovakian notice
- 105 Slovenian notice
- 105 Spanish notice
- 105 Swedish notice
- 105 Germany noise declaration
- 106 Japanese notice
- 106 Harmonics conformance (Japan)
- 106 Taiwanese notice
- 106 Japanese power cord notice
- 106 Country-specific certifications
- 123 Using HP Command View EVA
- 123 GUI termination event display
- 124 GUI event display
- 124 Fault management displays
- 124 Displaying Last Fault Information
- 125 Displaying Detailed Information
- 125 Interpreting fault management information
- 127 Rack specifications
- 127 Internal component envelope
- 127 EIA310-D standards
- 128 EVA cabinet measures and tolerances
- 128 Weights, dimensions and component CG measurements
- 128 Airflow and Recirculation
- 128 Component Airflow Requirements
- 129 Rack Airflow Requirements
- 129 Configuration Standards
- 129 Environmental and operating specifications
- 129 UPS Selection
- 131 Shock and vibration specifications
- 133 High-level solution overview
- 134 Benefits at a glance
- 134 Installation requirements
- 134 Recommended mitigations
- 134 Supported configurations
- 135 General configuration components
- 135 Connecting a single path HBA server to a switch in a fabric zone
- 137 HP-UX configuration
- 137 Requirements
- 137 HBA configuration
- 138 Risks
- 138 Limitations
- 139 Windows Server (32-bit) configuration
- 139 Requirements
- 139 HBA configuration
- 139 Risks
- 139 Limitations
- 140 Windows Server (64-bit) configuration
- 140 Requirements
- 140 HBA configuration
- 141 Risks
- 141 Limitations
- 142 SUN Solaris configuration
- 142 Requirements
- 142 HBA configuration
- 142 Risks
- 142 Limitations
- 143 Tru64 UNIX configuration
- 143 Requirements
- 143 HBA configuration
- 144 Risks
- 145 OpenVMS configuration
- 145 Requirements
- 145 HBA configuration
- 145 Risks
- 145 Limitations
- 146 NetWare configuration
- 146 Requirements
- 146 HBA configuration
- 146 Risks
- 147 Limitations
- 147 Linux (32-bit) configuration
- 147 Requirements
- 148 HBA configuration
- 148 Risks
- 148 Limitations
- 149 Linux (64-bit) configuration
- 149 Requirements
- 149 HBA configuration
- 149 Risks
- 149 Limitations
- 150 IBM AIX configuration
- 150 Requirements
- 150 HBA configuration
- 151 Risks
- 151 Limitations
- 152 VMware configuration
- 152 Requirements
- 152 HBA configuration
- 152 Risks
- 152 Limitations
- 153 Failure scenarios
- 153 HP-UX
- 154 Windows Server
- 154 Sun Solaris
- 155 OpenVMS and Tru64 UNIX
- 156 NetWare
- 156 Linux
- 157 IBM AIX
- 158 VMware