Failback preference setting for HSV controllers
Table 13 describes the failback preference behavior for the controllers.
Table 13 Failback preference behavior

| Setting | Point in time | Behavior |
|---|---|---|
| No preference | At initial presentation | The units are alternately brought online to Controller A or to Controller B. |
| | On dual boot or controller resynch | If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are alternately brought online to Controller A or to Controller B. |
| | On controller failover | All LUNs are brought online to the surviving controller. |
| | On controller failback | All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands. |
| Path A - Failover Only | At initial presentation | The units are brought online to Controller A. |
| | On dual boot or controller resynch | If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller A. |
| | On controller failover | All LUNs are brought online to the surviving controller. |
| | On controller failback | All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands. |
| Path B - Failover Only | At initial presentation | The units are brought online to Controller B. |
| | On dual boot or controller resynch | If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B. |
| | On controller failover | All LUNs are brought online to the surviving controller. |
| | On controller failback | All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands. |
| Path A - Failover/Failback | At initial presentation | The units are brought online to Controller A. |
| | On dual boot or controller resynch | If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller A. |
| | On controller failover | All LUNs are brought online to the surviving controller. |
| | On controller failback | All LUNs remain on the surviving controller. After controller restoration, the units that are online to Controller B and set to Path A are brought online to Controller A. This is a one-time occurrence. If the host then moves the LUN using SCSI commands, the LUN will remain where moved. |
| Path B - Failover/Failback | At initial presentation | The units are brought online to Controller B. |
| | On dual boot or controller resynch | If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B. |
| | On controller failover | All LUNs are brought online to the surviving controller. |
| | On controller failback | All LUNs remain on the surviving controller. After controller restoration, the units that are online to Controller A and set to Path B are brought online to Controller B. This is a one-time occurrence. If the host then moves the LUN using SCSI commands, the LUN will remain where moved. |

HP StorageWorks 6400/8400 Enterprise Virtual Array User Guide 73
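The per-setting behavior above can be modeled with a small sketch. This is illustrative only: the function and setting names are invented here, not HP firmware logic, assuming the rules exactly as stated in Table 13.

```python
# A minimal model of the Table 13 failback preference behaviors.
# Illustrative sketch; names and structure are invented, not HP firmware logic.
from itertools import cycle

NO_PREFERENCE = "No preference"
A_FAILOVER = "Path A - Failover Only"
B_FAILOVER = "Path B - Failover Only"
A_FAILBACK = "Path A - Failover/Failback"
B_FAILBACK = "Path B - Failover/Failback"

# "No preference" alternates successive units between the two controllers.
_alternator = cycle(["A", "B"])

def initial_presentation(setting):
    """Controller a unit is brought online to at initial presentation."""
    if setting == NO_PREFERENCE:
        return next(_alternator)
    return "A" if "Path A" in setting else "B"

def on_failover(surviving_controller):
    """On controller failover, all LUNs move to the surviving controller."""
    return surviving_controller

def on_failback(setting, current_controller):
    """After controller restoration, Failover/Failback units return to their
    preferred controller (a one-time move); all other settings stay put."""
    if setting == A_FAILBACK:
        return "A"
    if setting == B_FAILBACK:
        return "B"
    return current_controller
```

For example, a unit set to Path A - Failover/Failback that failed over to Controller B returns to Controller A after restoration (`on_failback(A_FAILBACK, "B")` yields `"A"`), while a Failover Only unit stays on the survivor.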
Table 14 describes the failback default behavior and supported settings when ALUA-compliant multipath software is running with each operating system. Recommended settings may vary depending on your configuration or environment.
Table 14 Failback settings by operating system

| Operating system | Default behavior | Supported settings |
|---|---|---|
| HP-UX | Host follows the unit¹ | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback |
| IBM AIX | Host follows the unit¹ | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback |
| Linux | Host follows the unit¹ | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback |
| Novell NetWare | Failback performed on the host | No Preference; Path A/B – Failover Only |
| OpenVMS | Host follows the unit | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended) |
| Sun Solaris | Host follows the unit¹ | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback |
| Tru64 UNIX | Host follows the unit | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended) |
| VMware | Host follows the unit¹ | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback |
| Windows | Failback performed on the host | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback |

¹ If preference has been configured to ensure a more balanced controller configuration, the Path A/B – Failover/Failback setting is required to maintain the configuration after a single controller reboot.
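For scripting a configuration audit, Table 14 can be encoded as plain data. The helper below is a hypothetical sketch (the names and structure are ours, not an HP or multipath-software API); verify the settings against your multipath software documentation before relying on it.

```python
# Table 14 encoded as data, e.g. for scripting a configuration audit.
# Hypothetical helper; names are invented for illustration.

FULL = ("No Preference", "Path A/B - Failover Only", "Path A/B - Failover/Failback")

SUPPORTED = {
    "HP-UX": FULL,
    "IBM AIX": FULL,
    "Linux": FULL,
    "Novell NetWare": FULL[:2],   # Failover/Failback is not listed for NetWare
    "OpenVMS": FULL,
    "Sun Solaris": FULL,
    "Tru64 UNIX": FULL,
    "VMware": FULL,
    "Windows": FULL,
}

# Failover/Failback is marked "(recommended)" for these operating systems.
RECOMMENDED = {"OpenVMS": FULL[2], "Tru64 UNIX": FULL[2]}

def is_supported(os_name, setting):
    """Return True if the failback setting is supported on the given OS."""
    return setting in SUPPORTED.get(os_name, ())
```

For example, `is_supported("Novell NetWare", "Path A/B - Failover/Failback")` is False, matching the table.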
Changing virtual disk failover/failback setting
Changing the failover/failback setting of a virtual disk may impact which controller presents the disk.
Table 15 identifies the presentation behavior that results when the failover/failback setting for a virtual disk is changed.
NOTE:
If the new setting causes the presentation of the virtual disk to move to a new controller, any snapshots or snapclones associated with the virtual disk will also be moved.
Table 15 Impact on virtual disk presentation when changing failover/failback setting

| New setting | Impact on virtual disk presentation |
|---|---|
| No Preference | None. The disk maintains its original presentation. |
| Path A Failover | If the disk is currently presented on controller B, it is moved to controller A. If the disk is on controller A, it remains there. |
| Path B Failover | If the disk is currently presented on controller A, it is moved to controller B. If the disk is on controller B, it remains there. |
| Path A Failover/Failback | If the disk is currently presented on controller B, it is moved to controller A. If the disk is on controller A, it remains there. |
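The impact rules in Table 15 reduce to a one-line decision, sketched below. The function name is invented for illustration; it simply restates the table, assuming the rules as written.

```python
# Sketch of Table 15: given the controller currently presenting a virtual
# disk and the new failover/failback setting, return the controller that
# presents the disk after the change. Illustrative only; name is invented.

def presenting_controller(current, new_setting):
    if new_setting == "No Preference":
        return current  # the disk maintains its original presentation
    # Any "Path A" setting moves the disk to controller A (if it is not
    # already there); any "Path B" setting moves it to controller B.
    return "A" if "Path A" in new_setting else "B"
```

So `presenting_controller("B", "Path A Failover")` returns `"A"`. Remember the NOTE above: if the change moves the disk to the other controller, associated snapshots and snapclones move with it.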