Compaq StorageWorks
HSG80 ACS Solution Software Version 8.6 for Compaq Tru64
UNIX
Installation and Configuration Guide
First Edition (June 2001)
Part Number: AA-RFAUD-TE
Compaq Computer Corporation
© 2001 Compaq Computer Corporation.
Compaq, the Compaq logo, and StorageWorks Registered in U.S. Patent and Trademark Office.
Microsoft, MS-DOS, Windows, Windows NT, and Windows 2000 are trademarks of Microsoft Corporation.
UNIX is a trademark of The Open Group.
All other product names mentioned herein may be trademarks of their respective companies.
Confidential computer software. Valid license from Compaq required for possession, use or copying.
Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
Compaq shall not be liable for technical or editorial errors or omissions contained herein. The
information in this document is provided “as is” without warranty of any kind and is subject to change
without notice. The warranties for Compaq products are set forth in the express limited warranty
statements accompanying such products. Nothing herein should be construed as constituting an additional
warranty.
Compaq service tool software, including associated documentation, is the property of and contains
confidential technology of Compaq Computer Corporation. Service customer is hereby licensed to use
the software only for activities directly relating to the delivery of, and only during the term of, the
applicable services delivered by Compaq or its authorized service provider. Customer may not modify or
reverse engineer, remove, or transfer the software or make the software or any resultant diagnosis or
system management data available to other parties without Compaq’s or its authorized service provider’s
consent. Upon termination of the services, customer will, at Compaq’s or its service provider’s option,
destroy or return the software and associated documentation in its possession.
Printed in the U.S.A.
Contents
About this Guide
Text Conventions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Symbols in Text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Symbols on Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Rack Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Installation and Setup of the CD-ROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Configuration Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Getting Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Compaq Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Compaq Website . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Compaq Authorized Reseller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 1
Planning a Subsystem
Defining the Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–2
Controller Designations A and B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–2
Controller Designations “This Controller” and “Other Controller”. . . . . . . . . . . . . . . . . . . 1–3
Selecting a Failover Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–5
Transparent Failover Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–5
Multiple-Bus Failover Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–7
Selecting a Cache Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–9
Read Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–10
Read-Ahead Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–10
Write-Back Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–10
Write-Through Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–10
Enabling Mirrored Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–11
The Command Console LUN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–11
Determining the Address of the CCL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–12
Enabling/Disabling the CCL in SCSI-2 Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–12
Enabling the CCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–12
Disabling the CCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–12
Enabling/Disabling CCL in SCSI-3 Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–13
Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–13
Naming Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–13
Numbers of Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–13
Assigning Unit Numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–16
Matching Units to Host Connections in Transparent Failover Mode . . . . . . . . . . . . . . . . . 1–17
Matching Units to Host Connections in Multiple-Bus Failover Mode . . . . . . . . . . . . . . . . 1–19
Assigning Unit Numbers Depending on SCSI_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . 1–19
Assigning Host Connection Offsets and Unit Numbers in SCSI-3 Mode . . . . . . . . . . 1–20
Assigning Host Connection Offsets and Unit Numbers in SCSI-2 Mode . . . . . . . . . . 1–20
Selective Storage Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–21
Restricting Host Access in Transparent Failover Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–21
Restricting Host Access by Separate Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–22
Restricting Host Access by Disabling Access Paths . . . . . . . . . . . . . . . . . . . . . . . . . . 1–23
Restricting Host Access by Offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–23
Restricting Host Access in Multiple-Bus Failover Mode . . . . . . . . . . . . . . . . . . . . . . . . . . 1–24
Enabling the Access Path of Selected Host Connections . . . . . . . . . . . . . . . 1–24
Restricting Host Access by Offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–27
Worldwide Names (Node IDs and Port IDs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–28
Restoring Worldwide Names (Node IDs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–28
Unit Worldwide Names (LUN IDs). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–30
Chapter 2
Planning Storage
Where to Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–2
Configuration Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–3
Device PTL Addressing Convention. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–3
Determining Storage Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–10
Choosing a Container Type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–11
Creating a Storageset Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–12
Storageset Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–14
Stripeset Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–14
Mirrorset Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–16
RAIDset Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–18
Striped Mirrorset Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–20
Storageset Expansion Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–21
Partition Planning Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–22
Defining a Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–22
Guidelines for Partitioning Storagesets and Disk Drives. . . . . . . . . . . . . . . . . . . . . . . . . . . 2–23
Changing Characteristics through Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–23
Enabling Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–23
Changing Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–24
Storageset and Partition Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–24
RAIDset Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–24
Mirrorset Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–25
Partition Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–25
Initialization Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–25
Chunk Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–26
Increasing the Request Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–26
Increasing Sequential Data Transfer Performance . . . . . . . . . . . . . . . . . . . . . . . . 2–27
Save Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–28
Destroy/Nodestroy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–28
Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–28
Unit Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–29
Storage Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–29
Creating a Storage Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–29
Example Storage Map - Model 4310R Disk Enclosure . . . . . . . . . . . . . . . . . . . . 2–30
Example Storage Map - Model 4350R Disk Enclosure . . . . . . . . . . . . . . . . . . . . 2–32
Example Storage Map - Model 4314R Disk Enclosure . . . . . . . . . . . . . . . . . . . . 2–33
Example Storage Map - Model 4354R Disk Enclosure . . . . . . . . . . . . . . . . . . . . 2–36
Using the LOCATE Command to Find Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–38
Chapter 3
Preparing the Host System
Enterprise Storage RAID Array Storage System Installation . . . . . . . . . . . . . . . . . . . . . 3–1
Making a Physical Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–5
Preparing to Install the Host Bus Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–5
Installing the Host Bus Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–5
Preparing LUNs for Access by Tru64 UNIX FileSystem . . . . . . . . . . . . . . . . . . . . . . . . 3–6
Creating Partitions on a LUN Using disklabel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–6
For V4.0G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–6
For V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–6
Creating a Filesystem on a LUN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–7
For V4.0G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–7
For V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–7
Mounting the Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–7
For V4.0G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–8
For V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–8
DECsafe Available Server Environment (ASE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–8
Using genvmunix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–8
HSG80 Units and Tru64 UNIX Utilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–9
File Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–9
For V4.0G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–9
For V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–9
Reading from the device, dd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–10
For V4.0G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–10
For V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–10
scu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–11
For V4.0G and V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–11
V4.0G Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–11
V5.1x Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–11
hwmgr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–12
iostat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–12
For V4.0G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–12
For V5.1x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3–13
Chapter 4
Installing and Configuring the HS-Series Agent
Why Use StorageWorks Command Console (SWCC)? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–1
Installation and Configuration Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–3
Before Installing the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–5
Installing and Configuring the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–5
Configure Client System Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–6
Enter Subsystem Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–7
Configure Email Notification Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–8
Final System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–9
Configure Client System Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–10
Enter Subsystem Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–11
Configure Email Notification Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–12
Final System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–13
Reconfiguring the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–13
Removing the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–15
Chapter 5
Configuration Procedures
Establishing a Local Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–2
Setting Up a Single Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–3
Power On and Establish Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–3
Cabling a Single Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–3
Configuring a Single Controller Using CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–4
Verify the Node ID and Check for Any Previous Connections . . . . . . . . . . . . . . . . . . . 5–4
Configure Controller Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–5
Restart the Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–6
Set Time and Verify all Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–6
Plug in the FC Cable and Verify Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
Repeat Procedure for Each Host Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
Verify Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
For V4.0G Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
For V5.1x Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
Setting Up a Controller Pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
Power Up and Establish Communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
Cabling a Controller Pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
Configuring a Controller Pair Using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
Configure Controller Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
Restart the Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–12
Set Time and Verify All Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–12
Plug in the FC Cable and Verify Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–13
Repeat Procedure for Each Host Adapter Connection . . . . . . . . . . . . . . . . . . . . . . . . 5–14
Verify Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–14
For V4.0G Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–14
For V5.1x Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–14
Configuring Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–15
Configuring a Stripeset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–15
Configuring a Mirrorset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–15
Configuring a RAIDset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–16
Configuring a Striped Mirrorset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–17
Configuring a Single-Disk Unit (JBOD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–18
Configuring a Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–18
Assigning Unit Numbers and Unit Qualifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–19
Assigning a Unit Number to a Storageset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–19
Assigning a Unit Number to a Single (JBOD) Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–20
Assigning a Unit Number to a Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–20
Preferring Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–20
Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–21
Changing the CLI Prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–21
Mirroring cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–21
Adding Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–21
Adding a Disk Drive to the Spareset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–21
Removing a Disk Drive from the Spareset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–22
Enabling Autospare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–22
Deleting a Storageset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–23
Changing Switches for a Storageset or Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–23
Displaying the Current Switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–23
Changing RAIDset and Mirrorset Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–24
Changing Device Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–24
Changing Initialize Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–24
Changing Unit Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–24
Chapter 6
Verifying Storage Configuration from the Host
Chapter 7
Configuration Example Using CLI
CLI Configuration Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–4
Chapter 8
Backing Up the Subsystem, Cloning Data for Backup, and Moving Storagesets
Backing Up the Subsystem Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8–1
Cloning Data for Backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8–2
Moving Storagesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8–6
Appendix A
Subsystem Profile Templates
Storageset Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–2
Storage Map Template 1 for the BA370 Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–3
Storage Map Template 2 for the second BA370 Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–4
Storage Map Template 3 for the third BA370 Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–5
Storage Map Template 4 for the Model 4214R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . A–6
Storage Map Template 5 for the Model 4254 Disk Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . A–7
Storage Map Template 6 for the Model 4310R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . A–9
Storage Map Template 7 for the Model 4350R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . . . . A–11
Storage Map Template 8 for the Model 4314R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . . . . A–12
Storage Map Template 9 for the Model 4354R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . . . . A–14
Appendix B
Installing, Configuring, and Removing the Client
Why Install the Client? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–1
Before You Install the Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–2
Installing the Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–2
Troubleshooting the Client Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–3
Invalid Network Port Assignments During Installation . . . . . . . . . . . . . . . . . . . . . . B–3
“There is no disk in the drive” Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–4
Adding the Storage Subsystem and its Host to the Navigation Tree . . . . . . . . . . . . . . . . B–5
Removing the Command Console Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–7
Where to Find Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–8
About the User Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–8
About the Online Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–8
Glossary
Index
Figures
Figure 1 General configuration flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Figure 2 Configuring storage with the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Figure 3 Configuring storage with SWCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Figure 1–1 Location of controllers and cache modules in a Model 2200 enclosure . . . . . . . . . . . 1–2
Figure 1–2 Location of controllers and cache modules in a BA370 enclosure . . . . . . . . . . . . . . . 1–3
Figure 1–3 “This controller” and “other controller” for the Model 2200 enclosure . . . . . . . . . . . 1–4
Figure 1–4 “This controller” and “other controller” for the BA370 enclosure . . . . . . . . . . . . . . . 1–4
Figure 1–5 Transparent failover—normal operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–6
Figure 1–6 Transparent failover—after failover from controller B to controller A . . . . . . . . . . . 1–7
Figure 1–7 Typical multiple-bus configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–9
Figure 1–8 Mirrored caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–11
Figure 1–9 Connections in separate-link, transparent failover mode configurations . . . . . . . . . 1–14
Figure 1–10 Connections in single-link, transparent failover mode configurations . . . . . . . . . . 1–15
Figure 1–11 Connections in multiple-bus failover mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–16
Figure 1–12 LUN presentation to hosts, as determined by offset . . . . . . . . . . . . . . . . . . . . . . . . 1–18
Figure 1–13 Limiting host access in transparent failover mode . . . . . . . . . . . . . . . . . . . . . . . . . 1–22
Figure 1–14 Limiting host access in multiple-bus failover mode . . . . . . . . . . . . . . . . . . . . . . . . 1–25
Figure 1–15 Placement of the worldwide name label on the Model 2200 enclosure . . . . . . . . . 1–29
Figure 1–16 Placement of the worldwide name label on the BA370 enclosure . . . . . . . . . . . . . 1–29
Figure 2–1 PTL naming convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–4
Figure 2–2 PTL addressing in a configuration in a BA370 enclosure. . . . . . . . . . . . . . . . . . . . . . 2–4
Figure 2–3 PTL addressing in a single-bus configuration, six Model 4310R disk enclosures . . . 2–6
Figure 2–4 PTL addressing in a dual-bus configuration, three Model 4350R disk enclosures . . . 2–7
Figure 2–5 PTL addressing in a single-bus configuration, six Model 4314R disk enclosures . . . 2–8
Figure 2–6 PTL addressing in a dual-bus configuration, three Model 4354R disk enclosures . . . 2–9
Figure 2–7 Mapping a unit to physical disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–10
Figure 2–8 Container types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–11
Figure 2–9 An example storageset profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–13
Figure 2–10 A 3-member RAID 0 stripeset (example 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–14
Figure 2–11 A 3-member RAID 0 stripeset (example 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–15
Figure 2–12 Mirrorsets maintain two copies of the same data . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–17
Figure 2–13 Mirrorset example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–17
Figure 2–14 A 5-member RAIDset using parity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–18
Figure 2–15 Striped mirrorset (example 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–20
Figure 2–16 Striped mirrorset (example 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–21
Figure 2–17 One example of a partitioned single-disk unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–22
Figure 2–18 Chunk size larger than the request size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–26
Figure 2–19 Model 4310R disk enclosure − example storage map . . . . . . . . . . . . . . . . . . . . . . . 2–31
Figure 2–20 Model 4350R disk enclosure − example storage map . . . . . . . . . . . . . . . . . . . . . . . 2–33
Figure 2–21 Model 4314R disk enclosure − example storage map . . . . . . . . . . . . . . . . . . . . . . . 2–35
Figure 2–22 Model 4354R disk enclosure − example storage map . . . . . . . . . . . . . . . . . . . . . . . 2–37
Figure 3–1 Dual-Bus Enterprise Storage RAID Array Storage System . . . . . . . . . . . . . . . . . . . . . 3–3
Figure 3–2 Single-Bus Enterprise Storage RAID Array Storage System . . . . . . . . . . . . . . . . . . . . 3–4
Figure 4–1 An example of a network connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–4
Figure 5–1 Maintenance port connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–2
Figure 5–2 Single controller cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–4
Figure 5–3 Controller pair failover cabling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
Figure 7–1 Example storage map for the BA370 Enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–2
Figure 7–2 Example system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–3
Figure 7–3 Example virtual system layout from the hosts’ point of view . . . . . . . . . . . . . . . . . . . 7–4
Figure 8–1 Steps the CLONE utility follows for duplicating unit members. . . . . . . . . . . . . . . . . . 8–3
Figure B–1 Navigation window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–5
Figure B–2 Navigation window showing storage host system “Atlanta” . . . . . . . . . . . . . . . . . . . B–6
Figure B–3 Navigation window showing expanded “Atlanta” host icon . . . . . . . . . . . . . . . . . . . B–6
Tables
Table 1–1 Unit Assignments and SCSI_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–21
Table 2–1 A Comparison of Container Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–12
Table 2–2 Example Chunk Sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–27
Table 4–1 SWCC Features and Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–2
Table 4–2 Installation and Configuration Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–3
Table 4–3 Client System Access Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–6
Table 4–4 Client System Notification Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–7
Table 4–5 Definitions of Email Notification Options . . . . . . . . . . . . . . . . . . . . . . . . . . 4–8
Table 4–6 Client System Access Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–10
Table 4–7 Client System Notification Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–11
Table 4–8 Definitions of Email Notification Options . . . . . . . . . . . . . . . . . . . . . . . . . 4–12
Table 4–9 Information Needed to Configure Agent . . . . . . . . . . . . . . . . . . . . . . . . . . 4–14
About this Guide
This guide provides installation and configuration instructions and reference material for
operation of the HSG80 ACS Solution Software Version 8.6 for Compaq Tru64 UNIX.
Thank you for selecting a Compaq StorageWorks™ RAID Array subsystem for your
growing storage needs. StorageWorks RAID subsystems are designed to support the most
popular computer platforms in the industry. The solution software that accompanies this
kit enables the new storage subsystem to work effectively with your chosen platform.
Please fill out and return the Registration Card included in this kit. This information is
used by Compaq to provide notification services to its customers. You can also register
online at:
http://www.compaq.com/products/registration
This guide describes:
■ Considerations while planning a configuration
■ Configuration procedures
This book does not contain information about the operating environments to which the
controller may be connected; nor does it contain detailed information about subsystem
enclosures or their components. See the documentation that accompanied these
peripherals for information about them.
Text Conventions
This document uses the following conventions:
Keys: Keys appear in boldface. A plus sign (+) between two keys indicates that they should be pressed simultaneously.
USER INPUT, COMMANDS*: User input and commands appear in this typeface and in uppercase.
Menu Options, type of user input: Menu options and the type of user input, such as device-name, appear in italics.
FILENAMES*: File names appear in uppercase italics.
Dialog Box Names: These elements appear in initial capital letters.
DIRECTORY NAMES and DRIVE NAMES*: These elements appear in uppercase.
Enter: When you are instructed to enter information, type the information and press the Enter key.
Script: Script names appear in upper and lower case in this typeface.
SWITCHES: Elements designated as switches appear in uppercase italics.
* UNIX commands are case sensitive and will not appear in uppercase.
Symbols in Text
The symbols found in this guide have the following meanings:
WARNING: Text set off in this manner indicates that failure to follow directions in the
warning could result in bodily harm or loss of life.
CAUTION: Text set off in this manner indicates that failure to follow directions could
result in damage to equipment or loss of information.
IMPORTANT: Text set off in this manner presents clarifying information or specific instructions.
Symbols on Equipment
Any surface or area of the equipment marked with these symbols indicates
the presence of electrical shock hazards. Enclosed area contains no
operator serviceable parts.
WARNING: To reduce the risk of injury from electrical shock hazards, do not
open this enclosure.
Any RJ-45 receptacle marked with these symbols indicates a Network
Interface Connection.
WARNING: To reduce the risk of electrical shock, fire, or damage to the
equipment, do not plug telephone or telecommunications connectors into
this receptacle.
Any surface or area of the equipment marked with these symbols indicates
the presence of a hot surface or hot component. If this surface is contacted,
the potential for injury exists.
WARNING: To reduce the risk of injury from a hot component, allow the
surface to cool before working with it.
Power Supplies or Systems marked with these symbols indicate
that the equipment is supplied by multiple sources of power.
WARNING: To reduce the risk of injury from electrical shock,
remove all power cords to completely disconnect power from the
system.
Any product or assembly marked with these symbols indicates that the
component exceeds the recommended weight for one individual to handle
safely.
WARNING: To reduce the risk of personal injury or damage to the
equipment, observe local occupational health and safety requirements and
guidelines for manual material handling.
Rack Stability
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure
that:
■ The leveling jacks are extended to the floor.
■ The full weight of the rack rests on the leveling jacks.
■ The stabilizing feet are attached to the rack if it is a single rack installation.
■ The racks are coupled together in multiple rack installations.
■ A rack may become unstable if more than one component is extended for any
reason. Extend only one component at a time.
Installation and Setup of the CD-ROM
Contents of CD-ROM:
■ Solution Software
■ StorageWorks Command Console (SWCC) Agent and Client software
■ Documentation
NOTE: Refer to the platform-specific release notes for detailed solution kit contents and
release-specific information.
Configuration Flowchart
A three-part flowchart is shown on the following pages. Refer to these charts while
configuring a new storage subsystem:
■ Figure 1 on page xx shows the start of the configuration process.
■ Figure 2 on page xxii shows how to configure storage with the command line
interpreter (CLI), which is the low-level interface to the controller.
■ Figure 3 on page xxiii shows how to configure storage using StorageWorks Command
Console (SWCC), which is the graphical user interface to the controller.
All references in the flowcharts pertain to pages in this guide, unless otherwise indicated.
Figure 1. General configuration flowchart (unpack the subsystem using the instructions on the shipping box; plan a subsystem, Chapter 1; plan storage, Chapter 2; prepare the host, Chapter 3; make a local connection, page 5–2; for a single controller, cable and configure it, pages 5–3 and 5–4; for a controller pair, cable and configure the controllers, pages 5–9 and 5–10; if not installing SWCC, continue at point A in Figure 2; if installing SWCC, continue at point B in Figure 3)
Figure 2. Configuring storage with the CLI (from point A: add devices, page 5–15; create storagesets and partitions: stripeset, page 5–15; mirrorset, page 5–15; RAIDset, page 5–16; striped mirrorset, page 5–17; single JBOD disk, page 5–18; partition, page 5–18; assign unit numbers, page 5–19; set configuration options, page 5–21; continue creating units until you have completed your planned configuration; then verify the storage setup, Chapter 6)
Figure 3. Configuring storage with SWCC (from point B: install the Agent, Chapter 4; install the Client, Appendix B; create storage, see the SWCC online help; verify the storage setup, Chapter 6)
Getting Help
If you have a problem and have exhausted the information in this guide, you can receive
further information and other help in the following locations.
Compaq Technical Support
A technical support specialist can help diagnose the problem or guide you to the next step
in the warranty process.
In North America, call the Compaq Technical Phone Support Center at
1-800-OK-COMPAQ. This service is available 24 hours a day, 7 days a week.
NOTE: For continuous quality improvement, calls may be recorded or monitored.
Outside North America, call the nearest Compaq Technical Support Phone Center.
Telephone numbers for worldwide Technical Support Centers are listed on the Compaq
website. Access the Compaq website:
http://www.compaq.com
Be sure to have the following information available before you call Compaq:
■ Technical support registration number (if applicable)
■ Product serial numbers
■ Product model names and numbers
■ Applicable error messages
■ Add-on boards or hardware
■ Third-party hardware or software
■ Operating system type and revision level
■ Detailed, specific questions
Compaq Website
The Compaq website has the latest information on this product, as well as the latest drivers.
Access the Compaq website at:
http://www.compaq.com/storage
Compaq Authorized Reseller
For the name of your nearest Compaq Authorized Reseller:
■ In the United States, call 1-800-345-1518.
■ In Canada, call 1-800-263-5868.
■ Elsewhere, see the Compaq website for locations and telephone numbers.
Chapter 1
Planning a Subsystem
This chapter provides information that helps you plan how to configure the subsystem.
Refer to Chapter 2 when planning the types of storage containers you need.
IMPORTANT: This chapter frequently references the command line interface (CLI). For the
complete syntax and descriptions of the CLI commands, see the Compaq StorageWorks HSG80
Array Controller ACS Version 8.6 CLI Reference Guide.
The following information is included in this chapter:
■ “Defining the Subsystems,” page 1–2
■ “Selecting a Failover Mode,” page 1–5
■ “Selecting a Cache Mode,” page 1–9
■ “Enabling Mirrored Caching,” page 1–11
■ “The Command Console LUN,” page 1–11
■ “Connections,” page 1–13
■ “Assigning Unit Numbers,” page 1–16
■ “Selective Storage Presentation,” page 1–21
■ “Worldwide Names (Node IDs and Port IDs),” page 1–28
IMPORTANT: DILX should be run for ten minutes on all units to delete the 8 MB EISA partition.
Refer to Compaq StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide for
details.
Defining the Subsystems
This section describes the terms this controller and other controller. It also presents
graphics of the Model 2200 and BA370 enclosures.
NOTE: The HSG80 controller uses the BA370 or Model 2200 enclosure.
Controller Designations A and B
The terms A, B, “this controller,” and “other controller” are used to distinguish one
controller from another in a two-controller (also called dual-redundant) subsystem. These
terms are described more thoroughly in the following sections.
Controllers and cache modules are designated either A or B depending on their location in
the enclosure, as shown in Figure 1–1 for the Model 2200 enclosure and as shown in
Figure 1–2 for the BA370 enclosure.
Model 2200 Enclosure

Figure 1–1. Location of controllers and cache modules in a Model 2200 enclosure (callouts: 1 ECBs; 2 fans; 3 EMU; 4 power supplies; 5 I/O modules; 6 controller A; 7 controller B; 8 cache module A; 9 cache module B)
BA370 Enclosure

Figure 1–2. Location of controllers and cache modules in a BA370 enclosure (callouts: 1 EMU; 2 PVA; 3 controller A; 4 controller B; 5 cache module A; 6 cache module B)
Controller Designations “This Controller” and “Other
Controller”
Some CLI commands use the terms “this” and “other” to identify one controller or the
other in a dual-redundant pair. These designations are a shortened form of “this controller”
and “other controller.” These terms are defined as follows:
■ “this controller”—the controller that is the focus of the CLI session. “This controller”
is the controller to which the maintenance terminal is attached and through which the
CLI commands are being entered. “This controller” can be shortened to “this” in CLI
commands.
■ “other controller”—the controller that is not the focus of the CLI session and through
which CLI commands are not being entered. “Other controller” can be shortened to
“other” in CLI commands.
Figure 1–3 shows the relationship between “this controller” and “other controller” in a
Model 2200 enclosure, and Figure 1–4 shows the same relationship in a BA370 enclosure.
Model 2200 Enclosure

Figure 1–3. “This controller” and “other controller” for the Model 2200 enclosure (callouts: 1 this controller; 2 other controller)
BA370 Enclosure

Figure 1–4. “This controller” and “other controller” for the BA370 enclosure (callouts: 1 other controller; 2 this controller)
Selecting a Failover Mode
Failover is a way to keep the storage array available to the host if one of the controllers
becomes unresponsive. A controller can become unresponsive because of a hardware
failure, such as a failed controller, or (in multiple-bus mode only) because of a failure of
the link between host and controller or of the host bus adapter. Failover keeps the storage
array available to the hosts by allowing the surviving controller to take over total control
of the subsystem.
There are two failover modes:
■ Transparent, which is handled by the surviving controller and is invisible
(transparent) to the hosts.
■ Multiple-bus, which is handled by the hosts.
Either mode of failover can work with loop or fabric topology.
Transparent Failover Mode
Transparent failover mode has the following characteristics:
■ Hosts do not know failover has taken place
■ Units are divided between host ports 1 and 2
A unit or storage unit is a physical or virtual device of the subsystem. It is typically
assigned a logical unit number (LUN) and is managed by the HSG80 controller and
presented to a server through the Fibre Channel bus and the server’s host bus adapter.
Disks that are set up as independent disks (JBODs) or RAIDsets are referred to as
storagesets. Storagesets are units.
In transparent failover mode, host port 1 of controller A and host port 1 of controller B
must be on the same Fibre Channel link. Host port 2 of controller A and host port 2 of
controller B must also be on the same Fibre Channel link. Depending on operating system
restrictions and requirements, the port 1 link and the port 2 link can be separate links, or
they can be the same link.
At any time, host port 1 is active on only one controller, and host port 2 is active on only
one controller. The other ports are in standby mode. In normal operation, both host port 1
on controller A and host port 2 on controller B are active. A representative configuration is
shown in Figure 1–5. The active and standby ports share port identity, enabling the
standby port to take over for the active one. If one controller fails, its companion controller
(known as the surviving controller) takes control by making both its host ports active, as
shown in Figure 1–6.
1–6 HSG80 ACS Solution Software Version 8.6 for Compaq Tru64 UNIX Installation and Configuration Guide
Units are divided between the host ports:
■ Units 0-99 are on host port 1 of both controllers (but accessible only through the active
port).
■ Units 100-199 are on host port 2 of both controllers (but accessible only through the
active port).
Transparent failover only compensates for a controller failure, and not for failures of either
the Fibre Channel link or host Fibre Channel adapters.
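For reference, a controller pair is placed in transparent failover mode with a CLI
command like the following sketch, which copies configuration information from “this
controller” to the “other controller” (see the Compaq StorageWorks HSG80 Array
Controller ACS Version 8.6 CLI Reference Guide for the complete syntax and cautions):

HSG80 > SET FAILOVER COPY=THIS_CONTROLLER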
Figure 1–5. Transparent failover—normal operation (three hosts connect through two switches or hubs; host port 1 is active on controller A, serving units D0 and D1, and host port 2 is active on controller B, serving units D100, D101, and D120; the companion port on each controller is in standby)
Figure 1–6. Transparent failover—after failover from controller B to controller A (controller B is not available; host port 1 and host port 2 on controller A are both active, serving units D0, D1, D100, D101, and D120)
Multiple-Bus Failover Mode
Multiple-bus failover mode has the following characteristics:
■ Host controls the failover process by moving the unit(s) from one controller to another
■ All units (0 through 199) are visible at all host ports
■ Each host has two or more paths to the units
All hosts must have operating system software that supports multiple-bus failover mode.
With this software, the host sees the same units visible through two (or more) paths. When
one path fails, the host can issue commands to move the units from one path to another. A
typical multiple-bus failover configuration is shown in Figure 1–7.
1–8 HSG80 ACS Solution Software Version 8.6 for Compaq Tru64 UNIX Installation and Configuration Guide
In multiple-bus failover mode, you can specify which units are normally serviced by a
specific controller of a controller pair. This process is called preferring or preferment.
Units can be preferred to one controller or the other by the PREFERRED_PATH switch of
the ADD UNIT (or SET unit) command. For example, use the following command to prefer unit
D101 to “this controller”:
SET D101 PREFERRED_PATH=THIS_CONTROLLER
NOTE: This is an initial preference, which can be overridden by the hosts.
Keep the following points in mind when configuring controllers for multiple-bus failover:
■ Multiple-bus failover can compensate for a failure in any of the following:
❏ Controller
❏ Switch or hub
❏ Fibre Channel link
❏ Host Fibre Channel adapter
■ A host can redistribute the I/O load between the controllers
■ All hosts must have operating system software that supports multiple-bus failover
mode
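For reference, a controller pair is placed in multiple-bus failover mode with a CLI
command like the following sketch (again, consult the CLI Reference Guide before using
it, because the command copies configuration information between controllers):

HSG80 > SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER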
Figure 1–7. Typical multiple-bus configuration (hosts “RED,” “GREY,” and “BLUE,” each with two Fibre Channel adapters, FCA1 and FCA2, connect through two switches or hubs; host ports 1 and 2 are active on both controllers, and all units, D0, D1, D2, D100, D101, and D120, are visible to all ports; FCA = Fibre Channel Adapter)
Selecting a Cache Mode
The cache module supports read, read-ahead, write-through, and write-back caching
techniques. The cache technique is selected separately for each unit. For example, you can
enable only read and write-through caching for some units while enabling only write-back
caching for other units.
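For example, assuming units D0 and D100 already exist, commands similar to the
following sketch disable write-back caching on one unit and enable it on another (the unit
names here are illustrative; see the CLI Reference Guide for the full set of cache
switches):

HSG80 > SET D0 NOWRITEBACK_CACHE
HSG80 > SET D100 WRITEBACK_CACHE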
Read Caching
When the controller receives a read request from the host, it reads the data from the disk
drives, delivers it to the host, and stores the data in its cache module. Subsequent reads for
the same data will take the data from cache rather than accessing the data from the disks.
This process is called read caching.
Read caching can improve response time to many of the host’s read requests. By default,
read caching is enabled for all units.
Read-Ahead Caching
During read-ahead caching, the controller anticipates subsequent read requests and begins
to prefetch the next blocks of data from the disks as it sends the requested read data to the
host. This is a parallel action. The controller notifies the host of the read completion, and
subsequent sequential read requests are satisfied from the cache memory. By default,
read-ahead caching is enabled for all units.
Write-Back Caching
Write-back caching improves the subsystem’s response time to write requests by allowing
the controller to declare the write operation complete as soon as the data reaches cache
memory. The controller performs the slower operation of writing the data to the disk
drives at a later time.
By default, write-back caching is enabled for all units, but only if there is a backup power
source for the cache modules (either batteries or an uninterruptible power supply).
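If an uninterruptible power supply rather than batteries provides the backup power, the
controller can be told so with a switch like the following sketch (verify the switch and its
prerequisites in the CLI Reference Guide):

HSG80 > SET THIS_CONTROLLER CACHE_UPS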
Write-Through Caching
Write-through caching is enabled when write-back caching is disabled. When the
controller receives a write request from the host, it places the data in its cache module,
writes the data to the disk drives, then notifies the host when the write operation is
complete. This process is called write-through caching because the data actually passes
through—and is stored in—the cache memory on its way to the disk drives.
Enabling Mirrored Caching
In mirrored caching, half of each controller’s cache mirrors the companion controller’s
cache, as shown in Figure 1–8.
The total memory available for cached data is reduced by half, but the level of protection is
greater.
Cache module A
Cache module B
A
cache
B
cache
Copy of
B
cache
Copy of
A
cache
CXO5729A
Figure 1–8. Mirrored caching
Before enabling mirrored caching, make sure the following conditions are met:
■ Both controllers support the same size cache.
■ Diagnostics indicate that both caches are good.
■ No unit errors are outstanding, for example, lost data or data that cannot be written to
devices.
■ Both controllers are started and configured in failover mode.
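Mirrored caching is a controller setting rather than a unit switch. As a hedged sketch (verify the exact syntax and restart behavior in the CLI Reference Guide), a command similar to the following enables mirrored caching for a dual-redundant pair; both controllers restart after the change:
SET THIS_CONTROLLER MIRRORED_CACHE
To return to unmirrored caching, the corresponding NOMIRRORED_CACHE switch is used.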
The Command Console LUN
StorageWorks Command Console (SWCC) software communicates with the HSG80
controllers through an existing storage unit, or logical unit number (LUN). The dedicated
LUN that SWCC uses is called the Command Console LUN (CCL). CCL serves as the
communication device for the HS-Series Agent and identifies itself to the host by a unique
identification string. By default, a CCL device is enabled within the HSG80 controller on
host port 1. The HSG80 supports both SCSI-2 and SCSI-3 modes, which determine how the CCL is presented.
The CCL does the following:
■ Allows the RAID Array to be recognized by the host as soon as it is attached to the
SCSI bus and configured into the operating system.
■ Serves as a communications device for the HS-Series Agent. The CCL identifies itself
to the host by a unique identification string.
In dual-redundant controller configurations, the commands described in the following
sections alter the setting of the CCL on both controllers. The CCL is enabled only on host
port 1. At least one storage device of any type must be configured on host port 2 before
installing the Agent on a host connected to host port 2.
Select a storageset that you plan to configure and that is not likely to change. This
storageset can be used by the Agent to communicate with the RAID Array. Deleting this
storageset (LUN) later breaks the connection between the Agent and the RAID Array.
Determining the Address of the CCL
CCL is enabled by default. Its address can be determined by entering the following CLI
command:
HSG80 > SHOW THIS_CONTROLLER
Enabling/Disabling the CCL in SCSI-2 Mode
Enabling the CCL
To enable the CCL, enter the following CLI command:
HSG80 > SET THIS_CONTROLLER COMMAND_CONSOLE_LUN
Disabling the CCL
To disable the CCL, enter the following CLI command:
HSG80 > SET THIS_CONTROLLER NOCOMMAND_CONSOLE_LUN
To see the state of the CCL, use the SHOW THIS_CONTROLLER/OTHER_CONTROLLER command.
Because the CCL is not an actual LUN, the SHOW UNITS command will not display the CCL
location.
Enabling/Disabling CCL in SCSI-3 Mode
In SCSI-3 mode, the CCL is always enabled; it cannot be disabled.
Connections
The term “connection” applies to every path between a Fibre Channel adapter in a host
computer and an active host port on a controller.
NOTE: In ACS V8.6 the maximum number of supported connections is 96.
Naming Connections
Compaq highly recommends that you assign names to connections that have meaning in
the context of your particular configuration. One system that works well is to name each
connection after its host, its adapter, its controller, and its controller host port, as follows:
For example, the connection name HOST1A1 is built from the host name (HOST), the adapter number (1), the controller (A), and the controller host port number (1).
Examples:
A connection from the first adapter in the host named RED that goes to port 1 of controller
A would be called RED1A1.
A connection from the third adapter in host GREEN that goes to port 2 of controller B
would be called GREEN3B2.
NOTE: Connection names can have a maximum of 9 characters.
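Connections receive factory-default names (for example, !NEWCON01) when the controller first detects them. As an illustrative sketch (the connection names are hypothetical; verify the exact syntax in the CLI Reference Guide), a command similar to the following assigns a meaningful name to a connection:
RENAME !NEWCON01 RED1A1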
Numbers of Connections
The number of connections resulting from cabling one adapter into a switch or hub
depends on failover mode and how many links the configuration has:
■ If a controller pair is in transparent failover mode and the port 1 link is separate from
the port 2 link (that is, ports 1 of both controllers are on one loop or fabric, and port 2
of both controllers are on another), each adapter will have one connection, as shown in
Figure 1–9.
■ If a controller pair is in transparent failover mode and port 1 and port 2 are on the same
link (that is, all ports are on the same loop or fabric), each adapter will have two
connections, as shown in Figure 1–10.
■ If a controller pair is in multiple-bus failover mode, each adapter has two connections,
as shown in Figure 1–11.
Figure 1–9. Connections in separate-link, transparent failover mode configurations. (The figure shows three hosts, AQUA, BLACK, and BROWN, each with one Fibre Channel adapter. Host AQUA connects through one switch or hub to host port 1 of both controllers, producing connection AQUA1A1; hosts BLACK and BROWN connect through a second switch or hub to host port 2 of both controllers, producing connections BLACK1B2 and BROWN1B2. Port 1 of controller A and port 2 of controller B are active; the companion ports are standby.)
Figure 1–10. Connections in single-link, transparent failover mode configurations. (The figure shows three hosts, GREEN, ORANGE, and PURPLE, each with one Fibre Channel adapter, all connected through a single switch or hub to both controllers. Each adapter therefore has two connections: GREEN1A1, ORANGE1A1, and PURPLE1A1 on controller A port 1, which is active, and GREEN1B2, ORANGE1B2, and PURPLE1B2 on controller B port 2, which is also active.)
Figure 1–11. Connections in multiple-bus failover mode. (The figure shows one host, VIOLET, with two Fibre Channel adapters, each cabled to a separate switch or hub. The result is four connections, VIOLET1A1, VIOLET1B1, VIOLET2A2, and VIOLET2B2; all four controller host ports are active, and all units are visible to all ports.)
Assigning Unit Numbers
The controller keeps track of the unit with the unit number. The unit number can be from
0 through 199, prefixed by a D, which stands for disk drive. A unit can be presented as different
LUNs to different connections. The interaction of a unit and a connection is determined by
several factors:
■ Failover mode of the controller pair
■ The ENABLE_ACCESS_PATH and PREFERRED_PATH switches in the ADD UNIT (or
SET unit) commands
■ The UNIT_OFFSET switch in the ADD CONNECTIONS (or SET connections) commands
■ The controller port to which the connection is attached
■ The SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command
The considerations for assigning unit numbers are discussed in the following sections.
Matching Units to Host Connections in Transparent Failover
Mode
In transparent failover mode, the ADD UNIT command creates a unit for host connection to
access and assigns it to either port 1 of both controllers or to port 2 of both controllers.
Unit numbers are assigned to ports as follows:
■ 0−99 are assigned to host port 1 of both controllers.
■ 100−199 are assigned to host port 2 of both controllers.
For example, unit D2 is on port 1, and unit D102 is on port 2.
The LUN number that a host connection assigns to a unit is a function of the
UNIT_OFFSET switch of the ADD (or SET) CONNECTIONS command. The relationship of
offset, LUN number, and unit number is shown in the following equation:
LUN number = unit number – offset
where:
❏ LUN number is relative to the host (what the host sees the unit as)
❏ Unit number is relative to the controller (what the controller sees the unit as)
If no value is specified for offset, then connections on port 1 have a default offset of 0 and
connections on port 2 have a default offset of 100.
For example, if all host connections use the default offset values, unit D2 will be presented
to a port 1 host connection as LUN 2 (unit number of 2 minus offset of 0). Unit D102 will
be presented to a port 2 host connection as LUN 2 (unit number of D102 minus offset of
100).
Figure 1–12 shows how units are presented as different LUNs, depending on the offset of
the host. In this illustration, host connection 1 and host connection 2 would need to be on
host port 1; host connection 3 would need to be on host port 2.
Figure 1–12. LUN presentation to hosts, as determined by offset. (The figure shows controller units presented to three host connections. Host connection 1, with an offset of 0, sees units D0 through D3 as LUNs 0 through 3 and units D20 and D21 as LUNs 20 and 21. Host connection 2, with an offset of 20, sees units D20 and D21 as LUNs 0 and 1. Host connection 3, with an offset of 100, sees units D100 through D102 as LUNs 0 through 2 and units D130 and D131 as LUNs 30 and 31.)
Offsets other than the default values can be specified. For example, unit D17 would be
visible to a host connection on port 1 that had an offset of 10 as LUN 7 (unit number of 17
minus offset of 10). The unit would not be visible to a host connection with a unit offset of
18 or greater, because that offset is not within the unit’s range (unit number of 17 minus
offset of 18 is a negative number).
Similarly, unit D127 would be visible to a host connection on port 2 that had an offset of
120 as LUN 7 (unit number of 127 minus offset of 120). The unit would not be visible to a
host connection with a unit offset of 128 or greater, because that offset is not within the
unit’s range (unit number of 127 minus offset of 128 is a negative number).
An additional factor to consider when assigning unit numbers and offsets is SCSI version.
If the SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command is
set to SCSI-3, the CCL is presented as LUN 0 to every connection, superseding any unit
assignments. The interaction between SCSI version and unit numbers is explained further
in the next section.
In addition, the access path to the host connection must be enabled for the connection to
access the unit. See “Restricting Host Access in Transparent Failover Mode,” page 1–21.
Matching Units to Host Connections in Multiple-Bus
Failover Mode
In multiple-bus failover mode, the ADD UNIT command creates a unit for host connections to
access. All unit numbers (0 through 199) are potentially visible on all four controller ports,
but are accessible only to those host connections for which access path is enabled and
which have offsets in the unit's range.
The LUN number a host connection assigns to a unit is a function of the UNIT_OFFSET
switch of the ADD (or SET) CONNECTIONS command. The default offset is 0. The relationship of
offset, LUN number, and unit number is shown in the following equation:
LUN number = unit number – offset
where:
❏ LUN number is relative to the host (number the host sees the unit as)
❏ Unit number is relative to the controller (number the controller sees the unit as)
For example, unit D7 would be visible to a host connection with an offset of 0 as LUN 7
(unit number of 7 minus offset of 0). Unit D17 would be visible to a host connection with
an offset of 10 as LUN 7 (unit number of 17 minus offset of 10). The unit would not be
visible at all to a host connection with a unit offset of 18 or greater, because that offset is
not within the unit's range (unit number of 17 minus offset of 18 is a negative number).
In addition, the access path to the host connection must be enabled for the connection to
access the unit. This is done through the ENABLE_ACCESS_PATH switch of the ADD UNIT
(or SET unit) command.
The PREFERRED_PATH switch of the ADD UNIT (or SET unit) command determines which
controller of a dual-redundant pair initially accesses the unit. Initially,
PREFERRED_PATH determines which controller presents the unit as Ready. The other
controller presents the unit as Not Ready. Hosts can issue a SCSI Start Unit command to
move the unit from one controller to the other.
Assigning Unit Numbers Depending on SCSI_VERSION
The SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command
determines how the CCL is presented. There are two choices: SCSI-2 and SCSI-3. The
choice for SCSI_VERSION affects how certain unit numbers and certain host connection
offsets interact.
Assigning Host Connection Offsets and Unit Numbers in
SCSI-3 Mode
If SCSI_VERSION is set to SCSI-3, the CCL is presented as LUN 0 to all connections.
The CCL supersedes any other unit assignment. Therefore, in SCSI-3 mode, a unit that
would normally be presented to a connection as LUN 0 is not visible to that connection at
all.
The following methods are recommended for assigning host connection offsets and unit
numbers in SCSI-3 mode:
■ Offsets should be divisible by 10 (for consistency and simplicity).
■ Unit numbers should not be assigned at connection offsets (to avoid being masked by
the CCL at LUN 0).
For example, if a host connection has an offset of 20 and SCSI-3 mode is selected, the
connection will see LUNs as follows:
LUN 0 - CCL
LUN 1 - unit 21
LUN 2 - unit 22, etc.
In this example, if a unit 20 is defined, it will be superseded by the CCL and invisible to
the connection.
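The SCSI mode is set per controller pair. As a hedged sketch (verify the exact syntax in the CLI Reference Guide), a command similar to the following selects SCSI-3 mode; the change takes effect after both controllers restart:
SET THIS_CONTROLLER SCSI_VERSION=SCSI-3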
Assigning Host Connection Offsets and Unit Numbers in
SCSI-2 Mode
Some operating systems expect or require a disk unit to be at LUN 0. In this case, it is
necessary to specify SCSI-2 mode.
If SCSI_VERSION is set to SCSI-2 mode, the CCL floats, moving to the first available
LUN location, depending on the configuration.
It is recommended to use the following conventions when assigning host connection
offsets and unit numbers in SCSI-2 mode:
■ Offsets should be divisible by 10 (for consistency and simplicity).
■ Unit numbers should be assigned at connection offsets (so that every host connection
has a unit presented at LUN 0).
Table 1–1 summarizes the recommendations for unit assignments based on the
SCSI_VERSION switch.
Table 1–1 Unit Assignments and SCSI_VERSION

SCSI_VERSION   Offset            Unit Assignment   What the connection sees LUN 0 as
SCSI-2         Divisible by 10   At offsets        Unit whose number matches offset
SCSI-3         Divisible by 10   Not at offsets    CCL
Selective Storage Presentation
Selective storage presentation is a feature of the HSG80 controller that enables the user to
control the allocation of storage space and shared access to storage across multiple hosts.
This is also known as Restricting Host Access.
In a subsystem that is attached to more than one host or if the hosts have more than one
adapter, it is possible to reserve certain units for the exclusive use of certain host
connections.
For a controller pair, the method used to restrict host access depends on whether the
controllers are in transparent or multiple-bus failover mode. For a single controller, the
methods are the same as for a controller pair in transparent failover.
NOTE: The default condition is ENABLE_ACCESS_PATH=ALL. This specifies that access paths
to ALL hosts are enabled. It is recommended that the user restrict host access and that the
access path be carefully specified to avoid providing undesired host connections access to the
unit.
Restricting Host Access in Transparent Failover Mode
Three methods can be used to restrict host access to storage units in transparent failover
mode:
■ Using separate Fibre Channel links (either loop or fabric)
■ Enabling the access path of selected host connections on a shared loop or fabric
■ Setting offsets
NOTE: These techniques also work for a single controller.
Restricting Host Access by Separate Links
In transparent failover mode, host port 1 of controller A and host port 1 of controller B
share a common Fibre Channel link. Host port 2 of controller A and host port 2 of
controller B also share a common Fibre Channel link. If the host 1 link is separate from the
host 2 link, the simplest way to limit host access is to have one host or set of hosts on the
port 1 link, and another host or set of hosts on the port 2 link. Each host can then see only
units assigned to its respective controller port. This separation of host buses is shown in
Figure 1–13. This method applies only if the host 1 link and host 2 link are separate links.
NOTE: Compaq highly recommends that you provide access to only specific connections. This
way, if new connections are added, they will not have automatic access to all units. See the
following section Restricting Host Access by Disabling Access Paths.
Figure 1–13. Limiting host access in transparent failover mode. (The figure shows the same topology as Figure 1–9: host AQUA on the port 1 link with connection AQUA1A1, and hosts BLACK and BROWN on the port 2 link with connections BLACK1B2 and BROWN1B2. Units D0 and D1 are assigned to port 1; units D100, D101, and D120 are assigned to port 2.)
Restricting Host Access by Disabling Access Paths
If more than one host is on a link (that is, attached to the same port), host access can be
limited by enabling the access of certain host connections and disabling the access of
others. This is done through the ENABLE_ACCESS_PATH and
DISABLE_ACCESS_PATH switches of the ADD UNIT (or SET unit) commands. The access
path is a unit switch, meaning it must be specified for each unit. Default access enables the
unit to be accessible to all hosts.
For example:
In Figure 1–13, you can restrict access to unit D101 so that only host 3 (the host named
BROWN) can use it by enabling only that host's connection. Enter the following commands:
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=BROWN1B2
If the storage subsystem has more than one host connection, carefully specify the access
path to avoid providing undesired host connections access to the unit. The default
condition for a unit is that access paths to all host connections are enabled. To restrict host
access to a set of host connections, specify DISABLE_ACCESS_PATH=ALL for the unit,
then specify the set of host connections that are to have access to the unit.
Enabling the access path to a particular host connection does not override previously
enabled access paths. All access paths previously enabled are still valid; the new host
connection is simply added to the list of connections that can access the unit.
IMPORTANT: The procedure of restricting access by enabling all access paths then disabling
selected paths is not recommended because of the potential data/security breach that occurs
when a new host connection is added.
Restricting Host Access by Offsets
Offsets establish the start of the range of units that a host connection can access.
For example:
In Figure 1–13, assume both host connections on port 2 (connections BLACK1B2 and
BROWN1B2) initially have the default port 2 offset of 100. Setting the offset of
connection BROWN1B2 to 120 will present unit D120 to host BROWN as LUN 0.
SET BROWN1B2 UNIT_OFFSET=120
Host BROWN cannot see units lower than its offset, so it cannot access units D100 and
D101. However, host BLACK can still access D120 as LUN 20 if the operating system
permits. To restrict access of D120 to only host BROWN, enable only host BROWN’s
access, as follows:
SET D120 DISABLE_ACCESS_PATH=ALL
SET D120 ENABLE_ACCESS_PATH=BROWN1B2
NOTE: Compaq recommends that you provide access to only specific connections, even if there
is just one connection on the link. This way, if new connections are added, they will not have
automatic access to all units.
Restricting Host Access in Multiple-Bus Failover Mode
In multiple-bus mode, the units assigned to any port are visible to all ports. There are two
ways to limit host access in multiple-bus failover mode:
■ Enabling the access path of selected host connections
■ Setting offsets
Enabling the Access Path of Selected Host Connections
Host access can be limited by enabling the access of certain host connections and
disabling the access of others. This is done through the ENABLE_ACCESS_PATH and
DISABLE_ACCESS_PATH switches of the ADD UNIT (or SET unit) commands. Access path is
a unit switch, meaning it must be specified for each unit. Default access means that the
unit is accessible to all hosts. It is important to remember that at least two paths between
the unit and the host must be enabled in order for multiple-bus failover to work.
Figure 1–14. Limiting host access in multiple-bus failover mode. (The figure shows three hosts, RED, GREY, and BLUE, each with two Fibre Channel adapters cabled to two switches or hubs. Each host has four connections, for example RED1A1, RED1B1, RED2A2, and RED2B2; all four controller host ports are active, and all units, D0, D1, D2, D100, D101, and D120, are visible to all ports.)
For example:
Figure 1–14 shows a representative multiple-bus failover configuration. Restricting the
access of unit D101 to host BLUE can be done by enabling only the connections to host
BLUE. At least two connections must be enabled for multiple-bus failover to work. For
most operating systems, it is desirable to have all connections to the host enabled. To
enable all connections for host BLUE, enter the following commands:
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=BLUE1A1,BLUE1B1,BLUE2A2,BLUE2B2
To enable only two connections for host BLUE (if it is a restriction of the operating
system), select two connections that use different adapters, different switches or hubs, and
different controllers:
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=(BLUE1A1,BLUE2B2)
or
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=(BLUE1B1,BLUE2A2)
If the storage subsystem has more than one host connection, the access path must be
specified carefully to avoid giving undesirable host connections access to the unit. The
default condition for a unit is that access paths to all host connections are enabled. To
restrict host access to a set of host connections, specify DISABLE_ACCESS_PATH=ALL
when the unit is added, then use the SET unit command to specify the set of host
connections that are to have access to the unit.
Enabling the access path to a particular host connection does not override previously
enabled access paths. All access paths previously enabled are still valid; the new host
connection is simply added to the list of connections that can access the unit.
IMPORTANT: The procedure of restricting access by enabling all access paths then disabling
selected paths is not recommended because of the potential data/security breach that occurs
when a new host connection is added.
Restricting Host Access by Offsets
Offsets establish the start of the range of units that a host connection can access. However,
depending on the operating system, hosts that have lower offsets may be able to access the
units in the specified range.
NOTE: All host connections to the same host computer must be set to the same offset.
For example:
In Figure 1–14, assume all host connections initially have the default offset of 0. Giving
each of host BLUE's connections an offset of 120 will present unit D120 to host BLUE as
LUN 0. Enter the following commands:
SET BLUE1A1 UNIT_OFFSET=120
SET BLUE1B1 UNIT_OFFSET=120
SET BLUE2A2 UNIT_OFFSET=120
SET BLUE2B2 UNIT_OFFSET=120
Host BLUE cannot see units lower than its offset, so it cannot access any other units.
However, the other two hosts can still access D120 as LUN 20 if their operating system
permits. To restrict access of D120 to only host BLUE, enable only host BLUE’s access,
as follows:
SET D120 DISABLE_ACCESS_PATH=ALL
SET D120 ENABLE_ACCESS_PATH=(BLUE1A1,BLUE1B1,BLUE2A2,BLUE2B2)
NOTE: Compaq recommends that you always provide access to only specific connections. This
way, if new connections are added, they will not have automatic access to all units. See
“Restricting Host Access by Disabling Access Paths,” page 1–23.
Worldwide Names (Node IDs and Port IDs)
A worldwide name—also called a node ID—is a unique, 64-bit number assigned to a
subsystem prior to shipping. The node ID belongs to the subsystem itself and never
changes.
Each subsystem’s node ID ends in zero, for example 5000-1FE1-FF0C-EE00. The
controller port IDs are derived from the node ID.
In a subsystem with two controllers in transparent failover mode, the controller port IDs
are incremented as follows:
■ Controller A and controller B, port 1—worldwide name + 1, for example
5000-1FE1-FF0C-EE01
■ Controller A and controller B, port 2—worldwide name + 2, for example
5000-1FE1-FF0C-EE02
In multiple-bus failover mode, each of the host ports has its own port ID:
■ Controller B, port 1—worldwide name + 1, for example 5000-1FE1-FF0C-EE01
■ Controller B, port 2—worldwide name + 2, for example 5000-1FE1-FF0C-EE02
■ Controller A, port 1—worldwide name + 3, for example 5000-1FE1-FF0C-EE03
■ Controller A, port 2—worldwide name + 4, for example 5000-1FE1-FF0C-EE04
Use the CLI command, SHOW THIS_CONTROLLER/OTHER_CONTROLLER to display the
subsystem’s worldwide name.
Restoring Worldwide Names (Node IDs)
If a situation occurs that requires you to restore the worldwide name, you can restore it
using the worldwide name and checksum printed on the sticker on the frame into which
the controller is inserted.
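As a hedged sketch of the general form (substitute the values printed on the label, and verify the exact syntax in the CLI Reference Guide), a command similar to the following restores the node ID and checksum:
SET THIS_CONTROLLER NODE_ID=NNNN-NNNN-NNNN-NNNN NN
where NNNN-NNNN-NNNN-NNNN is the worldwide name and NN is the checksum from the label.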
Figure 1–15 shows the placement of the worldwide name label for the Model 2200
enclosure, and Figure 1–16 for the BA370 enclosure.
Figure 1–15. Placement of the worldwide name label on the Model 2200 enclosure. (The WWN INFORMATION label lists the part number, serial number, the node ID or worldwide name in the form NNNN-NNNN-NNNN-NNNN, and a two-character checksum.)
Figure 1–16. Placement of the worldwide name label on the BA370 enclosure. (The WWN INFORMATION label lists the part number, serial number, the node ID or worldwide name in the form NNNN-NNNN-NNNN-NNNN, and a two-character checksum.)
CAUTION: Each subsystem has its own unique worldwide name (node ID). If you
attempt to set the subsystem worldwide name to a name other than the one that
came with the subsystem, the data on the subsystem will not be accessible. Never set
two subsystems to the same worldwide name, or data corruption will occur.
Unit Worldwide Names (LUN IDs)
In addition, each unit has its own worldwide name, or LUN ID. This is a unique, 128-bit
value that the controller assigns at the time of unit initialization. It cannot be altered by the
user but does change when the unit is reinitialized. Use the SHOW command to list the LUN
ID.
Chapter 2
Planning Storage
This chapter provides information to help you plan the storage configuration of your
subsystem. Use the guidelines found in this section to plan the various types of storage
containers needed.
The following information is included in this chapter:
■ “Where to Start,” page 2–2
■ “Configuration Rules,” page 2–3
■ “Device PTL Addressing Convention,” page 2–3
■ “Determining Storage Requirements,” page 2–10
■ “Choosing a Container Type,” page 2–11
■ “Creating a Storageset Profile,” page 2–12
■ “Storageset Planning Considerations,” page 2–14
■ “Storageset Expansion Considerations,” page 2–21
■ “Partition Planning Considerations,” page 2–22
■ “Changing Characteristics through Switches,” page 2–23
■ “Storageset and Partition Switches,” page 2–24
■ “Initialization Switches,” page 2–25
■ “Unit Switches,” page 2–29
■ “Storage Maps,” page 2–29
Where to Start
The following procedure outlines the steps to follow when planning your storage
configuration. Containers are individual disk drives (JBOD), storageset types (mirrorsets,
stripesets, and so on), and/or partitioned drives. See Appendix A to locate the blank
templates for keeping track of the containers being configured.
1. Review configuration rules. See “Configuration Rules,” page 2–3.
2. Familiarize yourself with the current physical layout of the devices and their
addressing scheme. See “Device PTL Addressing Convention,” page 2–3.
3. Determine your storage requirements. Use the questions in “Determining Storage
Requirements,” page 2–10, to help you.
4. Choose the type of storage containers you need to use in your subsystem. See
“Choosing a Container Type,” page 2–11, for a comparison and description of each
type of storageset.
5. Create a storageset profile (described in “Creating a Storageset Profile,” page 2–12).
Fill out the storageset profile while you read the sections that pertain to your chosen
storage type:
❏ “Storageset Planning Considerations,” page 2–14
❏ “Mirrorset Planning Considerations,” page 2–16
❏ “RAIDset Planning Considerations,” page 2–18
❏ “Partition Planning Considerations,” page 2–22
❏ “Striped Mirrorset Planning Considerations,” page 2–20
6. Decide which switches you need for your subsystem. General information on switches
is detailed in “Storageset and Partition Switches,” page 2–24.
❏ Determine the unit switches you want for your units (“Unit Switches,” page 2–29).
❏ Determine the initialization switches you want for your planned storage containers
(“Initialization Switches,” page 2–25).
7. Create a storage map (“Storage Maps,” page 2–29).
8. Configure the storage you have now planned using one of the following methods:
❏ Use SWCC. See the SWCC documentation for details.
❏ Use the Command Line Interpreter (CLI) commands. This method allows you
flexibility in defining and naming your storage containers. See the Compaq
StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide.
Configuration Rules
Review the following requirements and conditions to ensure that the storage configuration
you are planning is adequate:
■ Maximum of 128 LUNs: if Command Console LUN (CCL) is enabled, the result is
127 visible LUNs and one CCL
■ Maximum 1.024 TB LUN capacity
■ Maximum 84 physical drives with 2 optional expansion enclosures per storage system
■ Maximum 84 physical devices
■ Maximum 20 RAID-5 storagesets
■ Maximum 8 partitions per storageset or individual disk
■ Maximum 6 physical devices per RAID 1 (mirrorset)
■ Maximum 14 physical devices per RAID-5 storageset
■ Maximum 24 physical devices per RAID 0 (stripeset)
■ Maximum 24 physical devices per RAID 0+1 (striped mirrorset)
Device PTL Addressing Convention
The HSG80 controller has six SCSI device ports, each of which connects to a SCSI bus. In
dual-controller subsystems, these device buses are shared between the two controllers.
(The StorageWorks Command Console GUI calls the device ports “channels.”) The
standard BA370 enclosure provides a maximum of four SCSI target identifications (ID)
for each device port. If more target IDs are needed, expansion enclosures can be added to
the subsystem.
The HSG80 controller identifies devices based on a Port-Target-LUN (PTL) numbering
scheme, shown in Figure 2–1. The physical location of a device in its enclosure
determines its PTL.
■ P—Designates the controller's SCSI device port number (1 through 6).
■ T—Designates the target ID number of the device. Valid target ID numbers for a
single-controller configuration and dual-redundant controller configuration are 0 - 3
and 8 - 15, respectively.
■ L—Designates the logical unit (LUN) of the device. For disk devices the LUN is
always 0.
Figure 2–1. PTL naming convention. (The example address 1 02 00 decodes as Port 1, Target 02, LUN 00.)
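As an illustrative sketch (verify the exact syntax in the CLI Reference Guide), a command of the following general form adds the disk at that location to the configuration; the disk name encodes the PTL (port 1, target 2, LUN 0):
ADD DISK DISK10200 1 2 0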
The controller can either operate with a BA370 enclosure or with a Model 2200 controller
enclosure combined with Model 4214R, Model 4254, Model 4310R, Model 4350R,
Model 4314R, or Model 4354R disk enclosures.
The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3.
These ID numbers are set through the PVA module. Enclosure ID number 1, which assigns
devices to targets 4 through 7, is not supported. Figure 2–2 shows the addresses for each
device in an extended configuration.
Figure 2–2. PTL addressing in a configuration in a BA370 enclosure. (The figure shows device ports 1 through 6 running across each enclosure. The enclosure containing the controllers and cache modules, with PVA 0, provides targets 0 through 3; the expansion enclosure with PVA 2 provides targets 8 through 11; and the expansion enclosure with PVA 3 provides targets 12 through 15. The callout example decodes one PTL location as device port 3, target 08, LUN 00.)
The Model 2200 controller enclosure can be combined with the following:
■ Model 4214R disk enclosure — Ultra2 SCSI with 14 drive bays, single-bus I/O
module.
■ Model 4254 disk enclosure — Ultra2 SCSI with 14 drive bays, dual-bus I/O module.
NOTE: The Model 4214R uses the same storage maps as the Model 4314R, and the Model
4254 uses the same storage maps as the Model 4354R disk enclosures.
■ Model 4310R disk enclosure — Ultra3 SCSI with 10 drive bays, single-bus I/O
module. Figure 2–3 shows the addresses for each device in a six-shelf, single-bus
configuration. A maximum of six Model 4310R disk enclosures can be used with each
Model 2200 controller enclosure.
NOTE: The storage map for the Model 4310R reflects the disk enclosure’s physical location in
the rack. Disk enclosures 6, 5, and 4 are stacked above the controller enclosure, and disk
enclosures 1, 2, and 3 are stacked below the controller enclosure.
■ Model 4350R disk enclosure — Ultra3 SCSI with 10 drive bays, dual-bus I/O
module. Figure 2–4 shows the addresses for each device in a three-shelf, dual-bus
configuration. A maximum of three Model 4350R disk enclosures can be used with
each Model 2200 controller enclosure.
■ Model 4314R disk enclosure — Ultra3 SCSI with 14 drive bays, single-bus I/O
module. Figure 2–5 shows the addresses for each device in a six-shelf, single-bus
configuration. A maximum of six Model 4314R disk enclosures can be used with each
Model 2200 controller enclosure.
NOTE: The storage map for the Model 4314R reflects the disk enclosure’s physical location in
the rack. Disk enclosures 6, 5, and 4 are stacked above the controller enclosure, and disk
enclosures 1, 2, and 3 are stacked below the controller enclosure.
■ Model 4354R disk enclosure — Ultra3 SCSI with 14 drive bays, dual-bus I/O
module. Figure 2–6 shows the addresses for each device in a three-shelf, dual-bus
configuration. A maximum of three Model 4354R disk enclosures can be used with
each Model 2200 controller enclosure.
NOTE: Appendix A contains storageset profiles you can copy and use to create your own
system profiles. It also contains an enclosure template you can use to help you keep track of the
location of devices and storagesets in your shelves.
Figure 2–3. PTL addressing in a single-bus configuration, six Model 4310R disk enclosures. (The storage map shows six single-bus shelves. In each shelf, bays 1 through 10 correspond to SCSI IDs 00, 01, 02, 03, 04, 05, 08, 10, 11, and 12, and each device name encodes its port, target, and LUN, running from Disk10000 through Disk61200.)
Figure 2–4. PTL addressing in a dual-bus configuration, three Model 4350R disk enclosures. (Each shelf carries two SCSI buses: bays 1 through 5 are SCSI Bus A, IDs 00 through 04, and bays 6 through 10 are SCSI Bus B, IDs 00 through 04. Shelf 1 holds Disk10000 through Disk10400 and Disk20000 through Disk20400, shelf 2 holds Disk30000 through Disk30400 and Disk40000 through Disk40400, and shelf 3 holds Disk50000 through Disk50400 and Disk60000 through Disk60400.)
Figure 2–5. PTL addressing in a single-bus configuration, six Model 4314R disk enclosures. (The storage map shows six single-bus shelves. In each shelf, bays 1 through 14 correspond to SCSI IDs 00, 01, 02, 03, 04, 05, 08, 09, 10, 11, 12, 13, 14, and 15; device names run from Disk10000 in shelf 1 through Disk61500 in shelf 6.)
Figure 2–6. PTL addressing in a dual-bus configuration, three Model 4354R disk enclosures. (Each shelf carries two SCSI buses: bays 1 through 7 are SCSI Bus A, IDs 00 through 05 and 08, and bays 8 through 14 are SCSI Bus B, IDs 00 through 05 and 08. Shelf 1 holds Disk10000 through Disk10800 and Disk20000 through Disk20800, shelf 2 holds Disk30000 through Disk30800 and Disk40000 through Disk40800, and shelf 3 holds Disk50000 through Disk50800 and Disk60000 through Disk60800.)
Figure 2–7. Mapping a unit to physical disk drives. (Host-addressable unit number D100 presents a storageset named RAID1, which is built from the disks at controller PTL addresses Disk 10000, Disk 20000, and Disk 30000.)
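As an illustrative sketch of the mapping shown in Figure 2–7 (assuming, as the RAID1 label suggests, that the storageset is a three-member mirrorset; verify the exact syntax in the CLI Reference Guide), commands similar to the following create the container, initialize it, and present it to the host as unit D100:
ADD MIRRORSET RAID1 DISK10000 DISK20000 DISK30000
INITIALIZE RAID1
ADD UNIT D100 RAID1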
Determining Storage Requirements
It is important to determine your storage requirements. Here are a few of the questions you
should ask yourself regarding the subsystem usage:
■ What applications or user groups will access the subsystem? How much capacity do
they need?
■ What are the I/O requirements? If an application is data transfer-intensive, what is the
required transfer rate? If it is I/O request-intensive, what is the required response time?
What is the read/write ratio for a typical request?
■ Are most I/O requests directed to a small percentage of the disk drives? Do you want
to keep it that way or balance the I/O load?
■ Do you store mission-critical data? Is availability the highest priority or would
standard backup procedures suffice?
Choosing a Container Type
Different applications may have different storage requirements. You probably want to
configure more than one kind of container within your subsystem.
In choosing a container, you choose between independent disks (JBODs) or one of several
storageset types, as shown in Figure 2–8. The independent disks and the selected
storageset may also be partitioned.
The storagesets implement RAID (Redundant Array of Independent Disks) technology.
Consequently, they all share one important feature: each storageset, whether it contains
two disk drives or ten, looks like one large, virtual disk drive to the host.
Figure 2–8. Container types. (Containers comprise partitions, single devices (JBOD), and storagesets; the storageset types are the stripeset (R0), mirrorset (R1), striped mirrorset (R0+1), and RAIDset (R3/5).)
Table 2–1 compares the different kinds of containers to help you determine which ones
satisfy your requirements.
Table 2–1 A Comparison of Container Types

Container Name      Relative Availability     Request Rate (Read/Write)   Transfer Rate (Read/Write)   Applications
                                              I/O per second              MB per second
Independent disk    Equal to number of        Comparable to single        Comparable to single         —
drives (JBOD)       JBOD disk drives          disk drive                  disk drive
Stripeset           Proportionate to number   Excellent if used with      Excellent if used with       High performance for
(RAID 0)            of disk drives; worse     large chunk size            small chunk size             non-critical data
                    than single disk drive
Mirrorset           Excellent                 Good/Fair                   Good/Fair                    System drives;
(RAID 1)                                                                                               critical files
RAIDset             Excellent                 Excellent/good              Excellent if used with       High request rates,
(RAID 3/5)                                                                large chunk size             read-intensive,
                                                                                                       data lookup
Striped Mirrorset   Excellent                 Read: excellent (if used    Excellent if used with       Any critical
(RAID 0+1)                                    with small chunk sizes)     small chunk size             response-time
                                              Write: good (if used with                                application
                                              small chunk sizes)
For a comprehensive discussion of RAID, refer to The RAIDBOOK—A Source Book for
Disk Array Technology.
Creating a Storageset Profile
Creating a profile for your storagesets, partitions, and devices can simplify the
configuration process. Filling out a storageset profile helps you choose the storagesets that
best suit your needs and to make informed decisions about the switches you can enable for
each storageset or storage device that you configure in your subsystem.
See the example storageset profile shown in Figure 2–9. Appendix A contains blank
profiles that you can copy and use to record the details for your storagesets. Use the
information in this chapter to help you make decisions when creating storageset profiles.
Type of Storageset:
_____ Mirrorset   __X__ RAIDset   _____ Stripeset   _____ Striped Mirrorset   _____ JBOD

Storageset Name: R1
Disk Drives: D10300, D20300, D10400, D20400
Unit Number: D101

Partitions:
Unit # ____ % ____   Unit # ____ % ____   Unit # ____ % ____   Unit # ____ % ____
Unit # ____ % ____   Unit # ____ % ____   Unit # ____ % ____   Unit # ____ % ____

RAIDset Switches:
Reconstruction Policy:  _X_ Normal (default)   ___ Fast
Reduced Membership:     _X_ No (default)       ___ Yes, missing:
Replacement Policy:     _X_ Best performance (default)   ___ Best fit   ___ None

Mirrorset Switches:
Replacement Policy:  ___ Best performance (default)   ___ Best fit   ___ None
Copy Policy:         ___ Normal (default)   ___ Fast
Read Source:         ___ Least busy (default)   ___ Round robin   ___ Disk drive:

Initialize Switches:
Chunk size:          _X_ Automatic (default)   ___ 64 blocks   ___ 128 blocks   ___ 256 blocks   ___ Other:
Save Configuration:  ___ No (default)   _X_ Yes
Metadata:            _X_ Destroy (default)   ___ Retain

Unit Switches:
Caching:  Read caching _X_   Read-ahead caching ___   Write-back caching _X_   Write-through caching ___
Access by following hosts enabled: ALL

Figure 2–9. An example storageset profile
Storageset Planning Considerations
This section contains the guidelines for choosing the storageset type needed for your
subsystem:
■ “Stripeset Planning Considerations,” page 2–14
■ “Mirrorset Planning Considerations,” page 2–16
■ “RAIDset Planning Considerations,” page 2–18
■ “Striped Mirrorset Planning Considerations,” page 2–20
■ “Partition Planning Considerations,” page 2–22
Stripeset Planning Considerations
Stripesets (RAID 0) enhance I/O performance by spreading the data across multiple disk
drives. Each I/O request is broken into small segments called “chunks.” These chunks are
then simultaneously “striped” across the disk drives in the storageset, thereby enabling
several disk drives to participate in one I/O request.
For example, in a three-member stripeset that contains disk drives Disk 10000, Disk
20000, and Disk 10100, the first chunk of an I/O request is written to Disk 10000, the
second to Disk 20000, the third to Disk 10100, the fourth to Disk 10000, and so forth until
all of the data has been written to the drives (Figure 2–10).
Figure 2–10. A 3-member RAID 0 stripeset (example 1). (Chunks 1 through 6 are striped in rotation across Disk 10000, Disk 20000, and Disk 10100.)
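As an illustrative sketch (the storageset and unit names are hypothetical; verify the exact syntax in the CLI Reference Guide), commands similar to the following create and present the three-member stripeset described above; the CHUNKSIZE initialization switch sets the chunk size in blocks:
ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK10100
INITIALIZE STRIPE1 CHUNKSIZE=128
ADD UNIT D2 STRIPE1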
The relationship between the chunk size and the average request size determines if striping
maximizes the request rate or the data-transfer rate. You can set the chunk size or use the
default setting (see “Chunk Size,” page 2–26, for information about setting the chunk
size). Figure 2–11 shows another example of a three-member RAID 0 stripeset.
A major benefit of striping is that it balances the I/O load across all of the disk drives in
the storageset. This can increase the subsystem performance by eliminating the hot spots
(high localities of reference) that occur when frequently accessed data becomes
concentrated on a single disk drive.
Figure 2–11. A 3-member RAID 0 stripeset (example 2). (The operating system sees one virtual disk of sequential blocks; the actual device mapping places block 0 on disk 1, block 1 on disk 2, block 2 on disk 3, block 3 back on disk 1, and so on.)
Keep the following points in mind as you plan your stripesets:
■ Reporting methods and size limitations prevent certain operating systems from
working with large stripesets.
■ A storageset should only contain disk drives of the same capacity. The controller limits
the effective capacity of each member to the capacity of the smallest member in the
storageset (base member size) when the storageset is initialized. Thus, if you combine
9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of
capacity on each 9 GB member.
If you need high performance and high availability, consider using a RAIDset,
striped-mirrorset, or a host-based shadow of a stripeset.
■ Striping does not protect against data loss. In fact, because the failure of one member is
equivalent to the failure of the entire stripeset, the likelihood of losing data is higher
for a stripeset than for a single disk drive.
For example, if the mean time between failures (MTBF) for a single disk is T hours, then
the MTBF for a stripeset that comprises N such disks is T/N hours. As another example,
if the MTBF of a single disk is 150,000 hours (about 17 years), a stripeset comprising
four of these disks would only have an MTBF of slightly more than 4 years.
For this reason, you should avoid using a stripeset to store critical data. Stripesets are
more suitable for storing data that can be reproduced easily or whose loss does not
prevent the system from supporting its critical mission.
■ Evenly distribute the members across the device ports to balance the load and provide
multiple paths.
■ Stripesets may contain between two and 24 members.
■ If you plan to use mirror members to replace failing drives, then create the original
stripeset as a stripeset of 1-member mirrorsets.
■ Stripesets are well-suited for the following applications:
❏ Storing program image libraries or run-time libraries for rapid loading.
❏ Storing large tables or other structures of read-only data for rapid application
access.
❏ Collecting data from external sources at very high data transfer rates.
■ Stripesets are not well-suited for the following applications:
❏ A storage solution for data that cannot be easily reproduced or for data that must be
available for system operation.
❏ Applications that make requests for small amounts of sequentially located data.
❏ Applications that make synchronous random requests for small amounts of data.
Spread the member drives as evenly as possible across the six I/O device ports.
Mirrorset Planning Considerations
Mirrorsets (RAID 1) use redundancy to ensure availability, as illustrated in Figure 2–12.
For each primary disk drive, there is at least one mirror disk drive. Thus, if a primary disk
drive fails, its mirror drive immediately provides an exact copy of the data. Figure 2–13
shows a second example of a mirrorset.
Figure 2–12. Mirrorsets maintain two copies of the same data. (Each data block, A, B, and C, on one drive is duplicated as A', B', and C' on its mirror drive; for example, Disk 10000 pairs with Disk 20000. The mirror drives contain a copy of the data.)
Figure 2–13. Mirrorset example 2. (The operating system sees one virtual disk; every block is written to both members, so disk 1 and disk 2 each hold blocks 0, 1, 2, and so on.)
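As an illustrative sketch (the storageset and unit names are hypothetical; verify the exact syntax in the CLI Reference Guide), commands similar to the following create a two-member mirrorset and present it to the host:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
INITIALIZE MIRR1
ADD UNIT D1 MIRR1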
Keep these points in mind when planning mirrorsets:
■ Data availability with a mirrorset is excellent but comes with a higher cost—you need
twice as many disk drives to satisfy a given capacity requirement. If availability is your
top priority, consider using dual-redundant controllers and redundant power supplies.
■ You can configure up to 20 mirrorsets per controller or pair of dual-redundant
controllers. Each mirrorset may contain up to 6 members.
■ Both write-back cache modules must be the same size.
■ A mirrorset should only contain disk drives of the same capacity.
■ Spread mirrorset members across different device ports (drive bays).
■ Mirrorsets are well-suited for the following:
❏ Any data for which reliability requirements are extremely high
❏ Data to which high-performance access is required
❏ Applications for which cost is a secondary issue
■ Mirrorsets are not well-suited for the following applications:
❏ Write-intensive applications (a performance hit of 10 percent will occur)
❏ Applications for which cost is a primary issue
RAIDset Planning Considerations
RAIDsets (RAID 3/5) are enhanced stripesets—they use striping to increase I/O
performance and distributed-parity data to ensure data availability. Figure 2–14 shows an
example of a RAIDset that uses five members.
Figure 2–14. A 5-member RAIDset using parity. (The operating system sees one virtual disk of sequential blocks. Data blocks and parity chunks, Parity 0-3, Parity 4-7, Parity 8-11, and Parity 12-15, are distributed across all five disks.)
RAIDsets are similar to stripesets in that the I/O requests are broken into smaller “chunks”
and striped across the disk drives. RAIDsets also create chunks of parity data and stripe
them across all the members of the RAIDset. Parity data is derived mathematically from
the I/O data and enables the controller to reconstruct the I/O data if a single disk drive
fails. Thus, it becomes possible to lose a disk drive without losing access to the data it
contained. Data could be lost if a second disk drive fails before the controller replaces the
first failed disk drive and reconstructs the data.
The relationship between the chunk size and the average request size determines if striping
maximizes the request rate or the data-transfer rates. You can set the chunk size or use the
default setting. See “Chunk Size,” page 2–26, for information about setting the chunk size.
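As an illustrative sketch (the storageset and unit names are hypothetical; verify the exact syntax in the CLI Reference Guide), commands similar to the following create a five-member RAIDset like the one in Figure 2–14 and present it to the host:
ADD RAIDSET RAIDS1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000
INITIALIZE RAIDS1
ADD UNIT D101 RAIDS1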
Keep these points in mind when planning RAIDsets:
■ Reporting methods and size limitations prevent certain operating systems from
working with large RAIDsets.
■ Both cache modules must be the same size.
■ A RAIDset must include at least 3 disk drives, but no more than 14.
■ A storageset should only contain disk drives of the same capacity. The controller limits
the capacity of each member to the capacity of the smallest member in the storageset.
Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset,
you waste 5 GB of capacity on each 9 GB member.
■ RAIDsets are particularly well-suited for the following:
❏ Small to medium I/O requests
❏ Applications requiring high availability
❏ High read request rates
❏ Inquiry-type transaction processing
■ RAIDsets are not particularly well-suited for the following:
❏ Write-intensive applications
❏ Database applications in which fields are continually updated
❏ Transaction processing
Striped Mirrorset Planning Considerations
Striped mirrorsets (RAID 0+1) are a configuration of stripesets whose members are also
mirrorsets (Figure 2–15). Consequently, this kind of storageset combines the performance
of striping with the reliability of mirroring. The result is a storageset with very high I/O
performance and high data availability. Figure 2–16 shows a second example of a striped
mirrorset using six members.
Figure 2–15. Striped mirrorset (example 1). (Data blocks A, B, and C are striped across three two-member mirrorsets, Mirrorset1, Mirrorset2, and Mirrorset3, built from Disk 10000 and Disk 20000, Disk 10100 and Disk 20100, and Disk 10200 and Disk 20200.)
The failure of a single disk drive has no effect on the ability of the storageset to deliver
data to the host. Under normal circumstances, a single disk drive failure has very little
effect on performance. Because striped mirrorsets do not require any more disk drives than
mirrorsets, this storageset is an excellent choice for data that warrants mirroring.
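As an illustrative sketch (the storageset and unit names are hypothetical; verify the exact syntax in the CLI Reference Guide), commands similar to the following build a striped mirrorset: the mirrorsets are created first, then striped together and presented as a single unit:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
ADD MIRRORSET MIRR2 DISK10100 DISK20100
ADD MIRRORSET MIRR3 DISK10200 DISK20200
ADD STRIPESET STRIPE1 MIRR1 MIRR2 MIRR3
INITIALIZE STRIPE1
ADD UNIT D102 STRIPE1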
Figure 2–16. Striped mirrorset (example 2). (The operating system sees one virtual disk. The controller maps its blocks across a stripeset of three two-member mirrorsets, so each block is stored in duplicate: disks 1 and 4 hold blocks 0, 3, and 6; disks 2 and 5 hold blocks 1, 4, and 7; and disks 3 and 6 hold blocks 2, 5, and 8.)
Plan the mirrorset members, and plan the stripeset that will contain them. Review the
recommendations in “Storageset Planning Considerations,” page 2–14, and “Mirrorset
Planning Considerations,” page 2–16.
Storageset Expansion Considerations
Storageset Expansion allows for the joining of two of the same kind of storage containers
by concatenating RAIDsets, Stripesets, or individual disks, thereby forming a larger
virtual disk which is presented as a single unit. The Compaq StorageWorks HSG80 Array
Controller ACS Version 8.6 CLI Reference Guide describes the CLI command: ADD
CONCATSETS which is used to perform concatenation.
CAUTION: The ADD CONCATSETS command should only be executed with host
operating systems that support dynamic volume expansion. If the operating system
cannot handle one of its disks increasing in size, use of this command could make
data inaccessible.
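As an illustration only (the unit number D100 and container name STRIPE2 are hypothetical, and the exact syntax and restrictions are given in the CLI Reference Guide), expanding an existing unit by concatenating another container of the same kind takes the general form:

ADD CONCATSETS D100 STRIPE2

After the command completes, the host sees unit D100 grow by the capacity of STRIPE2, which is why the operating system must support dynamic volume expansion.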
Partition Planning Considerations
Use partitions to divide a container (storageset or individual disk drive) into smaller
pieces, each of which can be presented to the host as its own storage unit. Figure 2–17
shows the conceptual effects of partitioning a single-disk container.
[Figure: a single-disk container divided into three parts: Partition 1, Partition 2, and Partition 3.]
Figure 2–17. One example of a partitioned single-disk unit
You can create up to eight partitions per storageset (disk drive, RAIDset, mirrorset,
stripeset, or striped mirrorset). Each partition has its own unit number so that the host can
send I/O requests to the partition just as it would to any unpartitioned storageset or device.
Partitions are separately addressable storage units; therefore, you can partition a single
storageset to service more than one user group or application.
Defining a Partition
Partitions are expressed as a percentage of the storageset or single disk unit that contains
them:
■ Mirrorsets and single disk units—the controller allocates the largest whole number of
blocks that are equal to or less than the percentage you specify.
■ RAIDsets and stripesets—the controller allocates the largest whole number of stripes
that are less than or equal to the percentage you specify.
❏ Stripesets—the stripe size = chunk size × number of members.
❏ RAIDsets—the stripe size = chunk size × (number of members minus 1)
An unpartitioned storage unit has more capacity than a partition that uses the whole unit
because each partition requires a small amount of disk space for metadata.
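As a worked example (the numbers are chosen only for illustration): a 4-member RAIDset initialized with a chunk size of 256 blocks has a stripe size of 256 × (4 − 1) = 768 blocks. If that RAIDset holds 1,000,000 blocks and you specify a 25 percent partition, the controller allocates the largest whole number of stripes that does not exceed 250,000 blocks: 325 stripes, or 249,600 blocks.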
Guidelines for Partitioning Storagesets and Disk Drives
Keep these points in mind when planning partitions for storagesets and disks:
■ Each storageset or disk drive may have up to eight partitions.
■ In transparent failover mode, all partitions of a particular container must be on the
same host port. Partitions cannot be split across host ports.
■ In multiple-bus failover mode, all the partitions of a particular container must be on the
same controller. Partitions cannot be split across controllers.
■ Partitions cannot be combined into storagesets. For example, you cannot divide a disk
drive into three partitions, then combine those partitions into a RAIDset.
■ Just as with storagesets, you do not have to assign unit numbers to partitions until you
are ready to use them.
■ The CLONE utility cannot be used with partitioned mirrorsets or partitioned stripesets.
(See “Cloning Data for Backup,” page 8–2 for details about cloning.)
Changing Characteristics through Switches
CLI command switches allow the user another level of command options. There are three
types of switches that modify the storageset and unit characteristics:
■ Storageset switches
■ Initialization switches
■ Unit switches
The following sections describe how to enable/modify switches. They also contain a
description of the major CLI command switches.
Enabling Switches
If you use SWCC to configure the device or storageset, you can set switches from the
SWCC screens during the configuration process, and SWCC automatically applies them
to the storageset or device. See the SWCC online help for information about using SWCC.
If you use CLI commands to configure the storageset or device manually, the
configuration procedure found in Chapter 5 of this guide indicates when and how to
enable each switch. The Compaq StorageWorks HSG80 Array Controller ACS Version 8.6
CLI Reference Guide contains the details of the CLI commands and their switches.
Changing Switches
You can change the RAIDset, mirrorset, device, and unit switches at any time. You cannot
change the initialize switches without destroying data on the storageset or device. These
switches are integral to the formatting and can only be changed by re-initializing the
storageset.
CAUTION: Initializing a storageset is similar to formatting a disk drive; all data is
destroyed during this procedure.
Storageset and Partition Switches
The characteristics of a particular storageset can be set by specifying switches when the
storageset is added to the controllers’ configuration. Once a storageset has been added, the
switches can be changed by using a SET command. Switches can be set for partitions and
the following types of storagesets:
■ RAIDset
■ Mirrorset
Stripesets have no specific switches associated with their ADD and SET commands.
RAIDset Switches
Use the following types of switches to control how a RAIDset ensures data availability:
■ Replacement policy
■ Reconstruction policy
■ Remove/replace policy
For details on the use of these switches, refer to the SET RAIDSET and SET raidset-name commands in the Compaq StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide.
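As a sketch only (the RAIDset name R1 is hypothetical, and the valid switch values are listed in the CLI Reference Guide), the replacement and reconstruction policies of an existing RAIDset might be changed as follows:

SET R1 POLICY=BEST_PERFORMANCE RECONSTRUCT=NORMAL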
Mirrorset Switches
Use the following switches to control how a mirrorset behaves to ensure data availability:
■ Replacement policy
■ Copy speed
■ Read source
■ Membership
For details on the use of these switches, refer to the ADD MIRRORSET and SET mirrorset-name commands in the Compaq StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide.
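Again as a sketch (the mirrorset name M1 is hypothetical; see the CLI Reference Guide for the complete switch list and values):

SET M1 POLICY=BEST_FIT COPY=NORMAL READ_SOURCE=LEAST_BUSY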
Partition Switches
The following partition switches are available when creating a partition:
■ Size
■ Geometry
For details on the use of these switches, refer to the CREATE_PARTITION command in the Compaq StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide.
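For example (the container name R1 is hypothetical), a partition covering one quarter of a container might be created with:

CREATE_PARTITION R1 SIZE=25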
Initialization Switches
Initialization switches set characteristics for established storagesets before they are made into units. The following kinds of switches affect the format of a disk drive or storageset:
■ Chunk Size (for stripesets and RAIDsets only)
■ Save Configuration
■ Destroy/Nodestroy
■ Geometry
Each of these switches is described in the following sections.
NOTE: After initializing the storageset or disk drive, you cannot change these switches without
reinitializing the storageset or disk drive.
Chunk Size
Specify the chunk size of the data to be stored to control the stripe size used in RAIDsets and stripesets:
■ CHUNKSIZE=DEFAULT lets the controller set the chunk size based on the number of
disk drives (d) in a stripeset or RAIDset. If d ≤ 9 then chunk size = 256. If d > 9 then
chunk size = 128.
■ CHUNKSIZE=n lets you specify a chunk size in blocks. The relationship between
chunk size and request size determines whether striping increases the request rate or
the data-transfer rate.
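For example (the container name R1 and the value are illustrative), the chunk size is specified when the container is initialized and cannot be changed afterward without reinitializing:

INITIALIZE R1 CHUNKSIZE=79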
Increasing the Request Rate
A large chunk size (relative to the average request size) increases the request rate by
enabling multiple disk drives to respond to multiple requests. If one disk drive contains all
of the data for one request, then the other disk drives in the storageset are available to
handle other requests. Thus, in principle, separate I/O requests can be handled in parallel,
thereby increasing the request rate. This concept is shown in Figure 2–18.
[Figure: with a chunk size of 128 KB (256 blocks), four concurrent requests (A, B, C, and D) are each serviced by a different disk drive in parallel.]
Figure 2–18. Chunk size larger than the request size
Large chunk sizes also tend to increase the performance of random reads and writes. It is
recommended that you use a chunk size of 10 to 20 times the average request size,
rounded to the closest prime number.
To calculate the chunk size that should be used for your subsystem, you must first analyze
the types of requests that are being made to the subsystem:
■ Many parallel I/Os that use a small area of disk should use a chunk size of 10 times the
average transfer request size.
■ Random I/Os that are scattered over all the areas of the disks should use a chunk size
of 20 times the average transfer request size.
■ If you do not know the request pattern, use a chunk size of 15 times the average
transfer request size.
■ If you have mostly sequential reads or writes (like those needed to work with large
graphic files), make the chunk size for RAID 0 and RAID 0+1 a small number (for
example: 67 sectors). For RAID 5, make the chunk size a relatively large number (for
example: 253 sectors).
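To make the arithmetic concrete: an average transfer of 4 KB is 8 blocks of 512 bytes. For many parallel I/Os in a small area, the chunk size would be 10 × 8 = 80 blocks, rounded to the nearby prime 79; for random I/Os over the whole disk it would be 20 × 8 = 160 blocks, rounded to 163. These are the values shown in Table 2–2.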
Table 2–2 shows a few examples of chunk size selection.
Table 2–2 Example Chunk Sizes

Transfer Size (KB)   Small Area of I/O Transfers   Unknown   Random Areas of I/O Transfers
        2                        41                   59                  79
        4                        79                  113                 163
        8                       157                  239                 317
Increasing Sequential Data Transfer Performance
RAID 0 and RAID 0+1 sets intended for high data transfer rates should use a relatively
low chunk size (for example: 67 sectors). RAID 5 sets intended for high data rate
performance should use a relatively large number (for example: 253 sectors).
Save Configuration
The SAVE_CONFIGURATION switch is for a single-controller configuration only. This
switch reserves an area on each of the disks that constitute the container being initialized.
The controller can write subsystem configuration data in this area. If the controller is
replaced, the new controller can read the subsystem configuration from the reserved areas
of the disks.
If you specify SAVE_CONFIGURATION for a multi-device storageset, such as a stripeset,
the complete subsystem configuration is periodically written on each disk in the
storageset.
CAUTION: If user data already exists on a storageset, do NOT reinitialize it with the
save configuration option, as this will change the size and position of the user data on
the storageset. Compaq recommends backing up user data prior to reinitializing any
storageset.
The SHOW DEVICES FULL command shows which disks are used to back up configuration
information.
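For example (the disk name is hypothetical), the switch is applied when the container is initialized, and the disks holding the saved configuration can then be listed:

INITIALIZE DISK10000 SAVE_CONFIGURATION
SHOW DEVICES FULL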
Destroy/Nodestroy
Specify whether to destroy or retain the user data and metadata when a disk is initialized
after it has been used in a mirrorset or as a single-disk unit.
NOTE: The DESTROY and NODESTROY switches are only valid for mirrorsets and striped
mirrorsets.
■ DESTROY (default) overwrites the user data and forced-error metadata when a disk
drive is initialized.
■ NODESTROY preserves the user data and forced-error metadata when a disk drive is
initialized. Use NODESTROY to create a single-disk unit from any disk drive that has
been used as a member of a mirrorset. See the REDUCED command in the Compaq
StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide for
information on removing disk drives from a mirrorset.
NODESTROY is ignored for members of a RAIDset.
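For example (the disk name is hypothetical), to turn a former mirrorset member into a single-disk unit without erasing its contents:

INITIALIZE DISK10100 NODESTROY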
Geometry
The geometry parameters of a storageset can be specified. The geometry switches are:
■ CAPACITY—the number of logical blocks. The range is from 1 to the maximum
container size.
■ CYLINDERS—the number of cylinders used. The range is from 1 to 16777215.
■ HEADS—the number of disk heads used. The range is from 1 to 255.
■ SECTORS_PER_TRACK—the number of sectors per track used. The range is from 1
to 255.
Unit Switches
Several switches control the characteristics of units. The unit switches are described under
the SET unit-number command in the Compaq StorageWorks HSG80 Array Controller ACS
Version 8.6 CLI Reference Guide.
One unit switch, ENABLE/DISABLE_ACCESS_PATH, determines which host connections
can access the unit, and is part of the larger topic of matching units to specific hosts. This
complex topic is covered in Chapter 1 under the following headings:
■ “Assigning Unit Numbers,” page 1–16
■ “Selective Storage Presentation,” page 1–21
Storage Maps
Configuring a subsystem will be easier if you know how the storagesets, partitions, and
JBODs correspond to the disk drives in your subsystem. You can more easily see this
relationship by creating a hardcopy representation (a storage map).
Creating a Storage Map
To make a storage map, fill out the templates provided in Appendix A as you add
storagesets, partitions, and JBOD disks to the configuration and assign them unit numbers.
Label each disk drive in the map with the higher levels it is associated with, up to the unit
level.
Example Storage Map - Model 4310R Disk Enclosure
Figure 2–19 shows an example of four Model 4310R disk enclosures (single-bus I/O).
■ Unit D100 is a 4-member RAID 3/5 storageset named R1. R1 consists of Disk10000,
Disk20000, Disk30000, and Disk40000.
■ Unit D101 is a 2-member striped mirrorset named S1. S1 consists of M1 and M2:
❏ M1 is a 2-member mirrorset consisting of Disk10100 and Disk20100.
❏ M2 is a 2-member mirrorset consisting of Disk30100 and Disk40100.
■ Unit D102 is a 2-member mirrorset named M3. M3 consists of Disk10200 and
Disk20200.
■ Unit D103 is a 2-member mirrorset named M4. M4 consists of Disk30200 and
Disk40200.
■ Unit D104 is a 3-member stripeset named S2. S2 consists of Disk10300, Disk20300, and
Disk30300.
■ Unit D105 is a single (JBOD) disk named Disk40300.
■ Unit D106 is a 3-member RAID 3/5 storageset named R2. R2 consists of Disk10400,
Disk20400, and Disk30400.
■ Unit D107 is a single (JBOD) disk named Disk40400.
■ Unit D108 is a 4-member stripeset named S3. S3 consists of Disk10500, Disk20500,
Disk30500, and Disk40500.
■ Unit D1 is a 2-member striped mirrorset named S4. S4 consists of M5 and M6:
❏ M5 is a 2-member mirrorset consisting of Disk10800 and Disk20800.
❏ M6 is a 2-member mirrorset consisting of Disk30800 and Disk40800.
■ Unit D2 is a 4-member RAID 3/5 storageset named R3. R3 consists of Disk11000,
Disk21000, Disk31000, and Disk41000.
■ Unit D3 is a 4-member stripeset named S5. S5 consists of Disk11100, Disk21100,
Disk31100, and Disk41100.
■ Unit D4 is a 2-member mirrorset named M7. M7 consists of Disk11200 and
Disk21200.
■ Disk31200 and Disk41200 are spareset members.
[Figure: storage map for Model 4310R disk enclosure shelves 1 through 4 (single-bus), showing each disk's bay (1 through 10), SCSI ID (00 through 05, 08, 10 through 12), DISK ID, and the unit and storageset it belongs to, as listed above.]
Figure 2–19. Model 4310R disk enclosure − example storage map
Example Storage Map - Model 4350R Disk Enclosure
Figure 2–20 shows an example of three Model 4350R disk enclosures (dual-bus).
■ Unit D100 is a 6-member RAID 3/5 storageset named R1. R1 consists of Disk10000,
Disk20000, Disk30000, Disk 40000, Disk50000, and Disk60000.
■ Unit D101 is a 6-member RAID 3/5 storageset named R2. R2 consists of Disk10100,
Disk20100, Disk30100, Disk40100, Disk50100, and Disk60100.
■ Unit D102 is a 2-member striped mirrorset named S1. S1 consists of M1 and M2:
❏ M1 is a 2-member mirrorset consisting of Disk10200 and Disk20200.
❏ M2 is a 2-member mirrorset consisting of Disk30200 and Disk40200.
■ Unit D103 is a 2-member mirrorset named M3. M3 consists of Disk50200 and
Disk60200.
■ Unit D1 is a 4-member stripeset named S2. S2 consists of Disk10300, Disk20300,
Disk30300, and Disk40300.
■ Unit D2 is a 2-member mirrorset named M4. M4 consists of Disk50300 and
Disk60300.
■ Unit D3 is a 2-member striped mirrorset named S3. S3 consists of M5 and M6:
❏ M5 is a 2-member mirrorset consisting of Disk10400 and Disk20400.
❏ M6 is a 2-member mirrorset consisting of Disk30400 and Disk40400.
■ Unit D4 is a single (JBOD) disk named Disk50400.
■ Disk60400 is a spareset member.
[Figure: storage map for Model 4350R disk enclosure shelves 1 through 3 (dual-bus), showing each disk's bay (1 through 10), SCSI bus (A or B), SCSI ID (00 through 04), DISK ID, and the unit and storageset it belongs to, as listed above.]
Figure 2–20. Model 4350R disk enclosure − example storage map
Example Storage Map - Model 4314R Disk Enclosure
Figure 2–21 shows an example of four Model 4314R disk enclosures (single-bus I/O).
■ Unit D100 is a 4-member RAID 3/5 storageset named R1. R1 consists of Disk10000,
Disk20000, Disk30000, and Disk40000.
■ Unit D101 is a 2-member striped mirrorset named S1. S1 consists of M1 and M2:
❏ M1 is a 2-member mirrorset consisting of Disk10100 and Disk20100.
❏ M2 is a 2-member mirrorset consisting of Disk30100 and Disk40100.
■ Unit D102 is a 2-member mirrorset named M3. M3 consists of Disk10200 and
Disk20200.
■ Unit D103 is a 2-member mirrorset named M4. M4 consists of Disk30200 and
Disk40200.
■ Unit D104 is a 3-member stripeset named S2. S2 consists of Disk10300, Disk20300, and
Disk30300.
■ Unit D105 is a single (JBOD) disk named Disk40300.
■ Unit D106 is a 3-member RAID 3/5 storageset named R2. R2 consists of Disk10400,
Disk20400, and Disk30400.
■ Unit D107 is a single (JBOD) disk named Disk40400.
■ Unit D108 is a 4-member stripeset named S3. S3 consists of Disk10500, Disk20500,
Disk30500, and Disk40500.
■ Unit D1 is a 2-member striped mirrorset named S4. S4 consists of M5 and M6:
❏ M5 is a 2-member mirrorset consisting of Disk10800 and Disk20800.
❏ M6 is a 2-member mirrorset consisting of Disk30800 and Disk40800.
■ Unit D2 is a 4-member RAID 3/5 storageset named R3. R3 consists of Disk10900,
Disk20900, Disk30900, and Disk40900.
■ Unit D3 is a 4-member stripeset named S5. S5 consists of Disk11000, Disk21000,
Disk31000, and Disk41000.
■ Unit D4 is a 2-member mirrorset named M7. M7 consists of Disk11100 and
Disk21100.
■ Unit D5 is a 2-member stripeset named S6. S6 consists of Disk31100 and Disk41100.
■ Unit D6 is a 4-member RAID 3/5 storageset named R4. R4 consists of Disk11200,
Disk21200, Disk31200, and Disk41200.
■ Unit D7 is a 2-member mirrorset named M8. M8 consists of Disk11300 and
Disk21300.
■ Unit D8 is a 2-member stripeset named S7. S7 consists of Disk31300 and Disk41300.
■ Unit D9 is a 4-member RAIDset named R5. R5 consists of Disk41400, Disk11400,
Disk21400, and Disk31400.
■ Disk11500, Disk21500, Disk31500, and Disk41500 are spareset members.
[Figure: storage map for Model 4314R disk enclosure shelves 1 through 4 (single-bus), showing each disk's bay (1 through 14), SCSI ID (00 through 05, 08 through 15), DISK ID, and the unit and storageset it belongs to, as listed above.]
Figure 2–21. Model 4314R disk enclosure − example storage map
Example Storage Map - Model 4354R Disk Enclosure
Figure 2–22 shows an example of three Model 4354R disk enclosures (dual-bus).
■ Unit D100 is a 6-member RAID 3/5 storageset named R1. R1 consists of Disk10000,
Disk20000, Disk30000, Disk40000, Disk50000, and Disk60000.
■ Unit D101 is a 6-member RAID 3/5 storageset named R2. R2 consists of Disk10100,
Disk20100, Disk30100, Disk40100, Disk50100, and Disk60100.
■ Unit D102 is a 2-member striped mirrorset named S1. S1 consists of M1 and M2:
❏ M1 is a 2-member mirrorset consisting of Disk10200 and Disk20200.
❏ M2 is a 2-member mirrorset consisting of Disk30200 and Disk40200.
■ Unit D103 is a 2-member mirrorset named M3. M3 consists of Disk50200 and
Disk60200.
■ Unit D104 is a 2-member striped mirrorset named S2. S2 consists of M3 and M4:
❏ M3 is a 2-member mirrorset consisting of Disk10300 and Disk20300.
❏ M4 is a 2-member mirrorset consisting of Disk30300 and Disk40300.
■ Unit D105 is a 2-member stripeset named S3. S3 consists of Disk50300 and
Disk60300.
■ Unit D1 is a 4-member stripeset named S4. S4 consists of Disk10400, Disk20400,
Disk30400, and Disk40400.
■ Unit D2 is a 2-member mirrorset named M5. M5 consists of Disk50400 and
Disk60400.
■ Unit D3 is a 2-member striped mirrorset named S5. S5 consists of M6 and M7:
❏ M6 is a 3-member mirrorset consisting of Disk10500, Disk20500, and Disk30500.
❏ M7 is a 2-member mirrorset consisting of Disk40500 and Disk50500.
■ Unit D4 is a single (JBOD) disk named Disk60500.
■ Unit D5 is a 4-member RAID 3/5 storageset named R3. R3 consists of Disk10800,
Disk20800, Disk30800, and Disk40800.
■ Disk50800 and Disk60800 are spareset members.
[Figure: storage map for Model 4354R disk enclosure shelves 1 through 3 (dual-bus), showing each disk's bay (1 through 14), SCSI bus (A for bays 1 through 7, B for bays 8 through 14), SCSI ID (00 through 05, 08), DISK ID, and the unit and storageset it belongs to, as listed above.]
Figure 2–22. Model 4354R disk enclosure − example storage map
Using the LOCATE Command to Find Devices
If you want to complete a storage map at a later time but do not remember where all disk
drives, partitions, etc. are located, use the CLI command LOCATE. The LOCATE command
flashes the (fault) LED on the drives associated with the specific storageset or unit. To turn
off the flashing LEDs, enter the CLI command LOCATE CANCEL.
The following procedure is an example of the commands to locate all the disk drives that
make up unit D104:
1. Enter the following command:
LOCATE D104
The LEDs on the disk drives that make up unit D104 will flash.
2. Note the position of all the drives contained within D104.
3. Enter the following command to turn off the flashing LEDs:
LOCATE CANCEL
The following procedure is an example command to locate all the drives that make up
RAIDset R1:
1. Enter the following command:
LOCATE R1
2. Note the position of all the drives contained within R1.
3. Enter the following command to turn off the flashing LEDs:
LOCATE CANCEL
Chapter 3
Preparing the Host System
This chapter describes how to prepare your Tru64 UNIX host computer to accommodate
the HSG80 controller storage subsystem.
The following information is included in this chapter:
■ “Enterprise Storage RAID Array Storage System Installation,” page 3–1
■ “Making a Physical Connection,” page 3–5
■ “Preparing LUNs for Access by Tru64 UNIX FileSystem,” page 3–6
■ “DECsafe Available Server Environment (ASE),” page 3–8
■ “HSG80 Units and Tru64 UNIX Utilities,” page 3–9
Enterprise Storage RAID Array Storage System Installation
WARNING: A shock hazard exists at the backplane when the controller enclosure
bays or cache module bays are empty.
Be sure the enclosures are empty, then mount the enclosures into the rack. DO NOT
use the disk enclosure handles to lift the enclosure. The handles cannot support the
weight of the enclosure. Only use these handles to position the enclosure in the
mounting brackets.
Use two people to lift, align, and install any enclosure into a rack. Failure to use two
people might cause personal injury and/or equipment damage.
CAUTION: Controller and disk enclosures have no power switches. Make sure the
controller enclosures and disk enclosures are physically configured before turning the
PDU on and connecting the power cords. Failure to do so can cause equipment
damage.
1. Be sure the enclosures are empty before mounting them into the rack. If necessary,
remove the following elements from the controller enclosure:
❏ Environmental Monitoring Unit (EMU)
❏ Power Supplies
❏ External Cache Batteries (ECBs)
❏ Fans
If necessary, remove the following elements from the disk enclosure:
❏ Power Supply/Blower Assemblies
❏ Disk Drives
❏ Environmental Monitoring Unit (EMU)
❏ I/O Modules
Refer to the Compaq StorageWorks Model 2100 and 2200 UltraSCSI Controller User
Guide, Compaq StorageWorks Enclosure 4200 Family LVD Disk Enclosure User
Guide, and Compaq StorageWorks Enclosure 4300 Family LVD Disk Enclosure User
Guide for further information.
2. Install brackets onto the controller enclosure and disk enclosures. Using two people,
mount the enclosures into the rack. Refer to the mounting kit documentation for
further information.
3. Install the elements. Install the disk drives. Make sure you install blank panels in any
unused bays.
NOTE: Fibre Channel cabling information is shown to illustrate supported configurations. In a
dual-bus disk enclosure configuration, disk enclosures 1, 2, and 3 are stacked below the
controller enclosure, with two SCSI buses per enclosure (see Figure 3–1). In a single-bus disk
enclosure configuration, disk enclosures 6, 5, and 4 are stacked above the controller enclosure
and disk enclosures 1, 2, and 3 are stacked below the controller enclosure, with one SCSI bus
per enclosure (see Figure 3–2).
4. Connect the six VHDCI UltraSCSI bus cables between the controller and disk
enclosures as shown in Figure 3–1 for a dual bus system and Figure 3–2 for a single
bus system. Note that the supported cable lengths are 1, 2, 3, 5, and 10 meters.
[Figure: cabling diagram. Key: 1 SCSI Bus 1 Cable; 2 SCSI Bus 2 Cable; 3 SCSI Bus 3 Cable; 4 SCSI Bus 4 Cable; 5 SCSI Bus 5 Cable; 6 SCSI Bus 6 Cable; 7 AC Power Inputs; 8 Fibre Channel Ports.]
Figure 3–1. Dual-Bus Enterprise Storage RAID Array Storage System
[Figure: cabling diagram. Key: 1 SCSI Bus 1 Cable; 2 SCSI Bus 2 Cable; 3 SCSI Bus 3 Cable; 4 SCSI Bus 4 Cable; 5 SCSI Bus 5 Cable; 6 SCSI Bus 6 Cable; 7 AC Power Inputs; 8 Fibre Channel Ports.]
Figure 3–2. Single-Bus Enterprise Storage RAID Array Storage System
5. Connect the AC power cords from the appropriate rack AC outlets to the controller and
disk enclosures.
Making a Physical Connection
To attach a host computer to the storage subsystem, install one or more host bus adapters
into the computer. A Fibre Channel (FC) cable goes from the host bus adapter to an FC
switch.
Preparing to Install the Host Bus Adapter
Before installing the host bus adapter, perform the following steps:
1. Perform a complete backup of the entire system.
2. Shut down the computer system.
Installing the Host Bus Adapter
The first step in making a physical connection is the installation of a host bus adapter.
CAUTION: Protect the host bus adapter board from electrostatic discharge by wearing
an ESD wriststrap. DO NOT remove the board from the antistatic cover until you are
ready to install it.
You need the following items to begin:
■ Host bus adapter board
■ The computer hardware manual
■ Appropriate tools to service your computer
The host bus adapter board plugs into a standard PCI slot in the host computer. Refer to
the system manual for instructions on installing PCI devices.
NOTE: Do not power on anything yet. For the FC switches to autoconfigure, you must power on
the equipment in a specific sequence. Also, the controllers in the subsystem are not yet
configured for compatibility with Tru64 UNIX.
Preparing LUNs for Access by Tru64 UNIX FileSystem
Tru64 UNIX treats an HSG80 LUN much like a SCSI disk; therefore, to prepare your
LUN for access by the Tru64 UNIX filesystem, you must do the following:
■ Create the partitions on the LUN using disklabel
■ Create a filesystem on the LUN
■ Mount the filesystem to be able to access it
Creating Partitions on a LUN Using disklabel
Create the partitions on a LUN by issuing a disklabel command. The disklabel command
partitions the LUN for access by the Tru64 UNIX Operating System. Tru64 UNIX defines
only partitions a, b, c, and g for the HSG80 controller. In addition, refer to the rz and
disktab man pages for more information about the disklabel utility.
To create the read/write partitions on a LUN using the default partition sizes, enter the
following:
For V4.0G
# disklabel -rw device HSG80
Where:
■ HSG80 defines this LUN as attached to the HSG80 array controller. Always use the
device label HSG80 regardless of what drive types make up the LUN.
■ device is the character device name.
For example, to create partitions on the block device rza8, enter:
# disklabel -rw rza8 HSG80
For V5.1x
# disklabel -rw disk HSG80
Where:
■ HSG80 specifies the type of disk as found in /etc/disktab or the driver for that device.
■ disk is the character device name found in /dev/rdisk.
For example, to create partitions on the block device dsk8, enter:
# disklabel -rw dsk8 HSG80
To view a LUN partition, enter:
# disklabel -r device
Creating a Filesystem on a LUN
NOTE: The newfs command is given here as an example. For Advanced File System (ADVFS)
and for making devices available for Logical Storage Manager (LSM), similar types of
commands exist. For additional information, please consult the related documentation.
Use the newfs command to create a UFS filesystem on a LUN the same way that you
would create a filesystem on a disk device by entering the following:
# newfs special-device disk-type
Where:
■ disk-type is the disk type as listed in /etc/disktab, or HSG80.
■ special-device can be either of the following examples:
For V4.0G
/dev/rrza8c
To create a UFS filesystem using the above examples, enter the following:
# newfs /dev/rrza8c HSG80
For V5.1x
/dev/rdisk/dsk8c
To create a UFS filesystem using the above examples, enter the following:
# newfs /dev/rdisk/dsk8c HSG80
This creates a UFS filesystem on the c partition (whole disk-device).
Mounting the Filesystem
To access the LUN, mount it as a device filesystem to a mount point. For example:
For V4.0G
# mount /dev/rza8c /mnt
For V5.1x
# mount /dev/disk/dsk8c /mnt
To view the mounted filesystem, enter:
# df
The LUN is now accessible to the filesystem just as a disk device would be. The
filesystem cannot see the RAID functionality or the number of physical devices attached
to the HSG80 controller; the device appears to the filesystem as a single LUN, or "disk."
DECsafe Available Server Environment (ASE)
HSG80 disk devices can be used with the DECsafe Available Server Environment (ASE)
for Tru64 UNIX provided a valid Host configuration (including Host adapters) is used to
support them. Refer to the Tru64 UNIX ASE Installation and User’s Guide, Software
Product Description, SPD: 44.17.xx, for further information. Refer to the Release Notes
for supported host adapters and Tru64 UNIX version levels for ASE.
Using genvmunix
NOTE: This section on genvmunix does not apply to Tru64 UNIX V5.1x, since the configuration
file has a different format for devices.
If you use genvmunix to initialize the system and doconfig to build a new configuration
file, the new configuration file will only list the HSG80 LUN 0 units; nonzero LUNs will
not appear in the new configuration file.
Before rebuilding a configuration file using genvmunix, save any existing customized
configuration file that has entries for HSG80 units. After rebuilding the configuration file,
add the entries from the saved configuration file to the new configuration file.
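As a sketch only (HOSTNAME stands for your system's configuration file name in /sys/conf, and the name of the saved copy is arbitrary), the save-and-rebuild sequence might look like this:

# cp /sys/conf/HOSTNAME /sys/conf/HOSTNAME.save
# doconfig -c HOSTNAME

After doconfig completes, copy the HSG80 unit entries from HOSTNAME.save back into the new configuration file.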
HSG80 Units and Tru64 UNIX Utilities
This section contains notes on the interactions of some Tru64 UNIX utilities with storage
units in your Enterprise Storage RAID Array storage system.
File Utility
You can use the Tru64 UNIX file utility to determine if a Controller Unit can be accessed
from the host.
The unit that you want to test must already have a character mode device special file and
the correct disk label.
The following example uses the HSG80 unit D101 on SCSI Bus-2. Run the file command
and specify the character mode device special file, in the format described below.
For V4.0G
# /usr/bin/file /dev/rrzb17a
The device activity indicator (green light) will illuminate on the device if the information
is not in cache. If the unit is a multi-device container, only one of the devices from that
container will illuminate. The Tru64 UNIX Operating System should display information
similar to the following output after the command is entered:
/dev/rrzb17a character special (8/33856) SCSI #2 HSG80 disk #146 (SCSI ID #1)
■ 8 is the major number
■ 33856 represents the minor number
■ 2 is the SCSI host-side bus number
■ 146 is the drive number as listed in the Configuration File
■ 1 is the Controller Target ID
For V5.1x
# /usr/bin/file /dev/rdisk/dsk7c
The device activity indicator (green light) will illuminate on the device if the information
is not in cache. If the unit is a multi-device container, only one of the devices from that
container will illuminate. The Tru64 UNIX Operating System should display information
similar to the following output after the command is entered:
/dev/rdisk/dsk7c character special (19/150) SCSI #2 HSG80 disk #0 (SCSI ID #1) (SCSI LUN #1)
■ 19 is the major number
■ 150 represents the minor number
■ 2 is the SCSI host-side bus number
■ 0 is the drive number as listed in the Configuration File
■ 1 is the Controller Target ID
■ 1 is the LUN number
If the only output that is returned from the file command is the major and minor number,
then either the device is not answering or the device special file does not have the correct
minor number. Check the minor number to be sure that it matches the host SCSI bus
number, the Controller target ID, and the Controller Unit LUN.
If an error occurs regarding the disk label, there is a good probability that the device can
be accessed. This error can usually be fixed by creating the disk label with the Tru64
UNIX disklabel utility.
NOTE: For major and minor number calculations see “man SCSI” in the UNIX online help.
Reading from the device, dd
Check the created device using dd on the ‘raw’ device to see if there is a full
communication path between devices and Tru64 UNIX. For example:
For V4.0G
# dd if=/dev/rrzb17a of=/dev/null
For V5.1x
# dd if=/dev/rdisk/dsk7a of=/dev/null
This reads the full disk device until you press Ctrl-C or the device has been read
completely. If the test is successful, the device activity LED (green) on the device lights.
If the device consists of multiple disks, all of them will light.
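To bound the test instead of reading the entire device, a block size and count can be added; the values here are arbitrary:

# dd if=/dev/rdisk/dsk7a of=/dev/null bs=64k count=100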
scu
You can use the SCSI CAM Utility (scu) program to see which HSG80 units are available
to the Tru64 UNIX Operating System. It is located in the /sbin directory and documented
in the REF Pages.
The scu command scan edt polls all devices on the host-side SCSI buses, letting you see
which devices are available on each bus. The device special files do not have to exist for
scu to see the devices. For example, to scan SCSI bus 2, where your Enterprise Storage
RAID Array is connected:
For V4.0G and V5.1x
# /sbin/scu scan edt bus 2
# /sbin/scu show edt bus 2
V4.0G Output
CAM Equipment Device Table (EDT) Information:
Bus: 2, Target: 1, Lun: 0, Device Type: Direct Access
Bus: 2, Target: 1, Lun: 1, Device Type: Direct Access
Bus: 2, Target: 1, Lun: 2, Device Type: Direct Access
Bus: 2, Target: 1, Lun: 3, Device Type: Direct Access
V5.1x Output
CAM Equipment Device Table (EDT) Information:
Bus/Target/Lun   Device Type   ANSI     Vendor ID   Product ID   Revision   N/W
--------------   -----------   ------   ---------   ----------   --------   ---
  2   1   0      Direct        SCSI-2   DEC         HSG80        V86L       W
  2   1   1      Direct        SCSI-2   DEC         HSG80        V86L       W
  2   1   2      Direct        SCSI-2   DEC         HSG80        V86L       W
  2   1   3      Direct        SCSI-2   DEC         HSG80        V86L       W
For detailed information about one of the units, you may use:
# /sbin/scu show device bus 2 target 1 lun 0
The preceding command line gives you the SCSI inquiry of the selected device.
HSG80 units appear like any other SCSI device. All four entries for Bus 2, Target 1 in the
example display are HSG80 units. The last entry would be for unit D103 on host SCSI bus
2.
hwmgr
The hwmgr utility is available on Tru64 UNIX V5.1x systems and is a powerful tool for
managing system hardware. For example, you can use this command to view devices:
# hwmgr -view devices
and this one to get attributes:
# hwmgr -get attribute
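To narrow the attribute listing to a single device, an ID taken from the -view devices output can be supplied (the ID 54 here is hypothetical):

# hwmgr -get attribute -id 54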
See the UNIX online help for more information.
iostat
You can use the iostat utility to view performance statistics on Enterprise Storage RAID
Array storage units. (Set your terminal screen to 132 columns before running iostat.)
The output from iostat shows the number of devices (LUNs) that have been defined in the
configuration file. It is much easier to interpret the output if the configuration file contains
entries for all eight devices. If the configuration file does not contain entries for all
devices, the iostat output has fewer columns and it is difficult to correlate each column
with a specific device.
For V4.0G
# iostat rznn s t
The output from iostat shows all devices that have device name rznn. The information for
LUN 0 is in the first column, the information for LUN 1 is in the second column, and so
forth.
# iostat rz16 5 4
 rza16    rzb16    rzc16    rzd16    rze16    rzf16    rzg16     rzh16
 bps tps  bps tps  bps tps  bps tps  bps tps  bps tps  bps tps   bps tps
   0   0    0   0    0   0    0   0    0   0    0   0    0   0   126   3
   0   0    0   0    0   0    0   0    0   0    0   0    0   0  1618  34
   0   0    0   0    0   0    0   0    0   0    0   0    0   0  1639  34
   0   0    0   0    0   0    0   0    0   0    0   0    0   0  1610  34
The above display shows activity on all eight LUNs attached to host SCSI bus 2, target 0
(devices rza16 through rzh16). The device with activity is rzh16.
The iostat utility only recognizes device names in the format rznn where nn is calculated
as:
(8 * Host SCSI Bus #) + (HSG Target ID)
(This is the same formula and format that is used for the configuration file.)
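For example, host SCSI bus 2 and HSG80 target ID 0 give nn = (8 × 2) + 0 = 16, so the eight LUNs appear as rza16 through rzh16; target ID 1 on the same bus gives nn = 17.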
Invoke the iostat utility using the following format:
For V5.1x
# iostat dsk3 s t
Where:
■ rznn or dsk3 is the device name
■ The s is optional and denotes the amount of time, in seconds, between screen updates
■ The t is optional and denotes the total number of screen updates
# iostat dsk3 1 5
      tty        floppy0        dsk3        cpu
 tin  tout     bps   tps     bps   tps   us ni sy id
   0     5       0     0      10     2    0  0  1  99
   0    52       0     0      16     2    0  0  0 100
   0    52       0     0      16     2    0  0  0 100
   0    52       0     0      16     2    0  0  0 100
   0    52       0     0       8     1    0  0  1  99
Chapter 4
Installing and Configuring the HS-Series Agent
The following information is included in this chapter:
■ “Why Use StorageWorks Command Console (SWCC)?,” page 4–1
■ “Installation and Configuration Overview,” page 4–3
■ “Reconfiguring the Agent,” page 4–13
■ “Removing the Agent,” page 4–15
Why Use StorageWorks Command Console (SWCC)?
StorageWorks Command Console (SWCC) enables you to monitor and configure the
storage connected to the HSG80 controller. SWCC consists of Client and Agent.
■ Client provides pager notification and lets you manage your virtual disks. Client runs
on Windows 2000 with Service Pack 1 and Windows NT 4.0 (Intel) with Service Pack
6A.
■ Agent obtains the status of the storage connected to the controller. It also passes the
status of the devices connected to the controller to other computers and provides email
notification and error logging.
To receive information about the devices connected to your HSG80 controller over a
TCP/IP network, you must install the Agent on a computer that is connected to a
controller.
The Agent can also be used as a standalone application without Client. In this mode,
which is referred to as Agent only, Agent monitors the status of the subsystem and
provides local and remote notification in the event of a failure. A subsystem includes the
HSG80 controller and its devices. Remote and local notification can be made by email
and/or SNMP messages to an SNMP monitoring program.
Table 4–1 SWCC Features and Components

Features                                      Agent Required?   Client Required?
Able to create the following:                 Yes               Yes
  ■ Striped device group (RAID 0)
  ■ Mirrored device group (RAID 1)
  ■ Striped mirrored device group (RAID 0+1)
  ■ Striped parity device group (RAID 3/5)
  ■ Individual device (JBOD)
Able to monitor many subsystems at once       Yes               Yes
Event logging                                 Yes               No
Email notification                            Yes               No
Pager notification                            Yes               Yes
NOTE: For serial and SCSI connections, the Agent is not required for creating virtual disks.
Installation and Configuration Overview
Table 4–2 provides an overview of the installation.
Table 4–2 Installation and Configuration Overview

Step 1. Verify that your hardware has been set up correctly. See the previous chapters in this guide.
Step 2. Verify that you have a network connection for the Client and Agent systems.
Step 3. Verify that there is a LUN to communicate through. This can be either the CCL or a LUN that was created with the CLI. See "The Command Console LUN" described in Chapter 1.
Step 4. Install the Agent (TCP/IP network connections) on a system connected to the HSG80 controller. See “Installing and Configuring the Agent,” page 4–5.
Step 5. Add the name of the Client system to the Agent’s list of Client system entries (TCP/IP network connections). This can be done during installation or when reconfiguring the Agent.
Step 6. Install the Client software on Windows 2000 with Service Pack 1 or Windows NT 4.0 (Intel) with Service Pack 6A. See Appendix B.
Step 7. Add the name of the agent system to the Navigation Tree of each Client system that is on the Agent’s list of Client system entries (TCP/IP network connections). See Appendix B.
Step 8. Set up pager notification (TCP/IP network connections). Refer to “Setting Up Pager Notification” in the Compaq StorageWorks Command Console Version 2.4 User Guide.
Figure 4–1. An example of a network connection
1 Agent system (has the Agent software)
2 TCP/IP Network
3 Client system (has the Client software)
4 Fibre Channel cable
5 Hub or switch
6 HSG80 controller and its device subsystem
7 Servers
Before Installing the Agent
1. Log in as root (superuser). Agent installations on Tru64 UNIX must be done locally.
Do not attempt to install the Agent over the network.
2. Remove previous versions of the Agent from your computer.
3. Read the release notes.
CAUTION: Do not install the Agent by using Dataless Management Services (DMS) or
Remote Installation Services (RIS) server installations.
Installing and Configuring the Agent
The following instructions assume that you have a directory /mnt to which you can mount
the CD-ROM. If you do not, you must create a mount point and replace /mnt in the
following sequence with the mount point you created. The instructions also assume your
CD-ROM device is /dev/rz6c; if not, replace /dev/rz6c with the actual CD-ROM device.
IMPORTANT: Before you install the Agent, complete the steps in “Before Installing the Agent.”
1. Obtain the following information:
❏ Names of systems you want the Agent to contact
❏ Names of subsystems connected to the host
❏ Email addresses for the Agent to send status updates
2. Insert the CD-ROM into a computer.
3. Enter one of the following commands at the command prompt:
❏ For V4.0x enter:
# mount -r -t cdfs -o rrip /dev/rz6c /mnt
(substituting your CD-ROM device for rz6c if necessary)
❏ For V5.1x enter:
# mount -r -t cdfs /dev/disk/cdrom0c /mnt
(substituting your CD-ROM device for cdrom0c if necessary)
4. Change directories on the CD-ROM by entering:
# cd /mnt/agent
CAUTION: The version of Compaq Tru64 UNIX that you are using, either V4.0x or
V5.1x, determines which Device Special File Name format you will need to enter.
5. To run the installation program, enter the following at the command prompt:
# setld -l .
NOTE: The -l is a lowercase L.
You are asked if you want to install the listed subsets.
6. Enter the corresponding number of the software and then press the Enter key.
Configure Client System Options
7. Select option y and then press the Enter key.
As the installation continues, you see several system messages, telling you that the
subset has been installed and is being loaded. The computer asks you to add a client
system entry.
8. Enter the name of the client system that you want to receive updates from this Agent.
Press the Enter key.
NOTE: Enter your most important client system first. Enter client systems that are connected
infrequently last. The Agent puts the client systems entered first at the top of its list. It first
contacts the client system located at the top of the list.
9. From the access options menu, enter an access level for the client system.
The access privilege level controls the client system’s level of access to the
subsystems. The following explains each access option:
Table 4–3 Client System Access Options

0 = Overall Status
■ Can use the Client to add a system to a Navigation Tree, set up a pager, and view
properties of the controller and the system
■ Cannot use the Client to open a Storage Window

1 = Detailed Status
■ Can use the Client to open a Storage Window, but you cannot make modifications
in that window

2 = Configuration and Status
■ Can use the Client to make changes in a Storage Window to modify a subsystem
configuration
10. Press the Enter key.
A menu for selecting a client system notification scheme appears:
11. From the menu, enter a notification scheme for the client system: 0, 1, 2, or 3.
The notification scheme defines the network protocol to be used by the Agent when
notifying the selected client system of a change in the state of a subsystem. The
following describes how the Transmission Control Protocol/Internet Protocol (TCP/IP)
and the Simple Network Management Protocol (SNMP) work with SWCC.
Table 4–4 Client System Notification Options

Transmission Control Protocol/Internet Protocol (TCP/IP)
■ Automatically updates the Storage Window of subsystem changes, provided AES
is running
■ Required for Windows NT event logging and pager notification
■ If you do not select TCP/IP, you will need to refresh the Storage Window
(depending on the subsystem) to obtain the latest status of a subsystem

Simple Network Management Protocol (SNMP)
■ Requires you to use an SNMP-monitoring program to view SNMP traps
12. Press the Enter key.
The computer asks you if the entered information is correct.
13. Select option y and then press the Enter key.
A message, asking if you would like to add another Client, appears.
14. To stop adding client systems, select option n and press the Enter key.
You are asked for a password, which is required to do configurations within the Client
software. If an old password is found, you are asked if you want to use it.
Enter Subsystem Information
15. Enter a case-sensitive password of 4 to 16 characters, and press the Enter key. You are
asked to retype the password.
16. Retype the password and press the Enter key.
Once the password has been entered, the system scans for subsystems. You will be
asked for the name and monitoring interval of each RAID subsystem found.
17. Enter the subsystem name in lowercase letters, and press the Enter key. You are asked
for the monitoring interval. The monitoring interval is the rate at which the Agent
queries the specified subsystem for status.
18. Enter the monitoring interval in seconds, and press the Enter key. The monitoring
interval determines how often the Agent polls the subsystems. You are asked if the
displayed information is correct.
19. If the displayed information is correct, select option y and press the Enter key.
You are asked if you want Email enabled.
Configure Email Notification Details
20. To enable the Email, select option y and press the Enter key. The software asks for the
address of the person to notify.
21. Enter the address of the person to notify, and press the Enter key.
22. From the displayed menu, enter the notification level: 1, 2, or 3.
Table 4–5 provides the definitions of the Email notification options:
Table 4–5 Definitions of Email Notification Options

Critical Errors: Notifies you of errors that would prevent you from accessing data; for
example, a disk failing or a client system not available on the network.
Warnings: Notifies you when something is broken, but the breakage does not prevent you
from accessing data; for example, the RAID degrading or losing a fan.
Information: Provides messages that do not indicate that something is broken; for
example, an Agent startup message or a message saying that an error has been resolved.
23. Press the Enter key.
The software asks if the displayed information is correct.
24. If the displayed information is correct, select option y and press the Enter key.
A message, asking if you would like to add another person to notify, appears.
25. Select option n to end the dialog or y to add another person to notify. Press the
Enter key.
After you select option n, the system will start the new Agent. If the installation script
detects a problem at Agent startup, it will alert you.
You are asked if you are running Agent in a TruCluster environment.
Final System Configuration
26. Do one of the following:
❏ If you are running the Agent in a cluster, select option y. The software is configured
to scan the SCSI bus for subsystems at startup.
❏ If you are not running the Agent in a cluster, select option n. The software is
configured to look at the storage.ini file at startup time. The storage.ini file contains
the listing of your subsystems.
NOTE: Compaq recommends that, if you are running Tru64 UNIX V5.1x (especially within a
fabric), you configure the software to scan the SCSI bus at startup: when asked if you are
running the Agent in a cluster, answer y.
The system starts the new Agent.
27. Enter the following and press the Enter key to unmount the CD-ROM:
# cd /
# umount /mnt
The following instructions assume that you have a directory /mnt to which you can mount
the CD-ROM. If you do not, you must create a mount point and replace /mnt in the
following sequence with the mount point you created. It also assumes your CD-ROM
device is /dev/rz4c. If not, replace /dev/rz4c with the actual CD-ROM device.
IMPORTANT: Before you install the Agent, complete the steps in “Before You Install the Agent.”
1. Obtain the following information:
❏ Names of systems you want the Agent to contact
❏ Names of subsystems connected to the host
❏ Email addresses for the Agent to send status updates
2. Insert the CD-ROM into a computer.
3. Enter one of the following commands at the command prompt:
❏ For V4.0x enter:
# mount -r-t cdfs -o rrip /dev/rz6c /mnt
(Substituting rz6c if necessary for your CD-ROM)
❏ For V5.1x enter:
# mount -r-t cdfs
/dev/disk/cdrom0c /mnt
(Substituting cdrom0c if necessary for your CD-ROM)
4–10 HSG80 ACS Solution Software Version 8.6 for Compaq Tru64 UNIX Installation and Configuration Guide
4. Change directories on the CD-ROM by entering:
# cd /mnt/agent
CAUTION: The version of Compaq Tru64 UNIX that you are using, either V4.0x or
V5.1x will determine which Device Special File Name format you will need to enter.
5. To run the installation program, enter the following at the command prompt:
# setld -l .
NOTE: The -l is a lowercase L.
You are asked if you want to install the listed subsets.
6. Enter the corresponding number of the software and then press the Enter key.
Configure Client System Options
7. Select option y and then press press the Enter key.
As the installation continues, you see several system messages, telling you that the
subset has been installed and is being loaded. The computer asks you to add a client
system entry.
8. Enter the name of the client system that you want to receive updates from this Agent.
Press the Enter key.
NOTE: Enter your most important client system first. Enter client systems that are connected
infrequently last. The Agent puts the client systems entered first at the top of its list. It first
contacts the client system located at the top of the list.
9. From the access options menu, enter an access level for the client system.
The access privilege level controls the client system’s level of access to the
subsystems. The following explains each access option:
Table 4–6 Client System Access Options

0 = Overall Status
■ Can use the Client to add a system to a Navigation Tree, set up a pager, and view properties of the controller and the system
■ Cannot use Client to open a Storage Window

1 = Detailed Status
■ Can use the Client to open a Storage Window, but you cannot make modifications in that window

2 = Configuration and Status
■ Can use the Client to make changes in a Storage Window to modify a subsystem configuration
10. Press the Enter key.
A menu for selecting a client system notification scheme appears.
11. From the menu, enter a notification scheme for the client system: 0, 1, 2, or 3.
The notification scheme defines the network protocol to be used by the Agent when
notifying the selected client system of a change in the state of a subsystem. The
following describes how the Transmission Control Protocol/Internet Protocol (TCP/IP)
and the Simple Network Management Protocol (SNMP) work with SWCC.
Table 4–7 Client System Notification Options

Transmission Control Protocol/Internet Protocol (TCP/IP)
■ Automatically updates the Storage Window of subsystem changes provided AES is running
■ Required for Windows NT event logging and pager notification
■ If you do not select TCP/IP, you will need to refresh the Storage Window (depending on the subsystem) to obtain the latest status of a subsystem

Simple Network Management Protocol (SNMP)
■ Requires you to use an SNMP-monitoring program to view SNMP traps
12. Press the Enter key.
The computer asks you if the entered information is correct.
13. Select option y and then press the Enter key.
A message, asking if you would like to add another Client, appears.
14. To stop adding client systems, select option n and press the Enter key.
You are asked for a password, which is required to do configurations within the Client
software. If an old password is found, you are asked if you want to use it.
Enter Subsystem Information
15. Enter a case-sensitive password of 4 to 16 characters, and press the Enter key. You are asked to retype the password.
16. Retype the password and press the Enter key.
Once the password has been entered, the system scans for subsystems. You will be
asked for the name and monitoring interval of each RAID subsystem found.
17. Enter the subsystem name in lowercase letters, and press the Enter key. You are asked
for the monitoring interval. The monitoring interval is the rate at which the Agent
queries the specified subsystem for status.
18. Enter the monitoring interval in seconds, and press the Enter key. The monitoring
interval determines how often the Agent polls the subsystems. You are asked if the
displayed information is correct.
19. If the displayed information is correct, select option y and press the Enter key.
You are asked if you want Email enabled.
Configure Email Notification Details
20. To enable Email notification, select option y and press the Enter key. The software asks for the
address of the person to notify.
21. Enter the address of the person to notify, and press the Enter key.
22. From the displayed menu, enter the notification level: 1, 2, or 3.
Table 4–8 provides the definitions of the Email notification options:
Table 4–8 Definitions of Email Notification Options

Critical Errors
Notifies you of errors that would prevent you from accessing data; for example, a disk failing or a client system not available on the network.

Warnings
Notifies you when something is broken, but its breakage does not prevent you from accessing data; for example, the RAID degrading or losing a fan.

Information
Provides messages, but they do not indicate that something is broken. Examples of informational messages are an Agent startup message or a message saying that an error has been resolved.
23. Press the Enter key.
The software asks if the displayed information is correct.
24. If the displayed information is correct, select option y and press the Enter key.
A message, asking if you would like to add another person to notify, appears.
25. Select option n to end the dialog or y to add another person to notify. Press the Enter key.
After you select option n, the system will start the new Agent. If the installation script
detects a problem at Agent startup, it will alert you.
You are asked if you are running Agent in a TruCluster environment.
Final System Configuration
26. Do one of the following:
❏ If you are running the Agent in a cluster, select option y. The software is configured
to scan the SCSI-bus for subsystems at startup.
❏ If you are not running the Agent in a cluster, select option n. The software is
configured to look at the storage.ini file at startup time. The storage.ini file contains
the listing of your subsystems.
NOTE: Compaq recommends that, if you are running Tru64 UNIX 5.1x (especially within a fabric), you configure the software to scan the SCSI-bus at startup. When asked if you are running the Agent in a cluster, answer y.
The system starts the new Agent.
27. Enter the following and press the Enter key to unmount the CD-ROM:
# cd /
# umount /mnt
Reconfiguring the Agent
You can change your configuration using the SWCC Agent Configuration menu. To
access this menu, enter the following command:
# /usr/opt/SWCC510/scripts/swcc_config
The following is an example of the menu:

     SWCC Agent Configuration Utility
     ----------------------------------------
     Options Available Are:
     1) Add/Delete Client PC Information.
     2) Modify Storage Subsystem Information.
     3) Change SWCC Agent Password.
     4) Turn Email Notification ON/OFF.
     5) Add/Delete Email Notification Users.
     6) Restart Agent With Changes.
     7) Enable/Disable Agent.
     8) Cluster/No Cluster
     Enter a number or 'q' to QUIT:
CAUTION: After you make a change to the configuration, such as adding a client
system, you must stop and then start the Agent for your changes to take effect. When
you stop and then start the Agent, the Storage Windows for the subsystems
connected to the agent system lose their connection. To regain that connection, close
and then reopen the Storage Windows connected to the agent system after you restart
the Agent.
Table 4–9 Information Needed to Configure Agent

Adding a client system entry
For a client system to receive updates from the Agent, you must add it to the Agent’s list of client system entries. The Agent will only send information to client system entries that are on this list. In addition, adding a client system entry allows you to access the agent system from the Navigation Tree on that client system.

Adding a subsystem entry
You need to tell the Agent which subsystems it needs to monitor.

Client system
Network names for the computers on which the Client software runs.

Client system access options
The access privilege level controls the client system’s level of access to the subsystems.
0 = No Access−Can use the Client software to add a system to a Navigation Tree, set up a pager, and view properties of the controller and the system. You cannot use Client to open a Storage Window.
1 = Show Level Access−Can use the Client software to open a Storage Window, but you cannot make modifications in that window.
2 = Storage Subsystem Configuration Capability−Can use the Client software to make changes in a Storage Window to modify a subsystem configuration.
Table 4–9 Information Needed to Configure Agent (Continued)

Client system notification options
Note: For all of the client system notification options, local notification is available through an entry in the system error log file and Email (provided that Email notification in PAGEMAIL.COM has not been disabled).
0 = No Error Notification−No error notification is provided over the network.
1 = Notification via a TCP/IP Socket (Transmission Control Protocol/Internet Protocol)−Updates the Storage Window of subsystem changes provided AES is running. Required for Windows NT event logging and pager notification. If you do not select TCP/IP, you will need to refresh the Storage Window to obtain the latest status of a subsystem.
2 = Notification via the SNMP protocol (Simple Network Management Protocol)−Requires you to use an SNMP-monitoring program to view SNMP traps.
3 = Notification via both TCP/IP and SNMP−Combination of options 1 and 2.

Deleting a client system entry
When you remove a client system from the Agent’s list, you are instructing the Agent to stop sending updates to that client system. In addition, you will be unable to access this agent system from the Navigation Tree.

Email notification
When an error is logged, the Agent executes the PAGEMAIL.COM command procedure (file pagemail.com in directory sys$manager; SYSTEM is the default account). You can modify this file for the Agent to log errors in a log file, to change the account to which the Agent sends messages, and to change the level of errors for which you are notified. The Client does not need to be running to perform these actions.

Monitoring interval in seconds
How often the subsystem is monitored.

Password
Must be a text string of 4 to 16 characters. It can be entered from the client system to gain configuration access. It can be changed through the SWCC Agent Configuration menu.
Removing the Agent
1. Enter the following and press the Enter key:
# setld -d SWCC510
The procedure to remove the Agent asks the following:
Are you sure you want to delete all Agent data?
2. Do one of the following:
❏ Select option y and then press the Enter key. All configuration files for the Agent are deleted.
❏ Select option n and press the Enter key. The computer saves the configuration
data, which includes the following:
▲ Subsystem list of entries
▲ Client system access options
▲ Password
▲ Email notification
You are able to use the data if you reinstall the Agent or install a newer version of the
Agent.
Chapter 5
Configuration Procedures
This chapter describes procedures to configure a subsystem that uses Fibre Channel fabric
topology. In fabric topology, the controller connects to its hosts through switches.
The following information is included in this chapter:
■ “Establishing a Local Connection,” page 5–2
■ “Setting Up a Single Controller,” page 5–3
■ “Setting Up a Controller Pair,” page 5–9
■ “Configuring Devices,” page 5–15
■ “Configuring a Stripeset,” page 5–15
■ “Configuring a Mirrorset,” page 5–15
■ “Configuring a RAIDset,” page 5–16
■ “Configuring a Striped Mirrorset,” page 5–17
■ “Configuring a Single-Disk Unit (JBOD),” page 5–18
■ “Configuring a Partition,” page 5–18
■ “Assigning Unit Numbers and Unit Qualifiers,” page 5–19
■ “Configuration Options,” page 5–21
Use the command line interpreter (CLI) or StorageWorks Command Console (SWCC) to
configure the subsystem. This chapter uses CLI, which is the low-level interface to the
controller. To use SWCC for configuration, see the SWCC online help for assistance.
IMPORTANT: These configuration procedures assume that controllers and cache modules are
installed in a fully functional and populated enclosure and that the PCMCIA cards are installed.
To install a controller or cache module and the PCMCIA card, see the Compaq
StorageWorks HSG80 Array Controller ACS Version 8.6 Maintenance and Service Guide.
Establishing a Local Connection
A local connection is required to configure the controller until a command console LUN
(CCL) is established using the CLI. Communication with the controller can be through the
CLI or SWCC.
The maintenance port, shown in Figure 5–1, provides a way to connect a maintenance
terminal. The maintenance terminal can be an EIA-423 compatible terminal or a computer
running a terminal emulator program. The maintenance port accepts a standard RS-232
jack. The maintenance port cable shown in Figure 5–1 has a 9-pin connector molded onto
the end for a PC connection. If you need a terminal connection or a 25-pin connection, you
can order optional cabling.
Figure 5–1. Maintenance port connection
1 Maintenance Port Cable
2 Maintenance Port
CAUTION: The maintenance port generates, uses, and can radiate radio-frequency
energy through its cables. This energy may interfere with radio and television
reception. Disconnect all maintenance port cables when not communicating with the
controller through the local connection.
Setting Up a Single Controller
Power On and Establish Communication
1. Connect the computer or terminal to the controller as shown in Figure 5–1. The
connection to the computer is through the COM1 or COM2 port.
2. Turn on the computer or terminal.
3. Apply power to the storage subsystem.
4. Verify that the computer or terminal is configured as follows:
❏ 9600 baud
❏ 8 data bits
❏ 1 stop bit
❏ no parity
5. Press Enter. A copyright notice and the CLI prompt appear, indicating that you
established a local connection with the controller.
Cabling a Single Controller
The cabling for a single controller is shown in Figure 5–2.
NOTE: It is a good idea to plug only the controller cables into the switch. The host cables are
plugged into the switch as part of the configuration procedure (“Configuring a Single Controller
Using CLI,” page 5–4).
Figure 5–2. Single controller cabling
1 Controller
2 Host port 1
3 Host port 2
4 Cable from the switch to the host Fibre Channel adapter
5 FC switch
Configuring a Single Controller Using CLI
Configuring a single controller using CLI involves the following processes:
■ Verify the Node ID and Check for Any Previous Connections.
■ Configure Controller Settings.
■ Restart the Controller.
■ Set Time and Verify all Commands.
■ Plug in the FC Cable and Verify Connections.
■ Repeat Procedure for Each Host Adapter.
■ Verify Installation.
Verify the Node ID and Check for Any Previous
Connections
1. Enter a SHOW THIS command to verify the node ID:
SHOW THIS
See “Worldwide Names (Node IDs and Port IDs),” page 1–28, for the location of the
sticker.
The node ID is located in the third line of the SHOW THIS result:
HSG80> SHOW THIS
Controller:
     HSG80 ZG80900583 Software V8.6F-1, Hardware E11
     NODE_ID          = 5000-1FE1-0001-3F00
     ALLOCATION_CLASS = 0
If the node ID is present, go to step 5.
If the node ID is all zeroes, enter the node ID and checksum, which are located on a
sticker on the controller enclosure. Use the following syntax to enter the node ID:
SET THIS NODE_ID=NNNN-NNNN-NNNN-NNNN nn
Where: NNNN-NNNN-NNNN-NNNN is the node ID, and nn is the checksum.
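For example, to set the node ID shown in the previous display with an illustrative checksum of A9 (use the actual values printed on the sticker):
SET THIS NODE_ID=5000-1FE1-0001-3F00 A9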
2. When using a controller that is not new from the factory, enter the following command
to take it out of any failover mode that may have been configured previously:
SET NOFAILOVER
If the controller did have a failover mode previously set, the CLI may report an error.
Clear the error with this command:
CLEAR_ERRORS CLI
3. Enter the following command to remove any previously configured connections:
SHOW CONNECTIONS
A list of named connections, if any, is displayed.
4. Delete these connections by entering the following command:
DELETE !NEWCON01
Repeat the Delete command for each of the listed connections. When completed, no
connections will be displayed.
Configure Controller Settings
5. Set the SCSI version to SCSI-3 for Tru64 UNIX V5.1x only, using the following
command:
SET THIS SCSI_VERSION=SCSI-3
NOTE: Setting the SCSI version to SCSI-3 does not make the controller fully compliant with the
SCSI-3 standards.
6. Assign an identifier for the communication LUN (also called the command console
LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the
range 1 to 32767, and which is different from the identifiers of all units. Use the
following syntax:
SET THIS IDENTIFIER=N
The identifier must be unique among all the controllers attached to the fabric within the specified allocation class.
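For example, to assign an illustrative identifier of 88 to the CCL:
SET THIS IDENTIFIER=88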
7. Set the topology for the controller. If both ports are used, set topology for both ports:
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
If the controller is not factory-new, it may have another topology set, in which case
these commands will result in an error message. If this happens, take both ports offline
first, then reset the topology:
SET THIS PORT_1_TOPOLOGY=OFFLINE
SET THIS PORT_2_TOPOLOGY=OFFLINE
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
Restart the Controller
8. Restart the controller, using the following command:
RESTART THIS
Set Time and Verify all Commands
9. Set the time on the controller by entering the following syntax:
SET THIS TIME=DD-MMM-YYYY:HH:MM:SS
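For example, to set the controller clock to the date and time used in the configuration example in Chapter 7:
SET THIS TIME=10-Mar-2001:12:30:34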
10. Use the FRUTIL utility to set up the battery discharge timer. Enter the following
command to start FRUTIL:
RUN FRUTIL
When FRUTIL asks if you intend to replace the battery, answer “Y”:
Do you intend to replace this controller's cache battery? Y/N [N] Y
FRUTIL will print out a procedure, but will not give you a prompt. Ignore the
procedure and press the Enter key.
11. Set up any additional optional controller settings, such as changing the CLI prompt.
See the SET THIS CONTROLLER/OTHER CONTROLLER command in the Compaq StorageWorks
HSG80 Array Controller ACS Version 8.6 CLI Reference Guide for the format of
optional settings.
12. Verify that all commands have taken effect. Use the following command:
SHOW THIS
Verify node ID, allocation class, SCSI version, failover mode, identifier, and port
topology.
The following sample is a result of a SHOW THIS command, with the areas of interest in
bold.
Controller:
     HSG80 (C) DEC ZG09030200 Software V8.6F, Hardware 0000
     NODE_ID          = 5000-1FE1-0000-0000
     ALLOCATION_CLASS = 0
     SCSI_VERSION     = SCSI-3
     Not configured for dual-redundancy
     Device Port SCSI address 7
     Time: 10-Mar-1999:12:30:34
     Command Console LUN is lun 0
Host PORT_1:
     Reported PORT_ID = 5000-1FE1-0000-0001
     PORT_1_TOPOLOGY  = FABRIC (fabric up)
     Address          = 210313
Host PORT_2:
     Reported PORT_ID = 5000-1FE1-0000-0002
     PORT_2_TOPOLOGY  = FABRIC (fabric up)
     Address          = 210513
     NOREMOTE_COPY
.......
13. Turn on the switches, if not done previously.
If you want to communicate with the Fibre Channel switches through Telnet, set an IP
address for each switch. See the manuals that came with the switches for details.
Plug in the FC Cable and Verify Connections
14. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the
SHOW CONNECTIONS command to view the connection table:
SHOW CONNECTIONS
15. Rename the connections to something meaningful to the system and easy to remember.
For example, to assign the name ANGEL1A1 to connection !NEWCON01, enter:
RENAME !NEWCON01 ANGEL1A1
For a recommended naming convention, see “Naming Connections,” page 1–13.
16. Specify the operating system for the connection:
SET ANGEL1A1 OPERATING_SYSTEM=TRU64_UNIX
17. Verify the changes:
SHOW CONNECTIONS
Mark or tag all Fibre Channel cables at both ends for ease of maintenance.
Repeat Procedure for Each Host Adapter
18. Repeat steps 15, 16, and 17 for each of that adapter’s host connections, or delete the
unused connections from the table.
19. For each host adapter, repeat steps 14 through 18.
Verify Installation
To verify installation for your Tru64_UNIX host, enter one of the following commands:
For V4.0G, use:
file /dev/rrx*c | grep HSG80
For V5.1x, use:
file /dev/cport/*
Your host computer should report one CCL device special file for each HSG80 configured.
Setting Up a Controller Pair
Power Up and Establish Communication
1. Connect the computer or terminal to the controller as shown in Figure 5–1. The
connection to the computer is through the COM1 or COM2 ports.
2. Turn on the computer or terminal.
3. Apply power to the storage subsystem.
4. Configure the computer or terminal as follows:
❏ 9600 baud
❏ 8 data bits
❏ 1 stop bit
❏ no parity
5. Press Enter. A copyright notice and the CLI prompt appear, indicating that you
established a local connection with the controller.
Cabling a Controller Pair
The cabling for a controller pair is shown in Figure 5–3.
NOTE: It is a good idea to plug only the controller cables into the switch. The host cables are
plugged into the switch as part of the configuration procedure (“Configuring a Controller Pair
Using CLI,” page 5–10).
Figure 5–3 shows a controller pair with failover cabling: one HBA per server, with the HSG80 controllers in transparent failover mode.
Figure 5–3. Controller pair failover cabling
1 Controller A
2 Controller B
3 Host port 1
4 Host port 2
5 Cable from the switch to the host FC adapter
6 FC switch
Configuring a Controller Pair Using CLI
Configuring a controller pair using CLI involves the following processes:
■ Verify the Node ID and Check for Any Previous Connections.
■ Configure Controller Settings.
■ Restart the Controller.
■ Set Time and Verify All Commands.
■ Plug in the FC Cable and Verify Connections.
■ Repeat Procedure for Each Host Adapter.
■ Verify Installation.
1. Enter a SHOW THIS command to verify the node ID:
SHOW THIS
See “Worldwide Names (Node IDs and Port IDs),” page 1–28, for the location of the
sticker.
The node ID is located in the third line of the SHOW THIS result:
HSG80> SHOW THIS
Controller:
     HSG80 ZG80900583 Software V8.6F-1, Hardware E11
     NODE_ID          = 5000-1FE1-0001-3F00
     ALLOCATION_CLASS = 0
If the node ID is present, go to step 5.
If the node ID is all zeroes, enter the node ID and checksum, which are located on a
sticker on the controller enclosure. Use the following syntax to enter the node ID:
SET THIS NODE_ID=NNNN-NNNN-NNNN-NNNN nn
Where: NNNN-NNNN-NNNN-NNNN is the node ID and nn is the checksum.
2. If the controller is not new from the factory, enter the following command to take it out
of any failover mode that may have been previously configured:
SET NOFAILOVER
If the controller did have a failover mode previously set, the CLI may report an error.
Clear the error with this command:
CLEAR_ERRORS CLI
3. Enter the following command to remove any previously configured connections:
SHOW CONNECTIONS
A list of named connections, if any, is displayed.
4. Delete these connections by entering the following command:
DELETE !NEWCON01
Repeat the Delete command for each of the listed connections. When completed, no
connections will be displayed.
Configure Controller Settings
5. Set the SCSI version to SCSI-3 for Tru64 UNIX V5.1x only, using the following
command:
SET THIS SCSI_VERSION=SCSI-3
NOTE: Setting the SCSI version to SCSI-3 does not make the controller fully compliant with the
SCSI-3 standards.
6. Set the topology for the controller. If both ports are used, set topology for both ports:
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
If the controller is not factory-new, it may have another topology set, in which case
these commands will result in an error message. If this happens, first take both ports
offline, then reset the topology:
SET THIS PORT_1_TOPOLOGY=OFFLINE
SET THIS PORT_2_TOPOLOGY=OFFLINE
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
Restart the Controller
7. Restart the controller, using the following command:
RESTART THIS
It takes about a minute for the CLI prompt to come back after a RESTART command.
Set Time and Verify All Commands
8. Set the time on the controller by entering the following syntax:
SET THIS TIME=DD-MMM-YYYY:HH:MM:SS
9. Use the FRUTIL utility to set up the battery discharge timer. Enter the following
command to start FRUTIL:
RUN FRUTIL
When FRUTIL asks if you intend to replace the battery, answer “Y”:
Do you intend to replace this controller's cache battery? Y/N [N] Y
FRUTIL will print out a procedure, but will not give you a prompt. Ignore the
procedure and press Enter.
10. Set up any additional optional controller settings, such as changing the CLI prompt.
See the SET THIS CONTROLLER/OTHER CONTROLLER command in the Compaq StorageWorks
HSG80 Array Controller ACS Version 8.6 CLI Reference Guide for the format of
optional settings. Perform this step on both controllers.
11. Verify that all commands have taken effect by entering the following command:
SHOW THIS
12. Verify node ID, allocation class, SCSI version, failover mode, identifier, and port
topology. The following display is a sample result of a SHOW THIS command, with the
areas of interest in bold.
Controller:
     HSG80 (C) DEC ZG09030200 Software V8.6F-1, Hardware E11
     NODE_ID          = 5000-1FE1-0000-0000
     ALLOCATION_CLASS = 0
     SCSI_VERSION     = SCSI-3
     Configured for dual-redundancy with ZG09030245
     Device Port SCSI address 7
     Time: 10-Mar-2001:12:30:34
     Command Console LUN is lun 0
Host PORT_1:
     Reported PORT_ID = 5000-1FE1-0000-0001
     PORT_1_TOPOLOGY  = FABRIC (standby)
     Address          = 210313
Host PORT_2:
     Reported PORT_ID = 5000-1FE1-0000-0002
     PORT_2_TOPOLOGY  = FABRIC (fabric up)
     Address          = 210513
     NOREMOTE_COPY
.......
.......
13. Turn on the switches if not done previously.
If you want to communicate with the FC switches through Telnet, set an IP address for
each switch. See the manuals that came with the switches for details.
Plug in the FC Cable and Verify Connections
14. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS
command to view the connection table:
SHOW CONNECTIONS
The first connection will have one or more entries in the connection table. Each
connection will have a default name of the form !NEWCONxx, where xx is a number
representing the order in which the connection was added to the connection table.
For a description of why plugging in one adapter can result in multiple connections,
see “Numbers of Connections,” page 1–13.
15. Rename the connections to something meaningful to the system and easy to remember.
For example, to assign the name ANGEL1A1 to connection !NEWCON01, enter:
RENAME !NEWCON01 ANGEL1A1
Compaq recommends using a naming convention; see “Naming Connections,” page 1–13.
16. Specify the operating system for the connection:
SET ANGEL1A1 OPERATING_SYSTEM=TRU64_UNIX
17. Verify the changes:
SHOW CONNECTIONS
Mark or tag all Fibre Channel cables at both ends for ease of maintenance.
Repeat Procedure for Each Host Adapter Connection
18. Repeat steps 15, 16, and 17 for each of that adapter’s host connections or delete the
unwanted connections from the table.
19. For each host adapter, repeat steps 14 through 18.
Verify Installation
To verify installation for your Tru64_UNIX host, enter one of the following commands:
For V4.0G, use:
file /dev/rrx*c | grep HSG80
For V5.1x, use:
file /dev/cport/*
Your host computer should report one CCL device special file for each HSG80 configured.
Configuring Devices
The disks on the device bus of the HSG80 can be configured manually or with the
CONFIG utility. The CONFIG utility is easier. Invoke CONFIG with the following
command:
RUN CONFIG
CONFIG takes about two minutes to discover and to map the configuration of a
completely populated storage system.
Configuring a Stripeset
1. Create the stripeset by adding its name to the controller's list of storagesets and by
specifying the disk drives it contains. Use the following syntax:
ADD STRIPESET STRIPESET-NAME DISKNNNNN DISKNNNNN.......
2. Initialize the stripeset, specifying any desired switches:
INITIALIZE STRIPESET-NAME SWITCHES
See “Initialization Switches” on page 2–25 for a description of the initialization
switches.
3. Verify the stripeset configuration:
SHOW STRIPESET-NAME
4. Assign the stripeset a unit number to make it accessible by the hosts. See “Assigning
Unit Numbers and Unit Qualifiers” on page 5–19.
For example:
The commands to create Stripe1, a stripeset consisting of three disks (DISK10000,
DISK20000, and DISK10100) and having a chunksize of 128:
ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK10100
INITIALIZE STRIPE1 CHUNKSIZE=128
SHOW STRIPE1
Configuring a Mirrorset
1. Create the mirrorset by adding its name to the controller's list of storagesets and by
specifying the disk drives it contains. Optionally, you can append mirrorset switch
values:
ADD MIRRORSET MIRRORSET-NAME DISKNNNNN DISKNNNNN SWITCHES
NOTE: See the ADD MIRRORSET command in the Compaq StorageWorks HSG80 Array
Controller ACS Version 8.6 CLI Reference Guide for a description of the mirrorset switches.
2. Initialize the mirrorset, specifying any desired switches:
INITIALIZE MIRRORSET-NAME SWITCHES
See “Initialization Switches” on page 2–25 for a description of the initialization
switches.
3. Verify the mirrorset configuration:
SHOW MIRRORSET-NAME
4. Assign the mirrorset a unit number to make it accessible by the hosts. See “Assigning
Unit Numbers and Unit Qualifiers” on page 5–19.
For example:
The commands to create Mirr1, a mirrorset with two members (DISK10000 and
DISK20000), and to initialize it using default switch settings:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
INITIALIZE MIRR1
SHOW MIRR1
Configuring a RAIDset
1. Create the RAIDset by adding its name to the controller's list of storagesets and by
specifying the disk drives it contains. Optionally, you can specify RAIDset switch
values:
ADD RAIDSET RAIDSET-NAME DISKNNNNN DISKNNNNN DISKNNNNN SWITCHES
NOTE: See the ADD RAIDSET command in the Compaq StorageWorks HSG80 Array Controller
ACS Version 8.6 CLI Reference Guide for a description of the RAIDset switches.
2. Initialize the RAIDset, specifying any desired switches:
INITIALIZE RAIDSET-NAME SWITCH
NOTE: Compaq recommends that you allow initial reconstruct to complete before allowing I/O
to the RAIDset. Not doing so may generate forced errors at the host level. To determine whether
initial reconstruct has completed, enter SHOW RAIDSET FULL.
See “Initialization Switches” on page 2–25 for a description of the initialization
switches.
3. Verify the RAIDset configuration:
SHOW RAIDSET-NAME
4. Assign the RAIDset a unit number to make it accessible by the hosts. See “Assigning
Unit Numbers and Unit Qualifiers” on page 5–19.
For example:
The commands to create RAID1, a RAIDset with three members (DISK10000,
DISK20000, and DISK10100) and to initialize it with default values:
ADD RAIDSET RAID1 DISK10000 DISK20000 DISK10100
INITIALIZE RAID1
SHOW RAID1
Configuring a Striped Mirrorset
1. Create, but do not initialize, at least two mirrorsets.
See “Configuring a Mirrorset” on page 5–15.
2. Create a stripeset and specify the mirrorsets it contains:
ADD STRIPESET STRIPESET-NAME MIRRORSET-1 MIRRORSET-2....MIRRORSET-N
3. Initialize the striped mirrorset, specifying any desired switches:
INITIALIZE STRIPESET-NAME SWITCH
See “Initialization Switches” on page 2–25 for a description of the initialization
switches.
4. Verify the striped mirrorset configuration:
SHOW STRIPESET-NAME
5. Assign the striped mirrorset a unit number to make it accessible by the hosts. See
“Assigning Unit Numbers and Unit Qualifiers” on page 5–19.
For example:
The commands to create Stripe1, a striped mirrorset that comprises Mirr1, Mirr2, and
Mirr3, each of which is a two-member mirrorset:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
ADD MIRRORSET MIRR2 DISK20100 DISK10100
ADD MIRRORSET MIRR3 DISK10200 DISK20200
ADD STRIPESET STRIPE1 MIRR1 MIRR2 MIRR3
INITIALIZE STRIPE1
SHOW STRIPE1
Configuring a Single-Disk Unit (JBOD)
1. Initialize the disk drive, specifying any desired switches:
INITIALIZE DISK-NAME SWITCHES
See “Initialization Switches” on page 2–25 for a description of the initialization
switches.
2. Verify the configuration by entering the following command:
SHOW DISK-NAME
3. Assign the disk a unit number to make it accessible by the hosts. See “Assigning Unit
Numbers and Unit Qualifiers” on page 5–19.
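For example:
The commands to initialize DISK20300 with default settings, verify it, and assign it unit D4 (the same pairing used in “Assigning a Unit Number to a Single (JBOD) Disk” later in this chapter):
INITIALIZE DISK20300
SHOW DISK20300
ADD UNIT D4 DISK20300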
Configuring a Partition
1. Initialize the storageset or disk drive, specifying any desired switches:
INITIALIZE STORAGESET-NAME SWITCHES
or
INITIALIZE DISK-NAME SWITCHES
See “Initialization Switches” on page 2–25 for a description of the initialization
switches.
2. Create each partition in the storageset or disk drive by indicating the partition's size.
Also specify any desired switch settings:
CREATE_PARTITION STORAGESET-NAME SIZE=N SWITCHES
or
CREATE_PARTITION DISK-NAME SIZE=N SWITCHES
where N is the percentage of the disk drive or storageset that will be assigned to the
partition. Enter SIZE=LARGEST to let the controller assign the largest free space
available to the partition.
NOTE: See the CREATE_PARTITION command in the Compaq StorageWorks HSG80 Array
Controller ACS Version 8.6 CLI Reference Guide for a description of the partition switches.
3. Verify the partitions:
SHOW STORAGESET-NAME
or
SHOW DISK-NAME
The partition number appears in the first column, followed by the size and starting
block of each partition.
4. Assign the partition a unit number to make it accessible by the hosts. See “Assigning
Unit Numbers and Unit Qualifiers” on page 5–19.
For example:
The commands to create RAID1, a three-member RAIDset, then partition it into two
storage units are shown below.
ADD RAIDSET RAID1 DISK10000 DISK20000 DISK10100
INITIALIZE RAID1
CREATE_PARTITION RAID1 SIZE=25
CREATE_PARTITION RAID1 SIZE=LARGEST
SHOW RAID1
Assigning Unit Numbers and Unit Qualifiers
Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the
host to access. As the units are added, their properties can be specified through the use of
command qualifiers, which are discussed in detail under the ADD UNIT command in the
Compaq StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide.
Because of different SCSI versions, refer to the section “Assigning Unit Numbers
Depending on SCSI_VERSION,” page 1–19. The choice for SCSI_VERSION affects how
certain unit numbers and host connection offsets interact.
Each unit can be reserved for the exclusive use of a host or group of hosts. See
“Restricting Host Access in Transparent Failover Mode,” page 1–21 and “Restricting Host
Access in Multiple-Bus Failover Mode,” page 1–24.
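For example, to create unit D102 with access restricted to the four connections from host RED (the connection names used in the Chapter 7 configuration example):
ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL
SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2)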
Assigning a Unit Number to a Storageset
To assign a unit number to a storageset, use the following syntax:
ADD UNIT UNIT-NUMBER STORAGESET-NAME
For example:
To assign unit D102 to RAIDset R1, use the following command:
ADD UNIT D102 R1
Assigning a Unit Number to a Single (JBOD) Disk
To assign a unit number to a single (JBOD) disk, use the following syntax:
ADD UNIT UNIT-NUMBER DISK-NAME
For example:
To assign unit D4 to DISK20300, use the following command:
ADD UNIT D4 DISK20300
Assigning a Unit Number to a Partition
To assign a unit number to a partition, use the following syntax:
ADD UNIT UNIT-NUMBER STORAGESET-NAME PARTITION=PARTITION-NUMBER
For example:
To assign unit D100 to partition 3 of mirrorset mirr1, use the following command:
ADD UNIT D100 MIRR1 PARTITION=3
Preferring Units
In multiple-bus failover mode, individual units can be preferred to a specific controller.
For example, to prefer unit D102 to “this controller,” use the following command:
SET D102 PREFERRED_PATH=THIS
RESTART commands must be issued to both controllers for this command to take effect:
RESTART OTHER_CONTROLLER
RESTART THIS_CONTROLLER
NOTE: The controllers need to restart together for the preferred settings to take effect. The
RESTART this_controller command must be entered immediately after the RESTART
other_controller command.
Configuration Options
Changing the CLI Prompt
To change the CLI prompt, enter a 1- to 16-character string as the new prompt, according
to the following syntax:
SET THIS_CONTROLLER PROMPT = “NEW PROMPT”
If you are configuring dual-redundant controllers, also change the CLI prompt on the
“other controller.” Use the following syntax:
SET OTHER_CONTROLLER PROMPT = “NEW PROMPT”
NOTE: It is suggested that the prompt name reflect some information about the controllers. For
example, if the subsystem is the third one in a lab, name the top controller prompt, LAB3A and
the bottom controller, LAB3B.
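For example, using the lab convention suggested in the note above:
SET THIS_CONTROLLER PROMPT = “LAB3A”
SET OTHER_CONTROLLER PROMPT = “LAB3B”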
Mirroring Cache
To specify mirrored cache, use the following syntax:
SET THIS MIRRORED_CACHE
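NOTE: This command causes the controllers to restart.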
Adding Disk Drives
If you add new disk drives to the subsystem, the disk drives must be added to the
controllers’ list of known devices:
■ To add one new disk drive to the list of known devices, use the following syntax (an example follows this list):
ADD DISK DISKNNNNN P T L
■ To add several new disk drives to the list of known devices, enter the following
command:
RUN CONFIG
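For example, a sketch of the single-drive form, assuming the usual convention that the digits in the disk name encode its port, target, and LUN (here port 1, target 3, LUN 0; substitute the PTL values for your drive):
ADD DISK DISK10300 1 3 0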
Adding a Disk Drive to the Spareset
The spareset is a collection of spare disk drives that are available to the controller should it
need to replace a failed member of a RAIDset or mirrorset.
NOTE: This procedure assumes that the disks that you are adding to the spareset have already
been added to the controller's list of known devices.
To add the disk drive to the controller's spareset list, use the following syntax:
ADD SPARESET DISKNNNNN
Repeat this step for each disk drive you want to add to the spareset.
For example:
The following example shows the syntax for adding DISK11300 and DISK21300 to the
spareset.
ADD SPARESET DISK11300
ADD SPARESET DISK21300
Removing a Disk Drive from the Spareset
You can delete disks in the spareset if you need to use them elsewhere in your subsystem.
1. Show the contents of the spareset by entering the following command:
SHOW SPARESET
2. Delete the desired disk drive by entering the following command:
DELETE SPARESET DISKNNNNN
NOTE: The RUN CONFIG command does not delete disks from the controllers’ device table if a disk has been physically removed or replaced. In this case, you must use the command DELETE DISKNNNNN.
3. Verify the contents of the spareset by entering the following command:
SHOW SPARESET
Enabling Autospare
With AUTOSPARE enabled on the failedset, any new disk drive that is inserted into the
PTL location of a failed disk drive is automatically initialized and placed into the spareset.
If initialization fails, the disk drive remains in the failedset until you manually delete it
from the failedset.
To enable autospare, use the following command:
SET FAILEDSET AUTOSPARE
To disable autospare, use the following command:
SET FAILEDSET NOAUTOSPARE
During initialization, AUTOSPARE checks to see if the new disk drive contains metadata.
Metadata is information the controller writes on the disk drive when the disk drive is
configured into a storageset. Therefore, the presence of metadata indicates that the disk
drive belongs to, or has been used by, a storageset. If the disk drive contains metadata,
initialization stops. (A new disk drive will not contain metadata but a repaired or reused
disk drive might. To erase metadata from a disk drive, add it to the controller's list of
devices, then set it to be notransportable and initialize it.)
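For example, a minimal sketch of erasing metadata from a reused drive (the disk name and PTL values are illustrative):
ADD DISK DISK10000 1 0 0
SET DISK10000 NOTRANSPORTABLE
INITIALIZE DISK10000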
Deleting a Storageset
NOTE: If the storageset you are deleting is partitioned, you must delete each partitioned unit
before you can delete the storageset.
1. Show the storageset’s configuration:
SHOW STORAGESET-NAME
2. Delete the unit number that uses the storageset. Use the following command:
DELETE UNIT-NUMBER
3. Delete the storageset. Use the following command:
DELETE STORAGESET-NAME
4. Verify the configuration:
SHOW STORAGESET-NAME
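For example:
The commands to delete RAIDset RAID1 and its unit D102 (the pairing used earlier in this chapter):
SHOW RAID1
DELETE D102
DELETE RAID1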
Changing Switches for a Storageset or Device
You can optimize a storageset or device at any time by changing the switches that are
associated with it. Remember to update the storageset profile when changing its switches.
Displaying the Current Switches
To display the current switches for a storageset or single-disk unit, enter a SHOW command, specifying the FULL switch:
SHOW STORAGESET-NAME FULL
or
SHOW DEVICE-NAME FULL
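For example, to display all switches for RAIDset RAID1:
SHOW RAID1 FULL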
Changing RAIDset and Mirrorset Switches
Use the SET storageset-name command to change the RAIDset and Mirrorset switches
associated with an existing storageset.
For example, the following command changes the replacement policy for RAIDset RAID1
to BEST_FIT:
SET RAID1 POLICY=BEST_FIT
Changing Device Switches
Use the SET device-name command to change the device switches.
For example, to request a data transfer rate of 20 MHz for DISK10000:
SET DISK10000 TRANSFER_RATE_REQUESTED=20MHZ
Changing Initialize Switches
The initialization switches cannot be changed without destroying the data on the
storageset or device. These switches are integral to the formatting and can only be changed
by reinitializing the storageset. Initializing a storageset is similar to formatting a disk
drive; all data is destroyed during this procedure.
Changing Unit Switches
Use the SET unit-name command to change the characteristics of a unit.
For example, the following command enables write protection for unit D100:
SET D100 WRITE_PROTECT
Chapter 6
Verifying Storage Configuration from the Host
This chapter briefly describes how to verify that multiple paths exist to virtual disk units
under Tru64 UNIX V5.1x.
After configuring units (virtual disks) through either the CLI or SWCC, access the new
storage by performing one of the following methods:
■ Issuing the following command to rescan the bus:
# hwmgr -scan scsi
or
■ Restarting the host.
After the host restarts, verify that the disk is correctly presented to the host. The command
to use consists of the following syntax:
# hwmgr -view dev -category disk
The disk information returned is shown in the following table:
HWID: Device Name        Mfg  Model     Location
123: /dev/disk/dsk56c    DEC  HSG80CCL  bus-5-targ-0-lun-0
124: /dev/disk/dsk57c    DEC  HSG80     bus-6-targ-0-lun-1
125: /dev/disk/dsk58c    DEC  HSG80     bus-5-targ-0-lun-2
126: /dev/disk/dsk59c    DEC  HSG80     bus-5-targ-0-lun-60
127: /dev/disk/dsk60c    DEC  HSGCCL    bus-5-targ-10-lun-0
128: /dev/disk/dsk61c    DEC  HSG80     bus-5-targ-10-lun-60
129: /dev/disk/dsk62c    DEC  HSG80     bus-5-targ-10-lun-61
130: /dev/disk/dsk63c    DEC  HSG80     bus-5-targ-10-lun-62
Chapter 7
Configuration Example Using CLI
This chapter presents an example of how to configure a storage subsystem using the
Command Line Interpreter (CLI). The CLI configuration example shown assumes:
■ A normal, new controller pair, meaning:
❏ NODE ID set
❏ No previous failover mode
❏ No previous topology set
■ Full array with no expansion cabinet
■ PCMCIA cards installed in both controllers
A storage subsystem example is shown in Figure 7–1. The example system contains three
non-clustered TRU64_UNIX hosts, as shown in Figure 7–2. From the hosts’ point of
view, each host will have four paths to its own virtual disks. The resulting virtual system,
from the host’s point of view, is shown in Figure 7–3.
Figure 7–1 shows an example storage system map for the BA370 enclosure. Details on
building your own map are described in Chapter 2. Templates to help you build your
storage map are supplied in Appendix A.
Figure 7–1. Example storage map for the BA370 Enclosure. The map places the following units across device ports 1 through 6:
■ Target 0: RAIDset R1 (unit D102) on DISK10000, DISK20000, DISK30000, DISK40000, DISK50000, and DISK60000
■ Target 1: RAIDset R2 (unit D120) on DISK10100, DISK20100, DISK30100, DISK40100, DISK50100, and DISK60100
■ Target 2: stripeset S1 (unit D0) built from mirrorsets M1 (DISK10200, DISK20200) and M2 (DISK30200, DISK40200), and mirrorset M3 (unit D1) on DISK50200 and DISK60200
■ Target 3: stripeset S2 (unit D2) on DISK10300, DISK20300, DISK30300, and DISK40300; single-disk unit D101 on DISK50300; and a spareset member on DISK60300
The figure shows a representative multiple-bus failover configuration. Restricting the
access of unit D101 to host BLUE can be done by enabling only the connections to host
BLUE. At least two connections must be enabled for multiple-bus failover to work. For
most operating systems, it is desirable to have all connections to the host enabled. The
example system, shown in Figure 7–2, contains three non-clustered TRU64_UNIX hosts.
The port 1 link is separate from the port 2 link (that is, port 1 of both controllers is on one loop or fabric, and port 2 of both controllers is on another); therefore, each adapter has two connections.
Figure 7–2. Example system. Three non-clustered TRU64_UNIX hosts (RED, GREY, and BLUE), each with two Fibre Channel adapters (FCA1 and FCA2), connect through two switches or hubs to controllers A and B. One switch carries the port 1 connections (RED1A1, GREY1A1, BLUE1A1, RED1B1, GREY1B1, and BLUE1B1); the other carries the port 2 connections (RED2A2, GREY2A2, BLUE2A2, RED2B2, GREY2B2, and BLUE2B2). Host port 1 and host port 2 are active on both controllers, and units D0, D1, D2, D101, D102, and D120 are visible to all ports.
NOTE: FCA = Fibre Channel Adapter
The following figure represents units that are logical or virtual disks comprised of
storagesets configured from physical disks.
"RED"
"GREY"
"BLUE"
D1
D0
D2
D101
D102
D120
CXO7110B
Figure 7–3. Example virtual system layout from the hosts’ point of view
CLI Configuration Example
The series of commands and information presented in this section provides a CLI
configuration example. Text conventions used in this section are listed below:
■ Text in italics indicates an action you take.
■ Text in THIS FORMAT indicates a command you type. Be certain to press Enter after each command.
■ Text enclosed within a box indicates information that is displayed by the CLI interpreter.
■ NOTE text provides additional helpful information.
NOTE: “This” controller is top controller (A).
Plug serial cable from maintenance terminal into top controller.
CLEAR_ERRORS CLI
SET MULTIBUS_FAILOVER COPY=THIS
CLEAR_ERRORS CLI
SET THIS SCSI_VERSION=SCSI-3
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
SET OTHER PORT_1_TOPOLOGY=FABRIC
SET OTHER PORT_2_TOPOLOGY=FABRIC
SET THIS ALLOCATION_CLASS=0
RESTART OTHER
Configuration Example Using CLI
RESTART THIS
SET THIS TIME=10-Mar-2001:12:30:34
RUN FRUTIL
Do you intend to replace this controller's cache battery? Y/N [Y]
Y
Plug serial cable from maintenance terminal into bottom controller.
NOTE: Bottom controller (B) becomes “this” controller.
RUN FRUTIL
Do you intend to replace this controller's cache battery? Y/N [Y]
Y
SET THIS MIRRORED_CACHE
NOTE: This command causes the controllers to restart.
SET THIS PROMPT=“BTVS BOTTOM”
SET OTHER PROMPT=“BTVS TOP”
SHOW THIS
SHOW OTHER
Plug in the Fibre Channel cable from the first adapter in host “RED.”
SHOW CONNECTIONS
RENAME !NEWCON00 RED1B1
SET RED1B1 OPERATING_SYSTEM=TRU64_UNIX
RENAME !NEWCON01 RED1A1
SET RED1A1 OPERATING_SYSTEM=TRU64_UNIX
SHOW CONNECTIONS
NOTE: Connection table sorts alphabetically.
Connection  Operating                              Unit
Name        System      Controller  Port  Address  Status    Offset
RED1A1      TRU64_UNIX  OTHER       1     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1      TRU64_UNIX  THIS        1     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Mark or tag both ends of Fibre Channel cables.
Plug in the Fibre Channel cable from the second adapter in host “RED.”
SHOW CONNECTIONS
Connection  Operating                              Unit
Name        System      Controller  Port  Address  Status    Offset
!NEWCON02   TRU64_UNIX  THIS        2     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
!NEWCON03   TRU64_UNIX  OTHER       2     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1      TRU64_UNIX  OTHER       1     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
...
RENAME !NEWCON02 RED2B2
SET RED2B2 OPERATING_SYSTEM=TRU64_UNIX
RENAME !NEWCON03 RED2A2
SET RED2A2 OPERATING_SYSTEM=TRU64_UNIX
SHOW CONNECTIONS
Connection  Operating                              Unit
Name        System      Controller  Port  Address  Status    Offset
RED1A1      TRU64_UNIX  OTHER       1     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1      TRU64_UNIX  THIS        1     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2      TRU64_UNIX  OTHER       2     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2      TRU64_UNIX  THIS        2     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Mark or tag both ends of Fibre Channel cables.
Repeat this process to add connections from the other two hosts. The resulting connection
table should appear similar to the following:
Connection  Operating                              Unit
Name        System      Controller  Port  Address  Status    Offset
GREY1A1     TRU64_UNIX  OTHER       1     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY1B1     TRU64_UNIX  THIS        1     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2A2     TRU64_UNIX  OTHER       2     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2B2     TRU64_UNIX  THIS        2     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1A1     TRU64_UNIX  OTHER       1     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1B1     TRU64_UNIX  THIS        1     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2A2     TRU64_UNIX  OTHER       2     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2B2     TRU64_UNIX  THIS        2     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1      TRU64_UNIX  OTHER       1     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1      TRU64_UNIX  THIS        1     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2      TRU64_UNIX  OTHER       2     XXXXXX   OL other  0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2      TRU64_UNIX  THIS        2     XXXXXX   OL this   0
            HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RUN CONFIG
ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000
INITIALIZE R1
ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL
SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2)
ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100
INITIALIZE R2
ADD UNIT D120 R2 DISABLE_ACCESS_PATH=ALL
SET D120 ENABLE_ACCESS_PATH=(BLUE1A1, BLUE1B1, BLUE2A2, BLUE2B2)
ADD MIRRORSET M1 DISK10200 DISK20200
ADD MIRRORSET M2 DISK30200 DISK40200
ADD STRIPESET S1 M1 M2
INITIALIZE S1
ADD UNIT D0 S1 DISABLE_ACCESS_PATH=ALL
SET D0 ENABLE_ACCESS_PATH=(GREY1A1, GREY1B1, GREY2A2, GREY2B2)
ADD MIRRORSET M3 DISK50200 DISK60200
INITIALIZE M3
ADD UNIT D1 M3 DISABLE_ACCESS_PATH=ALL
SET D1 ENABLE_ACCESS_PATH=(BLUE1A1, BLUE1B1, BLUE2A2, BLUE2B2)
ADD STRIPESET S2 DISK10300 DISK20300 DISK30300 DISK40300
INITIALIZE S2
ADD UNIT D2 S2 DISABLE_ACCESS_PATH=ALL
SET D2 ENABLE_ACCESS_PATH=(GREY1A1, GREY1B1, GREY2A2, GREY2B2)
INITIALIZE DISK50300
ADD UNIT D101 DISK50300 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=(BLUE1A1, BLUE1B1, BLUE2A2, BLUE2B2)
ADD SPARESET DISK60300
SHOW UNITS ALL
Chapter 8
Backing Up the Subsystem, Cloning Data for Backup, and Moving Storagesets
This chapter describes some common procedures that are not mentioned previously in this
guide. The following information is included in this chapter:
■ “Backing Up the Subsystem Configuration,” page 8–1
■ “Cloning Data for Backup,” page 8–2
■ “Moving Storagesets,” page 8–6
Backing Up the Subsystem Configuration
The controller stores information about the subsystem configuration in its nonvolatile
memory. This information could be lost if the controller fails or when you replace a
module in the subsystem.
Use the following command to produce a display that shows if the save configuration
feature is active and which devices are being used to store the configuration.
SHOW THIS_CONTROLLER FULL
The resulting display includes a line that indicates whether the save configuration feature is active and how many devices have copies of the configuration; the last line shows on how many devices the configuration is backed up.
The SHOW DEVICES FULL command shows which disk drives are set up to back up the
configuration. The syntax for this command is shown below:
SHOW DEVICES FULL
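The save configuration feature is enabled on a per-device basis when a container is initialized with the SAVE_CONFIGURATION switch; a minimal sketch, assuming DISK10000 should hold a copy of the configuration (initializing destroys any data on the container; see the INITIALIZE command in the Compaq StorageWorks HSG80 Array Controller ACS Version 8.6 CLI Reference Guide):
INITIALIZE DISK10000 SAVE_CONFIGURATION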
Cloning Data for Backup
Use the CLONE utility to duplicate the data on any unpartitioned single-disk unit,
stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning
operation is complete, you can back up the clones rather than the storageset or single-disk
unit, which can continue to service its I/O load. When you are cloning a mirrorset,
CLONE does not need to create a temporary mirrorset. Instead, it adds a temporary
member to the mirrorset and copies the data onto this new member.
The CLONE utility creates a temporary, two-member mirrorset for each member in a
single-disk unit or stripeset. Each temporary mirrorset contains one disk drive from the
unit you are cloning and one disk drive onto which CLONE copies the data. During the
copy operation, the unit remains online and active so that the clones contain the most
up-to-date data.
After the CLONE utility copies the data from the members to the clones, it restores the
unit to its original configuration and creates a clone unit you can back up. The CLONE
utility uses steps shown in Figure 8–1 to duplicate each member of a unit.
Figure 8–1. Steps the CLONE utility follows for duplicating unit members: for each member of the unit (for example, DISK10300), CLONE creates a temporary mirrorset containing that member and a new member, copies the data onto the new member, then restores the unit to its original configuration and builds the clone unit from the copies.
Use the following steps to clone a single-disk unit, stripeset, or mirrorset:
1. Establish a connection to the controller that accesses the unit you want to clone.
2. Start CLONE using the following command:
RUN CLONE
3. When prompted, enter the unit number of the unit you want to clone.
4. When prompted, enter a unit number for the clone unit that CLONE will create.
5. When prompted, indicate how you would like the clone unit to be brought online:
either automatically or only after your approval.
6. When prompted, enter the disk drives you want to use for the clone units.
7. Back up the clone unit.
The following example shows the commands you would use to clone storage unit D98.
The clone command terminates after it creates storage unit D99, a clone or copy of D98.
RUN CLONE
CLONE LOCAL PROGRAM INVOKED
UNITS AVAILABLE FOR CLONING:
98
ENTER UNIT TO CLONE? 98
CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98.
ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT? 99
THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS:
1. CLONE WILL PAUSE AFTER ALL MEMBERS HAVE BEEN COPIED. THE USER MUST THEN
PRESS RETURN TO CAUSE THE NEW UNIT TO BE ADDED.
2. AFTER ALL MEMBERS HAVE BEEN COPIED, THE UNIT WILL BE ADDED AUTOMATICALLY.
UNDER WHICH ABOVE METHOD SHOULD THE NEW UNIT BE ADDED [ ]? 1
DEVICES AVAILABLE FOR CLONE TARGETS:
DISK20200 (SIZE=832317)
DISK20300 (SIZE=832317)
USE AVAILABLE DEVICE DISK20200(SIZE=832317) FOR MEMBER DISK10300(SIZE=832317)
(Y,N) [Y]? Y
MIRROR DISK10300 C_MA
SET C_MA NOPOLICY
SET C_MA MEMBERS=2
SET C_MA REPLACE=DISK20200
DEVICES AVAILABLE FOR CLONE TARGETS:
DISK20300 (SIZE=832317)
USE AVAILABLE DEVICE DISK20300(SIZE=832317) FOR MEMBER DISK10000(SIZE=832317)
(Y,N) [Y]? Y
MIRROR DISK10000 C_MB
SET C_MB NOPOLICY
SET C_MB MEMBERS=2
SET C_MB REPLACE=DISK20300
COPY IN PROGRESS FOR EACH NEW MEMBER. PLEASE BE PATIENT...
.
.
COPY FROM DISK10300 TO DISK20200 IS 100% COMPLETE
COPY FROM DISK10000 TO DISK20300 IS 100% COMPLETE
PRESS RETURN WHEN YOU WANT THE NEW UNIT TO BE CREATED
REDUCE DISK20200 DISK20300
UNMIRROR DISK10300
UNMIRROR DISK10000
ADD MIRRORSET C_MA DISK20200
ADD MIRRORSET C_MB DISK20300
ADD STRIPESET C_ST1 C_MA C_MB
INIT C_ST1 NODESTROY
ADD UNIT D99 C_ST1
D99 HAS BEEN CREATED. IT IS A CLONE OF D98.
CLONE - NORMAL TERMINATION
Moving Storagesets
You can move a storageset from one subsystem to another without destroying its data. You
also can follow the steps in this section to move a storageset to a new location within the
same subsystem.
CAUTION: Move only normal storagesets. Do not move storagesets that are
reconstructing or reduced, or data corruption will result.
See the release notes for the version of your controller software for information on which
drives can be supported.
CAUTION: Never initialize any container during this procedure; initializing destroys the
data this procedure is meant to preserve.
Use the following procedure to move a storageset, while maintaining the data the
storageset contains:
1. Show the details for the storageset you want to move. Use the following command:
SHOW STORAGESET-NAME
2. Label each member with its name and PTL location.
If you do not have a storageset map for your subsystem, you can enter the LOCATE
command for each member to find its PTL location. Use the following command:
LOCATE DISK-NAME
To cancel the locate command, enter the following:
LOCATE CANCEL
3. Delete the unit number shown in the “Used by” column of the SHOW storageset-name
command. Use the following syntax:
DELETE UNIT-NUMBER
4. Delete the storageset shown in the “Name” column of the SHOW storageset-name
command. Use the following syntax:
DELETE STORAGESET-NAME
5. Delete each disk drive, one at a time, that the storageset contained. Use the following
syntax:
DELETE DISK-NAME
DELETE DISK-NAME
DELETE DISK-NAME
6. Remove the disk drives and move them to their new PTL locations.
7. Again add each disk drive to the controller's list of valid devices. Use the following
syntax:
ADD DISK DISK-NAME PTL-LOCATION
ADD DISK DISK-NAME PTL-LOCATION
ADD DISK DISK-NAME PTL-LOCATION
8. Recreate the storageset by adding its name to the controller's list of valid storagesets
and by specifying the disk drives it contains. (Although you have to recreate the
storageset from its original disks, you do not have to add the disks in their
original order.) Use the following syntax to recreate the storageset:
ADD STORAGESET-NAME DISK-NAME DISK-NAME
9. Represent the storageset to the host by giving it a unit number the host can recognize.
You can use the original unit number or create a new one. Use the following syntax:
ADD UNIT UNIT-NUMBER STORAGESET-NAME
The following example moves unit D100 to another cabinet. D100 is the RAIDset
RAID99, which consists of members DISK10000, DISK10100, DISK20000, and DISK20100.
DELETE D100
DELETE RAID99
DELETE DISK10000
DELETE DISK10100
DELETE DISK20000
DELETE DISK20100
ADD DISK DISK10000
ADD DISK DISK10100
ADD DISK DISK20000
ADD DISK DISK20100
ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100
ADD UNIT D100 RAID99
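The example omits the inspection commands from steps 1 and 2. As a brief sketch using
the same names (RAID99, DISK10000, and D100 all come from the example above), you
might display the storageset and locate one of its members before the move, and confirm
the recreated unit afterward:
SHOW RAID99
LOCATE DISK10000
LOCATE CANCEL
SHOW D100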
Appendix A
Subsystem Profile Templates
This appendix contains storageset profiles to copy and use to create your system profiles.
It also contains an enclosure template to help you keep track of the location of devices
and storagesets in your shelves. Four (4) templates will be needed for the subsystem.
NOTE: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the
disk enclosures’ physical location in the rack. Disk enclosures 6, 5, and 4 are stacked above the
controller enclosure, and disk enclosures 1, 2, and 3 are stacked below the controller enclosure.
■ “Storageset Profile,” page A-2
■ “Storage Map Template 1 for the BA370 Enclosure,” page A-3
■ “Storage Map Template 2 for the second BA370 Enclosure,” page A-4
■ “Storage Map Template 3 for the third BA370 Enclosure,” page A-5
■ “Storage Map Template 4 for the Model 4214R Disk Enclosure,” page A-6
■ “Storage Map Template 5 for the Model 4254 Disk Enclosure,” page A-7
■ “Storage Map Template 6 for the Model 4310R Disk Enclosure,” page A-9
■ “Storage Map Template 7 for the Model 4350R Disk Enclosure,” page A-11
■ “Storage Map Template 8 for the Model 4314R Disk Enclosure,” page A-12
■ “Storage Map Template 9 for the Model 4354R Disk Enclosure,” page A-14
Storageset Profile

Type of Storageset:
_____ Mirrorset   __X__ RAIDset   _____ Stripeset   _____ Striped Mirrorset   _____ JBOD

Storageset Name ______________________________________________________________
Disk Drives __________________________________________________________________
Unit Number __________________________________________________________________

Partitions:
Unit # ____   Unit # ____   Unit # ____   Unit # ____
Unit # ____   Unit # ____   Unit # ____   Unit # ____

RAIDset Switches:
  Reconstruction Policy         Reduced Membership        Replacement Policy
  ___ Normal (default)          ___ No (default)          ___ Best performance (default)
  ___ Fast                      ___ Yes, missing: ____    ___ Best fit
                                                          ___ None

Mirrorset Switches:
  Replacement Policy               Copy Policy             Read Source
  ___ Best performance (default)   ___ Normal (default)    ___ Least busy (default)
  ___ Best fit                     ___ Fast                ___ Round robin
  ___ None                                                 ___ Disk drive: ____

Initialize Switches:
  Chunk size                Save Configuration     Metadata
  ___ Automatic (default)   ___ No (default)       ___ Destroy (default)
  ___ 64 blocks             ___ Yes                ___ Retain
  ___ 128 blocks
  ___ 256 blocks
  ___ Other: ____

Unit Switches:
  Caching                        Access by following hosts enabled
  Read caching          ____     _____________________________________________
  Read-ahead caching    ____     _____________________________________________
  Write-back caching    ____     _____________________________________________
  Write-through caching ____     _____________________________________________
Storage Map Template 1 for the BA370 Enclosure
Use this template for:
■ BA370 single-enclosure subsystems
■ first enclosure of multiple BA370 enclosure subsystems

              Port
Targets     1        2        3        4        5        6
   3      D10300   D20300   D30300   D40300   D50300   D60300
   2      D10200   D20200   D30200   D40200   D50200   D60200
   1      D10100   D20100   D30100   D40100   D50100   D60100
   0      D10000   D20000   D30000   D40000   D50000   D60000
Storage Map Template 2 for the second BA370 Enclosure
Use this template for the second enclosure of multiple BA370 enclosure subsystems.

              Port
Targets     1        2        3        4        5        6
  11      D11100   D21100   D31100   D41100   D51100   D61100
  10      D11000   D21000   D31000   D41000   D51000   D61000
   9      D10900   D20900   D30900   D40900   D50900   D60900
   8      D10800   D20800   D30800   D40800   D50800   D60800
Storage Map Template 3 for the third BA370 Enclosure
Use this template for the third enclosure of multiple BA370 enclosure subsystems.

              Port
Targets     1        2        3        4        5        6
  15      D11500   D21500   D31500   D41500   D51500   D61500
  14      D11400   D21400   D31400   D41400   D51400   D61400
  13      D11300   D21300   D31300   D41300   D51300   D61300
  12      D11200   D21200   D31200   D41200   D51200   D61200
Storage Map Template 4 for the Model 4214R Disk Enclosure
Use this template for a subsystem with a three-shelf Model 4214R disk enclosure
(single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.

Model 4214R Disk Enclosure Shelf 1 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk10000  Disk10100  Disk10200  Disk10300  Disk10400  Disk10500  Disk10800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk10900  Disk11000  Disk11100  Disk11200  Disk11300  Disk11400  Disk11500

Model 4214R Disk Enclosure Shelf 2 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk20000  Disk20100  Disk20200  Disk20300  Disk20400  Disk20500  Disk20800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk20900  Disk21000  Disk21100  Disk21200  Disk21300  Disk21400  Disk21500

Model 4214R Disk Enclosure Shelf 3 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk30000  Disk30100  Disk30200  Disk30300  Disk30400  Disk30500  Disk30800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk30900  Disk31000  Disk31100  Disk31200  Disk31300  Disk31400  Disk31500
Storage Map Template 5 for the Model 4254 Disk Enclosure
Use this template for a subsystem with a three-shelf Model 4254 disk enclosure
(dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.

Model 4254 Disk Enclosure Shelf 1 (dual-bus)
           Bus A
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk10000  Disk10100  Disk10200  Disk10300  Disk10400  Disk10500  Disk10800

           Bus B
Bay        8          9          10         11         12         13         14
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk20000  Disk20100  Disk20200  Disk20300  Disk20400  Disk20500  Disk20800

Model 4254 Disk Enclosure Shelf 2 (dual-bus)
           Bus A
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk30000  Disk30100  Disk30200  Disk30300  Disk30400  Disk30500  Disk30800

           Bus B
Bay        8          9          10         11         12         13         14
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk40000  Disk40100  Disk40200  Disk40300  Disk40400  Disk40500  Disk40800

Model 4254 Disk Enclosure Shelf 3 (dual-bus)
           Bus A
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk50000  Disk50100  Disk50200  Disk50300  Disk50400  Disk50500  Disk50800

           Bus B
Bay        8          9          10         11         12         13         14
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800
Storage Map Template 6 for the Model 4310R Disk Enclosure
Use this template for a subsystem with a six-shelf Model 4310R disk enclosure
(single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.

Model 4310R Disk Enclosure Shelf 6 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk60000  Disk60100  Disk60200  Disk60300  Disk60400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk60500  Disk60800  Disk61000  Disk61100  Disk61200

Model 4310R Disk Enclosure Shelf 5 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk50000  Disk50100  Disk50200  Disk50300  Disk50400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk50500  Disk50800  Disk51000  Disk51100  Disk51200

Model 4310R Disk Enclosure Shelf 4 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk40000  Disk40100  Disk40200  Disk40300  Disk40400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk40500  Disk40800  Disk41000  Disk41100  Disk41200

Model 4310R Disk Enclosure Shelf 1 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk10000  Disk10100  Disk10200  Disk10300  Disk10400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk10500  Disk10800  Disk11000  Disk11100  Disk11200

Model 4310R Disk Enclosure Shelf 2 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk20000  Disk20100  Disk20200  Disk20300  Disk20400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk20500  Disk20800  Disk21000  Disk21100  Disk21200

Model 4310R Disk Enclosure Shelf 3 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk30000  Disk30100  Disk30200  Disk30300  Disk30400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk30500  Disk30800  Disk31000  Disk31100  Disk31200
Storage Map Template 7 for the Model 4350R Disk Enclosure
Use this template for a subsystem with a three-shelf Model 4350R disk enclosure
(dual-bus). You can have up to three Model 4350R disk enclosures per controller shelf.

Model 4310R Disk Enclosure Shelf 6 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk60000  Disk60100  Disk60200  Disk60300  Disk60400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk60500  Disk60800  Disk61000  Disk61100  Disk61200

Model 4310R Disk Enclosure Shelf 5 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk50000  Disk50100  Disk50200  Disk50300  Disk50400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk50500  Disk50800  Disk51000  Disk51100  Disk51200

Model 4310R Disk Enclosure Shelf 4 (single-bus)
Bay        1          2          3          4          5
SCSI ID    00         01         02         03         04
DISK ID    Disk40000  Disk40100  Disk40200  Disk40300  Disk40400

Bay        6          7          8          9          10
SCSI ID    05         08         10         11         12
DISK ID    Disk40500  Disk40800  Disk41000  Disk41100  Disk41200
Storage Map Template 8 for the Model 4314R Disk Enclosure
Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can
have a maximum of six Model 4314R disk enclosures with each Model 2200 controller
enclosure.

Model 4314R Disk Enclosure Shelf 6 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk60900  Disk61000  Disk61100  Disk61200  Disk61300  Disk61400  Disk61500

Model 4314R Disk Enclosure Shelf 5 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk50000  Disk50100  Disk50200  Disk50300  Disk50400  Disk50500  Disk50800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk50900  Disk51000  Disk51100  Disk51200  Disk51300  Disk51400  Disk51500

Model 4314R Disk Enclosure Shelf 4 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk40000  Disk40100  Disk40200  Disk40300  Disk40400  Disk40500  Disk40800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk40900  Disk41000  Disk41100  Disk41200  Disk41300  Disk41400  Disk41500

Model 4314R Disk Enclosure Shelf 1 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk10000  Disk10100  Disk10200  Disk10300  Disk10400  Disk10500  Disk10800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk10900  Disk11000  Disk11100  Disk11200  Disk11300  Disk11400  Disk11500

Model 4314R Disk Enclosure Shelf 2 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk20000  Disk20100  Disk20200  Disk20300  Disk20400  Disk20500  Disk20800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk20900  Disk21000  Disk21100  Disk21200  Disk21300  Disk21400  Disk21500

Model 4314R Disk Enclosure Shelf 3 (single-bus)
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk30000  Disk30100  Disk30200  Disk30300  Disk30400  Disk30500  Disk30800

Bay        8          9          10         11         12         13         14
SCSI ID    09         10         11         12         13         14         15
DISK ID    Disk30900  Disk31000  Disk31100  Disk31200  Disk31300  Disk31400  Disk31500
Storage Map Template 9 for the Model 4354R Disk Enclosure
Use this template for a subsystem with a three-shelf Model 4354R disk enclosure
(dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.

Model 4354R Disk Enclosure Shelf 1 (dual-bus)
           SCSI Bus A
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk10000  Disk10100  Disk10200  Disk10300  Disk10400  Disk10500  Disk10800

           SCSI Bus B
Bay        8          9          10         11         12         13         14
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk20000  Disk20100  Disk20200  Disk20300  Disk20400  Disk20500  Disk20800

Model 4354R Disk Enclosure Shelf 2 (dual-bus)
           SCSI Bus A
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk30000  Disk30100  Disk30200  Disk30300  Disk30400  Disk30500  Disk30800

           SCSI Bus B
Bay        8          9          10         11         12         13         14
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk40000  Disk40100  Disk40200  Disk40300  Disk40400  Disk40500  Disk40800

Model 4354R Disk Enclosure Shelf 3 (dual-bus)
           SCSI Bus A
Bay        1          2          3          4          5          6          7
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk50000  Disk50100  Disk50200  Disk50300  Disk50400  Disk50500  Disk50800

           SCSI Bus B
Bay        8          9          10         11         12         13         14
SCSI ID    00         01         02         03         04         05         08
DISK ID    Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800
Appendix B
Installing, Configuring, and Removing the Client
The following information is included in this appendix:
■ “Why Install the Client?,” page B–1
■ “Before You Install the Client,” page B–2
■ “Installing the Client,” page B–2
■ “Troubleshooting the Client Installation,” page B–3
■ “Adding the Storage Subsystem and its Host to the Navigation Tree,” page B–5
■ “Removing the Command Console Client,” page B–7
■ “Where to Find Additional Information,” page B–8
Why Install the Client?
The Client monitors and manages a storage subsystem by performing the following tasks:
■ Create mirrored device group (RAID 1)
■ Create striped device group (RAID 0)
■ Create striped mirrored device group (RAID 0+1)
■ Create striped parity device group (3/5)
■ Create an individual device (JBOD)
■ Monitor many subsystems at once
■ Set up pager notification
Before You Install the Client
1. Verify you are logged into an account that is a member of the administrator group.
2. Check the software product description that came with the software for a list of
supported hardware.
3. Verify that you have the SNMP service installed on the computer. SNMP must be
installed on the computer for this software to work properly. The Client software uses
SNMP to receive traps from the Agent. The SNMP service is available on the
Windows NT or Windows 2000 installation CD-ROM. To verify that you have the
SNMP service:
❏ For Windows NT, double-click Services in Start > Settings > Control Panel. The
entry for SNMP is shown in this window. If you install the SNMP service and you
already have Windows NT Service Pack 6A on the computer, reinstall the service
pack after installing the SNMP service.
❏ For Windows 2000, click Start > Settings > Control Panel > Administrative Tools >
Component Services. The entry for SNMP is shown in the Component Services
window.
4. Read the release notes.
5. Read "Troubleshooting the Client Installation" in this appendix.
6. If you have the Command Console Client open, exit the Command Console Client.
7. If you have Command Console Client version 1.1b or earlier, remove the program with
the Windows Add/Remove Programs utility.
8. If you have a previous version of Command Console, you can save the Navigation Tree
configuration by copying the SWCC2.MDB file to another directory, as shown in the
sketch after this list. After you have installed the product, move SWCC2.MDB to the
directory to which you installed SWCC.
9. Install the HS-Series Agent. For more information, see Chapter 4.
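The following minimal sketch shows one way to save the Navigation Tree configuration
described in step 8. It assumes the default installation directory shown later in this
appendix (C:\Program Files\Compaq\SWCC) and a hypothetical backup directory named
C:\SWCCSAVE; substitute the paths used on your system:
C:\> mkdir C:\SWCCSAVE
C:\> copy "C:\Program Files\Compaq\SWCC\SWCC2.MDB" C:\SWCCSAVE
After installing the product, copy the file back to the installation directory to restore the
Navigation Tree.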
Installing the Client
1. Insert the CD-ROM into a computer running Windows 2000 with Service Pack 1 or
Windows NT 4.0 (Intel) with Service Pack 6A.
2. Using Microsoft Windows Explorer, navigate to:
\swcc
Double click SETUP.EXE
3. Select HSG80 Controller and click Next.
NOTE: If the computer does not find a previous installation, it will install the SWCC Navigation
Window and the CLI Window.
4. Follow the instructions on the screen. After you install the software, the Asynchronous
Event Service (AES) starts. AES is a service that runs in the background. It collects
and passes traps from the subsystems to the Navigation Tree and to individual pagers
(for example, to show that a disk has failed). AES needs to be running for the client
system to receive updates.
NOTE: For more information on AES, see Compaq StorageWorks Command Console Version 2.4
User Guide.
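If you need to verify that AES is running, one simple check on Windows NT or
Windows 2000 is to list the started services from a command prompt; AES should appear
in the list, though the exact display name may vary by version:
C:\> NET START
The NET START command with no arguments lists the services that are currently started.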
Troubleshooting the Client Installation
This section provides information on how to resolve some of the problems that may
appear when installing the Client software:
■ Invalid Network Port Assignments During Installation
■ “There is no disk in the drive” Message
Invalid Network Port Assignments During Installation
SWCC Clients and Agents communicate by using sockets. The SWCC installation
attempts to add entries into each system’s list of services (the services file or, for UCX, the
local services database). If the SWCC installation finds an entry in the local services file
with the same name as the one it wants to add, it assumes the one in the file is correct.
The SWCC installation may display a message, stating that it cannot upgrade the services
file. This happens if it finds an entry in the local services file with the same number as the
one it wants to add, but with a different name. In that case, appropriate port numbers must
be obtained for the network and added manually to the services file.
There are two default port numbers, one for Command Console (4998) and the other for
the device-specific Agent and Client software, such as the Fibre Channel Interconnect
Client and Agent (4989). There are two exceptions. The following software has two
default port numbers:
■ The KZPCC Agent and Client, (4991 and 4985)
■ The RA200 Agent and Client, (4997 and 4995)
If the Network Information Services (NIS) are being used to provide named port lookup
services, contact the network administrator to add the correct ports.
The following shows how the network port assignments appear in the services file:
spgui              4998/tcp    #Command Console
ccdevmgt           4993/tcp    #Device Management Client and Agent
kzpccconnectport   4991/tcp    #KZPCC Client and Agent
kzpccdiscoveryport 4985/tcp    #KZPCC Client and Agent
ccfabric           4989/tcp    #Fibre Channel Interconnect Agent
spagent            4999/tcp    #HS-Series Client and Agent
spagent3           4994/tcp    #HSZ22 Client and Agent
ccagent            4997/tcp    #RA200 Client and Agent
spagent2           4995/tcp    #RA200 Client and Agent
“There is no disk in the drive” Message
When you install the Command Console Client, the software checks the shortcuts on the
desktop and in the Start menu. The installation will check the shortcuts of all users for that
computer, even if they are not currently logged on. You may receive an error message if
any of these shortcuts point to empty floppy drives, empty CD-ROM drives, or missing
removable disks. Do one of the following:
■ Ignore the error message by clicking Ignore.
■ Replace the removable disks, and place a disk in the floppy drive and a CD-ROM in
the CD-ROM drive. Then, click Retry.
Adding the Storage Subsystem and its Host to the Navigation Tree
The Navigation Tree enables you to manage storage over the network by using the Storage
Window. If you plan to use pager notification, you must add the storage subsystem to the
Navigation Tree.
1. Verify that you have properly installed and configured the HS-Series Agent on the
storage subsystem host.
2. Click Start > Programs > Command Console > StorageWorks Command Console.
Client displays the Navigation Window. The Navigation Window lets you monitor and
manage many storage subsystems over the network.
Figure B–1. Navigation window
3. Click File > Add System. The Add System window appears.
4. Type the host name or its TCP/IP address and click Apply.
5. Click Close.
Figure B–2. Navigation window showing storage host system “Atlanta”
6. Click the plus sign to expand the host icon. When expanded, the Navigation Window
displays an icon for the storage subsystem. To access the Storage Window for the
subsystem, double-click the Storage Window icon.
Figure B–3. Navigation window showing expanded “Atlanta” host icon
NOTE: You can create virtual disks by using the Storage Window. For more information on the
Storage Window, refer to Compaq StorageWorks Command Console Version 2.4 User Guide.
Removing the Command Console Client
Before you remove the Command Console Client (CCL) from the computer, remove AES.
This will prevent the system from reporting that a service failed to start every time the
system is restarted. Steps 1 and 2 remove AES; steps 3 through 6 remove the CCL.
NOTE: When you remove the CCL, the SWCC2.MDB file is deleted. This file contains the
Navigation Tree configuration. If you want to save this information, move the file to another
directory.
1. Click Start > Programs > Command Prompt and change to the directory to which you
installed the CCL.
2. Enter the following command:
C:\Program Files\Compaq\SWCC> AsyncEventService -remove
3. Do one of the following:
❏ On Windows NT 4.0, click Start > Settings > Control Panel, and double-click the
Add/Remove Programs icon in the Control Panel. The Add/Remove Program
Properties window appears.
❏ On Windows 2000, click Start > Settings > Control Panel > Add/Remove
Programs. The Add/Remove Program window appears.
4. Select Command Console in the window.
5. Do one of the following:
❏ On Windows NT 4.0, click Add/Remove.
❏ On Windows 2000, click Change/Remove.
6. Follow the instructions on the screen.
NOTE: This procedure removes only the Command Console Client (SWCC Navigation Window).
You can remove the HSG80 Client by using the Add/Remove program.
Where to Find Additional Information
You can find additional information about SWCC by referring to the online Help and to
Compaq StorageWorks Command Console Version 2.4 User Guide.
About the User Guide
Compaq StorageWorks Command Console Version 2.4 User Guide contains additional
information on how to use SWCC. Some of the topics in the user guide are the following:
■ About AES
■ Adding Devices
■ Adding Virtual Disks
■ Setting Up Pager Notification
■ How to Integrate SWCC with Compaq Insight Manager
■ Troubleshooting Information
About the Online Help
Most of the information about the Client is provided in the online Help. Online Help is
provided in two places:
■ Navigation Window – Online Help provides information on pager notification and a
tour of the Command Console Client, in addition to information on how to add a
system to the Navigation Tree.
■ Storage Window – Online Help provides detailed information about the Storage
Window, such as how to create virtual disks.
Glossary
This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary
of computer terms.
adapter
A device that converts the protocol and hardware interface of one bus type into
another without changing the function of the bus.
ACS
See array controller software.
AL_PA
See arbitrated loop physical address.
ANSI
Pronounced “ann-see.” Acronym for the American National Standards
Institute, an organization that develops standards used voluntarily by many
manufacturers within the USA. ANSI is not a government agency.
arbitrated loop
physical address
Abbreviated AL_PA. A one-byte value used to identify a port in an Arbitrated
Loop topology.
array controller
See controller.
array controller
software
Abbreviated ACS. Software contained on a removable ROM program card
that provides the operating system for the array controller.
asynchronous
Pertaining to events that are scheduled as the result of a signal asking for the
event; pertaining to that which is without any specified time relation. See also
synchronous.
autospare
A controller feature that automatically replaces a failed disk drive. To aid the
controller in automatically replacing failed disk drives, you can enable the
AUTOSPARE switch for the failedset causing physically replaced disk drives
to be automatically placed into the spareset. Also called “autonewspare.”
bad block
A data block that contains a physical defect.
bad block
replacement
Abbreviated BBR. A replacement routine that substitutes defect-free disk
blocks for those found to have defects. This process takes place in the
controller, transparent to the host.
backplane
The electronic printed circuit board into which you plug subsystem
devices—for example, the SBB or power supply.
BBR
See bad block replacement.
BIST
See built-in self-test.
bit
A single binary digit having a value of either 0 or 1. A bit is the smallest unit
of data a computer can process.
block
Also called a sector. The smallest collection of consecutive bytes addressable
on a disk drive. In integrated storage elements, a block contains 512 bytes of
data, error codes, flags, and the block address header.
bootstrapping
A method used to bring a system or device into a defined state by means of its
own action. For example, a machine routine whose first few instructions are
enough to bring the rest of the routine into the computer from an input device.
built-in self-test
A diagnostic test performed by the array controller software on the controller
policy processor.
byte
A binary character string made up of 8 bits operated on as a unit.
cache memory
A portion of memory used to accelerate read and write operations.
CDU
Cable distribution unit. The power entry device for StorageWorks cabinets.
The CDU provides the connections necessary to distribute power to the
cabinet shelves and fans.
channel
An interface that allows high speed transfer of large amounts of data. Another
term for a SCSI bus. See also SCSI.
chunk
A block of data written by the host.
chunk size
The number of data blocks, assigned by a system administrator, written to the
primary RAIDset or stripeset member before the remaining data blocks are
written to the next RAIDset or stripeset member.
CLCP
An abbreviation for code-load code-patch utility. This utility is used to
upgrade the controller and EMU software. It can also be used to patch the
controller software.
coax
A two-conductor wire in which one conductor completely wraps the other
with the two separated by insulation.
cold swap
A method of device replacement that requires the entire subsystem to be
turned off before the device can be replaced. See also hot swap and warm
swap.
command line
interpreter
(CLI) The configuration interface to operate the controller software.
concat commands
Concat commands implement storageset expansion features.
configuration file
A file that contains a representation of a storage subsystem configuration.
container
(1) Any entity that is capable of storing data, whether it is a physical device or
a group of physical devices. (2) A virtual, internal controller structure
representing either a single disk or a group of disk drives linked as a
storageset. Stripesets and mirrorsets are examples of storageset containers the
controller uses to create units.
controller
A hardware device that, with proprietary software, facilitates communications
between a host and one or more devices organized in an array. The HSG80
family controllers are examples of array controllers.
copying
A state in which data to be copied to the mirrorset is inconsistent with other
members of the mirrorset. See also normalizing.
copying member
Any member that joins the mirrorset after the mirrorset is created is regarded
as a copying member. Once all the data from the normal member (or
members) is copied to a normalizing or copying member, the copying member
then becomes a normal member. See also normalizing member.
CSR
An acronym for control and status register.
DAEMON
Pronounced “demon.” A program, usually associated with UNIX systems,
that performs a utility (housekeeping or maintenance) function without being
requested or even known of by the user. A daemon is a diagnostic and
execution monitor.
data center cabinet
A generic reference to large Compaq subsystem cabinets, such as the cabinets
in which StorageWorks components can be mounted.
data striping
The process of segmenting logically sequential data, such as a single file, so
that segments can be written to multiple physical devices (usually disk drives)
in a round-robin fashion. This technique is useful if the processor is capable of
reading or writing data faster than a single disk can supply or accept the data.
While data is being transferred from the first disk, the second disk can locate
the next segment.
device
See node and peripheral device.
differential I/O
module
A 16-bit I/O module with SCSI bus converter circuitry for extending a
differential SCSI bus. See also I/O module.
differential SCSI
bus
A bus in which a signal level is determined by the potential difference between
two wires. A differential bus is more robust and less subject to electrical noise
than is a single-ended bus.
DIMM
Dual inline Memory Module.
dirty data
The write-back cached data that has not been written to storage media, even
though the host operation processing the data has completed.
DMA
Direct Memory Access.
DOC
DWZZA-On-a-Chip. A SCSI bus extender chip used to connect a SCSI bus in
an expansion cabinet to the corresponding SCSI bus in another cabinet.
driver
A hardware device or a program that controls or regulates another device. For
example, a device driver is a driver developed for a specific device that allows
a computer to operate with the device, such as a printer or a disk drive.
dual-redundant
configuration
A controller configuration consisting of two active controllers operating as a
single controller. If one controller fails, the other controller assumes control of
the failing controller devices.
dual-simplex
A communications protocol that allows simultaneous transmission in both
directions in a link, usually with no flow control.
DUART
Dual universal asynchronous receiver and transmitter. An integrated circuit
containing two serial, asynchronous transceiver circuits.
ECB
External cache battery. The unit that supplies backup power to the cache
module in the event the primary power source fails or is interrupted.
ECC
Error checking and correction.
EDC
Error detection code.
EIA
The abbreviation for Electronic Industries Association. EIA is a standards
organization specializing in the electrical and functional characteristics of
interface equipment.
EMU
Environmental monitoring unit. A unit that provides increased protection
against catastrophic failures. Some subsystem enclosures include an EMU
which works with the controller to detect conditions such as failed power
supplies, failed blowers, elevated temperatures, and external air sense faults.
The EMU also controls certain cabinet hardware including DOC chips,
alarms, and fan speeds.
ESD
Electrostatic discharge. The discharge of potentially harmful static electrical
voltage as a result of improper grounding.
extended
subsystem
A subsystem in which two cabinets are connected to the primary cabinet.
external cache
battery
See ECB.
F_Port
A port in a fabric where an N_Port or NL_Port may attach.
fabric
A group of interconnections between ports that includes a fabric element.
failedset
A group of failed mirrorset or RAIDset devices automatically created by the
controller.
failover
The process that takes place when one controller in a dual-redundant
configuration assumes the workload of a failed companion controller. Failover
continues until the failed controller is repaired or replaced.
FC–AL
The Fibre Channel Arbitrated Loop standard. See Fibre Channel.
FC–ATM
ATM AAL5 over Fibre Channel
FC–FG
Fibre Channel Fabric Generic Requirements
FC–FP
Fibre Channel Framing Protocol (HIPPI on FC)
FC-GS-1
Fibre Channel Generic Services-1
FC–GS-2
Fibre Channel Generic Services-2
FC–IG
Fibre Channel Implementation Guide
FC–LE
Fibre Channel Link Encapsulation (ISO 8802.2)
FC–PH
The Fibre Channel Physical and Signaling standard.
FC–SB
Fibre Channel Single Byte Command Code Set
FC–SW
Fibre Channel Switched Topology and Switch Controls
FCC
Federal Communications Commission. The federal agency responsible for
establishing standards and approving electronic devices within the United
States.
FCC Class A
This certification label appears on electronic devices that can only be used in a
commercial environment within the United States.
FCC Class B
This certification label appears on electronic devices that can be used in either
a home or a commercial environment within the United States.
FCP
The mapping of SCSI-3 operations to Fibre Channel.
FDDI
Fiber Distributed Data Interface. An ANSI standard for 100 megabaud
transmission over fiber optic cable.
FD SCSI
The fast, narrow, differential SCSI bus with an 8-bit data transfer rate of 10
MB/s. See also FWD SCSI and SCSI.
fiber
A fiber or optical strand. Spelled fibre in Fibre Channel.
fiber optic cable
A transmission medium designed to transmit digital signals in the form of
pulses of light. Fiber optic cable is noted for its properties of electrical
isolation and resistance to electrostatic contamination.
Fibre Channel
A high speed, high-bandwidth serial protocol for channels and networks that
interconnect over twisted pair wires, coaxial cable or fiber optic cable. The
Fibre Channel Switched (FC-SW) (fabric) offers up to 16 million ports with
cable lengths of up to 10 kilometers. The Fibre Channel Arbitrated Loop
(FC-AL) topology offers speeds of up to 100 MB/s and up to 127
nodes, all connected in serial. In contrast to SCSI technology, Fibre Channel
does not require ID switches or terminators. The FC-AL loop may be
connected to a Fibre Channel fabric for connection to other nodes.
fibre channel
topology
An interconnection scheme that allows multiple Fibre Channel ports to
communicate with each other. For example, point-to-point, Arbitrated Loop,
and switched fabric are all Fibre Channel topologies.
FL_Port
A port in a fabric where an N_Port or an NL_Port may be connected.
flush
The act of writing dirty data from cache to a storage media.
FMU
Fault management utility.
forced errors
A data bit indicating a corresponding logical data block contains
unrecoverable data.
frame
An invisible unit used to transfer information in Fibre Channel.
FRU
Field replaceable unit. A hardware component that can be replaced at the
customer location by Compaq service personnel or qualified customer service
personnel.
FRUTIL
Field Replacement utility.
full duplex (n)
A communications system in which there is a capability for 2-way
transmission and acceptance between two sites at the same time.
full duplex (adj)
Pertaining to a communications method in which data can be transmitted and
received at the same time.
FWD SCSI
A fast, wide, differential SCSI bus with a maximum 16-bit data transfer rate of
20 MB/s. See also SCSI and FD SCSI.
GBIC
Gigabit Interface Converter. GBICs convert electrical signals to optical signals
(and vice-versa.) They are inserted into the ports of the Fibre Channel switch
and hold the Fibre Channel cables.
GLM
Gigabit link module
giga
A prefix indicating a billion (10⁹) units, as in gigabaud or gigabyte.
gigabaud
An encoded bit transmission rate of one billion (10⁹) bits per second.
gigabyte
A value normally associated with a disk drive’s storage capacity, meaning a
billion (10⁹) bytes. The decimal value 1024 is usually used for one thousand.
half-duplex (adj)
Pertaining to a communications system in which data can be either transmitted
or received but only in one direction at one time.
hard address
The AL_PA which an NL_Port attempts to acquire during loop initialization.
heterogeneous
host support
Also called noncooperating host support.
HIPPI–FC
Fibre Channel over HIPPI
host
The primary or controlling computer to which a storage subsystem is attached.
host adapter
A device that connects a host system to a SCSI bus. The host adapter usually
performs the lowest layers of the SCSI protocol. This function may be
logically and physically integrated into the host system.
hot disks
A disk containing multiple hot spots. Hot disks occur when the workload is
poorly distributed across storage devices, which prevents optimum subsystem
performance. See also hot spots.
hot spots
A portion of a disk drive frequently accessed by the host. Because the data
being accessed is concentrated in one area, rather than spread across an array
of disks providing parallel access, I/O performance is significantly reduced.
See also hot disks.
hot swap
A method of device replacement that allows normal I/O activity on a device
bus to remain active during device removal and insertion. The device being
removed or inserted is the only device that cannot perform operations during
this process. See also cold swap and warm swap.
hub
A device (concentrator) which performs some or all of the following
functions:
■ Automatic insertion of operational loop devices without disrupting the
existing configuration.
■ Automatic removal of failed loop devices without impacting the existing
configuration.
■ Provides a centralized (star) wiring configuration and maintenance point.
■ Provides central monitoring and management.
IBR
Initial Boot Record.
ILF
Illegal function.
INIT
Initialize input and output.
initiator
A SCSI device that requests an I/O process to be performed by another SCSI
device, namely, the SCSI target. The controller is the initiator on the device
bus. The host is the initiator on the host bus.
instance code
A four-byte value displayed in most text error messages and issued by the
controller when a subsystem error occurs. The instance code indicates when
during software processing the error was detected.
interface
A set of protocols used between components, such as cables, connectors, and
signal levels.
I/O
Refers to input and output functions.
I/O driver
The set of code in the kernel that handles the physical I/O to a device. This is
implemented as a fork process. Same as driver.
I/O interface
See interface.
I/O module
A 16-bit SBB shelf device that integrates the SBB shelf with either an 8-bit
single ended, 16-bit single-ended, or 16-bit differential SCSI bus.
I/O operation
The process of requesting a transfer of data from a peripheral device to
memory (or vice versa), the actual transfer of the data, and the processing and
overlaying activity to make both of those happen.
IPI
Intelligent Peripheral Interface. An ANSI standard for controlling peripheral
devices by a host computer.
IPI-3 Disk
Intelligent Peripheral Interface Level 3 for Disk
IPI-3 Tape
Intelligent Peripheral Interface Level 3 for Tape
JBOD
Just a bunch of disks. A term used to describe a group of single-device logical
units.
kernel
The most privileged processor access mode.
LBN
Logical Block Number.
L_port
A node or fabric port capable of performing arbitrated loop functions and
protocols. NL_Ports and FL_Ports are loop-capable ports.
LED
Light Emitting Diode.
link
A connection between two Fibre Channel ports consisting of a transmit fibre
and a receive fibre.
local connection
A connection to the subsystem using either its serial maintenance port or the
host SCSI bus. A local connection enables you to connect to one subsystem
controller within the physical range of the serial or host SCSI cable.
local terminal
A terminal plugged into the EIA-423 maintenance port located on the front
bezel of the controller. See also maintenance terminal.
logical bus
A single-ended bus connected to a differential bus by a SCSI bus signal
converter.
logical unit
A physical or virtual device addressable through a target ID number. LUNs
use their target bus connection to communicate on the SCSI bus.
logical unit number
A value that identifies a specific logical unit belonging to a SCSI target ID
number. A number associated with a physical device unit during a task’s I/O
operations. Each task in the system must establish its own correspondence
between logical unit numbers and physical devices.
logon
Also called login. A procedure whereby a participant, either a person or
network connection, is identified as being an authorized network participant.
loop
See arbitrated loop.
loop_ID
A seven-bit value, numbered contiguously from zero to 126 decimal, that
represents the 127 legal AL_PA values on a loop (not all of the 256 hex values
are allowed as AL_PA values per FC-AL).
loop tenancy
The period of time between the following events: when a port wins loop
arbitration and when the port returns to a monitoring state.
LRU
Least recently used. A cache term used to describe the block replacement
policy for read cache.
Mbps
Approximately one million (106) bits per second—that is, megabits per
second.
MBps
Approximately one million (106) bytes per second—that is, megabytes per
second.
maintenance
terminal
An EIA-423-compatible terminal used with the controller. This terminal is
used to identify the controller, enable host paths, enter configuration
information, and check the controller status. The maintenance terminal is not
required for normal operations.
See also local terminal.
member
A container that is a storage element in a RAID array.
metadata
The data written to a disk for the purposes of controller administration.
Metadata improves error detection and media defect management for the disk
drive. It is also used to support storageset configuration and partitioning.
Nontransportable disks also contain metadata to indicate they are uniquely
configured for StorageWorks environments. Metadata can be thought of as
“data about data.”
mirroring
The act of creating an exact copy or image of data.
mirrored
write-back caching
A method of caching data that maintains two copies of the cached data. The
copy is available if either cache module fails.
mirrorset
See RAID level 1.
MIST
Module Integrity Self-Test.
N_port
A port attached to a node for use with point-to-point topology or fabric
topology.
NL_port
A port attached to a node for use in all topologies.
network
In data communication, a configuration in which two or more terminals or
devices are connected to enable information transfer.
node
In data communications, the point at which one or more functional units
connect transmission lines. In Fibre Channel, a device that has at least one
N_Port or NL_Port.
Non-L_Port
A node or fabric port that is not capable of performing the Arbitrated Loop
functions and protocols. N_Ports and F_Ports are not loop-capable ports.
non-participating
mode
A mode within an L_Port that inhibits the port from participating in loop
activities. L_Ports in this mode continue to retransmit received transmission
words but are not permitted to arbitrate or originate frames. An L_Port in
non-participating mode may or may not have an AL_PA. See also
participating mode.
nominal
membership
The desired number of mirrorset members when the mirrorset is fully
populated with active devices. If a member is removed from a mirrorset, the
actual number of members may fall below the “nominal” membership.
nonredundant
controller
configuration
(1) A single controller configuration. (2) A controller configuration that does
not include a second controller.
normal member
A mirrorset member that, block-for-block, contains the same data as other
normal members within the mirrorset. Read requests from the host are always
satisfied by normal members.
normalizing
Normalizing is a state in which, block-for-block, data written by the host to a
mirrorset member is consistent with the data on other normal and normalizing
members. The normalizing state exists only after a mirrorset is initialized.
Therefore, no customer data is on the mirrorset.
normalizing
member
A mirrorset member whose contents are the same as all other normal and
normalizing members for data that has been written since the mirrorset was
created or lost cache data was cleared. A normalizing member is created by a
normal member when either all of the normal members fail or all of the
normal members are removed from the mirrorset. See also copying member.
NVM
Non-Volatile Memory. A type of memory where the contents survive power
loss. Also sometimes referred to as NVMEM.
OCP
Operator control panel. The control or indicator panel associated with a
device. The OCP is usually mounted on the device and is accessible to the
operator.
other controller
The controller in a dual-redundant pair that is connected to the controller
serving the current CLI session. See also this controller.
outbound fiber
One fiber in a link that carries information away from a port.
parallel data
transmission
A data communication technique in which more than one code element (for
example, bit) of each byte is sent or received simultaneously.
parity
A method of checking if binary numbers or characters are correct by counting
the ONE bits. In odd parity, the total number of ONE bits must be odd; in even
parity, the total number of ONE bits must be even. Parity information can be
used to correct corrupted data. RAIDsets use parity to improve the availability
of data.
parity bit
A binary digit added to a group of bits that checks to see if errors exist in the
transmission.
parity check
A method of detecting errors when data is sent over a communications line.
With even parity, the number of ones in a set of binary data should be even.
With odd parity, the number of ones should be odd.
participating mode
A mode within an L_Port that allows the port to participate in loop activities.
A port must have a valid AL_PA to be in participating mode.
PCM
Polycenter Console Manager.
PCMCIA
Personal Computer Memory Card Industry Association. An international
association formed to promote a common standard for PC card-based
peripherals to be plugged into notebook computers. The card commonly
known as a PCMCIA card is about the size of a credit card.
parity RAID
See RAIDset.
partition
A logical division of a container, represented to the host as a logical unit.
peripheral device
Any unit, distinct from the CPU and physical memory, that can provide the
system with input or accept any output from it. Terminals, printers, tape
drives, and disks are peripheral devices.
point-to-point
connection
A network configuration in which a connection is established between two,
and only two, terminal installations. The connection may include switching
facilities.
port
(1) In general terms, a logical channel in a communications system. (2) The
hardware and software used to connect a host controller to a communications
bus, such as a SCSI bus or serial bus.
Regarding the controller, the port is (1) the logical route for data in and out of
a controller that can contain one or more channels, all of which contain the
same type of data. (2) The hardware and software that connects a controller to
a SCSI device.
port_name
A 64-bit unique identifier assigned to each Fibre Channel port. The
Port_Name is communicated during the logon and port discovery process.
preferred address
The AL_PA which an NL_Port attempts to acquire first during initialization.
primary cabinet
The primary cabinet is the subsystem enclosure that contains the controllers,
cache modules, external cache batteries, and the PVA module.
private NL_Port
An NL_Port which does not attempt login with the fabric and only
communicates with NL_Ports on the same loop.
program card
The PCMCIA card containing the controller operating software.
protocol
The conventions or rules for the format and timing of messages sent and
received.
PTL
Port-Target-LUN. The controller method of locating a device on the controller
device bus.
PVA module
Power Verification and Addressing module.
quiesce
The act of rendering bus activity inactive or dormant. For example, “quiesce
the SCSI bus operations during a device warm-swap.”
RAID
Redundant Array of Independent Disks. Represents multiple levels of storage
access developed to improve performance or availability or both.
RAID level 0
A RAID storageset that stripes data across an array of disk drives. A single
logical disk spans multiple physical disks, enabling parallel data processing
for increased I/O performance. While the performance characteristics of
RAID level 0 are excellent, this RAID level is the only one that does not
provide redundancy. RAID level 0 storagesets are sometimes referred to as
stripesets.
RAID level 0+1
A RAID storageset that stripes data across an array of disks (RAID level 0)
and mirrors the striped data (RAID level 1) to provide high I/O performance
and high availability. This RAID level is alternatively called a striped
mirrorset.
RAID level 1
A RAID storageset of two or more physical disks that maintain a complete
and independent copy of the entire virtual disk's data. This type of storageset
has the advantage of being highly reliable and extremely tolerant of device
failure. RAID level 1 storagesets are sometimes referred to as mirrorsets.
RAID level 3
A RAID storageset that transfers data in parallel across the array’s disk drives a
byte at a time, causing individual blocks of data to be spread over several disks
serving as one enormous virtual disk. A separate redundant check disk for the
entire array stores parity on a dedicated disk drive within the storageset. See
also RAID level 5.
RAID level 5
A RAID storageset that, unlike RAID level 3, stores the parity information
across all of the disk drives within the storageset. See also RAID level 3.
RAID level 3/5
A RAID storageset that stripes data and parity across three or more members
in a disk array. A RAIDset combines the best characteristics of RAID level 3
and RAID level 5. A RAIDset is the best choice for most applications with
small to medium I/O requests, unless the application is write intensive. A
RAIDset is sometimes called parity RAID.
RAIDset
See RAID level 3/5.
RAM
Random access memory.
read ahead caching
A caching technique for improving performance of synchronous sequential
reads by prefetching data from disk.
read caching
A cache management method used to decrease the subsystem response time to
a read request by allowing the controller to satisfy the request from the cache
memory rather than from the disk drives.
reconstruction
The process of regenerating the contents of a failed member’s data. The
reconstruct process writes the data to a spareset disk and incorporates the
spareset disk into the mirrorset, striped mirrorset, or RAIDset from which the
failed member came. See also regeneration.
reduced
Indicates that a mirrorset or RAIDset is missing one member because the
member has failed or has been physically removed.
redundancy
The provision of multiple interchangeable components to perform a single
function in order to cope with failures and errors. A RAIDset is considered to
be redundant when user data is recorded directly to one member and all of the
other members include associated parity information.
regeneration
(1) The process of calculating missing data from redundant data. (2) The
process of recreating a portion of the data from a failing or failed drive using
the data and parity information from the other members within the storageset.
The regeneration of an entire RAIDset member is called reconstruction. See
also reconstruction.
request rate
The rate at which requests are arriving at a servicing entity.
RFI
Radio frequency interference. The disturbance of a signal by an unwanted
radio signal or frequency.
replacement policy
The policy specified by a switch with the SET FAILEDSET command
indicating whether a failed disk from a mirrorset or RAIDset is to be
automatically replaced with a disk from the spareset. The two switch choices
are AUTOSPARE and NOAUTOSPARE.
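For example, a minimal CLI sketch (the command and switch names come directly
from this entry; the interactive session context is assumed):

    SET FAILEDSET AUTOSPARE

enables automatic replacement from the spareset, and

    SET FAILEDSET NOAUTOSPARE

disables it.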
SBB
StorageWorks building block. (1) A modular carrier plus the interface required
to mount the carrier into a standard StorageWorks shelf. (2) Any device
conforming to shelf mechanical and electrical standards installed in a
3.5-inch or 5.25-inch carrier, whether it is a storage device or power supply.
SCSI
Small computer system interface. (1) An ANSI interface standard defining the
physical and electrical parameters of a parallel I/O bus used to connect
initiators to devices. (2) A processor-independent standard protocol for
system-level interfacing between a computer and intelligent devices, including
hard drives, floppy disks, CD-ROMs, printers, scanners, and others.
SCSI-A cable
A 50-conductor (25 twisted-pair) cable generally used for single-ended,
SCSI-bus connections.
SCSI bus signal converter
Sometimes referred to as an adapter. (1) A device used to interface between
the subsystem and a peripheral device unable to be mounted directly into the
SBB shelf of the subsystem. (2) A device used to connect a differential SCSI
bus to a single-ended SCSI bus. (3) A device used to extend the length of a
differential or single-ended SCSI bus. See also DWZZA, DWZZB, DWZZC,
and I/O module.
SCSI device
(1) A host computer adapter, a peripheral controller, or an intelligent
peripheral that can be attached to the SCSI bus. (2) Any physical unit that can
communicate on a SCSI bus.
SCSI device ID number
A bit-significant representation of the SCSI address referring to one of the
signal lines, numbered 0 through 7 for an 8-bit bus, or 0 through 15 for a
16-bit bus. See also target ID number.
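As a worked illustration of the bit-significant form (the ID values are
chosen arbitrarily): on an 8-bit bus, SCSI ID 3 asserts data line 3, giving
the bit pattern 00001000, while ID 7, the highest-priority address, gives
10000000.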
SCSI ID number
The representation of the SCSI address that refers to one of the signal lines
numbered 0 through 15.
SCSI-P cable
A 68-conductor (34 twisted-pair) cable generally used for differential bus
connections.
SCSI port
(1) Software: The channel controlling communications to and from a specific
SCSI bus in the system. (2) Hardware: The name of the logical socket at the
back of the system unit to which a SCSI device is connected.
serial transmission
A method of transmission in which each bit of information is sent sequentially
on a single channel rather than simultaneously, as in parallel transmission.
signal converter
See SCSI bus signal converter.
single ended I/O module
A 16-bit I/O module. See also I/O module.
single-ended SCSI bus
An electrical connection where one wire carries the signal and another wire or
shield is connected to electrical ground. Each signal logic level is determined
by the voltage of a single wire in relation to ground. This is in contrast to a
differential connection where the second wire carries an inverted signal.
spareset
A collection of disk drives made ready by the controller to replace failed
members of a storageset.
storage array
An integrated set of storage devices.
storage array subsystem
See storage subsystem.
storageset
(1) A group of devices configured with RAID techniques to operate as a single
container. (2) Any collection of containers, such as stripesets, mirrorsets,
striped mirrorsets, and RAIDsets.
storageset expansion
Concat commands implement storageset expansion features.
storage subsystem
The controllers, storage devices, shelves, cables, and power supplies used to
form a mass storage subsystem.
storage unit
The general term that refers to storagesets, single-disk units, and all other
storage devices that are installed in your subsystem and accessed by the host.
A storage unit can be any entity that is capable of storing data, whether it is a
physical device or a group of physical devices.
StorageWorks
A family of Compaq modular data storage products that allow customers to
design and configure their own storage subsystems. Components include
power, packaging, cabling, devices, controllers, and software. Customers can
integrate devices and array controllers in StorageWorks enclosures to form
storage subsystems.
StorageWorks systems include integrated SBBs and array controllers to form
storage subsystems. System-level enclosures to house the shelves and standard
mounting devices for SBBs are also included.
stripe
The data divided into blocks and written across two or more member disks in
an array.
striped mirrorset
See RAID level 0+1.
stripeset
See RAID level 0.
stripe size
The stripe capacity as determined by n–1 times the chunksize, where n is the
number of RAIDset members.
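As a worked example of this formula (the member count and chunk size are
hypothetical), a RAIDset with n = 4 members initialized with a chunk size of
256 blocks yields a stripe size of (4–1) × 256 = 768 blocks.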
striping
The technique used to divide data into segments, also called chunks. The
segments are striped, or distributed, across members of the stripeset. This
technique distributes the I/O load across the array of physical devices,
preventing hot spots and hot disks. Each stripeset member receives an equal
share of the I/O request load, improving performance.
surviving controller
The controller in a dual-redundant configuration pair that serves its
companion devices when the companion controller fails.
switch
A method that controls the flow of functions and operations in software.
synchronous
Pertaining to a method of data transmission which allows each event to
operate in relation to a timing signal. See also asynchronous.
tape
A storage device supporting sequential access to variable sized data records.
target
(1) A SCSI device that performs an operation requested by an initiator. (2)
Designates the target identification (ID) number of the device.
target ID number
The address a bus initiator uses to connect with a bus target. Each bus target is
assigned a unique target address.
this controller
The controller that is serving your current CLI session through a local or
remote terminal. See also other controller.
tape inline exerciser (TILX)
The controller diagnostic software used to test the data transfer capabilities
of tape drives in a way that simulates a high level of user activity.
transfer data rate
The speed at which data may be exchanged with the central processor,
expressed in thousands of bytes per second.
ULP
Upper Layer Protocol.
ULP process
A function executing within a Fibre Channel node which conforms to the
Upper Layer Protocol (ULP) requirements when interacting with other ULP
processes.
Ultra SCSI
A Fast-20 SCSI bus. See also Wide Ultra SCSI.
unit
A container made accessible to a host. A unit may be created from a single
disk drive or tape drive. A unit may also be created from a more complex
container such as a RAIDset. The controller supports a maximum of eight
units on each target. See also target and target ID number.
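For illustration, a hedged CLI sketch (the unit number and container name are
hypothetical; the general ADD UNIT form is assumed from the command references
in this guide's index):

    ADD UNIT D101 DISK10100

creates unit D101 from the container DISK10100 and makes it accessible to the
host.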
unwritten cached data
Sometimes called unflushed data. See dirty data.
UPS
Uninterruptible power supply. A battery-powered power supply guaranteed to
provide power to an electrical device in the event of an unexpected
interruption to the primary power supply. Uninterruptible power supplies are
usually rated by the amount of voltage supplied and the length of time the
voltage is supplied.
VHDCI
Very high-density-cable interface. A 68-pin interface. Required for
Ultra-SCSI connections.
virtual terminal
A software path from an operator terminal on the host to the controller's CLI
interface, sometimes called a host console. The path can be established via the
host port on the controller or via the maintenance port through an intermediary
host.
VTDPY
An abbreviation for Virtual Terminal Display Utility.
warm swap
A device replacement method that allows the complete system to remain
online during device removal or insertion. The system bus may be halted, or
quiesced, for a brief period of time during the warm-swap procedure.
Wide Ultra SCSI
Fast/20 on a Wide SCSI bus.
Worldwide name
A unique 64-bit number assigned to a subsystem by the Institute of Electrical
and Electronics Engineers (IEEE) and set by Compaq manufacturing prior to
shipping. This name is referred to as the node ID within the CLI.
write-back caching
A cache management method used to decrease the subsystem response time to
write requests by allowing the controller to declare the write operation
“complete” as soon as the data reaches its cache memory. The controller
performs the slower operation of writing the data to the disk drives at a later
time.
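A hedged illustration (the unit number D101 is hypothetical, and the
WRITEBACK_CACHE switch name is assumed from the controller's unit-switch
conventions rather than defined in this glossary): write-back caching is
typically enabled or disabled per unit from the CLI:

    SET D101 WRITEBACK_CACHE
    SET D101 NOWRITEBACK_CACHE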
write-through caching
A cache management method in which the controller writes data through the
cache to the storage device, completing a write request only after the data
reaches the target device. See also write-through cache.
write hole
The period of time in a RAID level 1 or RAID level 5 write operation when an
opportunity emerges for undetectable RAIDset data corruption. Write holes
occur under conditions such as power outages, where the writing of multiple
members can be abruptly interrupted. A battery backed-up cache design
eliminates the write hole because data is preserved in cache and unsuccessful
write operations can be retried.
write-through cache
A cache management technique for retaining host write requests in read cache.
When the host requests a write operation, the controller writes data directly to
the storage device. This technique allows the controller to complete some read
requests from the cache, greatly improving the response time to retrieve data.
The operation is complete only after the data to be written is received by the
target storage device.
This cache management method may update, invalidate, or delete data from
the cache memory accordingly, to ensure that the cache contains the most
current data.
Index
A
accessing the CLI, SWCC 1−23, 5−29
accessing the configuration menu
Agent 4−59, 4−62, 4−63
ADD CONNECTIONS
multiple-bus failover 1−21
transparent failover 1−19
ADD UNIT
multiple-bus failover 1−21
transparent failover 1−19
adding
Client 4−55
client system entry
Agent 4−59, 4−62
subsystem 4−54
subsystem entry
Agent 4−59, 4−62, 4−66
virtual disks B−8
Adding a Client 4−55
adding a disk drive to the spareset
configuration options 5−31
Adding a Subsystem 4−54
adding disk drives
configuration options 5−31
Agent
accessing the configuration menu 4−59, 4−62,
4−63
choosing passwords 4−54
client system entry
adding 4−59, 4−62
configuration menu 4−59, 4−62, 4−63
configuring 4−59, 4−62, 4−63
adding a client system entry 4−59, 4−62
adding a subsystem entry 4−59, 4−62,
4−66
changing password 4−59, 4−62
changing the Agent access password 4−63
deleting a client system entry 4−59, 4−62,
4−65
deleting a subsystem entry 4−59, 4−62,
4−68
stopping and starting the Agent 4−59,
4−62, 4−69
using config.sh 4−52
viewing subsystem entries 4−59, 4−62
viewing the client systems 4−59, 4−62
configuring within FirstWatch 4−56
deleting a client system entry 4−59, 4−62,
4−65
deleting a subsystem entry 4−59, 4−62, 4−68
disabling startup 4−59, 4−62, 4−70
enabling startup 4−59, 4−62, 4−70
functions 4−2
installing 4−8, 4−10, 4−15, 4−17
reconfiguring 4−70
restarting 4−56
running 4−5, 4−48
starting 4−59, 4−62, 4−69
stopping 4−59, 4−62, 4−69
subsystem entry
adding 4−59, 4−62, 4−66
toggling startup 4−59, 4−62, 4−70
uninstalling 4−73
using the configuration menu 4−59, 4−62,
4−63
arbitrated loop
naming convention 3−53
array of disk drives 2−12
assigning unit numbers 1−18
assignment
unit numbers
fabric topology 5−28
unit qualifiers
fabric topology 5−28
assignment of unit numbers
fabric topology
partition 5−28
single disk 5−28
asynchronous event service B−8
automatic mode
installation 3−63
autospare
enabling
fabric topology 5−32
availability 2−18
B
backup
cloning data 7−2
subsystem configuration 7−1
C
cabling
controller pair 5−15
multiple-bus failover
fabric topology configuration 5−14
single controller 5−5
cache modules
location 1−2, 1−3
read caching 1−11
write-back caching 1−11
write-through caching 1−11
caching techniques
mirrored 1−12
read caching 1−11
read-ahead caching 1−11
write-back caching 1−11
write-through caching 1−11
CCL
definition 3−13
disabling 3−20
CD-ROM
mounting 3−58
changing switches
configuration options 5−33
choosing passwords
Agent 4−54
chunk size
choosing for RAIDsets and stripesets 2−26
controlling stripesize 2−26
using to increase request rate 2−26
using to increase write performance 2−27
CHUNKSIZE 2−26
CLI commands
installation verification 5−13, 5−22
specifying identifier for a unit 1−23, 5−29
CLI prompt
changing
fabric topology 5−30
Client
adding 4−55
launching 3−19
removing B−7
uninstalling B−7
client system entry
Agent
adding 4−59, 4−62
CLONE utility
backup 7−2
cloning
backup 7−2
Command Console Agent, restarting 4−55
command console LUN 1−12
SCSI-2 mode 1−22
SCSI-3 mode 1−22
Compaq Insight Manager B−8
comparison of container types 2−12
completing configuration 3−39
Configuration
fabric topology
procedure flowchart
illustrated 1−xxiv
configuration
backup 7−1
changes 3−43
completing 3−35
completion 3−39
fabric topology
devices 5−23
multiple-bus failover
cabling 5−14
multiple-bus failover using CLI 6−5
single controller cabling 5−5
HP-UX file system 3−43
NetWare volume 3−36
restoring 2−28
rules 2−3
verifying 3−40
configuration menu
Agent 4−59, 4−62, 4−63
configuration options
fabric topology
adding a disk drive to the spareset 5−31
adding disk drives 5−31
changing switches
device 5−33
displaying the current switches 5−33
initialize 5−34
RAIDset and mirrorset 5−33
unit 5−34
changing switches for a storageset or
device 5−33
changing the CLI prompt 5−30
deleting a storageset 5−32
enabling autospare 5−32
removing a disk drive from the spareset
5−31
configuring
Agent 4−59, 4−62, 4−63
using config.sh 4−52
pager notification B−8
configuring devices
fabric topology 5−23
configuring storage
SWCC 1−23
connection methods
SWCC 3−17
connections 1−14
hardware 3−39
naming 1−14
containers
attributes 2−11
comparison 2−12
illustrated 2−11
mirrorsets 2−16
planning storage 2−11
stripesets 2−15
controller
verification of installation 5−22
controller verification
installation 5−13, 5−22
controllers
cabling 5−5, 5−15
location 1−2, 1−3
node IDs 1−30
properties 3−21
settings 3−13
verification of installation 5−13, 5−22
worldwide names 1−30
creating
storageset and device profiles 2−12
creating and tuning
filesystem 3−76
creating disk label 3−54
creating filesystems 3−58
creating virtual disk 3−17, 3−23
creation
file system 3−76
D
dd
Tru64 UNIX utility 3−73
DECsafe 3−71
deleting a client system entry
Agent 4−59, 4−62, 4−65
deleting a subsystem entry
Agent 4−59, 4−62, 4−68
device switches
changing
fabric topology 5−33
devices
changing switches
fabric topology 5−33
configuration
fabric topology 5−23
creating a profile 2−12
disabling
CCL 3−20
disabling Agent startup 4−59, 4−62, 4−70
disabling startup
Agent 4−59, 4−62, 4−70
disk administrator 3−41
display 3−40
disk drives
adding
fabric topology 5−31
adding to the spareset
fabric topology 5−31
array 2−12
corresponding storagesets 2−29
dividing 2−22
removing from the spareset
fabric topology 5−31
disk label
creation 3−54
disklabel
creating partitions on a LUN 3−66
displaying the current switches
fabric topology 5−33
dividing storagesets 2−22
E
enabling
switches 2−23
enabling Agent startup 4−59, 4−62, 4−70
enabling startup
Agent 4−59, 4−62, 4−70
erasing metadata 2−28
establishing a local connection 5−2
establishing a serial connection
HSG60 storage window 3−20
executing the installation script 3−50
F
fabric device
naming convention 3−54
Fabric topology
procedure flowchart
illustrated 1−xxiv
fabric topology
configuration
single controller cabling 5−5
failover 1−5
definition 3−13
multiple-bus 1−8
transparent 1−5
fibre channel
definition 3−13
fibre channel hub 3−17
fibre channel switch 3−16
file
Tru64 UNIX utility 3−71
file archive
temporary storage 3−49
file system
creation 3−76
tuning 3−76
filesystem
creating and tuning 3−76
creating on a LUN 3−67
mounting 3−68
preparing LUNs 3−66
filesystems
creation 3−58
First enclosure of multiple-enclosure subsystem
storage map template 1 A−3, A−6, A−7, A−9,
A−11, A−12, A−14
format utility
labeling LUNs 3−69
functions
Agent 4−2
G
genvmunix 3−71
geometry
initialize switches 2−28
H
hardware
connections 3−39
installation 3−39
Host access
restricting in multiple-bus failover mode
disabling access paths 1−27
host access
restricting by offsets
multiple-bus failover 1−29
transparent failover 1−26
restricting in multiple-bus failover mode
1−27
restricting in transparent failover mode 1−24
disabling access paths 1−26
separate links 1−25
host adapter
installation 3−6
preparation 3−6
host connections 1−14
naming 1−14
host device
initialization 3−52
host system
connecting to MA6000 3−8
HP-UX file system
configuration 3−43
HSxDisk driver 3−15
hwmgr 3−74
I
initial setup settings 3−12
initialization
host device 3−52
initialize switches
changing
fabric topology 5−34
CHUNKSIZE 2−26
geometry 2−28
NOSAVE_CONFIGURATION 2−28
SAVE_CONFIGURATION 2−28
installation
automatic mode 3−38, 3−63
controller verification 5−13, 5−22
hardware 3−39
host adapter 3−7
invalid network port assignments B−3
manual mode 3−38, 3−64
solution software 3−49, 3−51
SWCC Client 3−17, 3−19
there is no disk in the drive message B−4
installation script
execution 3−50
installation verification
CLI commands 5−13, 5−22
installing
Agent 4−8, 4−10, 4−15, 4−17
KGPSA adapter device driver 3−11
integrating SWCC B−8
invalid network port assignments B−3
iostat
Tru64 UNIX utility 3−75
J
JBOD 2−12
K
KGPSA adapter device driver
installing 3−11
KGPSA driver support 3−16
L
labeling LUNs
format utility 3−69
launching
Client 3−19
LOCATE
find devices 2−38
location
cache module 1−2, 1−3
controller 1−2, 1−3
logintmo parameter 3−35
LUN IDs
general description 1−32
LUN presentation 1−20
LUNs
creating a filesystem 3−67
creating partitions using disklabel 3−66
labeling 3−69
using format utility 3−69
preparing for use by filesystem 3−66
switches 2−25
mounting the CD-ROM 3−58
moving storagesets 7−7
multiple-bus failover
restricting host access by offsets 1−29
multiple-bus failover 1−8
ADD CONNECTIONS command 1−21
ADD UNIT command 1−21
CLI configuration procedure
fabric topology 6−5
fabric topology
preferring units 5−30
fabric topology configuration
cabling 5−14
host connections 1−21
restricting host access 1−27
disabling access paths 1−27
SET CONNECTIONS command 1−21
SET UNIT command 1−21
M
N
MA6000
connecting to host system 3−8
mail messages
RAIDManager 4−56
maintenance port connection
establishing a local connection 5−2
illustrated 5−2
manual mode
installation 3−64
mapping storagesets 2−29
messages
there is no disk in the drive B−4
mirrored caching
enabling 1−12
illustrated 1−12
mirrorset switches
changing
fabric topology 5−33
mirrorsets
planning considerations 2−16
important points 2−17
naming convention
arbitrated loop 3−53
fabric device 3−54
NetWare drive
installation 3−33
NetWare volume
configuration 3−36
network port assignments B−3
node IDs 1−30
restoring 1−31
NODE_ID
worldwide name 1−30
NOSAVE_CONFIGURATION 2−28
O
offset
LUN presentation 1−20
restricting host access
multiple-bus failover 1−29
transparent failover 1−26
SCSI version factor 1−20
online help
SWCC B−8
options
for mirrorsets 2−25
for RAIDsets 2−24
initialize 2−26
other controller 1−3
P
pager notification B−8
configuring B−8
partitioning
storage 3−56
partitions
assigning a unit number
fabric topology 5−28
creating on a LUN using disklabel 3−66
creation 3−41
defining 2−22
formatting 3−42
planning considerations 2−22
guidelines 2−23
removing 3−43
passwords
choosing 4−54
performance 2−18
planning
overview 2−12
striped mirrorsets 2−21
stripesets 2−15
planning considerations 2−18
planning storage
containers 2−11
where to start 2−2
planning storagesets
characteristics
changing switches 2−24
enabling switches 2−23
initialization switch 2−23
storageset switch 2−23
unit switch 2−23
switches
initialization 2−25
storageset 2−24
preferring units
multiple-bus failover
fabric topology 5−30
profiles
creating 2−12
description 2−12
storageset A−1
example A−2
R
RAIDManager
mail messages 4−56
RAIDset switches
changing
fabric topology 5−33
RAIDsets
choosing chunk size 2−26
maximum membership 2−19
planning considerations 2−18
important points 2−19
switches 2−24
read caching
enabled for all storage units 1−11
general description 1−11
read requests
decreasing the subsystem response time with
read caching 1−11
read-ahead caching 1−11
enabled for all disk units 1−11
reconfiguring
Agent 4−70
remedial kit 3−36
removing
Client B−7
removing a subsystem entry
Agent 4−59, 4−62
Agents 4−68
request rate 2−26
requirements
host adapter installation 3−7
storage configuration 1−23
restarting
Agent 4−56
restricting host access
disabling access paths
multiple-bus failover 1−27
transparent failover 1−26
multiple-bus failover 1−27
separate links
transparent failover 1−25
transparent failover 1−24
running
Agent 4−5, 4−48
S
SAVE_CONFIGURATION 2−28
saving configuration 2−28
SCSI version
offset 1−20
SCSI-2
assigning unit numbers 1−21
command console lun 1−22
SCSI-3
assigning unit numbers 1−21
command console lun 1−22
scu
Tru64 UNIX utility 3−73
Second enclosure of multiple-enclosure subsystem
storage map template 2 A−4
selective storage presentation 1−24
serial connection
establishing
HSG60 storage window 3−20
SET CONNECTIONS
multiple-bus failover 1−21
transparent failover 1−19
SET UNIT
multiple-bus failover 1−21
setting
controller configuration handling 2−28
settings
controllers 3−13
single disk (JBOD)
assigning a unit number
fabric topology 5−28
Single-enclosure subsystem
storage map template 1 A−3
software
verification 3−33
solution software 3−10
installation 3−49, 3−51
specifying identifier for a unit
CLI commands 1−23, 5−29
specifying LUN ID alias
SWCC 1−24, 5−29
starting
Agent 4−59, 4−62, 4−69
stopping
Agent 4−59, 4−62, 4−69
storage
creating map 2−29
partitioning 3−56
profile
example A−2
storage map 2−29
Storage map template 1 A−3
first enclosure of multiple-enclosure
subsystem A−3, A−6, A−7, A−9, A−11,
A−12, A−14
single enclosure subsystem A−3
Storage map template 2 A−4
second enclosure of multiple-enclosure
subsystem A−4
Storage map template 3 A−5
third enclosure of multiple-enclosure
subsystem A−5
storageset
deleting
fabric topology 5−32
fabric topology
changing switches 5−33
planning considerations 2−14
mirrorsets 2−16
partitions 2−22
RAIDsets 2−18
striped mirrorsets 2−20
stripesets 2−14
profile 2−12
example 2−13
profiles A−1
storageset profile 2−12
storageset switches
SET command 2−24
storagesets
creating a profile 2−12
moving 7−7
striped mirrorsets
planning 2−21
planning considerations 2−20
stripesets
distributing members across buses 2−16
planning 2−15
planning considerations 2−14
important points 2−15
Subsystem
adding 4−54
subsystem
adding 4−54
saving configuration 2−28
subsystem configuration
backup 7−1
subsystem entry
Agent
adding 4−59, 4−62, 4−66
SWCC 4−2
accessing the CLI 1−23, 5−29
additional information B−8
configuring storage 1−23
connection methods 3−17
integrating B−8
online help B−8
specifying LUN ID alias 1−24, 5−29
SWCC Client
installation 3−17, 3−19
switches
changing 2−24
changing characteristics 2−23
CHUNKSIZE 2−26
enabling 2−23
mirrorsets 2−25
NOSAVE_CONFIGURATION 2−28
RAIDset 2−24
SAVE_CONFIGURATION 2−28
switches for storagesets
overview 2−23
system
preparing LUNs 3−68
T
templates
subsystem profile A−1
terminology
other controller 1−3
this controller 1−3
Third enclosure of multiple-enclosure subsystem
storage map template 3 A−5
this controller 1−3
toggling startup
Agent 4−59, 4−62, 4−70
transparent failover 1−5
ADD CONNECTIONS 1−19
ADD UNIT 1−19
matching units to host connections 1−19
restricting host access 1−24
disabling access paths 1−26
separate links 1−25
restricting host access by offsets 1−26
SET CONNECTIONS 1−19
troubleshooting
invalid network port assignments B−3
there is no disk in the drive message B−4
Tru64 UNIX utilities 3−71
dd 3−73
file 3−71
iostat 3−75
scu 3−73
tuning
file system 3−76
U
uninstalling
Agent 4−73
Client B−7
unit numbers
assigning 1−18
fabric topology 5−28
assigning depending on SCSI version 1−21
assigning in fabric topology
partition 5−28
single disk 5−28
unit qualifiers
assigning
fabric topology 5−28
unit switches
changing
fabric topology 5−34
units
LUN IDs 1−32
using the configuration menu
Agent 4−59, 4−62, 4−63
V
verification
controller installation 5−13, 5−22
software 3−33
verification of installation
controller 5−13, 5−22
virtual disk
creation 3−17, 3−23
virtual disks
adding B−8
W
worldwide names 1−30
NODE_ID 1−30
REPORTED PORT_ID 1−30
restoring 1−31
write performance 2−27
write requests
improving the subsystem response time with
write-back caching 1−11
placing data with write-through caching 1−11
write-back caching
general description 1−11
write-through caching
general description 1−11
