
IBM TotalStorage™ Enterprise Storage Server™
Host Systems Attachment Guide
2105 Models E10, E20, F10, and F20
SC26-7296-05
Note:
Before using this information and the product it supports, read the information in “Safety and environmental notices” on
page xiii and “Notices” on page 179.
Sixth Edition (November 2001)
This edition replaces SC26-7296-04.
© Copyright International Business Machines Corporation 1999, 2001. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Safety and environmental notices . . . . . . . . . . . . . . . . . xiii
Product recycling . . . . . . . . . . . . . . . . . . . . . . . . xiii
Disposing of products . . . . . . . . . . . . . . . . . . . . . . xiii
About this guide . . . . . . . . . . . . . . . . . . . . . . . . xv
Who should use this guide. . . . . . . . . . . . . . . . . . . . . xv
Summary of changes. . . . . . . . . . . . . . . . . . . . . . . xv
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Publications . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
The IBM TotalStorage ESS library . . . . . . . . . . . . . . . . . xvi
Other IBM publications . . . . . . . . . . . . . . . . . . . . xviii
Other non-IBM publications . . . . . . . . . . . . . . . . . . . xxi
Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
How to send your comments . . . . . . . . . . . . . . . . . . . xxii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . 1
Finding attachment information in this guide. . . . . . . . . . . . . . . 1
Overview of the IBM TotalStorage Enterprise Storage Server (ESS) . . . . . . 2
Host systems that the ESS supports . . . . . . . . . . . . . . . . . 3
SCSI-attached open-systems hosts . . . . . . . . . . . . . . . . . 4
Fibre-channel (SCSI-FCP) attached open-systems hosts . . . . . . . . . 4
ESCON-attached S/390 and zSeries hosts . . . . . . . . . . . . . . 5
FICON-attached S/390 and zSeries hosts . . . . . . . . . . . . . . 6
General information about attaching to an open-systems host with SCSI adapters 7
Cable interconnections . . . . . . . . . . . . . . . . . . . . . 7
Cable lengths . . . . . . . . . . . . . . . . . . . . . . . . . 9
SCSI initiators and I/O queuing . . . . . . . . . . . . . . . . . . 9
Connecting the SCSI cables . . . . . . . . . . . . . . . . . . . 9
Handling electrostatic discharge-sensitive components . . . . . . . . . 10
Checking the attachment . . . . . . . . . . . . . . . . . . . . 10
Solving attachment problems. . . . . . . . . . . . . . . . . . . 10
LUN affinity . . . . . . . . . . . . . . . . . . . . . . . . . 11
Targets and LUNs . . . . . . . . . . . . . . . . . . . . . . . 11
FlashCopy and PPRC restrictions . . . . . . . . . . . . . . . . . 11
SCSI host system limitations . . . . . . . . . . . . . . . . . . . 11
General information about attaching to an open-systems host with fibre-channel adapters . . . 12
Fibre-channel architecture . . . . . . . . . . . . . . . . . . . . 13
Fibre-channel cables and adapter types. . . . . . . . . . . . . . . 15
Fibre-channel node-to-node distances . . . . . . . . . . . . . . . 15
LUN affinity . . . . . . . . . . . . . . . . . . . . . . . . . 15
Targets and LUNs . . . . . . . . . . . . . . . . . . . . . . . 15
FlashCopy and PPRC restrictions for open-systems hosts . . . . . . . . 16
LUN access modes . . . . . . . . . . . . . . . . . . . . . . 16
Fibre-channel storage area networks (SANs) . . . . . . . . . . . . . 17
Chapter 2. Attaching to a Compaq host . . . . . . . . . . . . . . . 19
Attaching with SCSI adapters . . . . . . . . . . . . . . . . . . . 19
Attachment requirements . . . . . . . . . . . . . . . . . . . . 19
Installing and configuring the Compaq Tru64 UNIX Version 4.0x host system . . . 19
Installing and configuring Compaq Tru64 UNIX Version 5.x . . . 21
Attachment considerations . . . 21
Installing the KZPBA-CB adapter card . . . 22
Adding or modifying AlphaServer connections . . . 23
Configuring host adapter ports . . . 23
Adding and assigning volumes . . . 23
Confirming storage connectivity . . . 24
Setting up the Tru64 UNIX device parameter database . . . 25
Setting the kernel SCSI parameters . . . 26
Configuring the storage . . . 26
Attaching with fibre-channel adapters . . . 27
Attachment requirements . . . 27
Attachment considerations . . . 28
Support for the AlphaServer console . . . 28
Installing the KGPSA-CA or KGPSA-BC adapter card . . . 29
Adding or modifying AlphaServer connections . . . 30
Configuring host adapter ports . . . 30
Adding and assigning volumes . . . 30
Confirming switch connectivity . . . 31
Confirming storage connectivity . . . 31
Setting up the Tru64 UNIX device parameter database . . . 35
Setting the kernel SCSI parameters . . . 36
Configuring the storage . . . 36
Chapter 3. Attaching to a Hewlett-Packard 9000 host . . . 39
Attaching with SCSI adapters . . . 39
Attachment requirements . . . 39
Installing the 2105 host install script file . . . 40
Configuring the ESS for clustering . . . 41
Attaching with fibre-channel adapters . . . 41
Attachment requirements . . . 41
Installing the 2105 host install script file . . . 42
Configuring the ESS for clustering . . . 43
Chapter 4. Attaching to an IBM AS/400 or iSeries host . . . 45
Attaching with SCSI adapters to an AS/400 host system . . . 45
Attachment requirements . . . 45
Attachment considerations . . . 45
Recommended configurations for the AS/400 . . . 46
9337 subsystem emulation . . . 46
Attaching with fibre-channel adapters to the iSeries host system . . . 48
Attachment requirements . . . 48
Attachment considerations . . . 49
Host limitations . . . 50
Recommended configurations . . . 50
Performing Peer-to-Peer remote copy functions . . . 51
iSeries hardware . . . 52
iSeries software . . . 52
Setting up the 3534 Managed Hub . . . 52
Setting up the 2109 S08 or S16 switch . . . 53
Chapter 5. Attaching to an IBM eServer xSeries 430 or IBM NUMA-Q host . . . 55
Attaching with fibre-channel adapters . . . 55
Attachment requirements . . . 55
System requirements . . . 56
Installing the IOC-0210-54 adapter card. . . . . . . . . . . . . . . 56
Configuring the IOC-0210-54 adapter card . . . . . . . . . . . . . . 56
Chapter 6. Attaching to an IBM RS/6000 or IBM eServer pSeries host . . . 57
Attaching with SCSI adapters . . . 57
Attachment requirements . . . 57
Installing the 2105 host attachment package . . . 58
Verifying the ESS configuration . . . 59
Configuring VSS and ESS devices with multiple paths per LUN . . . 59
Emulating UNIX-based host systems . . . 60
Attaching with fibre-channel adapters . . . 60
Attachment requirements . . . 61
Installing the 2105 host attachment package . . . 61
Verifying the configuration . . . 63
Configuring VSS and ESS devices with multiple paths per LUN . . . 63
Attaching to multiple RS/6000 or pSeries hosts without the HACMP/6000™ host system . . . 64
Software requirements . . . 64
Hardware requirements . . . 64
Attachment procedures . . . 65
Saving data on the ESS . . . 66
Restoring data on the ESS . . . 66
Configuring for the HACMP/6000 host system . . . 67
Chapter 7. Attaching to an IBM S/390 or IBM eServer zSeries host . . . 69
Attaching with ESCON . . . 69
Controller images and interconnections . . . 70
Support for 9032 Model 5 ESCON director FICON bridge feature . . . 70
Host adapters, cables, distances and specifications . . . 70
Logical paths and path groups . . . 71
Cable lengths and path types . . . 71
Data transfer . . . 71
Directors and channel extenders . . . 71
Identifying the port for TSO commands . . . 71
Attachment requirements . . . 72
Migrating from ESCON to native FICON . . . 72
Native ESCON configuration . . . 72
Mixed configuration . . . 73
FICON configuration . . . 74
Migrating from a FICON bridge to a native FICON attachment . . . 75
FICON bridge configuration . . . 75
Mixed configuration . . . 76
Native FICON configuration . . . 77
Attaching to a FICON channel . . . 78
Configuring the ESS for FICON attachment . . . 78
Attachment considerations . . . 79
Attaching to a FICON channel with G5 and G6 hosts . . . 80
Chapter 8. Attaching a Linux host . . . 83
Attaching with fibre-channel adapters . . . 83
Attachment requirements . . . 83
Installing the QLogic QLA2200F or QLogic QLA2300F adapter card . . . 84
Loading the current fibre-channel adapter driver . . . 85
Installing the fibre-channel adapter drivers . . . 86
Configuring the ESS with the QLogic QLA2200F or QLogic QLA2300F adapter card . . . 86
Number of disk devices on Linux . . . 86
Configuration of ESS storage under Linux . . . 87
Partitioning ESS disks . . . 87
Creating and using file systems on ESS . . . 88
Chapter 9. Attaching to a Novell NetWare host . . . 91
Attaching with SCSI adapters . . . 91
Attachment requirements . . . 91
Installing and configuring the Adaptec adapter card . . . 91
Installing and configuring the QLogic QLA1041 adapter card . . . 93
Attaching with fibre-channel adapters . . . 94
Installing the QLogic QLA2100F adapter card . . . 94
Installing the QLogic QLA2200F adapter card . . . 95
Loading the current adapter driver . . . 96
Installing the adapter drivers . . . 97
Configuring the QLogic QLA2100F or QLA2200F adapter card . . . 97
Chapter 10. Attaching to a Sun host . . . 99
Attaching with SCSI adapters . . . 99
Attachment requirements . . . 99
Mapping hardware . . . 100
Configuring host device drivers . . . 100
Installing the IBM Subsystem Device Driver . . . 102
Setting the parameters for the Sun host system . . . 103
Attaching with fibre-channel adapters . . . 104
Attachment requirements . . . 105
Installing the Emulex LP8000 adapter card . . . 106
Installing the JNI PCI adapter card . . . 108
Installing the JNI SBUS adapter card . . . 109
Installing the QLogic QLA2200F adapter card . . . 110
Downloading the current QLogic adapter driver . . . 111
Installing the QLogic adapter drivers . . . 111
Configuring host device drivers . . . 112
Installing the IBM Subsystem Device Driver . . . 117
Setting the Sun host system parameters . . . 118
Chapter 11. Attaching to a Windows NT 4.0 host . . . 121
Attaching with SCSI adapters . . . 121
Attachment requirements . . . 121
Installing and configuring the Adaptec AHA-2944UW adapter card . . . 122
Installing and configuring the Symbios 8751D adapter card . . . 123
Installing and configuring the QLogic adapter card . . . 124
Configuring for availability and recoverability . . . 125
Performing a FlashCopy from one volume to another volume . . . 126
Attaching with fibre-channel adapters . . . 126
Attachment requirements . . . 126
Installing the QLogic QLA2100F adapter card . . . 127
Installing the QLogic QLA2200F adapter card . . . 128
Downloading the QLogic adapter driver . . . 129
Installing the QLogic adapter drivers . . . 130
Configuring the QLogic host adapter cards . . . 130
Installing the Emulex LP8000 adapter card . . . 130
Downloading the Emulex adapter driver . . . 131
Installing the Emulex adapter drivers . . . 131
Parameter settings for the Emulex LP8000 on a Windows NT host system . . . 132
Configuring the ESS with the Emulex LP8000 host adapter card . . . 133
Configuring for availability and recoverability . . . 133
Setting the TimeOutValue registry . . . 134
Verifying the host system is configured for storage . . . 134
Performing a FlashCopy from one volume to another volume . . . 134
Chapter 12. Attaching to a Windows 2000 host . . . 137
Attaching with SCSI adapters . . . 137
Attachment requirements . . . 137
Attaching an ESS to a Windows 2000 host system . . . 138
Installing and configuring the Adaptec AHA-2944UW adapter card . . . 138
Installing and configuring the Symbios 8751D adapter card . . . 139
Installing and configuring the QLogic adapter card . . . 140
Configuring for availability and recoverability . . . 141
Setting the TimeOutValue registry . . . 141
Performing a FlashCopy from one volume to another volume . . . 142
Attaching with fibre-channel adapters . . . 142
Attachment requirements . . . 143
Installing the QLogic QLA2100F adapter card . . . 143
Installing the QLogic QLA2200F adapter card . . . 144
Downloading the QLogic adapter driver . . . 145
Installing the QLogic adapter drivers . . . 146
Configuring the ESS with the QLogic QLA2100F or QLA2200F adapter card . . . 146
Installing the Emulex LP8000 adapter card . . . 146
Downloading the Emulex adapter driver . . . 147
Installing the Emulex adapter drivers . . . 148
Parameter settings for the Emulex LP8000 for a Windows 2000 host system . . . 149
Configuring the ESS with the Emulex LP8000 host adapter card . . . 150
Configuring for availability and recoverability for a Windows 2000 host system . . . 150
Setting the TimeOutValue registry . . . 150
Verifying the host is configured for storage . . . 151
Performing a FlashCopy from one volume to another volume . . . 151
Appendix A. Locating the worldwide port name (WWPN) . . . 153
Fibre-channel port name identification . . . 153
Locating the WWPN for a Compaq host . . . 153
Locating the WWPN for a Hewlett-Packard host . . . 154
Locating the WWPN for an iSeries host . . . 154
Locating the WWPN for an IBM eServer xSeries or IBM NUMA-Q host . . . 155
Locating the WWPN for an IBM eServer RS/6000 and pSeries host . . . 155
Locating the WWPN for a Linux host . . . 156
Locating the WWPN for a Novell NetWare host . . . 156
Locating the WWPN for a Sun host . . . 156
Locating the WWPN for a Windows NT host . . . 157
Locating the WWPN for a Windows 2000 host . . . 157
Appendix B. Migrating from SCSI to fibre-channel . . . 159
Software requirements . . . 159
Preparing a host system to change from SCSI to fibre-channel attachment . . . 159
Nonconcurrent migration . . . 160
Migrating from native SCSI to fibre-channel . . . 160
Migrating on a Hewlett-Packard host . . . 160
Migrating on an IBM RS/6000 host . . . 162
Migrating on a Windows NT or Windows 2000 host system . . . 164
Concurrent migration . . . 166
Appendix C. Migrating from the IBM SAN Data Gateway to fibre-channel
attachment . . . . . . . . . . . . . . . . . . . . . . . . . 169
Overview of the IBM SAN Data Gateway . . . . . . . . . . . . . . . 169
Migrating volumes from the SAN Data Gateway to native fibre-channel. . . . 170
Statement of Limited Warranty . . . 171
Part 1 – General Terms . . . 171
The IBM Warranty for Machines . . . 171
Extent of Warranty . . . 171
Items Not Covered by Warranty . . . 172
Warranty Service . . . 172
Production Status . . . 173
Limitation of Liability . . . 173
Part 2 – Country or region-unique Terms . . . 174
ASIA PACIFIC . . . 174
EUROPE, MIDDLE EAST, AFRICA (EMEA) . . . 174
Notices . . . 179
Trademarks . . . 180
Electronic emission notices . . . 180
Federal Communications Commission (FCC) statement . . . 181
Industry Canada compliance statement . . . 181
European community compliance statement . . . 181
Japanese Voluntary Control Council for Interference (VCCI) class A statement . . . 182
Korean government Ministry of Communication (MOC) statement . . . 182
Taiwan class A compliance statement . . . 182
IBM agreement for licensed internal code . . . 183
Actions you must not take . . . 183
Glossary . . . 185
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Figures
1. ESS Models E10, E20, F10, and F20 base enclosure; front and rear views . . . 2
2. ESS Expansion enclosure, front and rear views . . . 3
3. ESS host interconnections . . . 8
4. Connecting the ESS to two host systems . . . 10
5. Point-to-point topology . . . 13
6. Switched-fabric topology . . . 14
7. Arbitrated loop topology . . . 15
8. Example of a modified /etc/fstab file . . . 21
9. Example of what is displayed when you type show config . . . 22
10. Example of what is displayed when you type show device . . . 23
11. Example of what is displayed when you use the hwmgr command to verify attachment . . . 24
12. Example of a Korn shell script to display a summary of ESS volumes . . . 25
13. Example of what is displayed when you execute the Korn shell script . . . 25
14. Example of the ddr.dbase file . . . 26
15. Example of how to change the timeout section of the camdata.c file from 10 to 60 seconds . . . 26
16. Example of how to configure storage . . . 27
17. Confirming the ESS licensed internal code on a Compaq AlphaServer . . . 29
18. Example of what is displayed when you type set mode diag and wwidmgr -show . . . 30
19. Example of what is displayed when you type the switchshow command . . . 31
20. Example of ESS volumes at the AlphaServer console . . . 32
21. Example of a hex string for an ESS volume on an AlphaServer console or Tru64 UNIX . . . 32
22. Example of a hex string that identifies the decimal volume number for an ESS volume on an AlphaServer console or Tru64 UNIX . . . 32
23. Example of hex representation of last 5 characters of an ESS volume serial number on an AlphaServer console . . . 33
24. Example of what is displayed when you type wwidmgr -quickset, sho wwid and sho n . . . 33
25. Example of what is displayed when you use the hwmgr command to verify attachment . . . 34
26. Example of a Korn shell script to display a summary of ESS volumes . . . 34
27. Example of what is displayed when you execute the Korn shell script . . . 35
28. Example of the ddr.dbase file . . . 36
29. Example of how to change the timeout section of the camdata.c file from 10 to 60 seconds . . . 36
30. Example of how to configure storage . . . 37
31. Example of the display for the auxiliary storage hardware resource detail for the 2766 adapter card . . . 50
32. Example of the logical hardware resources associated with an IOP . . . 51
33. Example of the display for the auxiliary storage hardware resource detail for the 2105 disk unit . . . 51
34. Example of a list of other devices displayed when you use the lsdev -Cc disk | grep 2105 command, SCSI . . . 59
35. Example of a list of other devices displayed when you use the lsdev -Cc disk | grep 2105 command, SCSI . . . 59
36. Example of a list of devices displayed when you use the lsdev -Cc disk | grep 2105 command, fibre-channel . . . 63
37. Example of a list of other devices displayed when you use the lsdev -Cc | grep 2105 command, fibre-channel . . . 63
38. ESCON connectivity . . . 69
39. Port identification for S/390 and zSeries TSO commands . . . 72
40. Example of an ESCON configuration . . . 73
41. Example of an ESCON configuration with added FICON channels . . . 74
42. Example of a native FICON configuration with FICON channels that have been moved nondisruptively . . . 75
43. Example of how to configure a FICON bridge from an S/390 or zSeries host system to an ESS . . . 76
44. Example of how to add a FICON director and a FICON host adapter . . . 77
45. Example of the configuration after the FICON bridge is removed . . . 78
46. Example of range of devices for a Linux host . . . 87
47. Example of different options for the fdisk utility . . . 87
48. Example of primary partition on the disk /dev/sdb . . . 88
49. Example of assignment of Linux system ID to the partition . . . 88
50. Example of creating a file system with the mke2fs or mkfs command . . . 89
51. Example of sd.conf file entries . . . 101
52. Example of default settings for SCSI options . . . 102
53. Example of the path you see when you insert the IBM Subsystem Device Driver compact disc . . . 103
54. Example of how to include the IBM DPO subdirectory in the system path . . . 103
55. Example of sd.conf file entries for SCSI . . . 113
56. Example of sd.conf file entries for fibre-channel . . . 114
57. Example of start lpfc auto-generated configuration . . . 114
58. Example of a path to the IBM Subsystem Device Driver package subdirectories . . . 117
59. Example of how to edit the .profile file in the root directory to include the IBM DPO subdirectory . . . 117
60. Example of boot adapter list for the Symbios 8751D adapter card for Windows NT . . . 123
61. Example of boot adapter list for the Symbios 8751D adapter card for Windows 2000 . . . 139
62. Example of the output from the Compaq wwidmgr -show command . . . 153
63. Example of the output from the Compaq #fgrep wwn /var/adm/messages command . . . 154
64. SAM display of an export volume group . . . 161
65. SAM display of an import volume group . . . 162
66. Sample script to get the hdisk number and the serial number . . . 163
67. Example list of the hdisk and serial numbers on the ESS . . . 163
68. SCSI setup between the Windows NT or Windows 2000 host system and the ESS . . . 164
69. Initial setup of volumes attached to SCSI adapters on the host . . . 165
70. Disk Administrator panel showing the initial setup . . . 165
Tables
1. Publications in the ESS library . . . xvi
2. Other IBM publications related to the ESS . . . xviii
3. Other IBM publications without order numbers . . . xxi
4. ESS Web sites and descriptions . . . xxii
5. Matrix of where to find information for SCSI, fibre-channel, ESCON and FICON attachment . . . 1
6. Host system limitations . . . 11
7. Maximum number of adapters you can use for an AlphaServer . . . 21
8. Maximum number of adapters you can use for an AlphaServer . . . 28
9. Example from HSM logical resources for AS/400 host systems . . . 47
10. Example of the capacity and status of disk drives for AS/400 host systems . . . 47
11. Size and type of the protected and unprotected AS/400 models . . . 48
12. Host system limitations for the iSeries host system . . . 50
13. Capacity and models of disk volumes for iSeries . . . 51
14. IBM xSeries 430 and IBM NUMA-Q system requirements for the ESS . . . 56
15. Size of drives, configurations, and maximum size of LUNs . . . 60
16. Hardware and software levels supported for HACMP version 4.2.1, 4.2.2, 4.3.1, and 4.3.3 . . . 68
17. Recommended SCSI ID assignments in a multihost environment . . . 68
18. Solaris 2.6, 2.7, and 8 minimum revision level patches for SCSI . . . 99
19. Example of SCSI options . . . 102
20. Solaris 2.6, 7, and 8 minimum revision level patches for fibre-channel . . . 105
21. Recommended configuration file parameters for the host bus adapters for the Emulex LP-8000 adapter . . . 115
22. Recommended configuration file parameters for the host bus adapters for the JNI FC64-1063 and JNI FCI-1063 . . . 115
23. Recommended configuration file parameters for the host bus adapters for the QLogic QLA2200F adapter . . . 116
24. Recommended configuration file parameters for the host bus adapters for the Emulex LP8000 adapter on a Windows NT host system . . . 132
25. Recommended configuration file parameters for the host bus adapters for the Emulex LP8000 adapter on a Windows 2000 host system . . . 149
26. Volume mapping before migration . . . 166
27. LUN limitations for various components . . . 169
Safety and environmental notices
This section contains information about:
v Safety notices that are used in this guide
v Environmental guidelines for this product
To find the translated text for a danger or caution notice:
1. Look for the identification number at the end of each danger notice or each
caution notice. In the following examples, look for the numbers 1000 and 1001.
DANGER
A danger notice indicates the presence of a hazard that has the
potential of causing death or serious personal injury.
1000
CAUTION:
A caution notice indicates the presence of a hazard that has the potential
of causing moderate or minor personal injury.
1001
2. Find the number that matches in the IBM TotalStorage Safety Notices,
GC26-7229.
Product recycling
This unit contains recyclable materials. Recycle these materials at your local
recycling sites. Recycle the materials according to local regulations. In some areas,
IBM provides a product take-back program that ensures proper handling of the
product. Contact your IBM representative for more information.
Disposing of products
This unit may contain batteries. Remove and discard these batteries, or recycle
them, according to local regulations.
About this guide
This guide provides information about:
v Attaching the IBM Enterprise Storage Server (ESS) to an open-systems host with
Small Computer System Interface (SCSI) adapters
v Attaching the ESS to an open-systems host with fibre-channel adapters
v Connecting IBM Enterprise Systems Connection (ESCON®) cables to your IBM
S/390® and IBM Eserver zSeries (zSeries) host systems
v Connecting IBM Enterprise Systems Fibre Connection (FICON®) cables to your
S/390 and zSeries host systems
You can attach the following host systems to an ESS:
v Compaq
v Data General
v Hewlett Packard
v Linux
v IBM AS/400 and IBM Eserver iSeries (iSeries) with IBM Operating System/400®
Version 3 or Version 4 (OS/400®)
v IBM NUMA-Q and IBM Eserver xSeries (xSeries)
v IBM RS/6000® and IBM Eserver pSeries (pSeries)
v IBM RS/6000 SP
v IBM S/390 and IBM Eserver zSeries (zSeries)
v Microsoft® Windows NT® 4.0
v Microsoft Windows 2000
v Novell NetWare
v Sun
v UNIX®
Use this publication along with the publications for your host system.
Who should use this guide
Customers or IBM service support representatives can use this manual to attach
the ESS to a host system.
Summary of changes
Vertical revision bars (|) in the left margin indicate technical changes to this
document. Minor editorial changes do not have vertical revision bars.
November 2001
This edition includes the following new information:
v Procedures about how to attach an ESS to a Compaq host system with
fibre-channel adapters.
v Procedures about how to attach an ESS to a Linux host system with
fibre-channel adapters.
Each chapter describes how to attach an ESS to open-system hosts with SCSI
adapters or fibre-channel adapters.
Prerequisites
IBM recommends you read the following publications before you use the IBM
TotalStorage Enterprise Storage Server Host Systems Attachment Guide:
v IBM TotalStorage Enterprise Storage Server Introduction and Planning Guide
v IBM TotalStorage Enterprise Storage Server Configuration Planner
Publications
The tables in this section list and describe the following publications:
v The publications that compose the IBM TotalStorage ESS library
v Other IBM publications that relate to the ESS
v Other non-IBM publications that relate to the ESS
See “Ordering ESS publications” on page xvii for information about how to order
publications in the IBM TotalStorage ESS publication library. See “How to send your
comments” on page xxii for information about how to send comments about the
publications.
The IBM TotalStorage ESS library
Table 1 shows the customer publications that comprise the ESS library. See “The
IBM publications center” on page xvii for information about ordering these and other
IBM publications.
Table 1. Publications in the ESS library

IBM TotalStorage Enterprise Storage Server Copy Services Command-line Interface User’s Guide (ESS CLI User’s Guide), SC26-7434
This user’s guide describes the commands you can use from the ESS Copy Services command-line interface (CLI). The CLI application provides a set of commands you can use to write customized scripts for a host system. The scripts initiate pre-defined tasks in an ESS Copy Services server application. You can use the CLI commands to indirectly control ESS Peer-to-Peer Remote Copy and FlashCopy configuration tasks within an ESS Copy Services server group. This book is not available in hardcopy. It is available in PDF format on the following Web site:
www.storage.ibm.com/hardsoft/products/ess/refinfo.htm

IBM TotalStorage Enterprise Storage Server Configuration Planner (ESS Configuration Planner), SC26-7353
This guide provides work sheets for planning the logical configuration of the ESS. This book is not available in hardcopy. This guide is available on the following Web site:
www.storage.ibm.com/hardsoft/products/ess/refinfo.htm

IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide (ESS Attachment Guide), SC26-7296
This book provides guidelines for attaching the ESS to your host system and for migrating from Small Computer System Interface (SCSI) to fibre-channel attachment.
Table 1. Publications in the ESS library (continued)

IBM TotalStorage Enterprise Storage Server DFSMS Software Support Reference (ESS DFSMS Software Support), SC26-7440
This book gives an overview of the ESS and highlights its unique capabilities. It also describes Data Facility Storage Management Subsystems (DFSMS) software support for the ESS, including support for large volumes.

IBM TotalStorage Enterprise Storage Server Introduction and Planning Guide (ESS Introduction and Planning Guide), GC26-7294
This guide introduces the ESS product and lists the features you can order. It also provides guidelines for planning the installation and configuration of the ESS.

IBM TotalStorage Enterprise Storage Server Quick Configuration Guide (ESS Quick Configuration Guide), SC26-7354
This booklet provides flow charts for using the TotalStorage Enterprise Storage Server Specialist (ESS Specialist). The flow charts provide a high-level view of the tasks the IBM service support representative performs during initial logical configuration. You can also use the flow charts for tasks that you might perform when you are modifying the logical configuration. The hardcopy of this booklet is a 9-inch × 4-inch fanfold.

IBM TotalStorage Enterprise Storage Server S/390 Command Reference (ESS S/390 Command Reference), SC26-7298
This book describes the functions of the ESS and provides reference information for S/390® and Eserver zSeries hosts, such as channel commands, sense bytes, and error recovery procedures.

IBM TotalStorage Safety Notices (Safety Notices), GC26-7229
This book provides translations of the danger notices and caution notices that IBM uses in ESS publications.

IBM TotalStorage Enterprise Storage Server Small Computer System Interface (SCSI) Command Reference (ESS SCSI Command Reference), SC26-7297
This book describes the functions of the ESS. It provides reference information for UNIX®, Application System/400® (AS/400®), and Eserver iSeries 400 hosts, such as channel commands, sense bytes, and error recovery procedures.

IBM TotalStorage Enterprise Storage Server User’s Guide (ESS Users Guide), SC26-7295
This guide provides instructions for setting up and operating the ESS and for analyzing problems.

IBM TotalStorage Enterprise Storage Server Web Interface User’s Guide (ESS Web Interface Users Guide), SC26-7346
This guide provides instructions for using the two ESS Web interfaces, ESS Specialist and ESS Copy Services.
Ordering ESS publications
All the customer publications that are listed in “The IBM TotalStorage ESS library”
on page xvi are available on a compact disc that comes with the ESS, unless
otherwise noted.
The customer documents are also available on the following ESS Web site in PDF
format:
www.storage.ibm.com/hardsoft/products/ess/refinfo.htm
The IBM publications center
The publications center is a worldwide central repository for IBM product
publications and marketing material.
The IBM publications center offers customized search functions to help you find the
publications that you need. A number of publications are available for you to view or
download free of charge. You can also order publications. The publications center
displays prices in your local currency. You can access the IBM publications center
through the following Web site:
www.ibm.com/shop/publications/order/
Publications notification system
The IBM publications center Web site offers you a notification system about IBM publications. Register and you can create your own profile of publications that interest you. The publications notification system sends you daily electronic mail (e-mail) notes that contain information about new or revised publications that are based on your profile.
If you want to subscribe, you can access the publications notification system from the IBM publications center at the following Web site:
www.ibm.com/shop/publications/order/
Other IBM publications
Table 2 lists and describes other IBM publications that have information related to the ESS.

Table 2. Other IBM publications related to the ESS

DFSMS/MVS® Version 1 Advanced Copy Services, SC35-0355
This publication helps you to understand and use IBM Advanced Copy Services functions on an S/390 or zSeries. It describes two dynamic-copy functions and several point-in-time copy functions. These functions provide backup and recovery of data if a disaster occurs to your data center. The dynamic-copy functions are Peer-to-Peer Remote Copy and Extended Remote Copy. Collectively, these functions are known as remote copy. FlashCopy™ and Concurrent Copy are the point-in-time copy functions.

DFSMS/MVS Version 1 Remote Copy Guide and Reference, SC35-0169
This publication provides guidelines for using remote copy functions with S/390 and zSeries hosts.

Enterprise Storage Solutions Handbook, SG24-5250
This book helps you understand what comprises enterprise storage management. The concepts include the key technologies that you need to know and the IBM subsystems, software, and solutions that are available today. It also provides guidelines for implementing various enterprise storage administration tasks, so that you can establish your own enterprise storage management environment.

ESS Fibre-Channel Migration Scenarios, no order number
This white paper describes how to change your host system attachment to the ESS from SCSI and SAN Data Gateway to native fibre-channel attachment. To get the white paper, go to the following Web site:
www.storage.ibm.com/hardsoft/products/ess/refinfo.htm

Enterprise Systems Architecture/390 ESCON I/O Interface, SA22-7202
This publication provides a description of the physical and logical ESA/390 I/O interface and the protocols which govern information transfer over that interface. It is intended for designers of programs and equipment associated with the ESCON I/O interface and for service personnel maintaining that equipment. However, anyone concerned with the functional details of the ESCON I/O interface will find it useful.
Table 2. Other IBM publications related to the ESS (continued)

ESS Solutions for Open Systems Storage: Compaq AlphaServer, HP, and Sun, SG24-6119
This book helps you to install, tailor, and configure the ESS when you attach Compaq AlphaServer (running Tru64 UNIX), HP, and Sun hosts. This book does not cover Compaq AlphaServer running the OpenVMS operating system. This book also focuses on the settings required to give optimal performance and on device driver levels. This book is for the experienced UNIX professional who has a broad understanding of storage concepts.

Fibre Channel Connection (FICON) I/O Interface, Physical Layer, SA24-7172
This publication provides information about the Fibre Channel I/O interface. This book is also available in PDF format on the following Web site:
www.ibm.com/servers/resourcelink/

Fibre-channel Subsystem Installation Guide, see note
This publication tells you how to attach the xSeries 430 and NUMA-Q host systems with fibre-channel adapters.

Fibre Transport Services (FTS) Direct Attach, Physical and Configuration Planning Guide, GA22-7234
This publication provides information about fibre-optic and ESCON-trunking systems.

IBM Enterprise Storage Server, SG24-5465
This book, from the IBM International Technical Support Organization, introduces the ESS and provides an understanding of its benefits. It also describes in detail the architecture, hardware, and functions of the ESS.

IBM Enterprise Storage Server Performance Monitoring and Tuning Guide, SG24-5656
This book provides guidance on the best way to configure, monitor, and manage your ESS to ensure optimum performance.

IBM OS/390 Hardware Configuration Definition User’s Guide, SC28-1848
This publication provides detailed information about the IODF, which OS/390 uses. It also provides details about configuring parallel access volumes (PAVs).

IBM SAN Fibre Channel Managed Hub 3534 Service Guide, GC26-7391
The IBM SAN Fibre Channel Managed Hub can now be upgraded to switched fabric capabilities with the Entry Switch Activation feature. As your fibre channel SAN requirements grow, and you need to migrate from the operational characteristics of the Fibre Channel arbitrated loop (FC-AL) configuration provided by the IBM Fibre Channel Managed Hub, 35341RU, to a fabric-capable switched environment, the Entry Switch Activation feature is designed to provide this upgrade capability. This upgrade is designed to allow a cost-effective and scalable approach to developing fabric-based Storage Area Networks (SANs). The Entry Switch Activation feature (P/N 19P3126) supplies the activation key necessary to convert the FC-AL-based Managed Hub to fabric capability with eight fabric F_ports, one of which can be an interswitch link-capable port, an E_port, for attachment to the IBM SAN Fibre Channel Switch or other supported switches.

IBM SAN Fibre Channel Managed Hub 3534 Users Guide, SY27-7616
The IBM SAN Fibre Channel Switch 3534 is an eight-port Fibre Channel Gigabit Hub that consists of a motherboard with connectors for supporting up to eight ports, including seven fixed shortwave optic ports and one GBIC port, and an operating system for building and managing a switched loop architecture.
Table 2. Other IBM publications related to the ESS (continued)

IBM SAN Fibre Channel Switch, 2109 Model S08 Users Guide, SC26-7349
This guide describes the switch and the IBM StorWatch Specialist. It provides information on the commands and how to manage the switch with Telnet and Simple Network Management Protocol (SNMP). To get a copy of this manual, see the Web site at:
www.ibm.com/storage/fcswitch

IBM SAN Fibre Channel Switch 2109 Model S16 Installation and Service Guide, SC26-7352
This publication describes how to install and maintain the IBM SAN Fibre Channel Switch 2109 Model S16. It is intended for trained service representatives and service providers who act as the primary level of field hardware service support to help solve and diagnose hardware problems. To get a copy of this manual, see the Web site at:
www.ibm.com/storage/fcswitch

IBM StorWatch Expert Hands-On Usage Guide, SG24-6102
This guide helps you to install, tailor, and configure ESS Expert, and it shows you how to use Expert.

IBM TotalStorage Enterprise Storage Server Subsystem Device Driver Installation and Users Guide, GC26-7442
This book describes how to use the IBM Subsystem Device Driver on open-systems hosts to enhance performance and availability on the ESS. The Subsystem Device Driver creates redundant paths for shared logical unit numbers. The Subsystem Device Driver permits applications to run without interruption when path errors occur. It balances the workload across paths, and it transparently integrates with applications. For information about the Subsystem Device Driver, see the following Web site:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates/

Implementing ESS Copy Services on S/390, SG24-5680
This publication tells you how to install, customize, and configure Copy Services on an ESS that is attached to an S/390 or zSeries host system. Copy Services functions include Peer-to-Peer Remote Copy, Extended Remote Copy, FlashCopy™, and Concurrent Copy. This publication describes the functions, prerequisites, and corequisites and describes how to implement each of the functions into your environment.

Implementing ESS Copy Services on UNIX and Windows NT/2000, SG24-5757
This publication tells you how to install, customize, and configure ESS Copy Services on UNIX or Windows NT host systems. Copy Services functions include Peer-to-Peer Remote Copy, FlashCopy, Extended Remote Copy, and Concurrent Copy. Extended Remote Copy and Concurrent Copy are not available for UNIX and Windows NT host systems; they are only available on the S/390 or zSeries. This publication describes the functions and shows you how to implement each of the functions into your environment. It also shows you how to implement these solutions in a high-availability cluster multiprocessing (HACMP) cluster.

Implementing Fibre Channel Attachment on the ESS, SG24-6113
This book helps you to install, tailor, and configure fibre-channel attachment of open-systems hosts to the ESS. It gives you a broad understanding of the procedures involved and describes the prerequisites and requirements. It also shows you how to implement fibre-channel attachment. This book also describes the steps required to migrate to direct fibre-channel attachment from native SCSI adapters and from fibre-channel attachment through the SAN Data Gateway (SDG).

Implementing the IBM Enterprise Storage Server, SG24-5420
This book can help you install, tailor, and configure the ESS in your environment.

NUMA-Q ESS Integration Release Notes for NUMA Systems, part number 1003-80094
This publication provides information about special procedures and limitations involved in running ESS with Copy Services on an IBM Eserver xSeries 430 and an IBM NUMA-Q® host system. It also provides information on how to:
v Configure the ESS
v Configure the IBM NUMA-Q and xSeries 430 host system
v Manage the ESS from the IBM NUMA-Q and xSeries 430 host system with DYNIX/ptx tools

OS/390 MVS System Messages Volume 1 (ABA - ASA), GC28-1784
This publication lists OS/390 and zSeries MVS system messages ABA to ASA.

z/Architecture Principles of Operation, SA22-7832
This publication provides, for reference purposes, a detailed definition of the z/Architecture. It is written as a reference for use primarily by assembler language programmers and describes each function at the level of detail needed to prepare an assembler language program that relies on that function; although anyone concerned with the functional details of z/Architecture will find it useful.

Note: There is no order number for the Fibre-channel Subsystem Installation Guide. This publication is not available through IBM ordering systems. Contact your sales representative to obtain this publication.
Other non-IBM publications
Table 3 lists and describes other related publications that are not available through
IBM ordering systems. To order, contact the sales representative at the branch
office in your locality.
Table 3. Other IBM publications without order numbers

Quick Start Guide: An Example with Network File System (NFS)
This publication tells you how to configure the Veritas Cluster Server. See also the companion document, Veritas Cluster Server User’s Guide.

Veritas Cluster Server Installation Guide
This publication tells you how to install the Veritas Cluster Server. See also the companion document, Veritas Cluster Server Release Notes.

Veritas Cluster Server Release Notes
This publication tells you how to install the Veritas Cluster Server. See also the companion document, Veritas Cluster Server Installation Guide.

Veritas Cluster Server User’s Guide
This publication tells you how to configure the Veritas Cluster Server. See also the companion document, Quick Start Guide: An Example with NFS.

Veritas Volume Manager Hardware Notes
This publication tells you how to implement dynamic multipathing.

Veritas Volume Manager Installation Guide
This publication tells you how to install VxVM. It is not available through IBM ordering systems. Contact your sales representative to obtain this document.

Veritas Volume Manager Storage Administrators Guide
This publication tells you how to administer and configure the disk volume groups.
Web sites
Table 4 shows Web sites that have information about the ESS and other IBM
storage products.
Table 4. ESS Web sites and descriptions

www.storage.ibm.com/
This Web site has general information about IBM storage products.

www.storage.ibm.com/hardsoft/products/ess/ess.htm
This Web site has information about the IBM Enterprise Storage Server (ESS).

ssddom02.storage.ibm.com/disk/ess/documentation.html
This Web site allows you to view and print the ESS publications.

www.storage.ibm.com/hardsoft/products/ess/supserver.htm
This Web site provides current information about the host system models, operating systems, and adapters that the ESS supports.

ssddom01.storage.ibm.com/techsup/swtechsup.nsf/support/sddupdates/
This Web site provides information about the IBM Subsystem Device Driver.

www.storage.ibm.com/hardsoft/products/sangateway/sangateway.htm
This Web site provides information about attaching a Storage Area Network or host system that uses an industry-standard fibre-channel arbitrated loop (FC-AL) topology through the IBM 2108 Storage Area Network Data Gateway Model G07.

www.storage.ibm.com/software/sms/sdm/sdmtech.htm
This Web site provides information about the latest updates to Copy Services components, including XRC, PPRC, Concurrent Copy, and FlashCopy for S/390 and zSeries.

ssddom01.storage.ibm.com/techsup/swtechsup.nsf/support/sddcliupdates/
This Web site provides information about the IBM ESS Copy Services Command-Line Interface (CLI).
How to send your comments
Your feedback is important to help us provide the highest quality information. If you
have any comments about this book or any other ESS documentation, you can
submit them in one of the following ways:
v e-mail
Submit your comments electronically to the following e-mail address:
starpubs@us.ibm.com
Be sure to include the name and order number of the book and, if applicable, the
specific location of the text you are commenting on, such as a page number or
table number.
v Mail or fax
Fill out the Readers’ Comments form (RCF) at the back of this book. Return it by
mail or fax (1-800-426-6209) or give it to an IBM representative. If the RCF has
been removed, you may address your comments to:
International Business Machines Corporation
RCF Processing Department
G26/050
5600 Cottle Road
San Jose, CA 95193-0001
U.S.A.
Chapter 1. Introduction
This chapter describes the following:
v Matrix of where to find information quickly for SCSI, fibre-channel, Enterprise Systems Connection (ESCON), and FICON attachment
v Overview of the IBM TotalStorage Enterprise Storage Server (ESS)
– Host systems that the ESS supports
– SCSI-attached open-systems hosts
– Fibre-channel (SCSI-FCP) attached hosts
– ESCON-attached S/390 and zSeries hosts
– FICON-attached S/390 and zSeries hosts
v General information about attaching to an open-systems host with SCSI adapters
v General information about attaching to an open-systems host with fibre-channel adapters
Finding attachment information in this guide
This matrix is a map that tells you how to find attachment information quickly in this
guide. The numbers in the columns for SCSI, fibre-channel, ESCON and FICON
represent the page number where you can find the information.
Table 5. Matrix of where to find information for SCSI, fibre-channel, ESCON, and FICON attachment

Host                                 SCSI            Fibre-channel   ESCON           FICON
Compaq                               19              27              Not applicable  Not applicable
Hewlett Packard                      39              41              Not applicable  Not applicable
IBM AS/400 and IBM Eserver iSeries   45              48              Not applicable  Not applicable
IBM xSeries 430 and NUMA-Q           Not applicable  55              Not applicable  Not applicable
IBM pSeries and RS/6000              57              60              Not applicable  Not applicable
IBM zSeries and S/390                Not applicable  Not applicable  69              78
Linux                                Not applicable  83              Not applicable  Not applicable
Novell NetWare                       91              94              Not applicable  Not applicable
Sun                                  99              104             Not applicable  Not applicable
Windows NT 4.0                       121             126             Not applicable  Not applicable
Windows 2000                         137             142             Not applicable  Not applicable
Overview of the IBM TotalStorage Enterprise Storage Server (ESS)
The ESS is a part of the Seascape® family of storage servers. The ESS provides
integrated caching and support for redundant arrays of independent disks (RAID)
for the disk drive modules (DDMs). The DDMs are attached through a serial storage
architecture (SSA) interface.
The minimum configuration for all ESS models is 16 DDMs. ESS Models E10 and
F10 support a maximum of 64 DDMs. ESS Models E20 and F20 support a
maximum of 384 DDMs, with 128 DDMs in the base enclosure and 256 DDMs in
the expansion enclosure.
Figure 1 and Figure 2 on page 3 show the ESS base enclosure and the expansion
enclosure. ESS Models E10 and F10 do not support an expansion enclosure.
Both the ESS base enclosure and the expansion enclosure have dual power cables
and redundant power. The redundant power system enables the ESS to continue
normal operation when one of the power cables is inactive. Redundancy also
ensures continuous data availability.
Figure 1. ESS Models E10, E20, F10, and F20 base enclosure; front and rear views
Figure 2. ESS Expansion enclosure, front and rear views
For detailed information about the ESS, see the IBM TotalStorage ESS Introduction
and Planning Guide.
You get redundancy with the IBM Subsystem Device Driver (SDD). The SDD resides in the host server with the native disk-device driver for the IBM ESS. It uses redundant connections between the disk storage server and the host server in an ESS to provide data availability and performance.
The Subsystem Device Driver provides the following functions:
v Enhanced data availability
v Automatic path failover and recovery to an alternate path
v Dynamic load balancing of multiple paths
v Path selection policies for the AIX operating system
v Concurrent download of licensed internal code
For more information about the IBM Subsystem Device Driver, see the following Web site:
ssddom01.storage.ibm.com/techsup/swtechsup.nsf/support/sddupdates/
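After the SDD is installed on a host, you can confirm that it has configured multiple paths to each ESS logical unit. The following is a hedged sketch using the SDD datapath utility, which is documented in the IBM TotalStorage Enterprise Storage Server Subsystem Device Driver Installation and Users Guide; the exact output and device names vary by operating system:

```
# List the host adapters that the SDD is using:
datapath query adapter

# List each vpath device and the state of every path behind it:
datapath query device
```

Each vpath device should report one path for every physical connection between the host and the ESS; a path that is not in a normal state indicates a connection that the SDD has taken out of service.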
Host systems that the ESS supports
The ESS provides heterogeneous host attachments so that you can consolidate
storage capacity and workloads for open-systems hosts, S/390 hosts, and zSeries
hosts. The ESS supports a maximum of 16 host adapters. You can
configure the ESS for any intermix of supported host adapter types.
The following sections contain more information about the following types of host
attachments:
v SCSI-attached open-systems hosts
v Fibre-channel (SCSI-FCP) attached hosts
v ESCON-attached IBM S/390 and zSeries hosts
v FICON-attached IBM S/390 and zSeries hosts
For fibre-channel attachments, IBM recommends that you establish zones. The
zones should contain a single port attached to a host adapter with the desired
number of ports attached to the ESS. By establishing zones, you reduce the
possibility of interactions between host adapters in switched configurations. You can
establish the zones by using either of two zoning methods:
v Port number
v Worldwide port name (WWPN)
You can configure ports that are attached to the ESS in more than one zone. This
enables multiple host adapters to share access to the ESS fibre-channel ports.
Shared access to an ESS fibre-channel port might be from host platforms that
support a combination of host bus adapter types and operating systems.
For information about host systems, operating system levels, host bus adapters,
cables, and fabric support that IBM supports, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
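For switched fabrics built on the IBM 2109 family of switches, a single-initiator zone of the kind recommended above can be defined by worldwide port name through a Telnet session. The following is a hedged sketch; the zone name, configuration name, and WWPNs are placeholders, not values from this guide:

```
zonecreate "host1_ess", "10:00:00:00:c9:22:fc:01; 50:05:07:63:00:c8:95:12"
cfgcreate "ess_cfg", "host1_ess"
cfgenable "ess_cfg"
```

The first WWPN represents a single host bus adapter port and the second an ESS fibre-channel port. Repeating this pattern, one zone per host adapter, reduces the possibility of interactions between host adapters in the fabric.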
SCSI-attached open-systems hosts
An ESS attaches to open-systems hosts with two-port SCSI adapters. SCSI ports
are 2-byte wide, differential, fast-20. With SCSI adapters, the ESS supports:
v A maximum of 15 targets per SCSI adapter
v A maximum of 64 logical units per target, depending on host type
v A maximum of 512 SCSI-FCP host logins or SCSI-3 initiators per ESS
The ESS supports the following host systems for SCSI attachment:
v Compaq AlphaServer with the Tru64 UNIX and OpenVMS operating systems
v Data General with the DG/UX operating system
v Hewlett-Packard with the HP-UX operating system
v IBM AS/400 and the IBM Eserver iSeries 400 (iSeries) with the IBM Operating System/400® (OS/400®)
v IBM RS/6000®, IBM Eserver pSeries (pSeries), RS/6000 SP, and pSeries SP with the IBM AIX® operating system
v IBM NUMA-Q and the IBM Eserver xSeries (xSeries) with the IBM ptx operating system
v Intel-based servers with the Linux operating system (Red Hat 7.1 and SuSE 7.2)
v Intel-based servers with the Microsoft Windows NT® operating system
v Intel-based servers with the Microsoft Windows 2000 operating system
v Intel-based servers with the Novell NetWare operating system
v Sun with the Solaris operating system
See the following ESS Web site for details about types, models, adapters, and the
operating systems that the ESS supports for SCSI-attached host systems:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Fibre-channel (SCSI-FCP) attached open-systems hosts
Each ESS fibre-channel adapter has one port. You can configure the port to operate
with the SCSI-FCP upper layer protocol. Shortwave adapters are available on ESS
Models E10 and E20. Longwave adapter types and shortwave adapter types are
available on Models F10 and F20.
Fibre-channel adapters that are configured for SCSI-FCP (fibre-channel protocol)
support:
v A maximum of 128 host logins per fibre-channel port
v A maximum of 512 SCSI-FCP host logins or SCSI-3 initiators per ESS
v A maximum of 4096 LUNs per target (one target per host adapter), depending on host type
v Port masking and LUN masking by target
v Either fibre-channel arbitrated loop (FC-AL), fabric, or point-to-point topologies
The ESS supports the following host systems for shortwave fibre-channel
attachment and longwave fibre-channel attachment:
v IBM AS/400 and iSeries with the IBM OS/400 operating system
v IBM NUMA-Q and xSeries with the ptx operating system
v IBM RS/6000, pSeries, RS/6000 SP, and pSeries SP with the IBM AIX operating system
v Hewlett-Packard with the HP-UX operating system
v Intel-based servers with the Microsoft Windows NT operating system
v Intel-based servers with the Microsoft Windows 2000 operating system
v Intel-based servers with the Novell NetWare operating system
v Linux (Red Hat Linux 7.1 and SuSE Linux 7.1)
v Sun with the Solaris operating system
See “Fibre-channel architecture” on page 13 for information about the fibre-channel
protocols that the ESS supports. See the following ESS Web site for details about
types, models, adapters, and the operating systems that the ESS supports:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
ESCON-attached S/390 and zSeries hosts
An ESS attaches to S/390 and zSeries host systems with ESCON channels. With
ESCON adapters, the ESS supports:
v A maximum of logical paths per port
v A maximum of 2048 logical paths across all ESCON ports
v A maximum of 256 logical paths per control-unit image (requires ESS LIC level of
1.6.0 or greater)
v Access to all 16 control-unit images (4096 CKD devices) over a single ESCON
port on the ESS.
Note: Certain host channels might limit the number of devices per ESCON channel
to 1024. To fully access all 4096 devices on an ESS, it might be necessary
to multiplex the signals from four ESCON host channels through a switch to
a single ESS ESCON port. This method exposes four control-unit images
(1024 devices) to each host channel.
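The device-count arithmetic behind the note can be sketched as follows (the variable names are illustrative, not from the guide):

```shell
total_devices=4096        # CKD devices on an ESS (16 control-unit images x 256 each)
devices_per_channel=1024  # limit that certain host channels impose
channels_needed=$((total_devices / devices_per_channel))
echo "$channels_needed"   # prints 4: the four ESCON host channels that the note multiplexes
```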
The FICON bridge card in ESCON director 9032 Model 5 enables a FICON bridge
channel to connect to ESCON host adapters in the ESS. The FICON bridge
architecture supports up to 16384 devices per channel.
The ESS supports the following operating systems for S/390 and zSeries hosts:
v OS/390®
v Transaction Processing Facility (TPF)
v Virtual Machine/Enterprise Storage Architecture (VM/ESA®)
v Virtual Storage Extended/Enterprise Storage Architecture (VSE/ESA™)
v z/OS™
v z/VM™
For details about the models, operating system versions, and releases that the
ESS supports for these host systems, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Default operation on the MVS operating system uses a 30-second missing-interrupt
handler (MIH) timeout for the ESS.
See the preventive service planning (PSP) bucket for operating system support and
for planning information. The PSP includes authorized program analysis reports
(APARs) and programming temporary fixes (PTFs).
For additional information about S/390 and zSeries support of ESS functions, see
DFSMS/MVS® Software Support for the IBM Enterprise Storage Server.
FICON-attached S/390 and zSeries hosts
ESS Models F10 and F20 attach to S/390 and zSeries host systems with FICON
channels. Each ESS fibre-channel adapter has one port. You can configure the port
to operate with the FICON upper layer protocol. When configured for FICON, the
fibre-channel port supports connections to a maximum of 128 FICON hosts with up
to 512 logical paths. On FICON, the fibre-channel adapter can operate with fabric or
point-to-point topologies. With fibre-channel adapters that are configured for FICON,
the ESS supports:
v Either fabric or point-to-point topologies
v A maximum of 128 channel logins per fibre-channel port
v A maximum of 256 logical paths per control-unit image (requires LIC level of
1.6.0 or greater)
v A maximum of 256 logical paths on each fibre-channel port
v A maximum of 4096 logical paths across all fibre-channel ports
v Access to all 16 control-unit images (4096 CKD devices) over each FICON port
Note: Certain FICON host channels might support more devices than the 4096
possible devices on an ESS. This allows you to attach other control units or
other ESSs to the same host channel, up to the limit that the host supports.
FICON is not supported on the ESS Models E10 and E20. ESS Models F10 and
F20 support both longwave and shortwave adapters.
The ESS supports the following operating systems for S/390 and zSeries hosts:
v OS/390
v Transaction Processing Facility (TPF)
v Virtual Machine/Enterprise Storage Architecture (VM/ESA®)
v Virtual Storage Extended/Enterprise Storage Architecture (VSE/ESA™)
v z/OS
v z/VM
For details about the models, operating system versions, and releases that the
ESS supports for these host systems, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
General information about attaching to an open-systems host with
SCSI adapters
The following section provides information about attaching an ESS to your SCSI
open-systems host.
For configuration limitations for the host systems, see the IBM Enterprise Storage
Server Introduction and Planning Guide. Also see “SCSI host system limitations” on
page 11.
Cable interconnections
Figure 3 on page 8 shows how a single SCSI bus connects an ESS to a set of SCSI
devices. Each SCSI adapter card in an ESS has a built-in terminator.
You can configure each SCSI bus that is attached to the ESS independently.
The attached SCSI devices might be initiators (hosts) or target devices. The
ESS supports a maximum of four SCSI host initiators on any wide SCSI bus. IBM
recommends that you use one SCSI initiator per SCSI bus on an ESS. The number
of SCSI devices that the ESS controller uses on the bus is determined by the
number of targets specified in the logical configuration for that bus. The SCSI
adapter card in the ESS operates in target-only mode.
Note: If you have multiple hosts attached to the same SCSI bus, IBM strongly
recommends that you use the same type of host. If you have different hosts
on the same SCSI bus, you must use the same type of host adapter. For a
list of adapters see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Figure 3. ESS host interconnections. The figure shows two configurations: a
single controller on a bus and two controllers on a bus. In each configuration,
a SCSI bus connects SCSI hosts to a host adapter (HA) in a 2105 Model E10, E20,
F10, or F20, with SCSI terminators (T) at the ends of the bus. Each host
adapter also supports another SCSI bus.
Cable interconnection hints and tips
The following is a list of hints and tips for cable interconnections:
v Host time-outs might occur due to bus contention when there are too many
initiators that try to drive excessive loads over a single bus. The four-initiator limit
allows each host to run a significant amount of work without incurring time-outs
on I/O operations.
v Your host system might have its own requirements for the number and type of
SCSI devices on the SCSI bus, beyond the limits that the ESS imposes.
v You can attach a host system to multiple ports through a separate SCSI bus
cable and a separate SCSI adapter for each port.
v You cannot use the configuration in Figure 3 for the AS/400 and iSeries. See
“Recommended configurations for the AS/400” on page 46 for information about
how to configure an AS/400 and iSeries.
v The SCSI adapter card in an ESS does not provide terminator power
(TERMPWR) for the SCSI bus to which it is connected.
v Each host system you attach to a single SCSI bus must be a compatible host
system.
v The SCSI adapter card in an ESS provides its own power for termination.
v The host adapter in the ESS has a built-in terminator. Therefore, you do not
require external terminators.
v The SCSI adapter card in an ESS must always be at one end of the SCSI bus to
which it is connected.
v Each device on a SCSI bus must have a unique ID. Before you attach any
device to a SCSI bus, ensure that it has a unique ID for the bus to which you
want to connect.
v When you attach a device to the end of your SCSI bus, you must terminate it. If
you attach a device in the middle of a SCSI bus, you must not terminate it.
v Each SCSI bus requires at least one initiator. The SCSI specification requires
initiators to provide TERMPWR to the SCSI bus.
Cable lengths
The ESS requires a total SCSI bus length that is no greater than 25 m (82 ft). The
resulting configuration must meet any cable length limitations that are required by
any attached SCSI device. Use only ESS-supported cables, which are 10 m (33 ft)
and 20 m (66 ft) in length.
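As a quick check against the 25 m limit, you can total a proposed set of supported cables. This is a hypothetical example, not a configuration from the guide:

```shell
# Sums a proposed cable run and compares it with the 25 m total-bus limit.
bus_limit_m=25
total_m=$((10 + 20))   # one 10 m cable chained with one 20 m cable
if [ "$total_m" -le "$bus_limit_m" ]; then
  echo "within the bus limit"
else
  echo "exceeds the bus limit"   # this combination prints here: 30 m > 25 m
fi
```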
SCSI initiators and I/O queuing
SCSI host adapters support from 1 to 15 initiators on a SCSI bus. The combined
number of initiators and targets that you define on the bus must be less than
or equal to 16. The limit for the number of initiators that perform persistent
reservations is 512 node ports.
Connecting the SCSI cables
Perform the following steps to connect the ESS to your host system.
Attention: To avoid static discharge damage when you handle DDMs and other
parts, observe the precautions in “Handling electrostatic discharge-sensitive
components” on page 10.
1. Complete the installation planning and configuration planning tasks outlined in
the ESS Introduction and Planning Guide.
2. Install a SCSI adapter (interface card) in your host system by using the
instructions in your host system documents.
Note: Contact your IBM service support representative (SSR) to install the
6501 adapter on the IBM AS/400 and iSeries. The 6501 adapter is a
feature code.
3. Connect the SCSI cables to your host system.
Attention: Use caution when handling the SCSI cables, especially when
aligning the connectors for plugging. You can damage the cables or plugs. Turn
off the AS/400 or iSeries host before you attach to or move SCSI cables on the
6501 adapter.
4. Have the IBM service support representative attach the SCSI cables to the
ESS.
5. If you are attaching a second host to the same SCSI interface card, ensure that
each host adapter has a unique SCSI ID (address).
Figure 4 on page 10 represents an ESS that is connected to two hosts.
Figure 4. Connecting the ESS to two host systems. The figure shows an ESS that
is connected by separate SCSI cables to a first host and a second host.
Handling electrostatic discharge-sensitive components
The IBM service support representative must observe the following precautions
when handling disk drive modules and other parts to avoid causing damage from
electrostatic discharge (ESD):
1. Keep the ESD-sensitive part in its original shipping container until you install the
part in the machine.
2. Before you touch the ESD-sensitive part, discharge any static electricity in your
body by touching the metal frame of the machine. Keep one hand on the frame
when you install or exchange an ESD-sensitive part.
3. Hold the ESD-sensitive part by the plastic enclosure or the locking handles. Do
not touch any connectors or electronic components.
4. When you hold the ESD-sensitive part, move as little as possible to prevent an
increase of static electricity from clothing fibers, carpet fibers, and furniture.
Checking the attachment
Ensure that your installation meets the following requirements:
1. One or two ESSs and no other I/O devices are attached to each SCSI interface
card.
2. Cables are connected correctly and are seated properly.
Solving attachment problems
If errors occur during the attachment procedure, one of the following conditions
might be causing the problem:
1. Your host system has an incorrect SCSI ID.
You or the service support representative can check the SCSI ID by using the
service interface terminal.
2. The root file system is not large enough to add another device.
You can increase the size of the root file system.
If a problem persists, contact your service provider.
LUN affinity
For SCSI attachment, logical unit numbers (LUNs) have an affinity to SCSI ports,
independent of which hosts might be attached to the ports. If you attach multiple
hosts to a single SCSI port, each host has the exact same access to all the LUNs
available on that port.
Targets and LUNs
For SCSI attachment, each SCSI bus can attach a combined total of 16 initiators
and targets. Because at least one of these attachments must be a host initiator, that
leaves a maximum of 15 that can be targets. The ESS is capable of defining all 15
targets on each of its SCSI ports. Each can support up to 64 LUNs. The software in
many hosts is only capable of supporting 8 or 32 LUNs per target, but the
architecture allows for 64. Therefore, the ESS can support 960 LUNs per SCSI port
(15 targets x 64 LUNs = 960).
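The 960-LUN figure follows directly from the numbers above; a minimal sketch with illustrative variable names:

```shell
# Per-bus SCSI arithmetic from the section above.
attachments_per_bus=16   # combined initiators and targets on one SCSI bus
initiators=1             # at least one attachment must be a host initiator
targets=$((attachments_per_bus - initiators))   # 15 targets remain
luns_per_target=64
echo $((targets * luns_per_target))             # prints 960: LUNs per SCSI port
```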
FlashCopy and PPRC restrictions
When you copy a source volume to a target volume with FlashCopy or PPRC and
you require concurrent read/write access of both volumes, the source and target
volumes should be on different host systems. A copy operation with the source and
target volume on the same host system creates a target volume with the same
identification as the source volume. The host system sees two identical volumes.
When the copy operation creates the same identification for the target volume as
for the source volume, you cannot distinguish one from the other. Therefore, you
might not be able to access the original data.
Note: You cannot create a host target on a single Novell NetWare host system. For
Novell NetWare, the target volume must be attached to a second Novell
NetWare host system.
The target volume and the source volume can be on the same host system for a
PPRC or FlashCopy operation only under the following conditions:
v For AIX, when the host system is using a logical volume manager (LVM) with
recreatevg command support.
v For AIX, Sun, and HP, when the host system is not using a logical volume
manager.
v For any host system, when the host system can distinguish between a source
and a target volume that have the same identification.
SCSI host system limitations
Table 6 shows the configuration limitations for the host systems. These limitations
can be caused by the device drivers, hardware, or different adapters that the host
systems support.
Table 6. Host system limitations

Host system                     LUN assignments and       Configuration notes
                                limitations per target
Compaq OpenVMS                  0 - 7                     None
Compaq Tru64 4.x                0 - 7                     None
Compaq Tru64 5.x                0 - 15                    None
Data General                    0 - 7                     None
Hewlett-Packard 9000            0 - 7                     None
IBM AS/400 (see note 1)         0 - 7                     The target SCSI ID is always 6.
                                                          Sixteen LUNs are supported for
                                                          each feature code 6501. For ESS,
                                                          the two ports on the feature
                                                          code 6501 each support 8 drives
                                                          at full capacity for RAID. Real
                                                          9337s running RAID-5 must
                                                          account for parity; therefore,
                                                          the 8 drives provide the
                                                          equivalent of a 7-drive capacity.
IBM iSeries (fibre-channel)     0 - 32                    There is one target per AS/400
(see note 2)                                              and iSeries adapter.
IBM NUMA-Q (UNIX)               0 - 7                     Use a minimum operating system
                                                          level of DYNIX/ptx V4.4.7.
IBM Personal Computer Server    0 - 7                     None
IBM RS/6000 and pSeries         0 - 31                    AIX 4.3.3 supports 64 LUNs per
                                                          target.
Novell NetWare                  0 - 31                    None
Sun Ultra A                     0 - 7                     None
Sun Ultra B                     0 - 31                    Use Solaris 2.6, 7, or 8.
                                                          (Solaris 2.6 and 7 require a
                                                          Solaris patch to enable 32 LUNs
                                                          per target.)
Windows NT 4.0                  0 - 7                     None
Windows 2000                    0 - 7                     None

Notes:
1. The naming convention for the AS/400 defines a machine connected through a
   6501 bus using SCSI cables.
2. You can use the Model 270 and 8xx for a fibre-channel connection.
General information about attaching to an open-systems host with
fibre-channel adapters
This section provides information about attaching an ESS to host systems with
fibre-channel adapters.
Fibre channel is a 100-MBps, full-duplex, serial communications technology to
interconnect I/O devices and host systems that are separated by tens of kilometers.
Fibre channel transfers information between the sources and the users of the
information. This information can include commands, controls, files, graphics, video,
and sound. Fibre-channel connections are established between fibre-channel ports
that reside in I/O devices, host systems, and the network that interconnects them.
The network consists of elements like switches, hubs, bridges, and repeaters that
are used to interconnect the fibre-channel ports. For information about the
three basic topologies that you can use with the ESS, see “Fibre-channel
architecture”.
Fibre-channel architecture
The ESS provides a fibre-channel connection when your IBM SSR installs a
fibre-channel adapter card (shortwave or longwave) in the ESS. For more
information about hosts and operating systems that the ESS supports on the
fibre-channel adapters, see the ESS Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Fibre-channel architecture provides a variety of communication protocols on the
ESS. The units that are interconnected are referred to as nodes. Each node has
one or more ports.
An ESS is a node in a fibre-channel network. Each port on an ESS fibre-channel
host adapter is a fibre-channel port. A host is also a node in a fibre-channel
network. Each port attaches to a serial-transmission medium that provides duplex
communication with the node at the other end of the medium.
ESS architecture supports three basic interconnection topologies:
v Point-to-point
v Switched fabric
v Arbitrated loop
Point-to-point topology
The point-to-point topology, also known as direct connect, enables you to
interconnect ports directly. Figure 5 shows an illustration of a point-to-point topology.
Figure 5. Point-to-point topology. In the figure, 1 is the host system and 2 is
the ESS.
The ESS supports direct point-to-point topology at a maximum distance of 500 m
(1640 ft) with the shortwave adapter. The ESS supports direct point-to-point
topology at a maximum distance of 10 km (6.2 mi) with the longwave adapter.
Switched-fabric topology
The switched-fabric topology provides the underlying structure that enables you to
interconnect multiple nodes. You can use a fabric that provides the necessary
switching functions to support communication between multiple nodes.
You can extend the distance that the ESS supports up to 100 km (62 mi) with a
storage area network (SAN) or other fabric components.
The ESS supports increased connectivity with the use of fibre-channel (SCSI-FCP
and FICON) directors. Specific details on status, availability, and configuration
options for the fibre-channel directors supported by the ESS are available on the
Web at:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
The ESS supports the switched-fabric topology with point-to-point protocol. You
should configure the ESS fibre-channel adapter to operate in point-to-point mode
when you connect it to a fabric topology. See Figure 6.
Figure 6. Switched-fabric topology. In the figure, 1 is the host system, 2 is
the ESS, and 3 is a switch.
Arbitrated loop
Fibre-channel arbitrated loop (FC-AL) is a ring topology that enables you to
interconnect a set of nodes. The maximum number of ports that you can have on a
fibre-channel arbitrated loop is 127. See Figure 7 on page 15.
The ESS supports FC-AL as a private loop. It does not support the fabric-switching
functions in FC-AL.
The ESS supports up to 127 hosts or devices on a loop. However, the loop goes
through a loop initialization process (LIP) whenever you add or remove a host or
device from the loop. LIP disrupts any I/O operations currently in progress. For this
reason, IBM recommends that you only have a single host and a single ESS on any
loop.
Note: The ESS does not support FC-AL topology on adapters that are configured
for FICON protocol.
Figure 7. Arbitrated loop topology. In the figure, 1 is the host system and 2
is the ESS.
Note: If you have not configured the port, only the topologies for point-to-point and
arbitrated loop are supported. If you have configured the port, and you want
to change the topology, you must first unconfigure the port. After you
unconfigure the port, you can change the topology.
Fibre-channel cables and adapter types
For detailed information about fibre-channel cables and adapter types, see IBM
TotalStorage Enterprise Storage Server Introduction and Planning Guide.
Fibre-channel node-to-node distances
For detailed information about fibre-channel node-to-node distances, see IBM
TotalStorage Enterprise Storage Server Introduction and Planning Guide.
LUN affinity
For fibre-channel attachment, LUNs have an affinity to the host’s fibre-channel
adapter through the worldwide port name (WWPN) for the host adapter. In a
switched fabric configuration, a single fibre-channel host could have physical access
to multiple fibre-channel ports on the ESS. In this case, you can configure the ESS
to allow the host to use either:
v All physically accessible fibre-channel ports on the ESS
v Only a subset of the physically accessible fibre-channel ports on the ESS
In either case, the set of LUNs that are accessed by the fibre-channel host are the
same on each of the ESS ports that can be used by that host.
Targets and LUNs
For fibre-channel attachment, each fibre-channel host adapter can architecturally
attach up to 2⁶⁴ LUNs. The ESS, however, supports a maximum of 4096 LUNs,
divided among a maximum of 16 logical subsystems, each with up to 256 LUNs. If the software
in the fibre-channel host supports the SCSI command Report LUNs, you can
configure all 4096 LUNs on the ESS to be accessible by that host. Otherwise, you
can configure no more than 256 of the LUNs in the ESS to be accessible by that
host.
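The 4096-LUN maximum above decomposes as follows (the variable names are illustrative):

```shell
# Fibre-channel LUN arithmetic from the section above.
logical_subsystems=16
luns_per_subsystem=256
max_luns=$((logical_subsystems * luns_per_subsystem))
echo "$max_luns"   # prints 4096; hosts without Report LUNs support can access at most 256 of these
```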
FlashCopy and PPRC restrictions for open-systems hosts
When you copy a source volume to a target volume with FlashCopy or PPRC and
you require concurrent read/write access of both volumes, the source and target
volumes should be on different host systems. A copy operation with the target
volume and the source on the same host system creates a target volume with the
same identification as the source volume. The host system sees two identical
volumes.
When the copy operation creates the same identification for the target volume as
for the source volume, you cannot distinguish one from the other. Therefore, you
might not be able to access the original data.
Note: You cannot create a host target on a single Novell NetWare host system. For
Novell NetWare, the target volume must be attached to a second Novell
NetWare host system.
The target volume and the source volume can be on the same host system for a
PPRC or FlashCopy operation only under the following conditions:
v For AIX, when the host system is using a logical volume manager (LVM) with
recreatevg command support.
v For HP, when the host system is using LVM with the vgchgid command.
v For AIX and Sun when the host is not using an LVM.
v For any host system, when the host system can distinguish between a source
and a target volume that have the same identification.
LUN access modes
The following sections describe the LUN access modes for fibre-channel.
Fibre-channel access modes
The fibre-channel architecture allows any fibre-channel initiator to access any
fibre-channel device, without access restrictions. However, in some environments
this kind of flexibility can represent a security exposure. Therefore, the Enterprise
Storage Server allows you to restrict this type of access when IBM sets the access
mode for your ESS during initial configuration. There are two types of LUN access
modes:
1. Access-any mode
The access-any mode allows all fibre-channel-attached host systems that do
not have an access profile to access all non-AS/400 and non-iSeries
open-systems logical volumes that you have defined in the ESS.
Note: If you connect the ESS to more than one host system with multiple
platforms and use the access-any mode without setting up an access
profile for the hosts, the data in the LUN used by one open-systems host
might be inadvertently corrupted by a second open-systems host. Certain
host operating systems insist on overwriting specific LUN tracks during
the LUN discovery phase of the operating system start process.
2. Access-restricted mode
The access-restricted mode prevents all fibre-channel-attached host systems
that do not have an access profile from accessing any volumes that you have
defined in the ESS. This is the default mode.
Your IBM service support representative (SSR) can change the logical unit number
(LUN) access mode. However, changing the access mode is a disruptive process,
and requires that you shut down and restart both clusters of the ESS.
Access profiles
Any fibre-channel-attached host system that has an access profile can access only
those volumes that are defined in the profile. Depending on the capability of the
particular host system, an access profile can contain up to 256 or up to 4096
volumes.
The setup of an access profile is transparent to you when you use the ESS
Specialist to configure the hosts and volumes in the ESS. Configuration actions that
affect the access profile are as follows:
v When you define a new fibre-channel-attached host system in the ESS Specialist
by specifying its worldwide port name (WWPN) using the Modify Host Systems
panel, the access profile for that host system is automatically created. Initially the
profile is empty. That is, it contains no volumes. In this state, the host cannot
access any logical volumes that are already defined in the ESS.
v When you add new logical volumes to the ESS using the Add Fixed Block
Volumes panel, the new volumes are assigned to the host. The new volumes are
created and automatically added to the access profile of the selected host.
v When you assign volumes to fibre-channel-attached hosts using the Modify
Volume Assignments panel, the selected volumes are automatically added to the
access profile of the selected host.
v When you remove a fibre-channel-attached host system from the ESS Specialist
using the Modify Host Systems panel, you delete the host and its access profile.
The anonymous host
When you run the ESS in access-any mode, the ESS Specialist displays a
dynamically created pseudo-host called anonymous. This is not a real host system
that is connected to the storage server. It represents all fibre-channel-attached
host systems that are connected to the ESS but do not have an access profile
defined. This is a visual reminder that certain logical volumes defined in the
ESS can be accessed by hosts that have not been specifically identified to the ESS.
Fibre-channel storage area networks (SANs)
A SAN is a specialized, high-speed network that attaches servers and storage
devices. A SAN is also called the network behind the servers. With a SAN, you can
perform an any-to-any connection across the network using interconnect elements
such as routers, gateways, hubs, and switches. With a SAN, you can eliminate the
dedicated connection between a server and storage, and the concept that the
server effectively owns and manages the storage devices.
The SAN also removes the restriction on the amount of data that a server can
access, which is otherwise limited by the number of storage devices that can be
attached to the individual server. Instead, a SAN introduces the flexibility of
networking to
enable one server or many heterogeneous servers to share a common storage
utility. This might comprise many storage devices, including disk, tape, and optical
storage. You can locate the storage utility far from the servers that use it.
Think of ESCON as the first real SAN. It provides connectivity that is commonly
found in SANs. However, it is restricted to ESCON hosts and devices.
Fibre-channel SANs, however, provide the capability to interconnect open systems
and storage in the same network as S/390 and zSeries host systems and storage.
This is possible because you can map the protocols for attaching open systems and
S/390 and zSeries host systems to the FC-4 layer of the fibre-channel architecture.
Chapter 2. Attaching to a Compaq host
This chapter describes the host system requirements and provides the procedures
to attach an ESS to a Compaq AlphaServer with SCSI adapters or fibre-channel
adapters.
For information about the Compaq AlphaServer models that you can attach to the
ESS, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Attaching with SCSI adapters
This section describes how to attach an ESS to a Compaq host system with SCSI
adapters. For procedures on how to attach an ESS to a Compaq host system with
fibre-channel adapters, see “Attaching with fibre-channel adapters” on page 27.
Attachment requirements
This section lists the requirements to attach the ESS to your host system.
v Check the logical unit number limitations for your host system. See Table 6 on
page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. An IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You assign the SCSI hosts to the SCSI ports on the ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. You configure the host system for the ESS. Use the instructions in your host
system publications.
Note: The IBM Subsystem Device Driver does not support the Compaq open
system in a clustering environment.
Installing and configuring the Compaq Tru64 UNIX Version 4.0x host
system
Use the following procedures to install and configure the ESS with hosts running
Compaq Tru64 UNIX Version 4.0x.
Console device check
Use the following procedure to perform a console device check.
1. Push the halt button to turn on the AlphaServer.
The system performs self-test diagnostics and responds with the console
prompt >>>.
2. Type show device at the >>> prompt to list the devices available to the
AlphaServer.
The system responds with a list of controllers and disks that are connected to
the system. In the description field on the right of the screen, you should see a
list of all devices assigned by the ESS. Disk devices begin with the letters “dk”.
If you do not see a list of devices, verify the SCSI connections, connectors, and
terminators on the bus. If you still do not see a list of devices, check the ESS to
ensure that the ESS is operating correctly.
3. When the list of the devices is displayed on your screen, type boot to
restart the system.
Operating system device recognition
After the system has restarted, perform the following steps to verify that UNIX
recognizes the disks:
1. Open two new terminal windows.
2. Type uerf -R 300|more on the command line in each of the windows.
A list of device names that begin with the letters “rz” is displayed in each
window. For example, device names should look like the following:
rz28, rz29, rzb28, and rzb29
3. Compare the lists to determine which ESS devices you want to add to the
system.
Device special files
If you install a serial storage architecture (SSA) device after the initial operating
system installation, you must make the device special files that create the character
devices needed for file systems. Perform the following steps:
1. Type: # cd /dev
2. Type: # ./MAKEDEV rzxx, where xx is the number portion of the device name.
For each new drive that you installed in the SSA device, type # ./MAKEDEV rzxx,
where xx is the number portion of the device name.
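The per-drive MAKEDEV step can be scripted as a loop. This is a sketch only: the device numbers 28 and 29 are hypothetical examples, and the loop prints the commands rather than running them, so you can review them before executing from /dev on a real Tru64 host.

```shell
# Hypothetical device numbers; replace them with the numbers from your listing.
for n in 28 29; do
  echo "./MAKEDEV rz$n"   # on a real Tru64 host, run the printed command from /dev
done
```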
Initializing disk device
After the list of devices has been determined, you must label the disk volume sizes.
Perform the following steps to label the disks:
1. Write the new label by typing: # disklabel -rw rz28 ESS
2. Verify the label by typing: # disklabel rz28
The disklabel rz28 command shows the new partition layouts on the Compaq
Tru64 host and automatically detects the LUNs that are provided by the ESS.
Configuring AdvFS
Before you create an AdvFS file system, you must design a structure by assigning a
file domain and the file sets. Perform the following steps to create an AdvFS file
system with file sets:
1. Type: # cd /
2. Type: # mkfdmn -rw /dev/rzXc vol1_dom
3. Type: # mkfset vol1_dom vol1
4. Type: # mkdir /vol1 # mount vol1_dom#vol1 /vol1
To display all mounted devices, type: df -k
20
ESS Host Systems Attachment Guide
Configuring devices to mount automatically
To enable an AdvFS file system to mount automatically, add an entry for it to the
/etc/fstab file, which the mount command reads during startup. Figure 8 shows an
example of a modified /etc/fstab file.
In Figure 8, the lines that are shown in bold type are the lines that were entered
since the initial operating system installation.
# root_domain#root / advfs rw,userquota,groupquota 0 0
/proc /proc procfs rw 0 0
usr_domain#usr /usr advfs rw,userquota,groupquota 0 0
/dev/rz8b swap1 ufs sw 0 2
vol1_dom#vol1 /vol1 advfs rw,userquota,groupquota 0 2
vol2_dom#vol1 /vol2 advfs rw,userquota,groupquota 0 2
vol3_dom#vol1 /vol3 advfs rw,userquota,groupquota 0 2
vol4_dom#vol1 /vol4 advfs rw,userquota,groupquota 0 2
vol5_dom#vol1 /vol5 advfs rw,userquota,groupquota 0 2
vol6_dom#vol1 /vol6 advfs rw,userquota,groupquota 0 2
vol7_dom#vol1 /vol7 advfs rw,userquota,groupquota 0 2
vol8_dom#vol1 /vol8 advfs rw,userquota,groupquota 0 2
vol9_dom#vol1 /vol9 advfs rw,userquota,groupquota 0 2
vol10_dom#vol1 /vol10 advfs rw,userquota,groupquota 0 2
Figure 8. Example of a modified /etc/fstab file
When the system starts, it mounts all the volumes that you created in “Configuring
AdvFS” on page 20.
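The fstab entries above follow the AdvFS form domain#fileset. A minimal sketch of how the six fields of one such entry break down, using a line taken from Figure 8 (pure string handling, so it runs anywhere):

```shell
# Split one AdvFS /etc/fstab entry into its six fields:
# device (domain#fileset), mount point, type, options, dump freq, fsck pass.
entry="vol1_dom#vol1 /vol1 advfs rw,userquota,groupquota 0 2"
set -- $entry
echo "device=$1 mountpoint=$2 type=$3 fsckpass=$6"
```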
Installing and Configuring Compaq Tru64 UNIX Version 5.x
Use the following procedures to install and configure the ESS disk drives on a host
system that runs Compaq Tru64 UNIX Version 5.x.
Attachment considerations
See Table 7 for the maximum number of adapters you can have for an AlphaServer.

Table 7. Maximum number of adapters you can use for an AlphaServer

AlphaServer name        Maximum number of adapters
800                     2
1200                    4
2100                    4
4000, 4000a             4
4100                    4
8200, 8400              8
DS10, DS20, DS20E       2
ES40                    4
GS60, GS60E, GS140      8
Ensure that the console firmware level is v6.0-x or later. Ensure that you have the
following host bus adapter and firmware:
v KZPBA-CB (firmware rev. 5.57 or later)
Tru64 patch 399.00 Security (SSRT0700U) must be installed.
Chapter 2. Attaching to a Compaq host
21
Installing the KZPBA-CB adapter card
The following procedures describe how to install the KZPBA-CB adapter card.
1. Shut down the Compaq AlphaServer host system.
2. Install the KZPBA-CB host bus adapter.
3. Restart the host (non-clustered configurations) or each cluster member
(clustered configurations).
4. Bring each host system to a halt condition at the console level.
5. On each AlphaServer console, execute a show config command to confirm that
you installed each adapter properly.
Figure 9 shows an example of what displays when you type show config. Note
the isp number of each adapter. Update the adapter firmware if required.
P00>>>show config
Name              Type      Ext       Rev   Mnemonic
TLSB
0++ KN7CH-AB      8025      0         0000  kn7ch-ab0
7+  MS7CC         5000      0       313043  ms7cc0
8+  KFTHA         2000      0         0D03  kftha0
C0 PCI connected to kftha0 pci0
3+  DEC PCI MC    181011    0         0022  mc0
4+  QLogic ISP1040B 10201077 0        0005  isp0
5+  DE600-AA      12298086  B1440E11  0008  ei0
6+  DE600-AA      12298086  B1440E11  0008  ei1
7+  QLogic ISP1040B 10201077 0        0005  isp1
8+  QLogic ISP1040B 10201077 0        0005  isp2
9+  KGPSA-C       F80010DF  F80010DF  0002  kgpsa0
A+  KGPSA-B       F70010DF  F70010DF  0004  kgpsa1
B+  QLogic ISP1040B 10201077 0        0005  isp3
P00>>>
Figure 9. Example of what is displayed when you type show config
6. Type show device and note the Bus IDs shown for each isp.
See Figure 10 on page 23 for an example of what displays when you type show
device.
P00>>>show device
polling for units on isp0, slot 4, bus 0, hose0...
pka.6.0.4.0 pka term on Bus ID 6 5.57
dka400.4.0.4.0 DKA400 IBM 2105F20 4.32
dka401.4.0.4.0 DKA401 IBM 2105F20 4.32
dka402.4.0.4.0 DKA402 IBM 2105F20 4.32
dka500.5.0.4.0 DKA500 IBM 2105F20 4.32
dka501.5.0.4.0 DKA501 IBM 2105F20 4.32
dka502.5.0.4.0 DKA502 IBM 2105F20 4.32
...
polling for units on isp3, slot 11, bus 0, hose0...
pkd.6.0.11.0 pkd term on Bus ID 6 5.57
dkd400.4.0.11.0 DKD400 IBM 2105F20 4.32
dkd401.4.0.11.0 DKD401 IBM 2105F20 4.32
dkd402.4.0.11.0 DKD402 IBM 2105F20 4.32
dkd500.5.0.11.0 DKD500 IBM 2105F20 4.32
dkd501.5.0.11.0 DKD501 IBM 2105F20 4.32
dkd502.5.0.11.0 DKD502 IBM 2105F20 4.32
P00>>>
Figure 10. Example of what is displayed when you type show device
7. Set the KZPBA-CB SCSI ID so it does not conflict with any other SCSI device
on the bus. Set the isp* or pk* console variables as required by the server
model.
Adding or modifying AlphaServer connections
To add, remove or modify the AlphaServer connections, see the IBM TotalStorage
ESS Web Interface User’s Guide.
Configuring host adapter ports
To configure the ESS host adapter ports for Tru64 SCSI connections, select the
configure host adapter ports button on the ESS Specialist open system storage
screen. One by one, select and configure the relevant SCSI host adapters. Edit the
default Compaq Alpha SCSI bus configuration and change the Maximum Number of
LUNs parameter to 15. Save the configuration with a new name such as
Compaq_Alpha_16. Set each subsequent host adapter to this new SCSI bus
configuration type.
Adding and assigning volumes
To set up disk groups, create volumes on the arrays and assign them to the Tru64
connections, see the IBM TotalStorage ESS Web Interface User’s Guide.
For initial installations in non-clustered configurations, you can use the ESS
volumes for the following parameters:
v /
v /usr
v /var
v boot partition
v quorum
v swap
For better performance, place swap on local storage.
With the ESS LIC, IBM recommends that you set up a minimum size dummy
volume to be seen as LUN 0 during initial installations. You can use the LUN as the
CCL (command control LUN) or pass-through LUN. This LUN passes commands to
the ESS controller. Future versions of the ESS LIC will allocate this LUN
automatically. IBM recommends that the first volume on the ESS be a minimum
size volume. The host should not use the volume for data storage or any other
purpose. Setting up this LUN 0 will minimize difficulties during future LIC migrations.
Each ESS LSS will assign the first volume as LUN 0. Therefore, a typical
configuration may have multiple LUN 0s. It is advisable to make each LUN 0 a
minimum size and not use it for data storage.
Confirming storage connectivity
The following procedure describes how to confirm storage connectivity:
1. Restart the host (non-clustered configurations) or each cluster member
(clustered configurations).
2. Bring each host system to a halt condition at the console level.
Setting an ESS volume as a boot device
Perform the following steps to set an ESS volume as a boot device.
1. Determine which ESS volume you want to use as a boot device for each host
by decoding the serial number.
2. Assign one of the paths to the bootdef_dev AlphaServer console variable.
3. For initial installations, install the operating system and clustering software and
register the operating system and clustering licenses.
All cluster member boot partitions must exist on shared external storage.
Non-cluster boot partitions might also exist on external storage.
Verifying the attachment of the ESS volumes
To verify the attachment of the ESS volumes, use the hwmgr command. See
Figure 11 for an example of the commands you can use to verify the attachment of
the ESS volumes.
# hwmgr -view dev -cat disk
HWID: Device Name Mfg Model Location
------------------------------------------------------------------------------
54: /dev/disk/floppy0c 3.55in floppy fdi0-unit-0
60: /dev/disk/dsk1c DEC RZ2DD-LS (C) DEC bus-2-targ-0-lun-0
63: /dev/disk/cdrom0c COMPAQ CDR-8435 bus-5-targ-0-lun-0
66: /dev/disk/dsk5c IBM 2105F20 bus-0-targ-253-lun-0
67: /dev/disk/dsk6c IBM 2105F20 bus-0-targ-253-lun-1
68: /dev/disk/dsk7c IBM 2105F20 bus-0-targ-253-lun-2
:
:
# hwmgr -get attributes -id 66
66:
name = SCSI-WWID:01000010:6000-1fe1-0000-2b10-0009-9010-0323-0046
category = disk
sub_category = generic
architecture = SCSI
:
:
Figure 11. Example of what is displayed when you use the hwmgr command to verify
attachment
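The serial_number and name attributes reported by hwmgr embed the volume WWID. A minimal sketch of extracting the bare WWID from one such attribute line, copied from the output above (hwmgr itself is only available on Tru64):

```shell
# Strip everything up to and including "SCSI-WWID:" to leave the bare WWID.
line='name = SCSI-WWID:01000010:6000-1fe1-0000-2b10-0009-9010-0323-0046'
echo "$line" | sed 's/.*SCSI-WWID://'
```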
Figure 12 shows an example of a Korn shell script, called essvol, that displays
summary information for all the ESS volumes that are attached.
echo Extracting ESS volume information...
for ID in `hwmgr -view dev -cat disk | grep 2105F20 | awk '{ print $1 }'`
do echo; echo ESS vol, H/W ID $ID
hwmgr -get attrib -id $ID | awk '/phys_loc|dev_base|capacity|serial/'
done
Figure 12. Example of a Korn shell script to display a summary of ESS volumes
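The grep/awk pipeline at the heart of essvol can be tried against captured hwmgr output. A sketch using lines taken from Figure 11, so it runs without hwmgr being present:

```shell
# Extract the hardware IDs of the 2105F20 (ESS) disks from sample output.
sample='60: /dev/disk/dsk1c DEC RZ2DD-LS (C) DEC bus-2-targ-0-lun-0
66: /dev/disk/dsk5c IBM 2105F20 bus-0-targ-253-lun-0
67: /dev/disk/dsk6c IBM 2105F20 bus-0-targ-253-lun-1'
echo "$sample" | grep 2105F20 | awk '{ print $1 }'
```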
See Figure 13 for an example of what displays when you execute the essvol Korn
shell script.
# ./essvol |more
Extracting ESS volume information...ESS vol, H/W ID 38:
phys_location = bus-2-targ-0-lun-0
dev_base_name = dsk3
capacity = 5859392
serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
ESS vol, H/W ID 39:
phys_location = bus-2-targ-0-lun-1
dev_base_name = dsk4
capacity = 5859392
serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2831-5660
ESS vol, H/W ID 40:
phys_location = bus-2-targ-0-lun-2
dev_base_name = dsk5
capacity = 5859392
serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2841-5660
#
Figure 13. Example of what is displayed when you execute the Korn shell script
Note: ESS volumes 282, 283, and 284 are displayed as LUNs 0, 1, and 2,
respectively. You can access the LUNs in Tru64 UNIX by using the following
special device files:
v /dev/rdisk/dsk3
v /dev/rdisk/dsk4
v /dev/rdisk/dsk5
Setting up the Tru64 UNIX device parameter database
Perform the following steps to set up the Tru64 UNIX device parameter database.
Note: For a clustered configuration, you must perform these steps for each
member in the cluster.
1. Quiesce the storage.
2. Place the system in single-user mode.
3. Log in as root.
4. Change the directory to /etc.
5. Edit the ddr.dbase file to include the lines shown in Figure 14 as an entry in the
disks subsection.
SCSIDEVICE
#
# Values for the IBM ESS 2105
#
Type = disk
Name = "IBM" "2105F20"
#
PARAMETERS:
TypeSubClass = hard_disk, raid
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
PwrMgmt_Capable = false
TagQueueDepth = 20
ReadyTimeSeconds = 180
CMD_WriteVerify = supported
InquiryLength = 255
RequestSenseLength = 255
Figure 14. Example of the ddr.dbase file
6. Type ddr_config -c to compile the ddr.dbase file.
7. Type ddr_config -s disk "IBM" "2105F20" to confirm the values in the
ddr.dbase file.
Setting the kernel SCSI parameters
Perform the following steps to set the kernel SCSI parameters:
Note: You must rebuild the kernel after you perform these steps.
1. Quiesce the storage.
2. Place the system in single-user mode.
3. Log in as root.
4. Change the directory to /sys/data.
5. Edit the camdata.c file by changing the non-read/write command timeout value
in the changeable disk driver timeout section. Change it from 10 seconds to 60
seconds. See Figure 15 for an example of how to edit the camdata.c file.
u_long cdisk_to_def = 10; /* 10 seconds */
u_long cdisk_to_def = 60; /* 60 seconds */
Figure 15. Example of how to change the timeout section of the camdata.c file from 10 to 60
seconds
6. Type doconfig -c HOSTNAME to rebuild the kernel, where HOSTNAME is the
hostname of the system you are modifying.
7. Restart the host system.
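The edit in Figure 15 can also be scripted. A hypothetical sketch that applies the same change with sed, shown here against the line itself rather than the real /sys/data/camdata.c:

```shell
# Change the non-read/write command timeout from 10 to 60 seconds.
line='u_long cdisk_to_def = 10; /* 10 seconds */'
echo "$line" | sed -e 's/= 10;/= 60;/' -e 's/10 seconds/60 seconds/'
```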
Configuring the storage
To partition and prepare ESS LUNs and create and mount file systems, use the
standard Tru64 storage configuration utilities.
Figure 16 shows an example of commands you can use to configure storage.
# disklabel -wr /dev/rdisk/dsk6c
# mkfdmn /dev/disk/dsk6c adomain
# mkfset adomain afs
# mkdir /fs
# mount -t advfs adomain#afs /fs
Figure 16. Example of how to configure storage
Attaching with fibre-channel adapters
This section describes the host system requirements and provides the procedure to
attach an ESS to a Compaq AlphaServer with fibre-channel adapters. This section
also describes how to attach a clustered Compaq AlphaServer Tru64 host system
that runs level 5.0a or 5.1 of the operating system with fibre-channel switched
(FC-SW) or direct connect (FC-AL) protocols on optical fiber media.
Note: You do not need the IBM Subsystem Device Driver because Tru64 UNIX
manages multipathing.
For information about adapters you can use to attach the ESS to the Compaq host
and the Compaq AlphaServer models, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Attachment requirements
This section lists the requirements to attach the ESS to your host system.
v Ensure that you have all of the items listed in the equipment list in IBM
TotalStorage Enterprise Storage Server Introduction and Planning Guide.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v Check the logical unit number (LUN) limitations for your host system. See Table 6
on page 11.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an IBM ESS.
1. An IBM SSR installs the IBM Enterprise Storage Server by using the procedures
in the IBM Enterprise Storage Server Service Guide.
2. Either you or an IBM SSR assigns the fibre-channel hosts to the fibre-channel
ports on the ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. Either you or an IBM SSR configures the host system for the ESS. Use the
instructions in your host system publications.
Attachment considerations
See Table 8 for the maximum number of adapters you can have for an AlphaServer.

Table 8. Maximum number of adapters you can use for an AlphaServer

AlphaServer name        Maximum number of adapters
800                     2
1200                    4
2100                    4
4000, 4000a             4
4100                    4
8200, 8400              8
DS10, DS20, DS20E       2
ES40                    4
GS60, GS60E, GS140      8
Ensure that the console firmware level is v6.0-x or later. Ensure that you have the
following host bus adapters and firmware:
v KGPSA-CA (firmware rev. 3.81A4 [2.01A0] or higher)
v KGPSA-BC (firmware rev. 3.01 [1.31] or higher)
Tru64 patch 399.00 Security (SSR50700U) must be installed.
Support for the AlphaServer console
Support for AlphaServer console recognition of ESS LUNs with fibre channel is
available with the current version of the ESS licensed internal code (LIC). To get
the ESS LIC level, type telnet xxxxx, where xxxxx is the ESS cluster name. The
minimum ESS microcode levels are:
v Operating system level 4.3.2.15
v Code EC 1.3.3.27
v sbld 0622
Figure 17 on page 29 shows an example of the output from the telnet command.
telnet xxxxx (where xxxxx is the cluster name)
2105 Enterprise Storage Server
Model F20 SN 75-99999 Cluster Bay 1
OS Level 4.3.2.15
Code EC 1.3.3.27
EC Installed on: Jun 27 2001
sbld0622
SEA.rte level = 2.6.402.592
SEA.ras level = 2.6.402.592
Licensed Internal Code - Property of IBM.
2105 Licensed Internal Code
(C) IBM Corporation 1997, 2001. All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
Restricted by GSA ADP Schedule Contract with IBM Corporation.
Login:
Figure 17. Confirming the ESS licensed internal code on a Compaq AlphaServer
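When checking several clusters, the levels can be pulled out of a captured banner. A minimal sketch against two lines of the Figure 17 output (pure text processing, so the ESS itself is not required):

```shell
# Print the OS level and code EC from a captured ESS telnet banner.
banner='OS Level 4.3.2.15
Code EC 1.3.3.27'
echo "$banner" | awk '/OS Level/ { print $3 } /Code EC/ { print $3 }'
```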
Supported switches
Cascaded switches are supported in configurations up to a maximum of 8 switches
with a maximum of 3 inter-switch hops for any path. IBM recommends two hops for
normal operation, with the third hop reserved for backup paths.
IBM supports the following switches:
v IBM 2109 Model S08 and S16
v Corresponding Brocade and Compaq switches
– FW 2.1.7 or later
– FC-SW mode
Installing the KGPSA-CA or KGPSA-BC adapter card
The following procedures describe how to install the KGPSA-CA or KGPSA-BC
adapter card.
1. Shut down the Compaq AlphaServer host system.
2. Install the KGPSA-CA or KGPSA-BC host bus adapters.
3. Restart the host (non-clustered configurations) or each cluster member
(clustered configurations).
4. Bring each host system to a halt condition at the console level.
5. If required by the host, type set mode diag at the Compaq AlphaServer console
to place the console in diagnostic mode.
6. Type wwidmgr -show adapter to confirm that you installed each adapter properly.
7. If necessary, update the adapter firmware.
See Figure 18 on page 30 for an example of what displays when you type set mode
diag and wwidmgr -show adapter.
P00>>>set mode diag
Console is in diagnostic mode
P00>>>wwidmgr -show adapter
polling for units on kgpsa0, slot 9, bus 0, hose0...
kgpsaa0.0.0.9.0 PGA0 WWN 2000-0000-c922-69bf
polling for units on kgpsa1, slot 10, bus 0, hose0...
kgpsab0.0.0.10.0 PGB0 WWN 2000-0000-c921-df4b
item adapter WWN Cur. Topo Next Topo
[ 0] kgpsab0.0.0.10.0 2000-0000-c921-df4b FABRIC FABRIC
[ 1] kgpsaa0.0.0.9.0 2000-0000-c922-69bf FABRIC FABRIC
[9999] All of the above.
P00>>>
Figure 18. Example of what is displayed when you type set mode diag and wwidmgr -show adapter
Figure 18 shows the worldwide node name (WWNN); it is identified as the
“WWN” for each adapter. You need the worldwide port name (WWPN) to configure
the ESS host attachment. To determine the worldwide port name (WWPN) for the
KGPSA adapters, replace the leading “2” of the WWNN with a “1”.
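The replace-the-leading-2 rule can be expressed directly in the shell. A sketch using the WWNN of kgpsa0 from Figure 18:

```shell
# Derive the KGPSA WWPN from the WWNN by replacing the leading "2" with "1".
wwnn="2000-0000-c922-69bf"
wwpn="1${wwnn#2}"    # strip the leading 2, prepend 1
echo "$wwpn"
```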
Adding or modifying AlphaServer connections
To add, remove or modify the AlphaServer connections, see the IBM TotalStorage
ESS Web Interface User’s Guide. When you add a connection, it is necessary to
specify the worldwide port name of the host connection. See “Appendix A. Locating
the worldwide port name (WWPN)” on page 153 for procedures on how to locate
the WWPN for each KGPSA adapter card.
Configuring host adapter ports
To configure the host adapter ports, see the IBM TotalStorage ESS Web Interface
User’s Guide.
Adding and assigning volumes
To set up disk groups, create volumes on the arrays and assign them to the Tru64
connections, see the IBM TotalStorage ESS Web Interface User’s Guide.
For initial installations in non-clustered configurations, you can use the ESS
volumes for the following parameters:
v /
v /usr
v /var
v boot partition
v quorum
v swap
For better performance, place swap on local storage.
With the ESS LIC, IBM recommends that you set up a minimum size dummy
volume to be seen as LUN 0 during initial installations. You can use the LUN as the
CCL (command control LUN) or pass-through LUN. This LUN passes commands to
the ESS controller. Future versions of the ESS LIC will allocate this LUN
automatically. IBM recommends that the first volume on the ESS be a minimum
size volume. The host should not use the volume for data storage or any other
purpose. Setting up this LUN 0 will minimize difficulties during future LIC migrations.
Confirming switch connectivity
To confirm switch connectivity:
1. Telnet to the switch and log in as an administrator.
2. Confirm that each host bus adapter has performed a fabric login to the switch.
3. Confirm that each ESS host adapter has performed a fabric login to the switch.
Figure 19 shows an example of what displays when you type the switchshow
command.
P00>>>switchshow
switchType: 3.2
switchState: Online
switchRole: Principal
switchDomain: 2
switchId: fffc02
switchWwn: 10:00:00:60:69:20:02:72
port 0: sw Online F-Port 21:00:00:e0:8b:02:2b:6e
port 1: sw Online F-Port 21:00:00:e0:8b:02:2d:6e
port 2: sw Online F-Port 10:00:00:00:c9:22:16:ab
port 3: sw Online F-Port 10:00:00:00:c9:20:eb:65
port 4: id No_Light
port 5: id No_Light
port 6: -- No_Module
port 7: -- No_Module
value =8 =0x8
GRETZKY: admin>
Figure 19. Example of what is displayed when you type the switchshow command
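To list just the WWPNs that completed a fabric login, the F-Port lines of the switchshow output can be filtered. A sketch against two lines taken from Figure 19:

```shell
# Print the WWPN (last field) of every port that shows an F-Port login.
sample='port 0: sw Online F-Port 21:00:00:e0:8b:02:2b:6e
port 4: id No_Light'
echo "$sample" | awk '/F-Port/ { print $NF }'
```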
Confirming storage connectivity
The following procedure describes how to confirm storage connectivity:
1. Restart the host (non-clustered configurations) or each cluster member
(clustered configurations).
2. Bring each host system to a halt condition at the console level.
3. Type set mode diag at the Compaq AlphaServer console (if required by the
host) to place the console in diagnostic mode.
4. Type wwidmgr -show adapter to confirm storage attachment.
Note: ESS with the supported LIC emulates the Compaq ESA12000 for
fibre-channel and RAID array 8000 for fibre-channel. Both of these systems
use the HSG80 Controller. For information about how ESS volumes are
presented to the AlphaServer host as HSG80 LUNs, see the following Web
site:
www.compaq.com/storageworks
Differences in storage architecture between the ESS and HSG80 require
different procedures for storage administration. See the procedures in
“Displaying the ESS volume” on page 32.
Displaying the ESS volume
The following procedure describes how to display the information about the ESS
volume. You can use this information to identify the volumes attached to an
AlphaServer.
1. Type set mode diag to put the console in diagnostic mode.
2. Type wwidmgr -show wwid
Figure 20 shows an example of information about the ESS volumes that you can
see at the AlphaServer console. This format is also used in Tru64 UNIX.
P00>>>set mode diag
Console is in diagnostic mode
P00>>>wwidmgr -show wwid
[0] UDID: -1 WWID:01000010:6000-1fe1-4942-4d20-0000-0000-28b1-5660 (ev:none)
[1] UDID: -1 WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2881-5660 (ev:none)
[2] UDID: -1 WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660 (ev:none)
P00>>>
Figure 20. Example of ESS volumes at the AlphaServer console
Note that the UDID for each volume appears as -1, signifying that the UDID is
undefined. With the supported ESS LIC, all UDIDs for ESS volumes are undefined.
The underscore in Figure 21 highlights the hex string that identifies an ESS volume
that is attached to an AlphaServer.
01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
Figure 21. Example of a hex string for an ESS volume on an AlphaServer console or Tru64
UNIX
The third and fourth quartets of the UDID number are always the value “4942-4d20”.
This is the string IBM in hex and represents an ESS volume.
The underscore in Figure 22 highlights an example of a hex string that identifies the
decimal volume number of the ESS volume. The first three characters of the
next-to-last quartet are the representation of the volume number. Figure 22 shows
that the ESS volume number is decimal 282.
01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
Figure 22. Example of a hex string that identifies the decimal volume number for an ESS
volume on an AlphaServer console or Tru64 UNIX
Figure 23 on page 33 shows a hex representation of the last 5 characters of the
ESS volume serial number.
32
ESS Host Systems Attachment Guide
01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
Figure 23. Example of hex representation of last 5 characters of an ESS volume serial
number on an AlphaServer console
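The decoding rules from Figures 21 through 23 can be combined in a few lines of shell. A sketch against the WWID used in those figures:

```shell
# Quartets 3-4 hold the "IBM" marker (4942-4d20); the first three characters
# of the next-to-last quartet hold the decimal volume number (282 here).
wwid="01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660"
marker=$(echo "$wwid" | cut -d- -f3,4)
volume=$(echo "$wwid" | cut -d- -f7 | cut -c1-3)
echo "marker=$marker volume=$volume"
```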
Setting an ESS volume as a boot device
Perform the following steps to set an ESS volume as a boot device.
1. Determine which ESS volume you want to use as a boot device for each host
by decoding the serial number.
Use the -item form of the wwidmgr -quickset command to assign a device unit
number to an ESS volume.
See Figure 24 for an example of how to assign a device unit number.
Note: You cannot use the -UDID form of the command because you cannot
assign the UDID to an ESS volume.
All cluster member boot partitions must exist on shared external storage.
Non-cluster boot partitions might also exist on external storage.
P00>>>wwidmgr -quickset -item 0 -unit 111
Disk assignment and reachability after next initialization:
6000-1fe1-4942-4d20-0000-0000-28b1-5660:
via adapter: via fc nport: connected:
dgb111.1001.0.10.0 kgpsab0.0.0.10.0 5005-0763-00c7-0f20 Yes
dga111.1001.0.9.0 kgpsaa0.0.0.9.0 5005-0763-00c7-0f20 Yes
P00>>>sho wwid?
wwid0 888 1 WWID:01000010:6000-1fe1-4942-4d20-00000000-28b1-5660
wwid1
wwid2
wwid3
P00>>>sho n?
N1 5005076300c70f20
N2
N3
N4
P00>>>
Figure 24. Example of what is displayed when you type wwidmgr -quickset, sho wwid and
sho n
The wwidn and nn values are defined automatically. There are multiple paths to
the ESS boot device.
2. Assign one of the paths to the bootdef_dev AlphaServer console variable.
3. For initial installations, install the operating system and clustering software and
register the operating system and clustering licenses.
Verifying the attachment of the ESS volumes
To verify the attachment of the ESS volumes, use the hwmgr command. See
Figure 25 for an example of the commands you can use to verify the attachment of
the ESS volumes.
# hwmgr -view dev -cat disk
HWID: Device Name Mfg Model Location
------------------------------------------------------------------------------
54: /dev/disk/floppy0c 3.55in floppy fdi0-unit-0
60: /dev/disk/dsk1c DEC RZ2DD-LS (C) DEC bus-2-targ-0-lun-0
63: /dev/disk/cdrom0c COMPAQ CDR-8435 bus-5-targ-0-lun-0
66: /dev/disk/dsk5c IBM 2105F20 bus-0-targ-253-lun-0
67: /dev/disk/dsk6c IBM 2105F20 bus-0-targ-253-lun-1
68: /dev/disk/dsk7c IBM 2105F20 bus-0-targ-253-lun-2
:
:
# hwmgr -get attributes -id 66
66:
name = SCSI-WWID:01000010:6000-1fe1-0000-2b10-0009-9010-0323-0046
category = disk
sub_category = generic
architecture = SCSI
:
:
Figure 25. Example of what is displayed when you use the hwmgr command to verify
attachment
Figure 26 shows an example of a Korn shell script, called essvol, that displays
summary information for all the ESS volumes that are attached.
echo Extracting ESS volume information...
for ID in `hwmgr -view dev -cat disk | grep 2105F20 | awk '{ print $1 }'`
do echo; echo ESS vol, H/W ID $ID
hwmgr -get attrib -id $ID | awk '/phys_loc|dev_base|capacity|serial/'
done
Figure 26. Example of a Korn shell script to display a summary of ESS volumes
See Figure 27 on page 35 for an example of what displays when you execute the
essvol Korn shell script.
# ./essvol |more
Extracting ESS volume information...ESS vol, H/W ID 38:
phys_location = bus-2-targ-0-lun-0
dev_base_name = dsk3
capacity = 5859392
serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
ESS vol, H/W ID 39:
phys_location = bus-2-targ-0-lun-1
dev_base_name = dsk4
capacity = 5859392
serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2831-5660
ESS vol, H/W ID 40:
phys_location = bus-2-targ-0-lun-2
dev_base_name = dsk5
capacity = 5859392
serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2841-5660
#
Figure 27. Example of what is displayed when you execute the Korn shell script
Note: ESS volumes 282, 283, and 284 are seen as LUNs 0, 1, and 2, respectively.
You can access the LUNs in Tru64 UNIX by using the following special
device files:
v /dev/rdisk/dsk3
v /dev/rdisk/dsk4
v /dev/rdisk/dsk5
Setting up the Tru64 UNIX device parameter database
Perform the following steps to set up the Tru64 UNIX device parameter database.
Note: For a clustered configuration, you must perform these steps for each
member in the cluster.
1. Quiesce the storage.
2. Place the system in single-user mode.
3. Log in as root.
4. Change the directory to /etc.
5. Edit the ddr.dbase file to include the lines shown in Figure 28 on page 36 as an
entry in the disks subsection.
SCSIDEVICE
#
# Values for the IBM ESS 2105
#
Type = disk
Name = "IBM" "2105F20"
#
PARAMETERS:
TypeSubClass = hard_disk, raid
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
PwrMgmt_Capable = false
TagQueueDepth = 20
ReadyTimeSeconds = 180
CMD_WriteVerify = supported
InquiryLength = 255
RequestSenseLength = 255
Figure 28. Example of the ddr.dbase file
6. Type ddr_config -c to compile the ddr.dbase file.
7. Type ddr_config -s disk "IBM" "2105F20" to confirm the values in the
ddr.dbase file.
Setting the kernel SCSI parameters
Perform the following steps to set the kernel SCSI parameters:
Note: You must rebuild the kernel after you perform these steps.
1. Quiesce the storage.
2. Place the system in single-user mode.
3. Log in as root.
4. Change the directory to /sys/data.
5. Edit the camdata.c file by changing the non-read/write command timeout value
in the changeable disk driver timeout section. Change it from 10 seconds to 60
seconds. See Figure 29 for an example of how to edit the camdata.c file.
u_long cdisk_to_def = 10; /* 10 seconds */
u_long cdisk_to_def = 60; /* 60 seconds */
Figure 29. Example of how to change the timeout section of the camdata.c file from 10 to 60
seconds
6. Type doconfig -c HOSTNAME to rebuild the kernel, where HOSTNAME is the
hostname of the system you are modifying.
7. Restart the host system.
Configuring the storage
To partition and prepare ESS LUNs and create and mount file systems, use the
standard Tru64 storage configuration utilities.
Figure 30 on page 37 shows an example of commands you can use to configure
storage.
# disklabel -wr /dev/rdisk/dsk6c
# mkfdmn /dev/disk/dsk6c adomain
# mkfset adomain afs
# mkdir /fs
# mount -t advfs adomain#afs /fs
Figure 30. Example of how to configure storage
Chapter 3. Attaching to a Hewlett-Packard 9000 host
This chapter describes the host system requirements and provides procedures to
attach an ESS to a Hewlett-Packard 9000 host with SCSI and fibre-channel
adapters.
Attaching with SCSI adapters
This section describes the procedures to attach a Hewlett-Packard 9000 host
system with SCSI adapters. For procedures about how to attach an ESS to
Hewlett-Packard 9000 host with fibre-channel adapters, see “Attaching with
fibre-channel adapters” on page 41.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Ensure that you have the installation script files. The script file is on the compact
disc that you receive with the ESS.
v Ensure that you have 1 MB minimum of hard disk space available to install the
2105inst script file.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
v Check the logical unit number limitations for your host system. See Table 6 on
page 11.
Note: You must have special SCSI cables to attach the ESS to a Hewlett-Packard
host that has peripheral component interconnect (PCI) adapters installed.
The SCSI cables have built-in terminators on the host end. See the following
Web site for information about the special SCSI cable to attach the ESS to a
Hewlett-Packard host that has PCI adapters installed:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You assign the SCSI hosts to the SCSI ports on the ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. You configure the host system for the ESS by using the instructions in your host
system publications.
Note: The IBM Subsystem Device Driver does not support the Hewlett-Packard
hosts in a clustering environment. To have failover protection on an open
system, the IBM Subsystem Device Driver requires a minimum of two
adapters. You can run the Subsystem Device Driver with one SCSI adapter,
but you have no failover protection. The maximum number of adapters
supported is 16 for a total of 32 SCSI ports. For the HP-UX operating system
11.0, the IBM Subsystem Device Driver supports 64-bit mode.
See the following web site for the most current information about the IBM
Subsystem Device Driver:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates
Installing the 2105 host install script file
This section provides the instructions to install the 2105 host install script file from a
compact disc. You must have superuser authority to complete the instructions.
Before installing the 2105 host install script file, connect the host system to the
ESS. See “General information about attaching to an open-systems host with SCSI
adapters” on page 7.
Install the 2105 host install script from a compact disc.
Notes:
1. You can only install and run the ESS set queue depth program (version
2.7.1.00) on the HP-UX operating system 10.01 or later.
2. Use the following formula to set the queue depth for an HP-UX N class or
HP-UX L class:
(queue depth) x (number of LUNs on an adapter) = 256
Note: This limits the number of LUNs on an adapter to 256, but HP-UX
supports up to 1024 LUNs.
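The formula above can be turned into a quick calculation. The following sketch is illustrative and not part of the guide (the function name is an assumption); it computes the queue depth to set for a given number of LUNs on an adapter:

```python
def hp_queue_depth(luns_per_adapter: int, budget: int = 256) -> int:
    """Return the per-LUN queue depth so that
    (queue depth) x (number of LUNs on an adapter) stays within 256."""
    if not 1 <= luns_per_adapter <= 256:
        raise ValueError("the formula supports 1 - 256 LUNs per adapter")
    return budget // luns_per_adapter

# 32 LUNs on one adapter gives a queue depth of 8.
print(hp_queue_depth(32))
```

For example, an adapter with 64 LUNs would use a queue depth of 4.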
Perform the following steps to install the 2105 host install script file from a compact
disc:
1. If you do not already have a directory called /SD_CDROM, type mkdir
/SD_CDROM to create a new directory.
2. Insert the compact disc into the CD-ROM drive.
3. Mount the drive as a file system.
a. Type: ioscan -fnkC disk
Look for the device name on the list with a name of compact disc.
b. Type: mount /dev/dsk/c_t_d_ /SD_CDROM
Replace /dev/dsk/c_t_d_ with the device special file that you found in step 3a.
4. Type: swinstall -s /SD_CDROM/hp-common
5. Restart your host system, then continue to step 6.
6. From the Software Selection window, click IBMis_tag.
7. From the Action menu click Mark for Install.
8. When you see the word Yes next to the IBMis_tag product, go to the Action
menu and click Install.
9. When the analysis completes with no errors (Status- Ready), click OK.
10. Click Yes in the Confirmation window to begin the installation.
A window opens notifying you that the installation is complete and that the
system needs to be restarted.
11. Click OK to continue.
12. Type swinstall -s /var/spool/sw/ibm2105.
Configuring the ESS for clustering
The following section describes how to configure an ESS for clustering on the
HP-UX 11.00 operating system by using MC/ServiceGuard.
Configuring MC/ServiceGuard on an HP-UX 11.00 with the ESS
This section describes how to configure a Hewlett-Packard host system for
clustering for HP-UX.
The steps to configure MC/ServiceGuard with the ESS are the same as the steps in
the Hewlett-Packard high availability documentation. You can find that
documentation at the following Web site:
www.docs.hp.com/hpux/ha/index.html
After you configure your host for normal operating system access, the ESS acts as
a normal disk device in the MC/ServiceGuard configuration. IBM recommends that
you create volume groups that contain the volumes using the Hewlett-Packard
logical volume manager. This method of disk management is more reliable, easier,
and more flexible than whole-disk management techniques.
Creating volume groups also allows you to implement PV-Links, Hewlett-Packard’s
built-in multipathing software, for highly available disks such as the ESS. To
establish PV-Links, perform the following steps:
1. Create the volume group, using the path to the volumes that you want as the
primary path to the data.
2. Extend the volume group with the path to the volumes that are intended as
alternate paths.
The logical volume manager reads the label on the disk and knows that it is an
alternate path to one of the volumes in the group. The logical volume manager
labels the volume.
For example, if a host has access to a volume on an ESS with the device
nodes c2t0d0 and c3t0d0, you can use the c2 path as the primary path and
create the volume group using only the c2t0d0 path.
3. Extend the volume group to include the c3t0d0 path. When you issue a
vgdisplay -v command on the volume group, the command lists c3t0d0 as an
alternate link to the data.
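The three steps above can be sketched with standard HP-UX LVM commands. The device paths, volume group name (vg01), and minor number below are illustrative assumptions; substitute the values for your system:

```shell
# Illustrative HP-UX commands; vg01 and the device paths are assumptions.
# 1. Create the volume group by using the primary path (c2t0d0).
pvcreate /dev/rdsk/c2t0d0
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000    # minor number must be unique per volume group
vgcreate /dev/vg01 /dev/dsk/c2t0d0
# 2. Extend the volume group with the alternate path (c3t0d0) to establish PV-Links.
vgextend /dev/vg01 /dev/dsk/c3t0d0
# 3. Verify that c3t0d0 is listed as an alternate link to the data.
vgdisplay -v /dev/vg01
```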
Attaching with fibre-channel adapters
This section describes the host system requirements and provides procedures to
attach an ESS to a Hewlett-Packard 9000 host system with fibre-channel adapters.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Ensure that you have the installation script file. The script file is on the compact
disc that you receive with the ESS.
v Ensure that you have 1 MB minimum of hard disk space available to install the
2105inst script file.
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. An IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. Either you or an IBM SSR defines the fibre-channel host system with the
worldwide port name identifiers. For the list of worldwide port names, see
“Appendix A. Locating the worldwide port name (WWPN)” on page 153.
3. Either you or an IBM SSR defines the fibre-port configuration if you did not do it
during the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you previously filled
out.
4. Either you or an IBM SSR configures the host system for the ESS by using the
instructions in your host system publications.
Note: The IBM Subsystem Device Driver 1.1.3 supports the Hewlett-Packard host
system in a clustering environment. To have failover protection on an open
system, the IBM Subsystem Device Driver requires a minimum of two
fibre-channel adapters. The maximum number of fibre-channel adapters
supported is 16 for a total of 16 fibre-channel ports.
Installing the 2105 host install script file
This section provides the instructions to install the 2105 host install script file from a
compact disc.
Before installing the 2105 host install script file, connect the host system to the
ESS. See “General information about attaching to an open-systems host with SCSI
adapters” on page 7.
Install the 2105 host install script from a compact disc. You must have superuser
authority to complete these instructions.
Notes:
1. You can only install and run the ESS set queue depth program (version
2.7.1.00) on HP-UX operating system 10.01 or later.
2. Use the following formula to set the queue depth for an HP-UX N class or
HP-UX L class:
(queue depth) x (number of LUNs on an adapter) = 256
Note: This limits the number of LUNs on an adapter to 256, but HP-UX
supports up to 1024 LUNs.
3. You must delete and re-create the Hewlett-Packard device types if you use a
SAN Data Gateway on an HP-UX host system to create the LUNs as
fibre-channel devices.
Perform the following steps to install the 2105 host install script from a compact
disc.
1. If you do not already have a directory called /SD_CDROM, type mkdir
/SD_CDROM to create a new directory.
2. Insert the compact disc into the CD-ROM drive.
3. Mount the drive as a file system.
a. Type: ioscan -fnkC disk
Look for the device name on the list with a name of the compact disc.
b. Type: mount /dev/dsk/c_t_d_ /SD_CDROM
Replace /dev/dsk/c_t_d_ with the device special file that you found in step 3a.
4. Type: swinstall -s /SD_CDROM/hp-common
5. Restart your host system, then continue to step 6.
6. From the Software Selection window, click IBMis_tag.
7. From the Action menu, click Mark for Install.
8. When you see the word Yes next to the IBMis_tag product, go to the Action
menu and click Install.
9. When the analysis completes with no errors (Status- Ready), click OK.
10. Click Yes in the Confirmation window to begin the installation.
A window opens, notifying you that the installation is complete and that the
system needs to be restarted.
11. Click OK to continue.
12. Type swinstall -s /var/spool/sw/ibm2105.
Configuring the ESS for clustering
This section describes how to configure a Hewlett-Packard host system for
clustering.
The steps to configure MC/ServiceGuard with the ESS are the same as the steps in
the Hewlett-Packard high-availability documentation. You can find that
documentation at the following Web site:
www.docs.hp.com/hpux/ha/index.html
After you configure your host for normal operating system access, the ESS acts as
a normal disk device in the MC/ServiceGuard configuration. IBM recommends that
you create volume groups that contain the volumes using the Hewlett-Packard
logical volume manager. This method of disk management is more reliable, easier,
and more flexible to manage than whole-disk management techniques.
Creating volume groups also allows you to implement PV-Links, Hewlett-Packard’s
built-in multipathing software, for highly available disks such as the ESS. To
establish PV-Links, perform the following steps:
1. Create the volume group, using the path to the volumes that you want as the
primary path to the data.
2. Extend the volume group with the path to the volumes that are intended as
alternate paths.
The logical volume manager reads the label on the disk and knows that it is an
alternate path to one of the volumes in the group. The logical volume manager
labels the volume.
For example, if a host has access to a volume on an ESS with the device
nodes c2t0d0 and c3t0d0, you can use the c2 path as the primary and create
the volume group using only the c2t0d0 path.
3. Extend the volume group to include the c3t0d0 path. When you issue a
vgdisplay -v command on the volume group, the command lists c3t0d0 as an
alternate link to the data.
Chapter 4. Attaching to an IBM AS/400 or iSeries host
This chapter describes the host system requirements. This chapter also provides
the procedures to attach an ESS to an IBM AS/400 or IBM iSeries host system with
SCSI and fibre-channel adapters.
Notes:
1. You cannot serially connect more than one AS/400 host system to the ESS.
2. You cannot interconnect more than one ESS attachment to a single port on the
host adapter.
3. Your IBM AS/400 host system supports the ESS as a peripheral device.
The ESS emulates an IBM 9337 subsystem when you attach it to an IBM AS/400
host system with the SCSI adapter. You must configure the ESS so that it appears
to be 1 - 8 logical units (LUNs). Because the IBM AS/400 host requires a separate
address for each LUN, configure the ESS to report a unique address for each
virtual drive defined to the IBM AS/400 host. See “9337 subsystem emulation” on
page 46 for more details about emulating the 9337 subsystem.
Note: You cannot use the IBM Subsystem Device Driver on the AS/400 host
system.
Attaching with SCSI adapters to an AS/400 host system
This section describes how to attach an ESS to an AS/400 host system with SCSI
adapters. For procedures about how to attach an ESS to an iSeries host system
with fibre-channel adapters, see “Attaching with fibre-channel adapters to the
iSeries host system” on page 48.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
1. Obtain the documents for the AS/400 host system.
2. See the following Web site for details about program temporary fixes (PTFs)
that you need to install on your AS/400 host system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
3. Check the LUN limitations for your host system. See Table 6 on page 11.
4. Contact your IBM service support representative to install and configure the IBM
ESS.
Attachment considerations
This section lists the attachment considerations for an AS/400 host system. See the
following Web site for a list of AS/400 and iSeries models to which you can attach
an ESS:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Notes:
1. You cannot specify a LUN size of 70.564 GB for a SCSI-3 attachment.
2. You can specify 1 - 8 LUNs for each SCSI-3 attachment to a specific ESS host
adapter port.
3. You can specify a LUN serial number of eight characters. For example, you can
specify a LUN serial number of the form 0L0PPNNN, where:
L      LUN number 0 - 7
PP     ESS host port number 1 - 32
NNN    Low-order three characters of the ESS unit serial number, or a unique
       three-character value entered using an ESS service panel menu option
4. SCSI-3 attached LUNs emulate the 9337 device type.
5. You can place AS/400 volumes in the ESS storage arrays according to the
selected host system attachment type.
For a SCSI-3 attachment, you must place the volumes in ESS RAID-5 storage
arrays that are common to a single ESS device adapter. You cannot spread the
volumes across arrays that are attached to multiple device adapters.
6. You cannot place AS/400 volumes in an ESS non-RAID storage array.
7. You cannot share an AS/400 volume with two SCSI-3 system attachments.
8. You can use different sizes for AS/400 volumes according to the AS/400 release
level and SCSI-3 attachment type. See Table 11 on page 48 for the supported
volume sizes.
The largest available ESS storage array capacity determines the maximum
volume size you can create.
9. The attachment type and the available ESS storage array capacity determine
the number of volumes that you can create.
You can create 1 - 8 volumes for a SCSI-3 attachment.
Recommended configurations for the AS/400
IBM recommends the following configurations:
v Install the feature code 6501 adapter card in the AS/400 system unit or in the I/O
towers.
v Do not use more than two 6501 adapter cards for each AS/400 tower.
9337 subsystem emulation
When attached to an AS/400 host system, the ESS emulates a 9337 subsystem.
The ESS, emulating a 9337, has one unique controller that has as many as eight
disk units (LUNs) attached.
A single feature code 6501, which contains two ports and is attached to an ESS,
supports two emulated 9337s. Feature code 6501 supports 16 LUNs, 8 per port,
and requires two ports with at least one adapter on the ESS. IBM recommends that
you spread the ESS ports across clusters and adapters.
For a real 9337, parity data occupies one-fourth of the capacity of each
odd-numbered physical drive installed. The total capacity used for parity data is
equivalent to one full physical drive, so four of the eight physical drives have less
capacity available to the AS/400 host system. On the ESS, the parity function is
performed internally, so all eight LUNs show their full capacity as available to the
AS/400.
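The parity arithmetic above can be checked with a short calculation (an illustrative sketch, not from the guide):

```python
# A real 9337: parity occupies one-fourth of each odd-numbered drive.
odd_numbered_drives = [1, 3, 5, 7]   # the drives that hold parity data
parity_per_drive = 0.25              # fraction of each parity drive consumed
parity_total = len(odd_numbered_drives) * parity_per_drive
print(parity_total)  # 1.0, the equivalent of one full physical drive
```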
A real 9337 reports each LUN serial number to the AS/400 host system as a
two-digit physical slot number. The real 9337 also reports the last five digits of the
serial number of the 9337 physical subsystem. The real 9337 appears on the
AS/400 host system as one controller with as many as eight LUNs.
The ESS reports each LUN to the AS/400 host system in the following sequence:
1. A zero (not shown on the AS/400 display)
2. The LUN number
3. A zero
4. A two-digit port number, followed by a three-digit storage facility identification
number
The ESS sets the storage facility identification number equal to the last three
characters of the array serial number. If your enterprise has installed multiple ESS
arrays in which the last three characters of the array serial number are equal, the
SSR can modify the value by using the service login.
An example of a storage facility number is: 01003789 for LUN 1 on port 3 in
storage facility 789. The AS/400 host system uses the last five digits as the
controller serial number. The ESS appears to be the same as a real 9337 to the
AS/400 host system. They both appear as one controller with up to eight LUNs
attached.
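The reporting sequence above can be expressed as a small formatting function. This sketch is illustrative (the function name is an assumption); it reproduces the worked example of 01003789 for LUN 1 on port 3 in storage facility 789:

```python
def scsi_9337_serial(lun: int, port: int, facility: str) -> str:
    """Build the serial number the ESS reports for a SCSI-attached LUN:
    a zero, the LUN number, a zero, a two-digit port number, and the
    three-digit storage facility identification number."""
    if not (0 <= lun <= 7 and 1 <= port <= 32 and len(facility) == 3):
        raise ValueError("lun 0 - 7, port 1 - 32, facility is 3 characters")
    return f"0{lun}0{port:02d}{facility}"

print(scsi_9337_serial(1, 3, "789"))  # -> 01003789
```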
Table 9 shows an example from the hardware service manager (HSM) logical
resources.
Table 9. Example from HSM logical resources for AS/400 host systems
Opt  Description      Type-Model  Status       Resource Name
     Storage IOP      6501-001    Operational  SI03
     Disk controller  9337-5A2    Operational  DC07
     Disk unit        9337-5AC    Operational  DD033
     Disk unit        9337-5AC    Operational  DD034
     Disk unit        9337-5AC    Operational  DD036
     Disk unit        9337-5AC    Operational  DD038
     Disk unit        9337-5AC    Operational  DD040
Table 10 shows an example of the capacity and status of the disk drives.
Table 10. Example of the capacity and status of disk drives for AS/400 host systems
Serial number
Type-Model
Resource name
Capacity
Status
00-1003789
9337-5AC
DD032
17548 DPY
Active
00-2003789
9337-5AC
DD035
17548 DPY
17548
DPY/Active
00-3003789
9337-5AC
DD037
17548 DPY
Active
00-4003789
9337-5AC
DD039
17548 DPY
Active
00-5003789
9337-5AC
DD041
17548 DPY
Active
The ESS disk drives that emulate the 9337 models on an AS/400 host system are
full-capacity drives (all xxC models). The xxC suffix indicates protected models. The
status for protected models is DPY (device parity).
The ESS supports this requirement by defining protected 9337 Models xxC and
unprotected 9337 Models xxA. You can define a status of unprotected through the
ESS Web server when you assign the LUNs. The AS/400 host supports software
mirroring only on an unprotected 9337 model and prevents software mirroring on a
protected 9337 model.
Note: From an ESS logical configuration viewpoint, all AS/400 disks are RAID-5
and are protected within the ESS. When you create the AS/400 disk using
ESS Specialist, you can create it as a protected or unprotected volume. See
Table 11.
Table 11. Size and type of the protected and unprotected AS/400 models
Size       Type  Protected  Unprotected
4.190 GB   9337  48C        48A
8.589 GB   9337  59C        59A
17.548 GB  9337  5AC        5AA
35.165 GB  9337  5CC        5CA
36.003 GB  9337  5BC        5BA
Note: IBM does not support versions of the OS/400 operating system prior to Version 4
Release 4.0. If your operating system is down-level, IBM recommends that you upgrade
to a more current version of the OS/400 operating system. For information about the
latest required program temporary fixes, go to the following Web sites:
http://www-912.ibm.com/supporthome.nsf/document/17623433
http://www-912.ibm.com/supporthome.nsf/document/17403848
Or call 1-800-237-5511.
Attaching with fibre-channel adapters to the iSeries host system
This section describes the host system requirements and provides the procedure to
attach your iSeries host system to the ESS with fibre-channel adapters. For host
system requirements and procedures to attach your AS/400 host system to an ESS
with SCSI adapters, see “Attaching with SCSI adapters to an AS/400 host system”
on page 45.
Notes:
1. You cannot serially connect more than one AS/400 host system to the ESS.
2. You cannot interconnect more than one ESS attachment to a single port on the
host adapter.
3. Your iSeries host system supports the ESS as a peripheral device.
The fibre-channel attached ESS presents individual ESS LUNs to the iSeries host
system. You do not need to perform manual tasks to assign LUN addresses
because the ESS does it automatically during configuration.
Note: You cannot use the IBM Subsystem Device Driver on the iSeries host
system.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
1. Obtain the documents for the iSeries host system from the following Web site:
publib.boulder.ibm.com/pubs/html/as400/infocenter.htm
2. See the following Web site for details about program temporary fixes (PTFs)
that you might need to install on your iSeries host system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
3. Check the LUN limitations for your host system.
4. Contact your IBM service support representative to install and configure the IBM
ESS.
Attachment considerations
This section lists the attachment considerations for an iSeries host system.
1. The ESS creates LUN serial numbers that are eight characters in the format
0LLLLNNN, where:
LLLL   A unique volume number that the ESS assigns when the LUN is created
NNN    Low-order three characters of the ESS unit serial number, or a unique
       three-character value that an IBM SSR can enter using an ESS service
       panel menu option
Contact your IBM SSR to perform the following tasks:
a. From the Service Menu, select the Configuration Options Menu option.
b. Select Change / Show Control Switches.
c. Highlight AS/400 LUN Serial Number Suffix.
d. Select Control Switch Value, and press F4 to list the available values.
The default value is the last three digits of the serial number for the ESS.
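The 0LLLLNNN format above can likewise be sketched as a formatting function. This sketch assumes a decimal four-digit volume number, and the function name is illustrative:

```python
def fc_lun_serial(volume: int, suffix: str) -> str:
    """Build the eight-character fibre-channel LUN serial 0LLLLNNN: a zero,
    the four-digit volume number the ESS assigns, and the three-character
    ESS serial-number suffix."""
    if not (0 <= volume <= 9999 and len(suffix) == 3):
        raise ValueError("volume 0 - 9999, suffix is 3 characters")
    return f"0{volume:04d}{suffix}"

print(fc_lun_serial(12, "789"))  # -> 00012789
```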
Notes:
1. You cannot specify a 4.190 GB LUN size for the SCSI fibre-channel protocol
(FCP) attachment.
2. You can specify 1 - 32 LUNs for each attachment to an iSeries fibre-channel
adapter.
3. Fibre-channel-attached LUNs are identified as the ESS device type on the
iSeries host system.
4. You can place iSeries volumes in the ESS storage arrays according to the
selected host system attachment type.
For a fibre-channel attachment, you must place the volumes in ESS RAID-5
storage arrays that have capacity available. You can spread the volumes across
arrays that are attached to multiple device adapters.
5. You cannot place iSeries volumes in an ESS non-RAID storage array.
6. You cannot share an iSeries volume with more than one fibre-channel system
attachment.
7. The attachment type and the available ESS storage array capacity determine
the number of volumes that you can create.
You can create 1 - 32 LUNs for a fibre-channel attachment.
Figure 31 on page 50 shows an example of the display for the hardware service
manager (HSM) auxiliary storage hardware resource detail for the 2766 adapter
card.
Description........................: Multiple Function IOA
Type-Model.........................: 2766-001
Status.............................: Operational
Serial number......................: 10-22036
Part number........................: 0000003N2454
Resource name......................: DC18
Port worldwide name................: 10000000C922D223
PCI bus............................:
  System bus.......................: 35
  System board.....................: 0
  System card......................: 32
Storage............................:
  I/O adapter......................: 6
  I/O bus..........................:
  Controller.......................:
  Device...........................:
Figure 31. Example of the display for the auxiliary storage hardware resource detail for the
2766 adapter card
Host Limitations
See Table 12 for a description of the LUN assignments for the iSeries host system.
Table 12. Host system limitations for the iSeries host system
Host system                             LUN limitation assignments per target  Configuration notes
IBM iSeries (fibre-channel). See note.  0 - 32                                 There is one target per iSeries adapter.
Note: The naming convention for the iSeries describes Models 270 and 8xx. These models
attach through a fibre-channel 2766 adapter.
Recommended configurations
IBM recommends the following configurations:
v Install feature code 2766, which is an I/O adapter card, in the iSeries system unit
or the high speed link (HSL) PCI I/O towers.
v Only one 2766 adapter is supported per I/O processor (IOP) and requires a
dedicated IOP. No other I/O adapters are supported under the same IOP.
v Only two 2766 adapters are supported per multi-adapter bridge.
Note: You can get the 2766 adapter card through RPQ 847126.
Figure 32 on page 51 shows an example of the display for the hardware service
manager (HSM) logical hardware resources associated with the IOP.
Opt  Description            Type-Model  Status       Resource Name
     Combined Function IOP  2843-001    Operational  CMB04
     Disk Unit              2766-001    Operational  DC18
     Disk Unit              2105-A82    Operational  DD143
     Disk Unit              2105-A81    Operational  DD140
     Disk Unit              2105-A81    Operational  DD101
Figure 32. Example of the logical hardware resources associated with an IOP
Figure 33 shows an example of the display for the hardware service manager
(HSM) auxiliary storage hardware resource detail for the 2105 disk unit.
Description........................: Disk unit
Type-Model.........................: 2105-A82
Status.............................: Operational
Serial number......................: 75-1409194
Part number........................:
Resource name......................: DD143
Licensed Internal Code.............: FFFFFFFF
Level..............................:
PCI bus............................:
  System bus.......................: 35
  System board.....................: 0
  System card......................: 32
Storage............................:
  I/O adapter......................: 6
  I/O bus..........................: 0
  Controller.......................: 1
  Device...........................: 1
Figure 33. Example of the display for the auxiliary storage hardware resource detail for the
2105 disk unit
On an ESS, you can define the ESS LUNs as either protected or unprotected. From
an ESS physical configuration viewpoint, all iSeries volumes are RAID-5 and are
protected within the ESS. When you create the iSeries LUNs using ESS Specialist,
you can create them as logically protected or unprotected. Table 13 shows the disk
capacity for the protected and unprotected models.
Table 13. Capacity and models of disk volumes for iSeries
Size       Type  Protected  Unprotected  Release support
8.59 GB    2105  A01        A81          V5R1
17.548 GB  2105  A02        A82          V5R1
35.165 GB  2105  A05        A85          V5R1
36.003 GB  2105  A03        A83          V5R1
70.564 GB  2105  A04        A84          V5R1
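The entries in Table 13 can be captured as a small lookup, for example when scripting a configuration check. This sketch is illustrative; the names below are not part of any ESS tool:

```python
# Size -> (protected model, unprotected model), per Table 13; all require V5R1.
ISERIES_2105_MODELS = {
    "8.59 GB": ("A01", "A81"),
    "17.548 GB": ("A02", "A82"),
    "35.165 GB": ("A05", "A85"),
    "36.003 GB": ("A03", "A83"),
    "70.564 GB": ("A04", "A84"),
}

def iseries_model(size: str, protected: bool) -> str:
    """Return the 2105 model number for a volume size and protection mode."""
    protected_model, unprotected_model = ISERIES_2105_MODELS[size]
    return protected_model if protected else unprotected_model

print(iseries_model("17.548 GB", protected=False))  # -> A82
```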
Performing Peer-to-Peer remote copy functions
This section describes the basic hardware and software requirements for an iSeries
host system. You can set up Copy Services on an ESS. A description of how to set
up the primary and secondary Copy Services servers follows this section.
Requirements for Copy Services
To get current information about the host system models, operating systems, and
adapters that the ESS supports, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
iSeries hardware
The ESS supports the following models and features for the iSeries hosts:
v Models 270, 820, 830, 840
v Fibre-channel features:
– Host adapter feature code 2766
– IBM 3534 Fibre Channel Managed Hub Model 1RU
– IBM Fibre Channel Switch 2109, Model S08/S16 in quick-loop mode only
iSeries software
You must have OS/400 Version 5.1 or higher support installed to connect the ESS
to the iSeries host system.
See iSeries in Storage Area Networks for information about configurations that are
supported for the iSeries host.
Setting up the 3534 Managed Hub
To set up the 3534 Managed Hub, see IBM 3534 SAN Fibre Channel Managed Hub
Installation and Service Guide. See also IBM 3534 SAN Fibre Channel Managed
Hub Users Guide.
Managing the IBM 3534 Managed Hub
To manage the IBM 3534 Managed Hub, you can choose from the following two
methods:
v Telnet commands
v The Managed Hub StorWatch Specialist
Note: You must enable QuickLoop on the IBM 2109 when you attach to an iSeries
host system.
The following list provides general information about connecting the iSeries host
through a hub or switch.
v When you connect to an iSeries host system through a 3534 hub or 2109 switch
in QuickLoop mode, the port on the ESS must be configured to arbitrated-loop
mode. See “Arbitrated loop” on page 14 for more information about arbitrated
loop mode.
v For Version 5 Release 1, the iSeries supports only a homogeneous environment
(only iSeries initiators). You can accomplish this through the use of logical zoning
of the hub or switch. All host systems within an iSeries zone must be iSeries
systems.
v You must configure all ports within the iSeries zone with QuickLoop. This
includes the device (target) port attaching to the ESS.
v Because the iSeries requires the device (target) port to be configured as
QuickLoop, all ports within any zone sharing this device (target) port must be
configured as QuickLoop in order to recognize the ESS on this port.
Setting up the 2109 S08 or S16 switch
To set up the 2109 S08 Switch, see 2109 Model S08 Installation and Service
Guide, and the 2109 Model S08 Users Guide.
To set up the 2109 S16 Switch, see 2109 Model S16 Users Guide, and the 2109
Model S16 Installation and Service Guide.
Managing the IBM 2109 S08 or S16 switch
To manage the IBM 2109 S08 or S16 switch, you can choose from the following
two methods:
v Telnet commands
v The Managed Hub StorWatch Specialist
Chapter 5. Attaching to an IBM eServer xSeries 430 or IBM NUMA-Q host
This chapter tells you how to attach an ESS to an IBM xSeries 430 or an IBM
NUMA-Q host system with fibre-channel adapters. This chapter also tells you how
to install and configure the IOC-0210-54 adapter card.
Note: You can use either the switched-fabric topology or the direct fibre-channel
arbitrated loop topology to attach the ESS to an IBM xSeries 430 or an IBM
NUMA-Q host.
The ESS offers feature code 3019 as an interim solution for fibre-channel
attachment. With feature code 3019, you can attach an ESS to an IBM xSeries 430
host through the NUMA-Q fibre-channel-to-SCSI bridge. This feature code includes
one SCSI adapter that you purchase and a no-cost loan of a NUMA-Q
fibre-channel-to-SCSI bridge. IBM requires that you sign a loan agreement for the
bridge.
Notes:
1. Feature code 3019 is not a standard feature. To get feature code 3019, contact
your IBM sales representative.
2. You should have an IBM SSR attach an ESS to an IBM NUMA-Q host system
or an IBM xSeries 430 host system.
For more information about how to attach an IBM xSeries 430 or an IBM NUMA-Q
host system with fibre-channel adapters, see the NUMA-Q ESS Integration Release
Notes. See also Fibre Channel Subsystem Installation Guide. To obtain a copy, see
your IBM sales representative.
Attaching with fibre-channel adapters
This section describes how to attach an IBM xSeries 430 or an IBM NUMA-Q host
system with fibre-channel adapters. You cannot attach the ESS to an xSeries 430
or a NUMA-Q host system by using SCSI adapters.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system.
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS:
1. The IBM SSR installs the IBM ESS by using the procedures in the IBM
Enterprise Storage Server Service Guide.
2. Either you or the IBM SSR defines the fibre-channel host system with the
worldwide port name identifiers. For information about how to locate the
worldwide port name for an IBM xSeries 430 or an IBM NUMA-Q host system,
see “Appendix A. Locating the worldwide port name (WWPN)” on page 153.
3. Either you or the IBM SSR defines the fibre-port configuration if you did not do it
during the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you previously filled
out.
4. Configure your host system for the ESS by using the instructions in your host
system publications.
© Copyright IBM Corp. 1999, 2001
System requirements
The ESS is supported on the IBM xSeries 430 and the IBM NUMA-Q host systems
by a module of code that is incorporated into Service Pack 3 for PTX V4.5.2. To
install Service Pack 3:
1. Insert the Service Pack 3 compact disc into the CD-ROM drive.
2. Open the README file for instructions on installing Service Pack 3.
See Table 14 for the NUMA-Q system requirements. Support for Copy Services
on PTX V4.5.2 requires a special Technology Pack. You can obtain the
Technology Pack through an IBM sales representative who handles your IBM
xSeries 430 and IBM NUMA-Q purchases.
Table 14. IBM xSeries 430 and IBM NUMA-Q system requirements for the ESS

Element                                          Requirement
PTX operating system                             Version 4.5.2 or higher
Hardware models                                  All IBM NUMA-Q and IBM xSeries 430 quad-based systems
Fibre-channel host adapter                       Emulex LP7000E adapter with firmware SF 3.2.1
Fibre-channel switch                             IBM 2109 Model S08 or IBM 2109 Model S16
Clustered IBM NUMA-Q and IBM xSeries 430 hosts   ptx/Clusters V2.2.1
Installing the IOC-0210-54 adapter card
Perform the following steps to install the IOC-0210-54 adapter card:
1. Contact your IBM SSR to install the IOC-0210-54 adapter card in the ESS.
2. Connect the cable to the ESS port.
The SSR establishes the private LAN connection between both clusters on the
ESS, the Ethernet hub, and the ESS personal computer console.
Preconfigured multimode optical cables are available to connect the ESS to the
NUMA-Q host system. You might need the 8-m (24-ft) cable. The way you
connect the cable to the ESS through the fibre-channel switch depends on the
level of I/O throughput.
Note: For information about connection schemes, see the Fibre Channel
Subsystems Installation Guide at the following Web site:
techdocs.sequent.com/staticpath/shsvccd/start.htm
3. Restart the server.
Configuring the IOC-0210-54 adapter card
To configure the IOC-0210-54 adapter card, contact your IBM SSR or see the IBM
Enterprise Storage Server Web Interface User’s Guide.
Chapter 6. Attaching to an IBM RS/6000 or IBM eServer
pSeries host
This chapter describes the host system requirements and provides procedures to
attach an ESS to the following host systems with either SCSI or fibre-channel
adapters:
v RS/6000
v pSeries
v RS/6000 Series Parallel (SP) Complex
v pSeries SP Complex
For procedures on how to migrate from SCSI to fibre-channel for an RS/6000
system and a pSeries host system, see “Appendix B. Migrating from SCSI to
fibre-channel” on page 159.
Attaching with SCSI adapters
This section describes how to attach an RS/6000 or pSeries host system with SCSI
adapters. For procedures about how to attach an RS/6000 or pSeries host system
with fibre-channel adapters, see “Attaching with fibre-channel adapters” on page 60.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system.
v Ensure that you have 1 MB minimum of hard disk space available to install the
AIX host attachment package.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the CD that you
receive with the ESS.
For details about the release level for your operating system, see the following
Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
v Ensure that you have the installation script files. These files are on the CD that
you receive with the ESS.
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. Either you or an IBM SSR assigns the SCSI hosts to the SCSI ports on the
ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. Either you or an IBM SSR configures the host system for the ESS. Use the
instructions in your host system publications.
4. Check the logical unit numbers (LUN) limitations for the RS/6000 and pSeries.
See Table 6 on page 11.
Note: The IBM Subsystem Device Driver 1.1.4 supports the RS/6000 and pSeries
host systems in a clustering environment. To have failover protection on an
open system, the IBM Subsystem Device Driver requires a minimum of two
adapters. You can run the Subsystem Device Driver with one SCSI adapter,
but you have no failover protection. The maximum number of adapters
supported is 16 for a total of 32 SCSI ports.
Installing the 2105 host attachment package
This section provides the instructions to install the host attachment package for the
ESS. IBM recommends that you run the host attachment package on each host
system that is attached to the ESS and for which an installation script is provided.
Before you install the package
Perform the following steps before you install the host attachment package:
1. Attach the ESS to your host system. See “General information about attaching
to an open-systems host with SCSI adapters” on page 7.
2. Turn on the host system and all attachments.
3. Ensure that you have root access.
4. Ensure that you have administrator knowledge.
5. Ensure that you have knowledge of the System Management Interface Tool
(SMIT).
Replacing a prior version of the package
If you want to replace a prior version of the host attachment package and have data
that exists on all configured 2105 disks, the code prompts you to remove all ESS
product-related hdisk devices. Perform the following steps to remove the devices:
1. Run the umount command on the file system.
For example, type umount -t x, where x is the file system name.
2. Run the varyoffvg command for the 2105 volume group.
For example, type varyoffvg -s VGname.
3. Type rmdev -dl hdisk# on the command line to unconfigure the 2105 devices.
After you install the ibm2105.rte file and reconfigure the devices, vary on the
volume groups and remount the file systems. The data on the file systems should
be available again.
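The removal sequence above can be sketched as a short script. The file system, volume group, and hdisk names below are examples only; substitute the ones on your system. The run helper prints each command rather than executing it, so the sequence can be reviewed before it is run on a live AIX host.

```shell
# Sketch of the 2105 device-removal sequence (all names are examples).
# 'run' prints each command instead of executing it; remove the echo
# to perform the actions for real.
run() { echo "+ $*"; }

run umount /essfs01            # 1: unmount the file system on the 2105 disks
run varyoffvg essvg            # 2: vary off the 2105 volume group
for d in hdisk3 hdisk4 hdisk5; do
    run rmdev -dl "$d"         # 3: unconfigure and delete each 2105 hdisk
done
```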
Installing the package
Perform the following steps by using SMIT to install the host attachment package
from a compact disc on your system. You must have superuser authority to
complete the instructions.
Note: The following procedure is an example. The example uses /dev/cd0 for the
address of the compact disc. Your address might be different.
1. From your desktop window, type smit install_update to go directly to the
installation panel.
2. Click Install and Update from the Latest Available Software and press
Enter.
3. Press F4 to open the Input Device/Directory for Software window.
4. Select the CD-ROM drive that you are using for the installation, for example,
/dev/cd0.
5. Press Enter. The Install and Update from the Latest Available Software window
opens.
6. Click Software to Install and press F4.
7. Select Software Packages and press F7.
The Install and Update from the Latest Available Software panel opens with
the name of the software you selected to install.
8. Check the default option settings to ensure that they are what you need.
9. Press Enter to install the software. SMIT responds with the following question:
Are you sure?
10. Press Enter to continue. The installation process might take several minutes. A
message displays when the installation process is complete.
11. Press F10 when the installation process is complete.
12. Exit from SMIT.
13. Remove the compact disc.
14. Restart the host system.
Verifying the ESS configuration
To verify the configuration of the ESS on the AIX host system, type the following
command:
lsdev -Cc disk | grep 2105
A list of all ESS devices displays. See the example in Figure 34.
hdisk3 Available 30-68-00-0-,1 IBM 2105E20
hdisk4 Available 30-68-00-0-,2 IBM 2105E20
hdisk5 Available 30-68-00-0-,3 IBM 2105E20
...
...
Figure 34. Example of a list of devices displayed when you use the lsdev -Cc disk |
grep 2105 command, SCSI
If a device is listed as another type of device, the message in Figure 35 displays.
This message indicates that the configuration was not successful.
hdisk3 Available 30-68-00-0-, Other SCSI disk device
hdisk4 Available 30-68-00-0-, Other SCSI disk device
hdisk5 Available 30-68-00-0-, Other SCSI disk device
...
...
Figure 35. Example of a list of other devices displayed when you use the lsdev -Cc disk |
grep 2105 command, SCSI
When you use the lsdev -Cc disk | grep 2105 command, you know the installation
is successful if you see the information listed in Figure 34.
When you use the lsdev -Cc disk | grep 2105 command, you see only the display
lines that contain the value 2105. If you have not defined any 2105 devices, the
command returns nothing.
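The verification check can be exercised against sample output; on a live host, replace the lsdev_sample function below with the real lsdev -Cc disk command. The hdisk names and location codes are illustrative only.

```shell
# Classify disks from (sample) lsdev output: lines that contain "2105"
# are correctly attached ESS devices; "Other SCSI disk device" lines
# mean the host attachment package did not take effect for that disk.
lsdev_sample() {
cat <<'EOF'
hdisk2 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk3 Available 30-68-00-0,1 IBM 2105E20
hdisk4 Available 30-68-00-0,2 Other SCSI disk device
EOF
}

configured=$(lsdev_sample | grep -c 2105)
generic=$(lsdev_sample | grep -c "Other SCSI disk device")
echo "configured=$configured generic=$generic"   # prints: configured=1 generic=1
```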
Configuring VSS and ESS devices with multiple paths per LUN
The Versatile Storage Server™ and Enterprise Storage Server support multiple path
configurations for a LUN. This means that you can have multiple hdisks available
on the AIX server for each physical LUN. If you create a PVID in sector 0 of a LUN
and you delete all hdisks from the system with the rmdev command, you must
restart the system. If you want to restore all multiple paths for all LUNs, use the
cfgmgr command for each SCSI adapter.
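The path-restore step can be sketched as a loop over the SCSI adapters. The adapter names are examples (list yours with lsdev -Cc adapter), and the run helper prints rather than executes.

```shell
# Re-run the configuration manager against each SCSI adapter to
# rebuild every path (hdisk) to the ESS LUNs. Adapter names are
# examples; 'run' only prints the commands.
run() { echo "+ $*"; }

for adapter in scsi0 scsi1; do
    run cfgmgr -l "$adapter"
done
```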
Emulating UNIX-based host systems
For UNIX-based host systems, the ESS emulates multiple SCSI DDMs. The host
system accesses the virtual drives of the ESS as if they were generic SCSI DDMs.
The AIX operating system contains entries in its object distribution manager
database to identify the ESS. However, the AIX operating system accesses the ESS
through its generic SCSI DDMs.
The ESS appears as a standard physical volume or hdisk to AIX, Solaris, and
HP-UX systems.
When you use ultra- or wide-SCSI adapters in your host systems, a total of 16
SCSI IDs per interface is available on the ESS. The host system SCSI IDs are
known as initiators; the ESS SCSI IDs are the targets.
If only one host system connects to an ESS SCSI port, the ESS can assign up to
15 unique target IDs. If the maximum of four host systems are connected to an
ESS SCSI port, the ESS assigns up to 12 unique SCSI target IDs because each
host uses one SCSI ID.
You can configure an ESS to appear as 64 LUNs per SCSI target. The ESS
supports LUN sizes from 100 MB up to 100 MB x n where n equals 1 to 2455
(245.5 GB). You can increase LUN sizes in 100 MB increments.
Note: LUN usage is limited for some host systems. See Table 6 on page 11.
Table 15. Size of drives, configurations, and maximum size of LUNs

Size of drives   Configuration                                    Maximum size of LUNs (GB)
9.1 GB           6 + P array, Model E10, E20, F10, or F20         52.5
9.1 GB           7 + P array, expansion enclosure for Model E20   61.3
18.2 GB          6 + P array, Model E10, E20, F10, or F20         105.2
18.2 GB          7 + P array, expansion enclosure for Model E20   122.7
36.4 GB          6 + P array, Model E10, E20, F10, or F20         210.4
36.4 GB          7 + P array, expansion enclosure for Model E20   245.5
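The 100 MB granularity and the stated maximum are consistent, as a quick arithmetic check shows:

```shell
# LUN size = 100 MB x n, with n ranging from 1 to 2455.
n=2455
echo "$((100 * n)) MB"   # 245500 MB, that is, 245.5 GB -- the table maximum
```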
Attaching with fibre-channel adapters
This section describes the host system requirements and provides the procedures
to attach an ESS to the following host systems:
v RS/6000
v pSeries
v RS/6000 Series Parallel (SP) Complex
v pSeries SP Complex
Note: For an RS/6000 or pSeries host system, you can use either of the following
topologies:
v Point-to-point (switched fabric) topology
v Arbitrated loop topology
The RS/6000 and pSeries host systems do not support more than one
host bus adapter on the loop. The RS/6000 and pSeries host systems do
support a direct connection of the RS/6000 and pSeries host systems to
an ESS using the fibre-channel arbitrated loop protocol.
For procedures about how to attach an RS/6000 or pSeries host system with SCSI
adapters, see “Attaching with SCSI adapters” on page 57.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Ensure that you have the installation script files. These files are on the diskette
and the CD that you receive with the ESS.
v Ensure that you have 1 MB minimum of hard disk space available to install the
AIX host attachment package.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the CD that you
receive with the ESS.
v For details about the release level for your operating system, see the following
Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS:
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. Either you or an IBM SSR defines the fibre-port configuration if you did not do it
during the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you previously filled
out.
3. Either you or an IBM SSR configures the host system for the ESS. Use the
instructions in your host system publications.
4. Either you or an IBM SSR checks the LUN limitations for the RS/6000 and
pSeries. See Table 6 on page 11.
Note: The IBM Subsystem Device Driver supports RS/6000 and pSeries host
systems in a clustering environment. To have failover protection on an open
system, the Subsystem Device Driver requires a minimum of two
fibre-channel adapters. The maximum number of fibre-channel adapters
supported is 16 for a total of 16 fibre-channel ports.
Installing the 2105 host attachment package
This section provides the instructions to install the host attachment package for the
ESS. IBM recommends that you run the host attachment package on each host
system that is attached to the ESS.
Before installing the package
Perform the following steps before you install the host attachment package:
1. Attach the ESS to your host system. See “General information about attaching
to an open-systems host with SCSI adapters” on page 7.
2. Turn on the host system and all attachments.
3. Ensure that you have root access.
4. Ensure that you have administrator knowledge.
5. Ensure that you have knowledge of the System Management Interface Tool
(SMIT).
Replacing a prior version of the package
If you want to replace a prior version of the host attachment package (tar version)
and have data that exists on all configured 2105 disks, the code prompts you to
remove all ESS product-related hdisk devices. Perform the following steps to
remove the devices:
1. Run the umount command on the file system.
For example, type umount -t x, where x is the file system name.
2. Run the varyoffvg command for the 2105 volume group.
For example, type varyoffvg -s VGname.
3. Type rmdev -dl hdisk# on the command line to unconfigure the 2105 devices.
After you install the ibm2105.rte file and all the 2105 devices are reconfigured, vary
on the volume groups and remount the file systems. The data on the file systems
should be available again.
Installing the package
Perform the following steps by using SMIT to install the host attachment package
from a CD or a diskette. You must have superuser authority to complete the
instructions.
Note: The following procedure is an example. The example uses /dev/cd0 for the
address of the CD-ROM drive. Your address might be different.
1. From your desktop window, type smit install_update to go directly to the
installation panel.
2. Click Install and Update from the Latest Available Software and press
Enter.
3. Press F4 to open the Input Device/Directory for Software window.
4. Select the CD-ROM drive that you are using for the installation, for example,
/dev/cd0.
5. Press Enter.
The Install and Update from the Latest Available Software window opens.
6. Click Software to Install and press F4.
7. Select Software Packages and press F7.
The Install and Update from the Latest Available Software panel displays with
the name of the software you selected to install.
8. Check the default option settings to ensure that they are what you need.
9. Press Enter to install the software.
SMIT responds with the following question: Are you sure?
10. Press Enter to continue.
The installation process might take several minutes. A message displays when
the installation process is complete.
11. Press F10 when the installation process is complete.
12. Exit from SMIT.
13. Remove the compact disc.
14. Restart the host system.
Verifying the configuration
To verify the configuration of the ESS on the AIX host system, type the following
command:
lsdev -Cc disk | grep 2105
A list of all ESS devices displays. See Figure 36 for an example.
hdisk3 Available 30-68-01 IBM FC2105F20
hdisk4 Available 30-68-01 IBM FC2105F20
hdisk5 Available 30-68-01 IBM FC2105F20
...
...
Figure 36. Example of a list of devices displayed when you use the lsdev -Cc disk | grep
2105 command, fibre-channel
If a device is listed as another type of device, the message shown in Figure 37
displays. This message indicates that the configuration was not successful.
hdisk3 Available 30-68-01, Other FCSCSI disk device
hdisk4 Available 30-68-01, Other FCSCSI disk device
hdisk5 Available 30-68-01, Other FCSCSI disk device
...
...
Figure 37. Example of a list of other devices displayed when you use the lsdev -Cc disk |
grep 2105 command, fibre-channel
When you use the lsdev -Cc disk | grep 2105 command, you know the installation
is successful if you see the information listed in Figure 36.
When you use the lsdev -Cc disk | grep 2105 command, you see only the display
lines that contain the value 2105. If you have not defined any 2105 devices, the
command returns nothing.
Configuring VSS and ESS devices with multiple paths per LUN
The Versatile Storage Server (VSS) and the ESS support multiple path
configurations for a LUN. This means that you can have multiple hdisks available
on the AIX server for each physical LUN. If you create a PVID in sector 0 of a LUN
and you delete all hdisks from the system with the rmdev command, you must
restart the system. If you want to restore all multiple paths for all LUNs, use the
cfgmgr command for each fibre-channel adapter.
Attaching to multiple RS/6000 or pSeries hosts without the
HACMP/6000™ host system
This section describes the requirements and provides the instructions to attach one
or two ESSs to multiple host systems without the High Availability Cluster
Multi-Processing/6000 (HACMP/6000) host system.
Install HACMP/6000 to define and access a unique journaled file system (JFS) file
stored on a single ESS from any attached host system.
When attaching multiple host systems to an ESS, consider the following:
v Multiple host systems cannot access the same volume group or the same
journaled file system simultaneously.
v Without HACMP/6000, some system failure management features such as
failover are not available. Therefore, a failure on the ESS or any one of the
connected host systems will most likely affect the availability of the other
connected devices.
v You must vary and mount the volume groups and journaled file systems every
time you start the system.
v The ESS does not allow ownership of volume groups to move from one system
to another.
v When you use this procedure, you can define from two to four host systems.
Software requirements
This section lists the software requirements for attaching multiple RS/6000 or
pSeries host systems to the ESS.
v For details about the RS/6000 or pSeries operating system requirements, see the
following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
v All host systems must have the devices.SCSI.TM (target mode) package
installed.
v All host systems connected to the same ESS must have either the same or
compatible SCSI interface cards installed.
v You must set unique IDs for the SCSI interface cards on the ESS.
The IBM SSR uses the ESS Specialist to assign the SCSI addresses.
Note: Use the information on the logical configuration work sheets in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
v All host systems must have the external SCSI ID of each adapter (interface card
that is set to a unique ID). Follow the instructions in your RS/6000 or pSeries
documents for setting the IDs.
v Restart the systems to make the changes effective.
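One way to confirm the unique-ID requirement is to display each adapter's SCSI ID attribute. The adapter names and the id attribute name below are assumptions based on common AIX SCSI adapters; verify them on your system, and note that the run helper prints rather than executes.

```shell
# Display the external SCSI ID of each host adapter so you can confirm
# that every adapter on the shared bus uses a unique ID. The adapter
# names and the 'id' attribute are examples; 'run' only prints.
run() { echo "+ $*"; }

for a in scsi0 scsi1; do
    run lsattr -El "$a" -a id
done
```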
Hardware requirements
You need the following hardware to connect an ESS to multiple host systems:
v A cable to connect the ESS to each host system SCSI adapter card.
v This configuration allows up to 32 host SCSI attachments. Refer to your host
system documents for instructions about connecting additional SCSI interface
cards with a Y-cable.
v All host systems connected to the same ESS must have the same type of SCSI
adapter or a compatible SCSI adapter. For details on adapters, see the following
Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Attachment procedures
Perform the following steps to attach multiple host systems to the ESS:
1. Is the ESS currently installed on a host system?
Yes: Continue with step 2.
No: Go to step 3.
2. Is any data stored on the ESS that you want to preserve?
No: Continue with step 3.
Yes: Go to “Saving data on the ESS” on page 66.
3. If you have not previously installed SCSI interface cards in the associated host
systems, do so now by using the instructions for your host system.
4. Assign a unique SCSI ID to each of the SCSI interface cards that you install in
your host system.
5. Turn off the host systems that you are connecting. This allows the SCSI ID to
take effect when you turn on the host.
Note: Do not connect the host systems to the ESS at this time.
6. Install the SCSI signal cables.
7. Turn on one host system at a time. Allow each system unit to complete its
start-up procedure before you turn on the next host system connected to the
ESS.
8. After all the host systems have completely started up, if the power to the ESS
is off, turn it on.
9. On each of the host systems, use the mkdir /usr/opt/your_files command to
create a new directory. This step is unnecessary if the directory already exists.
10. On each of the host systems, use the cd /usr/opt/your_files command to
change the active directory.
11. On each of the host systems, use the tar command to read the CD files into
the /usr/lpp/2105 directory. This step is unnecessary if the directory and files
already exist. Use the tar -xvf flags for proper extraction. For example, to read
the files from the /dev/fd0 diskette device, type: tar -xvf /dev/fd0.
12. On each host system that is connected to the ESS, run the ESS installation
program from the /usr/opt/your_files directory. Wait for the program to
complete on one system before you run it on the next.
13. On each host system, use the lsdev -Cc disk command to verify that each
associated system unit has the hdisk descriptions defined for all initialized
LUNs.
14. Determine which hdisks are accessed.
15. Do you have any data currently on the ESS that you want preserved from a
previous installation on the host system?
No: Continue with step 16.
Yes: Go to “Restoring data on the ESS” on page 66.
16. Type smit mkvg to create a volume group on the selected hdisk.
17. Click No for the Activate Volume Group Automatically option.
18. Select an appropriate physical partition size for the volume group by using
the Physical Partition Size field.
Note: AIX limits the number of physical partitions to 1016 per logical volume.
This limitation does not apply to AIX 4.3.1.
The default physical partition size is 4 MB. Choose this value to make the
most efficient use of the physical hard disk size. See your host system
documentation for more information.
19. Type smit chvg to change the volume group on the selected hdisk.
20. Click No for the Activate Volume Group Automatically option.
21. Click No for the A quorum of disks required to keep the volume group
online? option.
22. Type varyonvg <volumegroup_name> to vary the volume group online.
23. If you are using the journaled file system (JFS), type smit crjfs to create the
JFS for the selected hdisk.
Note: Click No for the Mount Automatically at system restart option.
24. Type mount <mount_point> to mount the file system and verify access from the
selected host system.
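Steps 16 through 24 use SMIT panels; the approximate command-line equivalents can be sketched as follows. The volume group, hdisk, and mount-point names are assumed examples, this is a sketch rather than the documented procedure, and the run helper prints each command instead of executing it.

```shell
# Approximate command-line equivalents of SMIT steps 16-24.
# All names are examples; 'run' only prints the commands.
run() { echo "+ $*"; }

run mkvg -y essvg -s 4 hdisk3          # 16/18: create the VG with 4 MB partitions
run chvg -a n -Q n essvg               # 19-21: no auto-activate, no quorum
run varyonvg essvg                     # 22: vary the volume group online
run crfs -v jfs -g essvg -m /essfs01   # 23: create a JFS in the volume group
run mount /essfs01                     # 24: mount and verify access
```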
Saving data on the ESS
Perform the following steps to preserve the data that is stored on an ESS that was
previously installed and connected to a host system. This procedure does not erase
the data on the ESS, but it removes the volume groups from the host system.
1. Type umount to unmount all file systems from all host systems connected to the
ESS.
2. Type fsck on each of the file systems on the ESS to verify the file system
integrity.
3. Type varyoffvg to vary off all the ESS volume groups from all of the host
systems connected to the ESS.
4. Type exportvg to remove all of the ESS volume groups from all the host
systems connected to the ESS.
5. Type rmdev -dl hdiskx to delete all physical volumes (hdisks) on each host
system that is associated with the ESS.
6. Be sure you have completed step 3 on page 65 of the attachment procedures.
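The save sequence can be sketched as a script to run on every host that is connected to the ESS. The file system, logical volume, volume group, and hdisk names are examples, and the run helper prints each command rather than executing it.

```shell
# Sketch of the data-preservation sequence (steps 1-5 above).
# All names are examples; 'run' only prints the commands.
run() { echo "+ $*"; }

run umount /essfs01          # 1: unmount every ESS file system
run fsck /dev/esslv01        # 2: verify file system integrity
run varyoffvg essvg          # 3: vary off the ESS volume groups
run exportvg essvg           # 4: remove the VG definition from the host
for d in hdisk3 hdisk4; do
    run rmdev -dl "$d"       # 5: delete the ESS physical volumes
done
```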
Restoring data on the ESS
Perform the following steps to restore access to the data that was originally
installed on a host system. This procedure assumes that you have preserved the
data by following the instructions in “Saving data on the ESS”.
1. Check to be sure that the ESS physical volumes (hdisks) are available.
Type lsdev -Cc disk to display the hdisks on the host system.
2. Type importvg to restore the ESS volume groups to the applicable host systems
that are connected to the ESS, one system at a time.
3. Type smit chvg to verify that No is selected for the Activate Volume Group
Automatically and A quorum of disks required to keep the volume group
online? options.
4. Type varyonvg to vary on the ESS volume groups to the applicable host
systems connected to the ESS. Perform the steps one system at a time.
5. Type mount to mount the ESS volume groups to the applicable host systems
that are connected to the ESS. Perform the steps one system at a time.
6. If you want to create new volume groups, go to step 16 on page 65 in
“Attachment procedures” on page 65.
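The restore sequence can likewise be sketched for one host system at a time. The volume group, hdisk, and mount-point names are examples, and the run helper prints rather than executes.

```shell
# Sketch of the data-restore sequence (steps 1-5 above), one host
# system at a time. All names are examples; 'run' only prints.
run() { echo "+ $*"; }

run lsdev -Cc disk               # 1: confirm the ESS hdisks are Available
run importvg -y essvg hdisk3     # 2: re-import the VG from one of its hdisks
run chvg -a n -Q n essvg         # 3: no auto-activate, no quorum
run varyonvg essvg               # 4: vary the volume group online
run mount /essfs01               # 5: remount the file system
```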
Configuring for the HACMP/6000 host system
This section provides guidelines for planning and installing the software for the
HACMP/6000 host system.
HACMP provides an availability solution for the commercial UNIX environment.
The HACMP/6000 software supports shared external disk configurations, such as
the ESS.
You can use the ESS in nonconcurrent access clusters and also in concurrent
clusters if the clusters are not mirrored.
For information about the latest releases of HACMP/6000 that are available to
attach the ESS to the RS/6000 and pSeries host systems or the RS/6000 and
pSeries SP server host systems with SCSI adapters or fibre-channel adapters, see
the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Use the HACMP/6000 documentation for your software version to plan and set up a
cluster. Because the ESS appears to the host as a pure SCSI device, you do not
need specific device-type considerations.
HACMP for AIX Version 4 Release 3 Modification level 1 now supports the following
hardware for the IBM SAN Data Gateway 2108 Model G07:
v Feature code 2214
v Feature code 2213
v Feature code 2319
Note: When you use two fibre-channel adapters to a single SAN Data Gateway in
an HACMP environment, the SAN Data Gateway must be run in split mode.
Each fibre-channel adapter must be connected to a separate port on the
SAN Data Gateway and two separate ports on the SAN Data Gateway must
be used to attach to two separate SCSI ports on the ESS.
The feature codes add support for fibre-channel with four initiators. Support includes
a matrix for HACMP and Models E10 and E20. Models F10 and F20 are not
supported by HACMP.
For additional information about supported host systems, operating systems, and
adapters, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
The IBM Subsystem Device Driver is only supported in concurrent mode available
in the CRM and ESCRM features of HACMP. For ESS installations, use IBM
Subsystem Device Driver Version 1 Release 1 Modification 4. Prior versions do not
support the SAN Data Gateway. See Table 16 on page 68 for information about
hardware and software levels supported for HACMP Version 4 Release 2
Modification 2 and HACMP Version 4 Release 3 Modification 1.
Table 16. Hardware and software levels supported for HACMP version 4.2.1, 4.2.2, 4.3.1, and 4.3.3

Hardware/Software                     HACMP 4.2.2,    HACMP 4.2.2,    HACMP 4.3.1,
                                      AIX 4.2.1       AIX 4.3.3       AIX 4.3.3
IBM SAN Data Gateway 2108 Model G07   Not supported   Not supported   HACMP APAR IY07313 and APAR IY09595
IBM ESS 2105 Model E20                APAR IY04403    APAR IY04403    APAR IY03438
IBM Subsystem Device Driver for       Not supported   Not supported   HACMP APAR IY07392
UNIX Version 1.4
The host systems and attached ESS should use the SCSI ID assignments
recommended in Table 17.
Notes:
1. The ESS supports SCSI IDs 00 through 15.
2. If you do not use a Y-cable, you do not need to assign special SCSI IDs.
Table 17. Recommended SCSI ID assignments in a multihost environment

SCSI ID   Recommended SCSI device                Should not be used for
07        Not applicable                         Not applicable
06        Host system number 1 (primary)         IBM ESS
05        IBM ESS number 1                       Host system
04        Host system number 2                   IBM ESS
03        Host system number 3 (if applicable)   IBM ESS
02        Host system number 4 (if applicable)   IBM ESS
01        IBM ESS number 2 (if applicable)       Host system
00        Not applicable                         Not applicable
Chapter 7. Attaching to an IBM S/390 or IBM eServer zSeries
host
This chapter describes the host system requirements to attach the IBM S/390 or
zSeries host system to the ESS ESCON adapter or FICON adapter.
For information about how to use parallel access volumes (PAVs) for S/390 and
zSeries hosts, see the IBM Enterprise Storage Server User’s Guide. The guide
includes an overview of PAV and PAV requirements, and an example of the
input/output configuration program (IOCP).
Figure 38 shows how an ESS that is attached through ESCON links to different
computer-electronic complexes and logical partitions (LPARs). It also shows a
configuration that has been designed for optimum availability. For optimum
availability, make the ESCON host adapters available through all bays. For optimum
performance, have at least eight host adapter ports installed on four ESCON host
adapter cards in the ESS. This setup ensures the best performance from the
attached systems.
[Diagram: two ESCON directors connect LPARs A and B in computer-electronic complex 1 and LPARs C and D in computer-electronic complex 2 to ESCON host adapters (HA) spread across the bays of ESS clusters 1 and 2, which contain control units CU 0 through CU 7.]
Legend
1 – computer-electronic complex 1
2 – computer-electronic complex 2
3 – controller
4 – ESCON director
5 – logically partitioned mode
Figure 38. ESCON connectivity
Attaching with ESCON
The following section describes how to attach an ESS with ESCON adapters.
Controller images and interconnections
An ESS supports up to 16 controller images.
All controller images are accessible over any installed ESCON physical path. Each controller image can have from 1 to 256 devices. The ranges of supported device addresses can be noncontiguous. Device addresses that are not mapped to a logical device respond and indicate an address exception.
Note: When a primary controller connects to a secondary controller, the primary
connection converts to a channel. You cannot use it for host connectivity.
You can convert all 32 host attachments to channels. You should not use
more than 31 channels.
The controller images can handle the following ESCON interconnections:
v 1 - 256 devices (bases and aliases) per controller image
v 1 - 4096 devices (bases and aliases) with 16 controller images
v 1 - 128 logical paths per controller image
v 1 - 64 logical paths per ESCON port (shared by all controller images)
v 2048 logical paths in the ESS
Support for 9032 Model 5 ESCON director FICON bridge feature
The ESS provides fibre-channel connectivity to S/390 and zSeries host systems
with the IBM 9032 Model 5 ESCON director. The 9032 FICON bridge provides
connection and switching among FICON channels and ESCON controllers.
Host adapters, cables, distances and specifications
Each ESCON host adapter connects to both clusters. An ESS emulates 0, 8, or 16
of the 3990 logical controllers. Half the logical controllers are in cluster 1 and half in
cluster 2. Because the ESCON adapter connects to both clusters, each adapter can
address 16 logical controllers.
Each ESCON host adapter provides two host connections. Order two ESCON
cables for each adapter for S/390 and zSeries hosts.
The standard distances for ESCON cables are 2 km (1.2 mi) with a 50-µm
multimode fibre and 3 km (1.9 mi) with 62.5-µm multimode fibre. You can extend
the distance of the cable to 103 km (64 mi) for the Peer-to-Peer Remote Copy
feature. You can also extend the distance of the cable to 103 km (64 mi) from
controller-to-controller.
Note: For optimum performance, use a cable shorter than 103 km (64 mi).
There is an ESCON channel on the S/390 and zSeries host systems. The S/390 or
zSeries host system attaches to one port of an ESCON host adapter in the ESS.
Each ESS adapter card has two ports.
See the IBM Enterprise Storage Server Introduction and Planning Guide for a list of
the ESCON host adapter features codes. The publication also contains the number
of ESCON host adapters, cable group number, number of cables, and connector
IDs to order for the cables.
Logical paths and path groups
A logical path is a connection between a controller image and a host image. An ESCON link consists of two fibres, one for each direction of transmission. An ESCON connector connects the fibre pair to an ESCON port.
Each ESCON adapter card supports two ESCON ports or links. Each port supports
64 logical paths. With a maximum of 32 ESCON ports, the maximum number of
logical paths is 2048.
Each controller image supports up to 64 path groups. Each path group can have up to eight logical paths. Each controller image supports a maximum of 128 logical paths.
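The limits above multiply out consistently. A small shell check of the arithmetic (the 32-port and 64-path figures come from the text; the 16-card count is derived from two ports per card):

```shell
# Consistency check of the ESCON logical-path limits quoted above.
ports_per_adapter=2     # ESCON ports (links) per adapter card
adapter_cards=16        # derived: 32 ports / 2 ports per card
paths_per_port=64       # logical paths per ESCON port

echo $(( ports_per_adapter * adapter_cards ))                   # prints 32 (ESCON ports)
echo $(( ports_per_adapter * adapter_cards * paths_per_port ))  # prints 2048 (logical paths)
```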
Cable lengths and path types
All ESCON attachments have a light-emitting diode (LED) interface. The cable
attached to the host adapter can be up to 2 km (1.2 mi) in length using 50-µm fibre
or 3 km (1.9 mi) in length for 62.5-µm fibre. There are no cable splices inside the
ESS. Laser links or LEDs can exist between the host system attached to the:
v Storage server and the host channel controller
v Peer controller host channel
v Peer controller with appropriate equipment
Note: Appropriate retention hardware to support cable attachments that control
bend-radius limits comes with each ESCON host attachment.
Data transfer
The ESCON host adapter supports all data input buffer sizes up to 256 bytes. During write operations, the host adapter requests the minimum pacing count of X'02'. For commands whose parameter data length is not determined by the parameter data itself, the full transfer count in the command frame is requested in the first data request. The adapter supports an NDF-R count of 7 (that is, a maximum of eight data requests).
Directors and channel extenders
The ESS supports IBM ESCON directors 9032 Models 1, 2, 3, and 5. The ESS
supports IBM 9036 channel extenders to the distances that are allowed by the 9036
as described in “Cable lengths and path types”. The ESS supports the 9729
Wavelength Division Multiplexer channel extender up to 50 km (31 mi).
Identifying the port for TSO commands
Figure 39 on page 72 helps you identify the port ID for the MVS TSO commands for
S/390 and zSeries hosts. The numbers like 806, 804, 802, and 800 are internal
microcode numbers. The numbers like 00, 04, 08, and 0C are tag numbers. You
need tag numbers for the path setup. To determine the tag number, use the deserv
command with the rcd (read configuration data) parameter.
[Diagram: for each ESCON host adapter port on boards CPI4 through CPI7 in clusters 1 and 2, the internal microcode IDs (for example, 806, 804, 802, 800) and the corresponding tag numbers (for example, 00, 04, 08, 0C).]
Figure 39. Port identification for S/390 and zSeries TSO commands
Attachment requirements
Ensure that your installation meets the following requirements:
1. One or two ESSs and no other I/O devices are attached to each ESCON host
adapter card.
Attention: To avoid causing static discharge damage when handling disk drive
modules and other parts, observe the precautions in “Handling electrostatic
discharge-sensitive components” on page 10.
2. The cables are connected correctly and are seated properly.
Migrating from ESCON to native FICON
The following section provides information about how to migrate from ESCON to
native FICON. FICON is supported only on Models F10 and F20.
Note: FICON support consists of hardware enhancements for enterprise servers,
host software upgrades, ESS LIC, and adapters. If your ESS is not at the
level that supports FICON, you must install the LIC upgrade. You can
perform a nondisruptive update to most hardware and software upgrades.
Native ESCON configuration
Figure 40 on page 73 shows an example of a native ESCON configuration. The
configuration shows an S/390 or zSeries host that has four ESCON channels 1
attached to the ESS through two ESCON directors 3. The channels are grouped
into a channel-path group 2 for multipathing capability to the ESS ESCON
adapters 4.
Figure 40. Example of an ESCON configuration
Mixed configuration
Figure 41 on page 74 shows another example of a S/390 or zSeries host system
with four ESCON channels 1. In this example, two FICON channels 2 have
been added to an S/390 or zSeries host. The illustration also shows the
channel-path group 3 and FICON directors 4 through which the two FICON
adapters 5 are installed in the ESS.
The two FICON directors 4 are not required; however, using two directors improves reliability by eliminating a single point of failure. A single point of failure exists if both FICON channels 2 are connected to a single FICON director. You can also connect the FICON channels 2 to the ESS FICON adapters 5 directly, without directors.
Figure 41 on page 74 also shows four ESCON adapters 6 and two ESCON
directors 7. This configuration gives you the most flexibility for future I/O changes.
Figure 41 on page 74 illustrates the FICON channels 2 that have been added to
the existing ESCON channel-path group 3. Because the channel-path group has
ESCON and FICON channel paths, it makes migrating easier. This intermixing of
types of channel-paths allows you to nondisruptively add FICON channels 2 to
the host and to add FICON adapters 5 to the ESS.
Notes:
1. The configuration in Figure 41 on page 74 is supported for migration only.
2. Do not use this configuration for an extended period.
3. IBM recommends that you migrate from a mixed channel-path group
configuration to an all FICON channel-path group configuration.
Figure 41. Example of an ESCON configuration with added FICON channels
FICON configuration
Figure 42 on page 75 illustrates how to remove the ESCON paths. The S/390 or
zSeries host has four ESCON channels 1 connected to two ESCON directors 6.
The S/390 or zSeries host system also has two FICON channels 2.
You can remove the ESCON adapters nondisruptively from the ESS while I/O
continues on the FICON paths. You can change the channel-path group 3
definition to include only the FICON director 4 paths to complete the migration to
the ESS with two FICON adapters 5.
You can retain the ESCON channels on the S/390 or zSeries host system so that
you can access other ESCON controllers. You can also keep the ESCON adapters
1 on the ESS to connect to other S/390 or zSeries hosts.
Figure 42. Example of a native FICON configuration with FICON channels that have been moved nondisruptively
Migrating from a FICON bridge to a native FICON attachment
This section shows how to migrate from a FICON bridge to a native FICON
attachment. The FICON bridge is a feature card of the ESCON Director 9032 Model
5. The FICON bridge supports an external FICON attachment and connects
internally to a maximum of eight ESCON links. The volume on these ESCON links
is multiplexed on the FICON link. You can perform the conversion between ESCON
and FICON on the FICON bridge.
FICON bridge configuration
Figure 43 on page 76 shows an example of how to configure a FICON bridge. It
also shows an S/390 or zSeries host with two FICON channels 1 attached to two
FICON bridges 2. You can attach the ESS through the channel-path group 3 to
four ESCON links 4.
Figure 43. Example of how to configure a FICON bridge from an S/390 or zSeries host system to an ESS
Mixed configuration
Figure 44 on page 77 shows an example of an S/390 or zSeries host system with
one FICON channel 1 and one FICON director 2 through a channel-path group
3 and FICON host adapter 4 to the ESS. Figure 44 on page 77 also shows an
S/390 or zSeries host system with one FICON channel 1 and one ESCON
director with a FICON bridge 6 through a channel-path group 3 and two
ESCON adapters 5.
Figure 44 on page 77 shows that one FICON bridge was removed from the FICON
configuration. The FICON channel that was connected to that bridge is reconnected
to the new FICON director. The ESS FICON adapter is connected to this new
director. The channel-path group was changed to include the new FICON path. The
channel-path group is a mixed ESCON and FICON path group. I/O operations
continue to the ESS devices across this mixed path group. Access to the ESS
devices is never interrupted because all the actions are nondisruptive.
Notes:
1. The configuration in Figure 44 on page 77 is supported for migration only.
2. Do not use this configuration for an extended period.
3. IBM recommends that you migrate from a mixed channel-path group
configuration to an all FICON channel-path group configuration.
Figure 44. Example of how to add a FICON director and a FICON host adapter
Native FICON configuration
Figure 45 on page 78 shows an S/390 or zSeries host system with two FICON
channels 1 connected to two FICON directors 2 through a channel-path group
3 to two FICON adapters 4. Note that the second bridge has been removed
and a FICON director has been added. The channel-path group has only the
FICON paths.
Figure 45. Example of the configuration after the FICON bridge is removed
Attaching to a FICON channel
This section tells you how to configure the ESS for a FICON attachment.
Configuring the ESS for FICON attachment
You can perform a FICON channel attachment on the ESS Models F10 and F20.
You cannot perform a FICON channel attachment on the ESS Models E10 and E20.
When you attach a Model F10 or F20 to a FICON interface, you must use one of
the following host adapter feature codes:
v 3021
Feature code 3021 is a longwave laser adapter that has a 31-m (100-ft) 9-micron
cable with duplex connectors.
Because the 3021 uses one of four slots in one of the four I/O bays, you can
have a maximum of 16 adapters in the ESS. This allows you to have a maximum
of 16 FICON interface attachments.
v 3023
Feature code 3023 is the shortwave laser adapter that includes a 31-m (100-ft), 50-micron cable with duplex connectors.
Note: You cannot mix FICON and fibre-channel protocol (SCSI) connections on
the same ESS adapter.
Because the 3023 uses one of four slots in one of the four I/O bays, you can
have a maximum of 16 adapters in the ESS. This allows you to have a maximum
of 16 FICON interface attachments. If the attachments are all point-to-point, you can attach directly to 16 FICON channels. If you attach to a switch or director, you can attach a maximum of 128 FICON channels per ESS FICON adapter, which lets you attach to a larger number of hosts. The ESS
supports 256 logical paths per FICON link (compared to just 64 for ESCON), 128
logical paths per logical subsystem, and 2048 logical paths for each ESS.
Before FICON, you could only connect with a fibre-channel and use the fibre-channel protocol with feature code 3022. Feature code 3021, with its 9-micron single-mode fiber cable, increases the point-to-point distance from 500 m (1500 ft) to 10 km (6.2 mi). The increased distance provides greater configuration options with zSeries host systems with FICON host adapters.
Attachment considerations
This section describes some things you should consider before you configure your
system with a FICON interface.
Setting up ESCON and FICON links
If the system requires x ESCON links, where x is the number of links needed for the performance and availability attributes you want, consider how many FICON links you need. For example, you can map 4 ESCON links to a single FICON link and maintain approximately equivalent performance. If the ESCON channel use is low, you can map 6 or 8 ESCON links to a single FICON link.
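The link-consolidation rule above reduces to simple planning arithmetic. The following sketch assumes only the 4:1, 6:1, and 8:1 ratios quoted in the text; the default link count is illustrative:

```shell
#!/bin/sh
# Hedged planning sketch: given x ESCON links and a consolidation ratio,
# compute how many FICON links give roughly equivalent performance.
escon_links=${1:-8}
ratio=${2:-4}        # use 6 or 8 when ESCON channel use is low
ficon_links=$(( (escon_links + ratio - 1) / ratio ))   # round up
echo "$escon_links ESCON links -> $ficon_links FICON links at $ratio:1"
```

For example, with the defaults the sketch reports that 8 ESCON links map to 2 FICON links at a 4:1 ratio.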
Multipathing for ESCON and FICON
Consider the difference between the path groups when you compare FICON to
ESCON. For example, for ESCON, you can configure 4 or 8 paths per path group
from a host to an ESS. For ESCON, you want at least four paths in the path group
to maximize performance. Most ESCON controllers initiate channel command execution that partially synchronizes the lower DASD interface with the upper channel interface. This command allows only a very short time to reconnect, so a reconnection can fail. Eight paths in the path group minimizes the number of missed reconnections; increasing the number of paths further does not reduce missed reconnections substantially. With eight paths in the path group, you can also increase the overall throughput.
For FICON controllers, there is no synchronization between the lower DASD
interface and the upper channel interface. The number of paths in the path group depends on the throughput requirement. If it takes x paths to satisfy the throughput
requirement, where x is the number of paths, set the path group to x.
Note: x must be a minimum of two and cannot exceed a maximum of eight.
Attaching to a FICON channel or a FICON channel-path group
When you attach multiple controllers to a channel, you are connecting serially. You
can use a switch (director) for each controller or an ESCON or FICON channel that
has a direct connection to the controller. I/O activity does not flow through all the
other controllers before you get to the target controller. I/O activity goes directly to
the target controller. When multiple controllers are connected to a channel through
a switch, you create the logical equivalent of the parallel interconnection.
With the parallel interface and with the ESCON interface, the channel and controller
communicate to form a private connection. None of the other controllers on the
channel can communicate with the channel while this private connection is in place.
The private connection supports input and output activity between the channel and
the controller. It can run slowly, depending upon the factors that affect the controller
and the device. The protocol does not allow any of the serially connected
controllers to use any spare cycles. The result is poor performance.
FICON does not support a private connection. FICON performs frame (or packet)
multiplexing. A configuration with the serially connected controllers communicates
with the controllers simultaneously. It can multiplex I/O operations across all
controllers simultaneously. No interface cycles are wasted because of a private
connection. You can serially connect controllers with FICON without performance
degradation.
The next question is whether you can serially connect DASD control units with tape controllers. Tape generally performs much larger I/O operations at any instant in time. Therefore, even with FICON, running tape I/O can temporarily lock out some DASD I/O. It is still better to put tape and DASD on different FICON channels.
Attaching to a FICON channel with G5 and G6 hosts
You can use the following FICON adapters with the IBM S/390 Generation 5 (G5)
and Generation 6 (G6) host systems:
v Feature code 2314
Feature code 2314 is the longwave laser adapter.
v Feature code 2316
Feature code 2316 is the shortwave laser adapter.
You can use the following FICON adapters with the zSeries host systems:
v Feature code 2315
Feature code 2315 is the FICON longwave laser adapter. This adapter has two ports per adapter. This adapter uses a 9-micron single-mode cable, but you can use it with a 62.5-micron multimode cable when you attach mode-conditioning cables at each end.
v Feature code 2318
Feature code 2318 is the FICON shortwave laser adapter. This adapter has two
ports per adapter. The shortwave laser adapter supports the 50- and 62.5-micron
multimode cable.
You can attach the FICON channels directly to an ESS or you can attach the
FICON channels to a fibre-channel switch. When you attach the FICON channels
directly to an ESS, the maximum number of FICON attachments is 16. Sixteen is
the maximum number of host adapters you can configure in an ESS. When you use
an ESS host adapter to attach to FICON channels either directly or through a
switch, the adapter is dedicated to FICON attachment. It cannot be simultaneously
attached to fibre-channel protocol hosts.
When you attach an ESS to FICON channels through one or more switches, the
maximum number of FICON attachments is 128 per ESS adapter. The directors
provide very high availability with redundant components and no single points of
failure or repair.
You can use the IBM 2032 Model G4 (McData ED-6064 Enterprise fibre-channel
director) or IBM 2042 Model 001 (Inrange FC/9000 fibre-channel director). You can
use either director to attach fibre-channel protocol hosts and devices in addition to
the FICON hosts and devices. For these configurations, the fibre-channel protocol
hosts should communicate only with the fibre-channel protocol devices. The FICON
hosts should communicate only with the FICON devices. IBM recommends that you
set up zones in the directors to guarantee that none of the fibre-channel protocol
hosts or devices can affect the FICON traffic.
When you attach FICON products to switches or directors, you cannot use cascade
switches. You cannot configure a fabric of multiple interconnected directors and
have a FICON channel attached to one director communicate to a FICON control
unit that is attached to another director. The FICON architecture prohibits this
capability. The reason for the restriction is that the base S/390 and zSeries I/O architecture uses a single byte for addressing the I/O devices. This one-byte I/O address is not compatible with the fibre-channel 3-byte port address. The FICON solution to this problem is to disallow switch cascading.
Chapter 8. Attaching a Linux host
This chapter describes how to attach an Intel server that runs Linux with Red Hat 7.1 or SuSE 7.2 to an IBM ESS. You can attach to an ESS with the following adapter cards:
v QLogic QLA2200F
v QLogic QLA2300F
You cannot attach the ESS to a Linux host system with SCSI adapters.
Note: The steps to install and configure adapter cards are examples. Your configuration might be different.
Attaching with fibre-channel adapters
This section describes how to attach an Intel server running Linux with Red Hat 7.1 or SuSE 7.2 to an IBM ESS. You can attach to an ESS with the following adapter cards:
v QLogic QLA2200F
v QLogic QLA2300F
For information about the most current version of the kernel and the switches that are supported, see the following Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. Either you or an IBM SSR defines the fibre-channel host system with the
worldwide port name identifiers. For the list of worldwide port names see
“Appendix A. Locating the worldwide port name (WWPN)” on page 153.
3. Either you or an IBM SSR defines the fibre-port configuration if you did not do it
during the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
4. You or an IBM SSR configures the host system for the ESS by using the
instructions in your host system publications.
Installing the QLogic QLA2200F or QLogic QLA2300F adapter card
This section tells you how to attach an ESS to a Linux host system with the QLogic QLA2200F or QLogic QLA2300F adapter card. Single- and dual-port fibre-channel interfaces with the QLogic QLA2200F adapter card support the following public and private loop modes:
v Target
v Public initiator
v Private initiator
v Target and public initiator
v Target and private initiator
Perform the following steps to install the QLogic QLA2200F or QLogic QLA2300F adapter card:
Note: The following steps are an example of a configuration. The configuration for your adapter might differ.
1. Install the QLogic QLA2200F or QLogic QLA2300F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the server.
4. Press Alt+Q to get to the FAST!Util Command panel.
5. From the Configuration Settings menu, click Host Adapter Settings. Set the parameters and values from the Host Adapter Settings menu as follows:
a. Host adapter BIOS: Disabled
b. Frame size: 2048
c. Loop reset delay: 5 (minimum)
d. Adapter hard loop ID: Disabled
6. From the Advanced Adapter Settings menu, press the Down Arrow key to
highlight LUNs per target; then press Enter. Set the parameters and values
from the Advanced Adapter Settings menu as follows:
a. Execution throttle: 240
b. Fast command posting: Enabled
c. >4 GB addressing: Disabled for 32 bit systems
d. LUNs per target: 0 or 128
e. Enable LIP reset: No
f. Enable LIP full login: No
g. Enable target reset: Yes
h. Login retry count: 60
i. Port down retry count: 60
j. Driver load RISC code: Enabled
k. Enable database updates: No
l. Disable database load: No
m. IOCB allocation: 256
n. Extended error logging: Disabled (might be enabled for debugging)
7. Press Esc to return to the Configuration Settings menu.
8. From the Configuration Settings menu, scroll down to Extended Firmware
Settings. Press Enter.
9. From the Extended Firmware Settings menu, scroll down to Connection
Options to open the Option and Type of Connection window.
10. Press Enter.
11. Select the option:
v 0: Loop only
v 1: Point-to-point
v 2: Loop preferred (If you cannot use arbitrated loop, then default to
point-to-point.)
v 3: Point-to point, otherwise loop (If you cannot use point-to-point, default to
arbitrated loop.)
|
|
|
|
|
Note: If you connect the ESS directly to the host system, the option you select
must match the port connections on the ESS.
12. Press Esc.
13. To save the changes, click Yes. Press Enter.
14. Restart the server.
Loading the current fibre-channel adapter driver
Perform the following steps to load the current driver onto the QLogic adapter card.
1. Go to the following Web site: www.qlogic.com
2. From the home page, click Driver Download.
3. Click Use QLogic Drivers.
4. Click Fibre-Channel Adapter Drivers and Software.
5. Click QLA22xx or QLA23xx.
6. Click Linux.
7. Click the link for the appropriate Linux source code and the appropriate Linux driver or the Linux kernel.
8. In the Save As window, find the current driver file, xxxxxxx.exe, where xxxxxxx is the driver file name.
Note: IBM recommends that you save the file to a floppy diskette.
9. Click Save.
10. Close the Web site.
11. From your host system Start menu, click Run.
12. In the Run window, ensure that the drive letter is the same as the drive letter where you saved the xxxxxxx.exe file in step 8. If no drive letter appears, type the letter of the drive where you saved the driver file.
13. Type the driver file name after x:, where x is the drive letter you specified when you saved the file.
14. Type the directory name where you want to put the file. Click Unzip.
15. Click OK to unzip the current driver file.
Installing the fibre-channel adapter drivers
Perform the following steps to install the fibre-channel adapter drivers.
Note: This is an example of how to install the fibre-channel adapter drivers.
1. Type mkdir /usr/src/qlogic
2. Type mv [download location]/[driver source] /usr/src/qlogic
3. Type tar -xzf [driver source]
4. Type cd /usr/src/linux
5. Type make modules
6. Type make modules_install
7. Type make OSVER=linux-2.4.x SMP=1, where x represents the kernel version; SMP=1 applies if you are running multiple processors
8. Type mkdir /lib/modules/2.4.x/kernel/drivers/scsi, where x is your kernel version
9. Type cp qla2x00.o /lib/modules/2.4.x/kernel/drivers/scsi
10. Type insmod qla2x00
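The installation steps above can be collected into one script with error checking. This is a hedged sketch: KVER and DRIVER_SRC are assumptions, so substitute your kernel version and the tarball name you downloaded. The script defaults to DRY_RUN=1, which only prints each command; set DRY_RUN=0 to execute for real (as root).

```shell
#!/bin/sh
# Hedged sketch of the driver installation steps; not an official script.
DRY_RUN=${DRY_RUN:-1}
KVER=${KVER:-2.4.18}                      # assumed kernel version
DRIVER_SRC=${DRIVER_SRC:-qla2x00src.tgz}  # assumed driver source tarball name

run() {
    # In dry-run mode, print the command; otherwise run it and stop on failure.
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@" || { echo "step failed: $*" >&2; exit 1; }
    fi
}

run mkdir -p /usr/src/qlogic                              # step 1
run mv "$DRIVER_SRC" /usr/src/qlogic/                     # step 2
run cd /usr/src/qlogic
run tar -xzf "$DRIVER_SRC"                                # step 3
run cd /usr/src/linux                                     # step 4
run make modules                                          # step 5
run make modules_install                                  # step 6
run make OSVER=linux-"$KVER" SMP=1                        # step 7; omit SMP=1 on uniprocessors
run mkdir -p /lib/modules/"$KVER"/kernel/drivers/scsi     # step 8
run cp qla2x00.o /lib/modules/"$KVER"/kernel/drivers/scsi # step 9
run insmod qla2x00                                        # step 10
```

Run it once in dry-run mode and compare the printed commands against the numbered steps before executing.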
Configuring the ESS with the QLogic QLA2200F or QLogic QLA2300F host adapter card
To configure the host adapter card, use the IBM Enterprise Storage Server StorWatch Specialist.
Number of disk devices on Linux
The maximum number of devices that are supported on a Linux system is 128. The standard Linux kernel uses a major and minor number address mechanism. A special device file represents each disk device. By default, there is a maximum of 16 partitions per disk. The major and minor numbers are each 8 bits.
There are eight major numbers that are reserved for SCSI devices. Fibre-channel attached devices are handled as SCSI devices. The major numbers are 8, 65, 66, 67, 68, 69, 70, and 71.
There are 256 minor numbers available for each of the eight major numbers. The following formula provides the maximum number of devices under Linux:
Number of devices = (number of major numbers) x (number of minor numbers) / (number of partitions)
Number of devices = 8 x 256 / 16 = 128
There are several Linux extensions available to address this limitation. One approach is to use the major and minor number address spaces in different ways. Some of the minor number address space for the partitions is used for the major number address space, which allows more devices with fewer partitions.
You can also use devfs (device file system). Devfs uses a 32-bit device identifier, which allows you to address many more devices. It shows only the devices that are available on the system, instead of listing device files for devices that are not attached to the system. Devfs is backward compatible, mounts over /dev, and uses UNIX-like device identification. Here is an example: /dev/scsi/host/bus/target/lun
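The device-count formula can be checked directly with shell arithmetic; nothing here is system-specific:

```shell
# Check of the device-count formula above.
majors=8        # SCSI disk major numbers 8 and 65-71
minors=256      # minor numbers per major (8-bit)
partitions=16   # device-file slots reserved per disk
echo $(( majors * minors / partitions ))   # prints 128
```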
On the Red Hat distribution, all special device file entries are available for the 128 devices. On the SuSE distribution, there are only special device files available for the first 16 devices. You must create all other devices manually by using the mknod command.
Configuration of ESS storage under Linux
Each of the attached ESS LUNs has a special device file in the Linux directory /dev. In the current release there is a maximum of 128 SCSI or fibre-channel disks, based on the major numbers currently available. For Red Hat, the entries for all 128 devices are added automatically by the operating system. For SuSE, there are only special device files for the first 16 disks. You must create the device files for additional disks by using the mknod command.
The range of devices goes from /dev/sda (LUN 0) to /dev/sddx (LUN 127). Figure 46 shows an example of the range of devices.
# ls -l /dev/sda
brw-rw---- 1 root disk 8, 0 Aug 24 2000 /dev/sda
Figure 46. Example of range of devices for a Linux host
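The SuSE mknod step above can be sketched as a script. This is an illustrative sketch, not from the manual: it only prints the mknod commands for the whole-disk device files of disks 16 through 31, and the name and major/minor arithmetic assume the standard 2.4 kernel layout (16 minors reserved per disk; majors 8 and 65 through 71):

```shell
#!/bin/sh
# Hedged sketch: print (do not run) the mknod commands that would create
# the whole-disk special device files SuSE does not create by default.

disk_name() {
    # Map disk index 0, 1, 2, ... to sda, sdb, ..., sdz, sdaa, sdab, ...
    letters=abcdefghijklmnopqrstuvwxyz
    i=$1
    if [ "$i" -lt 26 ]; then
        printf 'sd%s\n' "$(printf '%s' "$letters" | cut -c $((i + 1)))"
    else
        printf 'sd%s%s\n' \
            "$(printf '%s' "$letters" | cut -c $((i / 26)))" \
            "$(printf '%s' "$letters" | cut -c $((i % 26 + 1)))"
    fi
}

i=16
while [ "$i" -le 31 ]; do
    mi=$(( i / 16 ))                       # which of the eight SCSI majors
    if [ "$mi" -eq 0 ]; then major=8; else major=$(( 64 + mi )); fi
    minor=$(( (i % 16) * 16 ))             # whole-disk minor under this major
    echo "mknod /dev/$(disk_name "$i") b $major $minor"
    i=$(( i + 1 ))
done
```

Review the printed commands first; as root, you can pipe the output to sh to create the nodes.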
Partitioning ESS disks
Before you create a file system, you must partition the disk with the fdisk utility.
Specify the special device file of the disk that you want to partition when you
run fdisk. Figure 47 shows an example of the different options for the fdisk
utility.
# fdisk /dev/sdb
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Figure 47. Example of different options for the fdisk utility
Figure 48 on page 88 shows an example of a primary partition on the disk /dev/sdb.
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-953, default 1): Enter
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-953, default 953): Enter
Using default value 953
Command (m for help): p
Disk /dev/sdb: 64 heads, 32 sectors, 953 cylinders
Units = cylinders of 2048 * 512 bytes
Device Boot    Start      End    Blocks   Id  System
/dev/sdb1          1      953    975856   83  Linux
Figure 48. Example of primary partition on the disk /dev/sdb
Next, assign the system partition ID before you write the information to the partition
table on the disk and exit the fdisk program. Figure 49 shows the assignment of the
Linux system ID (hex code 83) to the partition.
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
SCSI device sdb: hdwr sector= 512 bytes. Sectors= 1953152 [953 MB] [1.0 GB]
sdb: sdb1
SCSI device sdb: hdwr sector= 512 bytes. Sectors= 1953152 [953 MB] [1.0 GB]
sdb: sdb1
WARNING: If you have created or modified any DOS 6.x partitions, please see the
fdisk manual page for additional information.
Syncing disks.
[root@yahoo /data]#
Figure 49. Example of assignment of Linux system ID to the partition
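The interactive sessions in Figures 48 and 49 can also be replayed non-interactively by piping the same keystrokes to fdisk. This is a hedged sketch, not a step from this guide: prompts vary slightly between fdisk versions, /dev/sdb is the example device, and the command destroys existing partition data.

```shell
# Keystrokes from the sessions above: new primary partition 1 over
# the whole disk (the two empty lines accept the default cylinders),
# set type 83 (Linux), then write the table and exit.
fdisk_keys='n
p
1


t
1
83
w
'
# As root: printf '%s' "$fdisk_keys" | fdisk /dev/sdb
printf '%s' "$fdisk_keys" | wc -l   # 9 input lines
```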
Creating and using file systems on ESS
After you partition the disk as described in “Partitioning ESS disks” on page 87, the
next step is to create a file system. Figure 50 on page 89 shows an example of
creating an EXT2 Linux file system (which is not journaled) with the mke2fs or mkfs
command.
Using mke2fs:
[root@yahoo /data]# mke2fs /dev/sdb1
mke2fs 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
122112 inodes, 243964 blocks
12198 blocks (5.00%) reserved for the super user
First data block=0
8 block groups
32768 blocks per group, 32768 fragments per group
15264 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
[root@yahoo /data]#
Using mkfs:
[root@yahoo /data]# mkfs -t ext2 /dev/sdb1
mke2fs 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
122112 inodes, 243964 blocks
12198 blocks (5.00%) reserved for the super user
First data block=0
8 block groups
32768 blocks per group, 32768 fragments per group
15264 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
[root@yahoo /data]#
Figure 50. Example of creating a file system with the mke2fs or mkfs command
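After mke2fs or mkfs completes, the file system must be mounted before use. The following is a hedged sketch, not a step from this guide; the mount point /data and the device /dev/sdb1 are examples carried over from the figures above.

```shell
# Build the /etc/fstab line for the new EXT2 file system; the mount
# point /data is an example, not mandated by this guide.
fstab_entry='/dev/sdb1  /data  ext2  defaults  1  2'
echo "$fstab_entry"
# As root, to mount now and at every subsequent boot:
#   mkdir -p /data
#   mount -t ext2 /dev/sdb1 /data
#   echo "$fstab_entry" >> /etc/fstab
```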
Chapter 9. Attaching to a Novell NetWare host
This chapter describes how to attach an ESS to a Novell NetWare host system with
the following adapter cards:
v Adaptec AHA-2944UW
v QLogic QLA1041
v QLogic QLA2100F
v QLogic QLA2200F
Note: The steps to install and configure adapter cards are examples. Your
configuration might be different.
Attaching with SCSI adapters
This section describes the procedures to attach to a Novell NetWare host system
with the following SCSI adapters:
v Adaptec AHA-2944UW
v QLogic QLA1041
For procedures about how to attach an ESS to a Novell NetWare host system with
fibre-channel adapters, see “Attaching with fibre-channel adapters” on page 94.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the CD that you
received with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS:
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You or an IBM SSR configures the host system for the ESS by using the
instructions in your host system publications.
Installing and configuring the Adaptec adapter card
Perform the following steps to install and configure the Adaptec AHA-2944UW
adapter card.
Note: The parameter settings shown are examples. The settings for your
environment might be different.
1. Install the Adaptec AHA-2944UW on the host system.
2. Connect the cable to the ESS port.
3. Restart the server.
4. Press Ctrl+A to get to the SCSISelect menu.
a. Set the parameters on the Advanced Configuration Options panel as
follows:
v Host Adapter SCSI ID: 7
v SCSI Parity Checking: Enabled
v Host Adapter SCSI Termination: Automatic
v Sync Transfer Rate (megabytes per second): 40.0
v Initiate Wide Negotiation: Yes
v Enable Disconnection: Yes
v Send Start Unit Command: No
v Enable Write Back Cache: No
v BIOS Multiple LUN Support: Yes
v Include in BIOS Scan: Yes
b. Set the parameters on the SCSI Device Configuration panel as follows:
v Reset SCSI BIOS at IC Int: Enabled
v Display Ctrl+A Message During BIOS: Enabled
v Extend BIOS translation for DOS drives > 1 GB: Disabled
Note: Set this parameter to Disabled if you do not put DOS partitions
on the ESS or use remote boot for ESS hosted volumes.
v Verbose or Silent Mode: Verbose
v Host Adapter BIOS: Disabled:scan bus
v Support Removable Disks under BIOS as fixed disks: Disabled
v BIOS support for bootable CD-ROM: Disabled
v BIOS support for INT 13 extensions: Enabled
c. Save the changes and select SCSISelect again to verify that you saved
the changes.
5. Restart the server.
6. With NetWare 5, use the nwconfig command; with NetWare 4.2, use the load
install command. Type the command on the command line.
7. Load the AHA2940.ham file (version 7.0) by using the disk driver option.
8. Edit the startup.ncf file and make sure that the load statement looks like the
following example:
LOAD AHA2940.ham
SLOT=x lun_enable=ff
where x is the slot number of the adapter.
9. Save the startup.ncf file.
10. Edit the startup.ncf file and add the following text to the end of the file:
Scan all LUNs
Note: This enables the server to scan all the attached storage before
mounting the volumes.
11. Save the file.
12. Restart the server.
13. At the system console, type the following commands:
Scan for new devices
List devices
A list of devices is displayed.
14. Partition the devices and make volume groups.
Installing and configuring the QLogic QLA1041 adapter card
Perform the following steps to install and configure the QLogic QLA1041 adapter
card.
Note: The parameter settings shown are an example. The settings for your
environment might be different.
1. Install the QLogic QLA1041 adapter card in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Alt+Q to get to the FAST!Util menu.
a. From the Configuration Settings menu, select Host Adapter Settings.
Set the following parameters:
v Host Adapter: Enabled
v Host Adapter BIOS: Disabled
v Host Adapter SCSI ID: 7
v PCI Bus DMA Burst: Enabled
v CD ROM Boot: Disabled
v SCSI Bus Reset: Enabled
v SCSI Bus Reset Delay: 5
v Concurrent Command or Data: Enabled
v Drivers Load RISC Code: Enabled
v Adapter Configuration: Auto
b. Set the parameters in the SCSI Device Settings menu as follows:
v Disconnects OK: Yes
v Check Parity: Yes
v Enable LUNS: Yes
v Enable Devices: Yes
v Negotiate Wide: Yes
v Negotiate Sync: Yes
v Tagged Queueing: Yes
v Sync Offset: 8
v Sync Period: 12
v Exec Throttle: 16
c. Save the changes and select FAST!Util again to verify that you saved the
changes.
5. Restart the server.
6. With NetWare 5, use the nwconfig command; with NetWare 4.2, use the load
install command. Type the command on the command line.
7. Load the QL1000.ham file (version 1.27) by using the disk driver option.
8. Edit the startup.ncf file, and make sure that the load statement looks like the
following example:
LOAD QL1000.HAM SLOT=x
where x is the slot number of the adapter.
9. Save the file.
10. Edit the startup.ncf file, and add the following text to the end of the file:
SCAN ALL LUNS
Note: This enables the server to scan all the attached storage before
mounting the volumes.
11. Save the file.
12. Restart the server.
13. At the system console, type the following commands:
SCAN FOR NEW DEVICES
LIST DEVICES
A list of all the devices is displayed.
14. Partition the devices and make volume groups.
Attaching with fibre-channel adapters
This section describes how to attach an ESS to a Novell NetWare host system with
the following adapter cards:
v QLogic QLA2100F
v QLogic QLA2200F
Note: The IBM SAN Fibre Channel Switch 2109 Models S08 and S16 are
supported for Novell NetWare. The IBM SAN Data Gateway 2108 Model G07
is not supported for Novell NetWare.
For procedures about how to attach an ESS to a Novell NetWare host system with
SCSI adapters, see “Attaching with SCSI adapters” on page 91.
Installing the QLogic QLA2100F adapter card
This section tells you how to attach an ESS to a Novell NetWare host system with
the QLogic QLA2100F adapter card. Single-port fibre-channel interfaces with the
QLogic QLA2100F adapter card support the following loop modes:
v Target
v Initiator
v Target and initiator
Note: The arbitrated loop topology is the only topology available for the QLogic
QLA2100F adapter card.
Perform the following steps to install the QLogic QLA2100F adapter card.
Note: The following steps are an example of a configuration. The configuration for
your adapter might differ.
1. Install the QLogic QLA2100F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the server.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. From the Advanced Adapter Settings menu, press the Down Arrow to
highlight LUNs per target. Press Enter.
7. Press the Down Arrow to find and highlight 32. Press Enter.
8. Press Esc.
9. To save the changes, click Yes. Press Enter.
10. Restart the server.
Installing the QLogic QLA2200F adapter card
This section tells you how to attach an ESS to a Novell NetWare host system with
the QLogic QLA2200F adapter card. Single- and dual-port fibre-channel interfaces
with the QLogic QLA2200F adapter card support the following public and private
loop modes:
v Target
v Public initiator
v Private initiator
v Target and public initiator
v Target and private initiator
Perform the following steps to install the QLogic QLA2200F adapter card.
Note: The following steps are an example of a configuration. The configuration for
your adapter might differ.
1. Install the QLogic QLA2200F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the server.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, click Host Adapter Settings.
Set the parameters and values from the Host Adapter Settings menu as
follows:
a. Host adapter BIOS: Disabled
b. Frame size: 2048
c. Loop reset delay: 5 (minimum)
d. Adapter hard loop ID: Disabled
6. From the Advanced Adapter Settings menu, press the Down Arrow key to
highlight LUNs per target; then press Enter. Set the parameters and values
from the Advanced Adapter Settings menu as follows:
a. Execution throttle: 240
b. Fast command posting: Enabled
c. >4 GB addressing: Disabled for 32-bit systems
d. LUNs per target: 32
e. Enable LIP reset: No
f. Enable LIP full login: No
g. Enable target reset: Yes
h. Login retry count: 20 (minimum)
i. Port down retry count: 20 (minimum)
j. Driver load RISC code: Enabled
k. Enable database updates: No
l. Disable database load: No
m. IOCB allocation: 256
n. Extended error logging: Disabled (might be enabled for debugging)
7. Press Esc to return to the Configuration Settings menu.
8. From the Configuration Settings menu, scroll down to Extended Firmware
Settings. Press Enter.
9. From the Extended Firmware Settings menu, scroll down to Connection
Options to open the Option and Type of Connection window.
10. Press Enter.
11. Select the option:
v 0: Loop only
v 1: Point-to-point
v 2: Loop preferred (If you cannot use arbitrated loop, then default to
point-to-point.)
v 3: Point-to-point, otherwise loop (If you cannot use point-to-point, default to
arbitrated loop.)
Note: If you connect the ESS directly to the host system, the option you select
must match the port connections on the ESS.
12. Press Esc.
13. To save the changes, click Yes. Press Enter.
14. Restart the server.
Loading the current adapter driver
Perform the following steps to load the current driver onto the QLogic adapter card:
1. Go to the following Web site:
www.qlogic.com/
2. From the home page, click Driver Download.
3. Click Drivers.
4. Click Fibre-Channel Adapter Drivers.
5. Click QLA2xxx drivers.
6. Click Novell NetWare.
7. Click Qlogic Vx.xxx where V is the version and x.xxx is the version level of the
file name.
8. In the Save As window, find the current driver file, xxxxxxx.exe, where xxxxxxx
is the driver file name. Before you proceed to step 9, decide whether you want
to store the file on your hard drive or a floppy diskette.
Note: IBM recommends that you save the file to a floppy diskette.
9. Click Save.
10. Close the Web site.
11. From your host system Start menu, click Run.
12. In the Run window, ensure the drive letter in the field is the same as the drive
letter where you saved the xxxxxxx.exe file in step 8. If no drive letter appears,
type the letter of the drive where you saved the driver file.
13. Type the driver file name after x:, where x is the drive letter you specified to
save the file.
14. Type the directory name where you want to put the file. Click Zip.
Note: IBM recommends that you save the file to a floppy diskette.
15. Click OK to unzip the current driver file.
Installing the adapter drivers
Perform the following steps to install the fibre-channel adapter drivers:
1. From the NetWare server console, type nwconfig for NetWare 5.0 or 5.1 or
load install for NetWare 4.x.
2. Select Driver Options.
3. Select Configure Disk and Storage Device Drivers.
4. In the SCSI Adapters window, click the Drivers tab.
5. Select Select an additional driver.
6. Press the Insert key.
7. Insert a floppy diskette with the QLogic drivers into the A:\ drive of the
NetWare server. Press Enter.
The available driver is displayed.
8. Select the driver for the QLogic card, and press Enter.
9. Select Select/Modify driver parameters and type the slot number of the
QLogic card into the slot number parameter.
10. Set the Scan All LUNs parameter to Yes.
11. Press the Tab key and check Save Parameters and Load Driver.
12. Type exit to exit the nwconfig or install utility.
13. If storage has already been assigned to the server from the ESS Specialist,
type Scan for all new devices, Scan all LUNs, and List devices.
The ESS volumes are displayed in the devices list. Create volumes using the
nwconfig utility if necessary.
Configuring the QLogic QLA2100F or QLA2200F adapter card
To configure the adapter card, use the IBM Enterprise Storage Server StorWatch
Specialist.
Chapter 10. Attaching to a Sun host
This chapter tells you how to attach an ESS to a Sun Microsystems host system with
SCSI or fibre-channel adapters. You must install the SCSI adapters in the Sun host
system before you start.
Attaching with SCSI adapters
This section describes how to attach an ESS to a Sun host system with SCSI
adapters. For procedures on how to attach an ESS to a Sun host system with
fibre-channel adapters, see “Attaching with fibre-channel adapters” on page 104.
Attachment requirements
This section lists the requirements to attach the ESS to your host system:
v Check the logical unit number limitations for your host system. See Table 6 on
page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the CD that you
received with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
v Solaris 2.6, Solaris 2.7, and Solaris 8 require patches to ensure that the host and
ESS function correctly. See Table 18 for the minimum revision level that is
required for each Solaris patch ID.
Table 18. Solaris 2.6, 2.7, and 8 minimum revision level patches for SCSI

Solaris 2.6                 Solaris 2.7                            Solaris 8
105181-23 kernel update     106541-12 kernel update                108528-03 kernel update
105356-16 sd, ssd drivers   106924-06 isp driver                   109524-02 ssd driver
105580-16 glm driver        106925-04 glm driver                   109657-01 isp driver
105600-19 isp driver        107147-08 pci driver                   108974-03 sd, uata drivers
Not applicable              107458-10 dad, sd, ssd, uata drivers   Not applicable
v Review the Sun host SCSI adapter device driver installation and configuration
utility documents for additional Solaris patches that you might need.
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS:
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You assign the SCSI hosts to the SCSI ports on the ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. You configure the host system for the ESS. Use the instructions in your host
system publications.
Note: The IBM Subsystem Device Driver does not support the Sun host system in
a clustering environment. To have failover protection on an open system, the
Subsystem Device Driver requires a minimum of two adapters. You can run
the Subsystem Device Driver with one SCSI adapter, but you have no
failover protection. The maximum number of adapters supported is 16 for a
total of 32 SCSI ports.
The following two Subsystem Device Drivers support Sun host systems:
v Sun host hardware platforms limited to 32-bit mode and all Sun host
systems running Solaris 2.6.
v Sun host hardware platforms with 64-bit mode capabilities running Solaris
2.7 or Solaris 8.
Mapping hardware
Perform the following steps to map the hardware if your host is not turned on:
1. Turn on the Sun host system and wait for the host to perform the self-tests.
2. Press Stop+A.
3. Type printenv at the console prompt.
4. Review the list that is displayed to determine the value of the scsi-initiator-id.
The target ID is reserved for the system and must not be used by another
device.
Perform the following steps to map the hardware if your host is turned on:
1. Type eeprom at the console prompt.
2. Review the list that displays to determine the value of the scsi-initiator-id.
The target ID is reserved for the system and must not be used by another
device.
Configuring host device drivers
The following instructions explain how to update device driver configuration files on
the Sun host to enable access to target and LUN pairs configured on the ESS:
1. Change to the device driver configuration subdirectory by typing:
cd /kernel/drv
2. From the command prompt, type cp sd.conf sd.conf.bak to backup the sd.conf
file in the subdirectory.
3. Edit the sd.conf file to add support for the ID and LUN pairs that are configured
on the ESS. Figure 51 on page 101 shows the lines that you would add to the
file to access LUNs 0 - 7 on target 8.
Note: Do not add duplicate target and LUN pairs.
name="sd" class="scsi"
target=8 lun=0;
name="sd" class="scsi"
target=8 lun=1;
name="sd" class="scsi"
target=8 lun=2;
name="sd" class="scsi"
target=8 lun=3;
name="sd" class="scsi"
target=8 lun=4;
name="sd" class="scsi"
target=8 lun=5;
name="sd" class="scsi"
target=8 lun=6;
name="sd" class="scsi"
target=8 lun=7;
Figure 51. Example of sd.conf file entries
4. If you attach the ESS to a Sun host through a PCI SCSI adapter, continue to
step 5. If not, skip to step 7.
5. If the glm.conf file exists, back it up to the /kernel/drv subdirectory.
6. If you are running Solaris 2.6, edit the glm.conf file and add the following lines
to enable support for LUNs 8 - 32:
device-type-scsi-options-list=
"IBM     2105F20         ", "ibm-scsi-options";
ibm-scsi-options = 0x107f8;
If you run Solaris 2.7 or Solaris 8, edit the glm.conf file and add the following
lines to enable support for LUNs 8 - 32:
device-type-scsi-options-list=
"IBM     2105F20         ", "ibm-scsi-options";
ibm-scsi-options = 0x407f8;
Note: The ESS inquiry information in these examples (IBM 2105F20) must
include eight characters of vendor information (IBM and five spaces),
followed by 16 characters of product information (2105F20 and nine
spaces). The examples show the inquiry data for the IBM 2105 Model F20.
The actual product information must match the model being attached.
7. Type reboot -- -r from the Open Windows panel to shut down and restart the
Sun host system with the kernel reconfiguration option. Or, type boot -r from
the OK prompt after you shut down.
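The repetitive sd.conf entries in Figure 51 can be generated with a short shell loop rather than typed by hand. A hedged sketch, not a procedure from this guide; back up sd.conf first, as described in step 2, and the function name is illustrative.

```shell
# Print sd.conf entries for LUN 0 through last_lun on one target ID,
# in the same form as Figure 51.
gen_sd_entries() {
  target=$1 last_lun=$2
  lun=0
  while [ "$lun" -le "$last_lun" ]; do
    printf 'name="sd" class="scsi"\n        target=%s lun=%s;\n' "$target" "$lun"
    lun=$((lun + 1))
  done
}
gen_sd_entries 8 7            # the sixteen lines shown in Figure 51
# As root: gen_sd_entries 8 7 >> /kernel/drv/sd.conf
```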
Descriptions for setting the scsi_options in /etc/system file
You can configure the scsi_options variable in the Solaris 2.x kernel to enable or
disable particular capabilities. For example, you can set the scsi_options variable in
the /etc/system file. The default scsi_options variable allows the widest range of
capabilities that the SCSI host adapter provides.
The default scsi_options value on Solaris 2.x works for devices that transfer data at
5 MB per second and at 10 MB per second. The driver negotiates with each device
to determine whether it can transfer at 10 MB per second. If it can, the
10-MB-per-second transfer rate is used; if not, the 5-MB-per-second rate is used.
If you need to enable or disable a particular capability, use the following definitions
for the SCSI subsystem options:
v Bits 0 - 2 are reserved for debugging or informational-level switch.
v Bit 3 is reserved for a global disconnect or reconnect switch.
v Bit 4 is reserved for a global linked-command capability switch.
v Bit 5 is reserved for a global synchronous-SCSI capability switch.
All other bits are reserved for future use.
See Figure 52 for an example of how to specify the default SCSI options for:
v Wide SCSI
v Fast SCSI
v Tagged commands
v Synchronous-transfer linked commands
v Global parity
v Global disconnect or reconnect.
set scsi_options=0x3f8
Figure 52. Example of default settings for SCSI options
In the /etc/system file, ensure that the scsi_options mask has the following values.
For Solaris 2.4, 2.5, or later, include the default settings shown in Figure 52.
See Table 19 for an example of the SCSI options.
Table 19. Example of SCSI options

Bit   Mask    Meaning
3     0x08    Disconnect enable
4     0x10    Linked commands enable
5     0x20    Synchronous transfer enable
6     0x40    Parity support enable
7     0x80    Command tagged queuing
8     0x100   Fast SCSI enable
9     0x200   Wide SCSI enable
Installing the IBM Subsystem Device Driver
The following instructions explain how to install the IBM Subsystem Device Driver
from a compact disc. You can use the Subsystem Device Driver in conjunction with
the IBM Copy Services command-line interface program.
1. To ensure that the volume manager is running, type: ps -ef | grep vold
The /usr/sbin/vold process is displayed. If it is not displayed, type:
/etc/init.d/volmgt start
2. Insert the Subsystem Device Driver CD-R into the CD-ROM drive.
A File Manager window opens showing the paths for the Subsystem Device
Driver package subdirectories. Figure 53 shows an example of the path.
Note: You must be on the host console to see this window.
/cdrom/unnamed_cdrom
Figure 53. Example of the path you see when you insert the IBM Subsystem Device Driver
compact disc
3. Change to the subdirectory that contains the Subsystem Device Driver
package.
a. For Sun host hardware platforms limited to 32-bit mode and for all Sun
host systems running Solaris 2.6, type:
cd /cdrom/unnamed_cdrom/Sun32bit
b. For Sun host hardware platforms with 64-bit capabilities running Solaris 2.7
or Solaris 8 type:
cd /cdrom/unnamed_cdrom/Sun64bit
4. To initiate the Package Add menu, type: pkgadd -d .
A list of available packages is shown. When you select the package, you must
specify the directory and package name.
5. Select the option number for the IBM DPO driver (IBMdpo), and press Enter.
6. Select y to continue the installation for all prompts until the package installation
is complete.
7. Select q and press Enter to exit the Package Options menu.
8. Type cd to change back to the root directory.
9. To remove the Subsystem Device Driver compact disc, type eject cdrom.
Press Enter.
10. Edit the .profile file in the root directory, and add the lines shown in Figure 54
to include the IBM DPO subdirectory in the system path.
PATH=$PATH:/opt/IBMdpo/bin
export PATH
Figure 54. Example of how to include the IBM DPO subdirectory in the system path
11. Restart the host system to add the IBM DPO driver subdirectory automatically
to the path.
Setting the parameters for the Sun host system
The following procedures explain how to set the Sun host system parameters for
optimum performance on the ESS:
1. Type cd /etc to change to the /etc subdirectory.
2. Type cp system system.bak to backup the system file in the /etc subdirectory.
3. Edit the system file to set the following parameters:
sd_max_throttle
The sd_max_throttle parameter specifies the maximum number of
commands that the sd driver can queue to the host bus adapter driver. The
default value is 256, but you must set the parameter to a value less than or
equal to the maximum queue depth for each connected LUN. Determine the
value with the following formula:
sd_max_throttle = 256 / (maximum number of LUNs on any one adapter)
For example, suppose thirty-two 2105 LUNs are attached to controller 1
(c1t#d#) and forty-eight 2105 LUNs are attached to controller 2 (c2t#d#).
The value for sd_max_throttle is calculated from the controller with the
highest number of LUNs attached: 256 / 48 = 5.3, rounded down to 5.
The sd_max_throttle parameter for the ESS LUNs in this example would be
set by adding the following line to the /etc/system file:
set sd:sd_max_throttle=5
sd_io_time
This parameter specifies the time out value for disk operations. Add the
following line to the /etc/system file to set the sd_io_time parameter for the
ESS LUNs:
set sd:sd_io_time=0x78
sd_retry_count
This parameter specifies the retry count for disk operations. Add the
following line to the /etc/system file to set the sd_retry_count parameter for
the ESS LUNs:
set sd:sd_retry_count=5
maxphys
This parameter specifies the maximum number of bytes you can transfer for
each SCSI transaction. The default value is 126976 (124 KB). If the I/O
block size that you requested exceeds the default value, the request is
broken into more than one request. The value should be tuned to the
intended use and application requirements. For maximum bandwidth, set
the maxphys parameter by adding the following line to the /etc/system file:
set maxphys=8388608
If you use Veritas volume manager on the ESS LUNs, you must set the
VxVM max I/O size parameter (vol_maxio) to match the maxphys
parameter. If you set the maxphys parameter to 8388608, add the following
line to the /etc/system file to set the VxVM I/O size to 8 MB:
set vxio:vol_maxio=16384
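The parameters above can be derived and emitted together. A hedged sketch under the assumptions of this section's examples: 48 is the largest number of ESS LUNs on any one controller, and vol_maxio is expressed in 512-byte sectors (16384 sectors x 512 bytes = 8 MB, matching the maxphys value).

```shell
# Generate the /etc/system lines from this section's example values.
luns=48           # most ESS LUNs on any one controller (example)
maxphys=8388608   # 8 MB maximum transfer size
cat <<EOF
set sd:sd_max_throttle=$(( 256 / luns ))
set sd:sd_io_time=0x78
set sd:sd_retry_count=5
set maxphys=$maxphys
set vxio:vol_maxio=$(( maxphys / 512 ))
EOF
# Append the output to /etc/system (after the backup made earlier)
# and restart the host for the settings to take effect.
```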
Attaching with fibre-channel adapters
This section describes how to attach an ESS to a Sun host system with the
following fibre-channel adapters:
v Emulex LP8000
v JNI PCI
v JNI SBUS
v QLogic QLA2200F
This section also tells you how to change the Sun system kernel. Before you start,
you must meet the attachment requirements listed in “Attachment requirements”.
For procedures on how to attach a Sun host system with SCSI adapters, see
“Attaching with SCSI adapters” on page 99.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Ensure there are enough fibre-channel adapters installed in the server to handle
the total LUNs you want to attach.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v Solaris 2.6, Solaris 7, and Solaris 8 require patches to ensure that the host and
the ESS function correctly. See Table 20 for the minimum revision level that is
required for each Solaris patch ID.
Table 20. Solaris 2.6, 7, and 8 minimum revision level patches for fibre-channel

Solaris 2.6                 Solaris 7                              Solaris 8
105181-23 kernel update     106541-12 kernel update                108528-03 kernel update
105356-16 sd, ssd drivers   106924-06 isp driver                   109524-02 ssd driver
105580-16 glm driver        106925-04 glm driver                   109657-01 isp driver
105600-19 isp driver        107147-08 pci driver                   108974-03 sd, uata drivers
Not applicable              107458-10 dad, sd, ssd, uata drivers   Not applicable
v Review device driver installation documents and configuration utility documents
for additional Solaris patches that you might need.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS:
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. Either you or an IBM SSR defines the fibre-channel host system with the
worldwide port name identifiers. For the list of worldwide port names, see
“Appendix A. Locating the worldwide port name (WWPN)” on page 153.
3. Either you or an IBM SSR defines the fibre-port configuration if you did not do it
during the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
4. Either you or an IBM SSR configures the host system for the ESS by using the
instructions in your host system publications.
Note: The IBM Subsystem Device Driver does not support the Sun host system in
a clustering environment. To have failover protection on an open system, the
IBM Subsystem Device Driver requires a minimum of two fibre-channel
adapters. The maximum number of fibre-channel adapters supported is 16
for a total of 16 fibre-channel ports.
The following two Subsystem Device Driver packages support Sun host systems:
v Sun host hardware platforms that are limited to 32-bit mode and all Sun host
systems running Solaris 2.6.
v Sun host hardware platforms with 64-bit mode capabilities that run Solaris
7 or Solaris 8.
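The two cases above can be expressed as a small selection rule. The function below is an illustrative sketch only; the directory names Sun32bit and Sun64bit come from the Subsystem Device Driver installation section later in this chapter.

```shell
# Pick the Subsystem Device Driver package subdirectory from the Solaris
# release and the hardware addressing mode, per the two cases above.
sdd_dir() {  # $1 = Solaris release (2.6, 7, or 8), $2 = hardware mode (32 or 64)
  if [ "$1" = "2.6" ] || [ "$2" = "32" ]; then
    echo "Sun32bit"     # 32-bit-only hardware, or any host running Solaris 2.6
  else
    echo "Sun64bit"     # 64-bit-capable hardware running Solaris 7 or 8
  fi
}
sdd_dir 8 64
sdd_dir 2.6 64
```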
Installing the Emulex LP8000 adapter card
This section tells you how to attach an ESS to a Sun host system with the Emulex
LP8000 adapter card.
Note: For fibre-channel connection through the SAN Data Gateway, the McData
ED-5000 switch uses only the Emulex adapter.
Perform the following steps to install the Emulex LP8000 adapter card:
1. Turn off and unplug the computer.
2. Remove the computer case.
3. Remove the blank panel from an empty PCI bus slot.
4. Insert the host adapter board into the empty PCI bus slot. Press firmly until
seated.
5. Secure the mounting bracket for the adapter to the case with the panel screw.
6. Replace the computer case by tightening the screws on the case or use the
clamp to secure the cover.
Downloading the current Emulex adapter driver
Perform the following steps to download the Emulex adapter driver:
1. Plug in and restart your host system.
2. Go to the following Web site:
www.emulex.com
3. Click Quick Links from the left navigation pane.
4. Click Documentation, Drivers and Software.
5. From the Fibre Channel menu, select the adapter model from the Select
Model menu.
For example, click Emulex LP8000.
6. Click Drivers for Solaris.
7. From the table for Driver for Solaris, click SCSI/IP v4.xxx where xxx equals the
level of the driver for Solaris.
8. For the SPARC driver, click Download Now.
9. From the File Download window, click Save this file to disk.
10. Click OK.
11. In the Save As window, click Save.
A window displays the progress of the download.
12. When the download completes, click Close.
Note: If you downloaded the driver file to a host system other than a Sun, you
must transfer the file to the Sun host system. Otherwise, go to
“Installing the Emulex LP8000 adapter drivers” on page 107.
106
ESS Host Systems Attachment Guide
Installing the Emulex LP8000 adapter drivers
Perform the following steps to install the fibre-channel adapter drivers:
1. Log in as root.
2. Type mkdir emlxtemp to create a temporary directory.
3. Type cd emlxtemp to change to the temporary directory. If you are
downloading the file from the ftp site or have the file on the CD-ROM, go to
step 5.
4. Type /etc/init.d/volmgt stop and then unmount /dev/fd to copy the tar file
from a floppy diskette.
5. Copy or download the device driver file to the temporary directory.
6. If the file is in the format of filename.tar.Z, type uncompress filename.tar.Z. If
the file is in the format of filename.tar, go to step 7.
7. Type tar xvf lpfc-sparc.tar to untar the driver file from the temporary
directory.
8. Type pkgadd -d `pwd` to install the package from the current directory.
Note: An installation script starts and prompts you to answer a number of
questions. For each question, enter the appropriate response or press
Enter to accept the default setting.
9. Specify the package number or press Enter to accept all packages.
10. Type y or n to answer the prompt that reads: Rebuild manual pages database
for section 7d [y,n?]:.
Note: Rebuilding the manual pages can take up to ten minutes. If you do not
want to build the manual pages, type n. You can run the command later.
If you typed y, go to step 11. If you typed n, go to step 12.
11. At the prompt that reads: Use IP networking over Fibre Channel [y,n?]:,
type y or n. If you typed y, go to step 12. If you typed n, skip to step 14.
12. Type the network host name for the adapter.
Note: The network host name identifies the host adapter on a fibre-channel
network. It is associated with a unique IP address.
13. Edit the /etc/hosts file to add the IP address to the host name.
Note: If you have more than one adapter in the system, you must create a
hostname.lpfn# file for each adapter.
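An /etc/hosts entry for the adapter pairs the IP address with the network host name from step 12. The address and name below are examples only, and the sketch edits a scratch copy rather than the live file:

```shell
# Add an example fibre-channel network host name entry (hypothetical
# IP address and name) to a scratch copy of /etc/hosts.
HOSTS=/tmp/hosts.example
cp /dev/null "$HOSTS"
echo "192.168.10.21   fcadapter0" >> "$HOSTS"
grep "fcadapter0" "$HOSTS"
```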
14. At the prompt that reads, Do you want to continue with the installation of
<lpfc>, type y to proceed with the installation. Or, type n to undo all the
settings and end the installation.
15. At the prompt that reads, Select package(s) you wish to process (or 'all'
to process all packages). (default:all) [?,??,q]:, type q.
Note: IBM recommends that you configure the host adapter parameters
before you shut down and restart the host system.
16. At the system prompt, type shutdown to restart the host system.
17. Log in as root.
18. Update the parameter list and restart the host system. See “Configuring host
device drivers” on page 112, for the parameters and recommended settings.
Installing the JNI PCI adapter card
This section tells you how to attach an ESS to a Sun host system with the JNI PCI
adapter card.
Perform the following steps to install the JNI PCI adapter card:
1. Turn off and unplug the computer.
2. Remove the computer case.
3. Remove the blank panel from an empty PCI bus slot.
4. Insert the host adapter board into the empty PCI bus slot. Press firmly until
seated.
5. Secure the mounting bracket for the adapter to the case with the panel screw.
6. Replace the computer case by tightening the screws on the case or use the
clamp to secure the cover.
Downloading the current JNI PCI adapter driver
This section tells you how to download the JNI PCI fibre-channel adapter driver.
1. Plug in and restart your host system.
2. Go to the following Web site:
www.jni.com
3. From the navigation menu at the top of the page, click Drivers.
4. From the Locate Driver by Product menu, click FCI-1063.
5. From the FCI-1063 menu, find the section for Solaris - JNI. Click fca-pci.pkg.
6. In the dialog box for File Download, click Save this file to disk. Click OK.
7. In the Save As dialog box, create a temporary folder. For example, create a
folder called Temp.
Note: If you already have a folder called Temp, change to the Temp directory.
8. Click Save.
A window opens that shows the progress of the download.
9. When the download completes, click Close.
10. If you downloaded the driver file from a Sun host system, go to “Installing the
JNI PCI adapter driver”. If you downloaded the driver file from a non-Sun host
system, transfer the driver file to a Sun host system, and then go to “Installing
the JNI PCI adapter driver”.
Installing the JNI PCI adapter driver
Perform the following steps to install the JNI PCI adapter drivers:
1. Go to the following Web site:
www.jni.com
2. From the navigation menu at the top of the page, click Drivers.
3. From the Locate Driver by Product menu, click FCI-1063.
4. From the FCI-1063 menu, find the section for Solaris - JNI. Click readme.txt.
5. Print the readme.txt file.
6. Follow the instructions in the readme.txt file to install the JNI PCI adapter card.
7. Update the parameter list and restart the host system. See Table 22 on
page 115 for the parameters and recommended settings.
Installing the JNI SBUS adapter card
This section tells you how to attach an ESS to a Sun host system with the JNI
SBUS adapter card.
Perform the following steps to install the JNI SBUS adapter card:
1. Turn off and unplug the computer.
2. Remove the computer case.
3. Remove the blank panel from an empty SBUS slot.
4. Insert the host adapter board into the empty SBUS slot. Press firmly until
seated.
5. Secure the mounting bracket for the adapter to the case with the panel screw.
6. Replace the computer case by tightening the screws on the case or use the
clamp to secure the cover.
Downloading the current JNI SBUS adapter driver
Perform the following steps to download the JNI SBUS adapter driver:
1. Plug in and restart your host system.
2. Go to the following Web site:
www.jni.com
3. From the navigation menu at the top of the page, click Drivers.
4. From the Locate Driver by Product menu, click FC64-1063.
5. From the FC64-1063 menu, find the section for Solaris - JNI. Click fcw.pkg.
6. In the dialog box for File Download, click Save this file to disk. Click OK.
7. In the Save As dialog box, create a temporary folder. For example, create a
folder called Temp.
Note: If you already have a folder called Temp, change to the Temp directory.
8. Click Save.
A window opens that shows the progress of the download.
9. When the download completes, click Close.
10. If you downloaded the driver file from a Sun host system, go to “Installing the
JNI SBUS adapter driver”. If you downloaded the driver file from a non-Sun
host system, transfer the driver file to a Sun host system, and then go to
“Installing the JNI SBUS adapter driver”.
Installing the JNI SBUS adapter driver
Perform the following steps to install the JNI SBUS adapter driver:
1. Go to the following Web site:
www.jni.com
2. From the navigation menu at the top of the page, click Drivers.
3. From the Locate Driver by Product menu, click FC64-1063.
4. From the FC64-1063 menu, find the section for Solaris - JNI. Click readme.txt.
5. Print the readme.txt file.
6. Follow the instructions in the readme.txt file to install the JNI SBUS adapter
card.
7. Update the parameter list and restart the host system. See Table 22 on
page 115 for the parameters and recommended settings.
Installing the QLogic QLA2200F adapter card
This section tells you how to attach an ESS to a Sun host system with the QLogic
QLA2200F adapter card.
Perform the following steps to install the QLogic QLA2200F adapter card:
1. Install the QLogic QLA2200F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the server.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, select Host Adapter Settings.
From the Host Adapter Settings menu, set the following parameters and
values:
a. Host adapter BIOS: Disabled
b. Frame size: 2048
c. Loop reset delay: 5 (minimum)
d. Adapter hard loop ID: Disabled
6. From the Advanced Adapter Settings menu, press the Down Arrow to
highlight LUNs per target. Press Enter. Set the parameters and values from
the Advanced Adapter Settings menu as follows:
a. Execution throttle: 100
b. Fast command posting: Enabled
c. >4 GB addressing: Disabled for 32 bit systems
d. LUNs per target: 0
e. Enable LIP reset: No
f. Enable LIP full login: No
Note: In a clustering environment, set Enable LIP full login to Yes.
g. Enable target reset: Yes
h. Login retry count: 20 (minimum)
i. Port down retry count: 20 (minimum)
j. Driver load RISC code: Enabled
k. Enable database updates: No
l. Disable database load: No
m. IOCB allocation: 256
n. Extended error logging: Disabled (might be enabled for debugging)
Note: The Enable LIP reset, Enable LIP full login, and Enable target reset
parameters control the behavior of the adapter when Windows NT
tries to do a SCSI bus reset. You must perform a target reset to
make cluster failovers work. Use the SCSI bus device reset option
to clear SCSI reservations.
7. Press Esc to return to the Configuration Settings menu.
8. From the Configuration Settings menu, scroll down to Extended Firmware
Settings. Press Enter.
9. From the Extended Firmware Settings menu, scroll down to Connection
Options to open the Option and Type of Connection window.
10. Select the option:
v 0: Loop only
v 1: Point-to-point (preferred setting)
v 2: Loop preferred (If you cannot use arbitrated loop, then default to
point-to-point)
v 3: Point-to-point, otherwise loop (If you cannot use point-to-point, default to
arbitrated loop).
Notes:
a. If you connect the ESS directly to the host system, the option you select
must match the port connections on the ESS.
b. If you connect through a switch, the options do not need to match the port
connections because the ESS is point-to-point.
c. The appropriate host bus adapter on the server must also support
point-to-point connection on a direct connection.
d. Switches from different manufacturers might not function properly together
in a direct point-to-point configuration. This does not apply if you connect
through a switch because the ESS connection is point-to-point.
11. Press Esc.
12. Save the changes. Highlight Yes.
13. Restart the server.
Downloading the current QLogic adapter driver
Perform the following steps to download the current driver onto the QLogic adapter
card:
1. Go to the following Web site:
www.qlogic.com
2. From the home page, click Driver Download.
3. Click Use QLogic Drivers.
4. Click Fibre Channel Adapter Drivers and Software.
5. In the table for QLogic Fibre Channel Adapters, click QLA22xx.
6. From the Software and Drivers available menu, click Solaris.
7. From the table for QLA22xx Driver Download Page, Solaris Sparc 2.6/2.7/2.8,
and Current released (Sparc) driver PCI to FC Adapter, click Link to Driver.
This action might display a window for File Download. If you see the window
for File Download, click OK.
8. In the window for Save As, click Save.
Note: You have the option to save the driver file to a floppy diskette or a
directory on your hard drive.
A window displays the progress of the download.
9. When the download completes, click Close.
10. If you downloaded the driver from a host system other than a SUN host
system, you must transfer the file to a Sun host system.
11. To install the driver file, use the tar command. Go to “Installing the QLogic
adapter drivers”.
Installing the QLogic adapter drivers
Perform the following steps to install the fibre-channel adapter drivers.
Note: If you are installing the fibre-channel adapter for the first time, you must
specify the correct topology. You must also select the appropriate device
mapping driver.
1. Go to the following Web site:
www.qlogic.com
2. From the home page, click Driver Download.
3. Click the Use QLogic Drivers button.
4. Click Fibre Channel Adapter Drivers and Software.
5. In the table for QLogic Fibre Channel Adapters, click QLA22xx.
6. From the Software and Drivers available menu, click Solaris.
7. From the table for QLA22xx Driver Download Page, Solaris Sparc 2.6/2.7/2.8,
and Current released (Sparc) driver PCI to FC Adapter, click Read Me.
This action displays the contents of the README file, which contains the
instructions to install the driver file.
Configuring host device drivers
Perform the following steps to update the Solaris SCSI driver configuration file. This
gives you access to the target and LUN pairs that are configured on the ESS.
1. Change to the directory by typing: cd /kernel/drv
2. Back up the sd.conf file in this subdirectory.
3. Edit the sd.conf file to add support for the target and LUN pairs that are
configured on the host system. Figure 55 on page 113 shows the lines that you
would add to the file to access LUN 0 on targets 0 - 15 for SCSI.
Note: Do not add duplicate target and LUN pairs.
name="sd" class="scsi" class_prop="atapi"
target=0 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=1 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=2 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=3 lun=0;
name="sd" class="scsi"
target=4 lun=0;
name="sd" class="scsi"
target=5 lun=0;
name="sd" class="scsi"
target=6 lun=0;
name="sd" class="scsi"
target=8 lun=0;
name="sd" class="scsi"
target=9 lun=0;
name="sd" class="scsi"
target=10 lun=0;
name="sd" class="scsi"
target=11 lun=0;
name="sd" class="scsi"
target=12 lun=0;
name="sd" class="scsi"
target=13 lun=0;
name="sd" class="scsi"
target=14 lun=0;
name="sd" class="scsi"
target=15 lun=0;
Figure 55. Example of sd.conf file entries for SCSI
Figure 56 on page 114 shows the lines that you would add to the file to access
LUNs 0 - 49 on target 0 for fibre-channel.
name="sd" class="scsi"
target=0 lun=0;
name="sd" class="scsi"
target=0 lun=1;
name="sd" class="scsi"
target=0 lun=2;
name="sd" class="scsi"
target=0 lun=3;
name="sd" class="scsi"
target=0 lun=4;
name="sd" class="scsi"
target=0 lun=5;
name="sd" class="scsi"
target=0 lun=6;
name="sd" class="scsi"
target=0 lun=7;
name="sd" class="scsi"
target=0 lun=8;
name="sd" class="scsi"
target=0 lun=9;
name="sd" class="scsi"
target=0 lun=10;
.
.
.
name="sd" class="scsi"
target=0 lun=48;
name="sd" class="scsi"
target=0 lun=49;
Figure 56. Example of sd.conf file entries for fibre-channel
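Because the entries in Figure 56 differ only in the LUN number, a short shell loop can generate them instead of typing each one. This is a convenience sketch that writes to a scratch file, not to /kernel/drv/sd.conf itself:

```shell
# Generate sd.conf entries for target 0, LUNs 0 through 49, into a
# scratch file (copy the result into /kernel/drv/sd.conf after review).
OUT=/tmp/sd.conf.example
cp /dev/null "$OUT"
lun=0
while [ "$lun" -le 49 ]; do
  printf 'name="sd" class="scsi"\ntarget=0 lun=%d;\n' "$lun" >> "$OUT"
  lun=`expr $lun + 1`
done
grep -c 'lun=' "$OUT"    # one entry per LUN
```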
Figure 57 shows the start lpfc auto-generated configuration.
Note: Anything you put within this auto-generated section will be deleted if you
execute pkgrm to remove the lpfc driver package. You might need to add
additional lines to probe for additional LUNs or targets. You should delete
any lines that represent lpfc targets or LUNs that are not used.
name="sd" parent="lpfc" target=0 lun=0;
name="sd" parent="lpfc" target=1 lun=0;
name="sd" parent="lpfc" target=2 lun=0;
name="sd" parent="lpfc" target=3 lun=0;
name="sd" parent="lpfc" target=4 lun=0;
name="sd" parent="lpfc" target=5 lun=0;
name="sd" parent="lpfc" target=6 lun=0;
name="sd" parent="lpfc" target=7 lun=0;
name="sd" parent="lpfc" target=8 lun=0;
name="sd" parent="lpfc" target=9 lun=0;
name="sd" parent="lpfc" target=10 lun=0;
name="sd" parent="lpfc" target=11 lun=0;
name="sd" parent="lpfc" target=12 lun=0;
name="sd" parent="lpfc" target=13 lun=0;
name="sd" parent="lpfc" target=14 lun=0;
name="sd" parent="lpfc" target=15 lun=0;
name="sd" parent="lpfc" target=16 lun=0;
name="sd" parent="lpfc" target=17 lun=0;
name="sd" parent="lpfc" target=17 lun=1;
name="sd" parent="lpfc" target=17 lun=2;
name="sd" parent="lpfc" target=17 lun=3;
Figure 57. Example of start lpfc auto-generated configuration
4. Type reboot -- -r from the OpenWindows window to shut down and restart the
Sun host system with the kernel reconfiguration option. Or, type boot -- -r
from the OK prompt after you shut down.
The fibre-channel adapters that are supported for attaching the ESS to a Sun host
are capable of full-fabric support. IBM recommends that all fibre-channel driver
configurations include worldwide port name, worldwide node name, port ID, or
host-bus-adapter binding of target LUN pairs.
Binding of target LUN pairs is implemented in the Solaris SCSI driver configuration
file or in the fibre-channel host-bus-adapter configuration file that is installed by the
adapter software package. Refer to the vendor adapter documentation and utilities
for detailed configuration instructions.
You can tune fibre-channel host-bus-adapter configuration files for host system
reliability and performance.
See Table 21 for recommended host bus adapter configuration file parameters for
an Emulex LP-8000 adapter.
Table 21. Recommended configuration file parameters for the host bus adapters for the
Emulex LP-8000 adapter
Parameters          Recommended settings
automap             1: Default. Automatically assigns SCSI IDs to fibre-channel
                    protocol (FCP) targets.
fcp-on              1: Default. Turns on FCP.
lun-queue-depth     16: Recommended when there are fewer than 17 LUNs per
                    adapter. Set value = 256/(total LUNs per adapter) when there
                    are more than 16 LUNs per adapter. If your configuration
                    includes more than one LP8000 adapter per server, calculate
                    the lun-queue-depth value using the adapter with the most
                    LUNs attached.
no-device-delay     15: Recommended. Delay to fail back an I/O.
network-on          0: Default. Recommended for fabric. Do not turn on IP
                    networking.
                    1: Turn on IP networking.
scan-down           2: Recommended. Use an inverted ALPA map and create a
                    target assignment in a private loop.
topology            2: Recommended for fabric. Point-to-point mode only.
                    4: Recommended for nonfabric. Arbitrated loop mode only.
zone-rscn           0: Default.
                    1: Recommended for fabric. Check name server for RSCNs.
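The lun-queue-depth rule in Table 21 can be sketched as follows. This is an illustration of the calculation only, not a configuration tool:

```shell
# lun-queue-depth per Table 21: 16 when the adapter has 16 or fewer
# LUNs; otherwise 256 divided by the LUN count, calculated against the
# adapter with the most LUNs attached.
lun_queue_depth() {    # $1 = LUN count on the busiest adapter
  if [ "$1" -le 16 ]; then
    echo 16
  else
    expr 256 / "$1"
  fi
}
lun_queue_depth 12
lun_queue_depth 64
```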
See Table 22 for the recommended configuration settings for the host-bus-adapter
for a JNI FC64-1063 and a JNI FCI-1063.
Table 22. Recommended configuration file parameters for the host bus adapters for the JNI
FC64-1063 and JNI FCI-1063.
Parameters          Recommended settings
fca_nport           0: Default. Initializes on a loop.
                    1: Recommended for fabric. Initializes as an N_Port.
public_loop         0: Default. Recommended. Initializes according to how
                    fca_nport is set.
ip_disable          0: Default. The IP side of the driver is enabled.
                    1: Recommended for fabric. The IP side of the adapter is
                    completely disabled.
failover            60: Recommended without the McData switch.
                    300: Recommended with the McData switch.
busy_retry_delay    500: Recommended. Delay between retries after a device
                    returns a busy response for a command.
scsi_probe_delay    5000: Recommended. Delay before SCSI probes are allowed
                    during startup.
See Table 23 for recommended host bus adapter configuration file parameters for a
QLogic QLA2200F adapter. The settings in Table 23 are for an ESS Model F20 that
is either attached directly or through a fabric switch.
Table 23. Recommended configuration file parameters for the host bus adapters for the
QLogic QLA2200F adapter
Parameters                          Recommended settings
hba0-max-frame-length               =2048;
hba0-max-iocb-allocation            =256;
hba0-execution-throttle             =31;
hba0-login-timeout                  =4;
hba0-login-retry-count              =1;
hba0-fabric-retry-count             =10;
hba0-enable-adapter-hard-loop-ID    =0;
hba0-adapter-hard-loop-ID           =0;
hba0-enable-64bit-addressing        =0;
hba0-enable-LIP-reset               =0;
hba0-enable-LIP-full-login          =1;
hba0-enable-target-reset            =0: non-clustered
                                    =1: clustered
hba0-reset-delay                    =5;
hba0-port-down-retry-count          =30;
hba0-link-down-error                =1;
hba0-loop-down-timeout              =60;
hba0-connection-options             =1: fabric connection
                                    =2: direct connection
hba0-device-configuration-mode      =1;
hba0-fc-tape                        =0;
hba0-command-completion-option      =1;
Installing the IBM Subsystem Device Driver
The following instructions explain how to install the IBM Subsystem Device Driver
from a compact disc. You can use the IBM Subsystem Device Driver in conjunction
with the IBM Copy Services command-line interface program.
1. Type ps -ef | grep vold to ensure that the volume manager is running.
This command displays the /usr/sbin/vold process. If it does not display, type
/etc/init.d/volmgt start
2. Insert the IBM Subsystem Device Driver CD into the CD-ROM drive.
A File Manager window opens showing the paths for the Subsystem Device
Driver package subdirectories. Figure 58 shows an example of the path.
Note: You must be on the host console to see this window.
/cdrom/unnamed_cdrom
Figure 58. Example of a path to the IBM Subsystem Device Driver package subdirectories
3. Change to the subdirectory that contains the Subsystem Device Driver
package.
a. For Sun host hardware platforms limited to 32-bit mode and for all Sun
host systems running Solaris 2.6, type:
cd /cdrom/unnamed_cdrom/Sun32bit
b. For Sun host hardware platforms with 64-bit capabilities running Solaris 7
or Solaris 8, type:
cd /cdrom/unnamed_cdrom/Sun64bit
4. Type pkgadd -d to initiate the Package Add menu.
5. Select the option number for the IBM DPO driver (IBMdpo), and press Enter.
6. Select y to continue the installation for all prompts until the package installation
is complete.
7. Select q and press Enter to exit the package options menu.
8. Type cd to change back to the root directory.
9. Type eject cdrom and press Enter to remove the Subsystem Device Driver
CD.
10. Edit the .profile file in the root directory, and add the lines shown in Figure 59
to include the IBM DPO subdirectory in the system path.
PATH=$PATH:/opt/IBMdpo/bin
export PATH
Figure 59. Example of how to edit the .profile file in the root directory to include the IBM DPO
subdirectory
11. Restart the host system to add the IBM DPO driver subdirectory automatically
to the path.
Setting the Sun host system parameters
The following sections contain the procedures to set the Sun host system
parameters for optimum performance on the ESS with the following adapters:
v JNI
v Emulex
v QLogic
JNI adapters
Perform the following steps to set the Sun host system parameters for optimum
performance on the ESS with the JNI adapter:
1. Type cd /etc to change to the /etc subdirectory.
2. Back up the system file in the subdirectory.
3. Edit the system file and set the following parameters for servers with
configurations that use JNI adapters:
sd_max_throttle
This sd_max_throttle parameter specifies the maximum number of
commands that the sd driver can queue to the host bus adapter driver. The
default value is 256, but you must set the parameter to a value less than or
equal to a maximum queue depth for each LUN connected. Determine the
value using the following formula:
256/(LUNs per adapter)
For example, suppose that thirty-two 2105 LUNs are attached to controller 1
(c1t#d#), and forty-eight 2105 LUNs are attached to controller 2 (c2t#d#). The
value for sd_max_throttle is calculated using the controller with the highest
number of LUNs attached: 256/48 rounds down to 5.
The sd_max_throttle parameter for the ESS LUNs in this example would be
set by adding the following line to the /etc/system file:
set sd:sd_max_throttle=5
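The calculation above can be sketched in shell, using the example LUN counts from the text:

```shell
# sd_max_throttle = 256 / (LUNs on the controller with the most LUNs).
# Example counts from the text: 32 LUNs on c1t#d#, 48 LUNs on c2t#d#.
luns_c1=32
luns_c2=48
max=$luns_c1
if [ "$luns_c2" -gt "$max" ]; then max=$luns_c2; fi
throttle=`expr 256 / $max`
echo "set sd:sd_max_throttle=$throttle"
```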
sd_io_time
This parameter specifies the time-out value for disk operations. Add the
following line to the /etc/system file to set the sd_io_time parameter for the
ESS LUNs:
set sd:sd_io_time=0x78
sd_retry_count
This parameter specifies the retry count for disk operations. Add the
following line to the /etc/system file to set the sd_retry_count parameter for
the ESS LUNs:
set sd:sd_retry_count=5
maxphys
This parameter specifies the maximum number of bytes you can transfer for
each SCSI transaction. The default value is 126976 (124 KB). If the I/O
block size that you requested exceeds the default value, the request is
broken into more than one request. The value should be tuned to the
intended use and application requirements. For maximum bandwidth, set
the maxphys parameter by adding the following line to the /etc/system file:
set maxphys=8388608
If you use the Veritas volume manager on the ESS LUNs, you must set
the VxVM max I/O size parameter (vol_maxio) to match the maxphys
parameter. If you set the maxphys parameter to 8388608, add the following
line to the /etc/system file to set the VxVM I/O size to 8 MB:
set vxio:vol_maxio=16384
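Taken together, the JNI tuning entries in this section form a short /etc/system fragment. The sketch below writes them to a scratch copy rather than the live file; the sd_max_throttle value is the one from the 48-LUN example:

```shell
# Write the recommended JNI tuning entries to a scratch copy of
# /etc/system (sd_max_throttle=5 assumes the 48-LUN example above).
SYS=/tmp/system.example
cat > "$SYS" <<'EOF'
set sd:sd_max_throttle=5
set sd:sd_io_time=0x78
set sd:sd_retry_count=5
set maxphys=8388608
set vxio:vol_maxio=16384
EOF
grep -c '^set ' "$SYS"
```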
Emulex or QLogic adapters
Perform the following steps to set the Sun host system parameters for optimum
performance on the ESS with the Emulex or QLogic adapter:
1. Type cd /etc to change to the /etc subdirectory.
2. Back up the system file in the subdirectory.
3. Edit the system file and set the following parameters for servers with
configurations that only use Emulex or QLogic adapters.
sd_io_time
This parameter specifies the time-out value for disk operations. Add the
following line to the /etc/system file to set the sd_io_time parameter for the
ESS LUNs:
set sd:sd_io_time=0x78
sd_retry_count
This parameter specifies the retry count for disk operations. Add the
following line to the /etc/system file to set the sd_retry_count parameter for
the ESS LUNs:
set sd:sd_retry_count=5
maxphys
This parameter specifies the maximum number of bytes you can transfer for
each SCSI transaction. The default value is 126976 (124 KB). If the I/O
block size that you requested exceeds the default value, the request is
broken into more than one request. The value should be tuned to the
intended use and application requirements. For maximum bandwidth, set
the maxphys parameter by adding the following line to the /etc/system file:
set maxphys=8388608
If you use Veritas volume manager on the ESS LUNs, you must set the
VxVM max I/O size parameter (vol_maxio) to match the maxphys
parameter. If you set the maxphys parameter to 8388608, add the following
line to the /etc/system file to set the VxVM I/O size to 8 MB:
set vxio:vol_maxio=16384
Chapter 11. Attaching to a Windows NT 4.0 host
This chapter tells you how to attach the ESS to a Windows NT host system with
SCSI or fibre-channel adapters.
Attaching with SCSI adapters
This section describes how to attach a Windows NT host system to an ESS with
the following SCSI adapters:
v Adaptec AHA-2944UW
v Symbios 8751D
v QLogic QLA1041
For procedures about how to attach an ESS to a Windows NT host system with
fibre-channel adapters, see “Attaching with fibre-channel adapters” on page 126.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
For details about the release level for your operating system, see the following
Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS:
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You assign the SCSI hosts to the SCSI ports on the ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. You configure the host system for the ESS by using the instructions in your host
system publications.
Notes:
1. Version 1.2.1 or later of the IBM Subsystem Device Driver supports the
Windows NT 4.0 host system in a clustering environment. To have failover
protection on an open system, the IBM Subsystem Device Driver requires a
minimum of two adapters. You can run the Subsystem Device Driver with one
SCSI adapter, but you have no failover protection. The maximum number of
adapters supported is 16 for a total of 32 SCSI ports.
2. To improve performance, IBM recommends that you do not map the LUNs for
the target volumes to the Windows NT host until you need access to the data
on the target volume. Perform the LUN mapping after the Peer-to-Peer Remote
Copy operation and immediately before you need access to the data. You must
restart the host system before you can access the data on the target volume.
You can greatly reduce the time it takes for the host system to restart if you
© Copyright IBM Corp. 1999, 2001
121
perform the LUN mapping. Otherwise, the time to restart could take 10 minutes
per Peer-to-Peer Remote Copy target volume.
See the following Web site for the most current information about the IBM
Subsystem Device Driver:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates
Installing and configuring the Adaptec AHA-2944UW adapter card
Note: The steps to install and configure adapter cards are an example. Your
configuration might be different.
Perform the following steps to install and configure the Adaptec AHA-2944UW
adapter card:
1. Install the Adaptec AHA-2944UW in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Ctrl+A to get to the SCSISelect menu and the list of adapter cards to
configure.
5. From the SCSISelect menu, select Configure/View Host Adapter Settings.
v Set the parameters on the Configure/View Host Adapter Settings panel as
follows:
– Host Adapter SCSI ID: 7
– SCSI Parity Checking: Enabled
– Host Adapter SCSI Termination: Automatic
6. Select SCSI Device Configuration
v Set the parameters on the SCSI Device Configuration panel as follows:
– Sync Transfer Rate (megabytes per second): 40.0
– Initiate Wide Negotiation: Yes
– Enable Disconnection: Yes
– Send Start Unit Command: No
– Enable Write Back Cache: No
– BIOS Multiple LUN Support: Yes
– Include in BIOS Scan: Yes
7. Select Advanced Configuration Options
v Set the parameters on the Advanced Configuration Options panel as follows:
– Reset SCSI BIOS at IC Int: Enabled
– Display Ctrl+A Message During BIOS: Enabled
– Extend BIOS translation for DOS drives > 1 GB: Enabled
– Verbose or Silent Mode: Verbose
– Host Adapter BIOS: Disabled:scan bus
– Support Removable Disks under Basic Input/Output System (BIOS) as
fixed disks: Disabled
– BIOS support for bootable CD-ROM: Disabled
– BIOS support for INT 13 extensions: Enabled
8. Save the changes and select SCSISelect again to verify that you saved the
changes.
9. Restart the server.
10. Load the Adaptec driver, and restart the system if instructed to do so.
Installing and configuring the Symbios 8751D adapter card
Perform the following steps to install and configure the Symbios 8751D adapter
card.
Note: The parameter settings shown are an example. The settings for your
environment might be different.
1. Install the Symbios 8751D in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Ctrl+C to get to the Symbios Configuration Utility menu.
5. From the Symbios Configuration Utility menu, select LSI Logic Host Bus
Adapters.
6. Set the parameters on the LSI Logic Host Bus Adapters panel as follows:
a. Press F2 at the first panel.
b. Select the Boot Adapter list option to display the boot adapter list. See
Figure 60 for an example of the boot adapter list.
Note: The boot adapter list shows only user-definable parameters.
Boot Order [0]
Next Boot [Off]
Figure 60. Example of boot adapter list for the Symbios 8751D adapter card for Windows NT
c. Perform the following steps to change the BIOS settings:
1) Highlight Next Boot and then click On to change the setting.
2) Restart the host.
3) Select Symbios Configuration Utility again, and make the changes.
4) After you make the changes, highlight Next Boot and then click Off to change the
setting.
5) Restart the host.
d. Set the parameters on the Global Properties panel as follows:
v Pause When Boot Alert Displayed: [No]
v Boot Information Display Mode: [Verbose]
v Negotiate With Devices: [Supported]
v Video Mode: [Color]
v Restore Defaults (restores defaults)
e. Set the parameters on the Adapters Properties panel as follows:
v SCSI Parity: [Yes]
v Host SCSI ID: [7]
v SCSI Bus Scan Order: [Low to High (0..Max)]
Chapter 11. Attaching to a Windows NT 4.0 host
123
v Removable Media Support: [None]
v CHS Mapping: [SCSI Plug and Play Mapping]
v Spinup Delay (Secs): [2]
v Secondary Cluster Server: [No]
v Termination Control: [Auto]
v Restore Defaults: (restores defaults)
f. Set the parameters on the Device Properties panel as follows:
v MT/Sec: [20]
v Data Width: [16]
v Scan ID: [Yes]
v Scan LUNs >0: [Yes]
v Disconnect: [On]
v SCSI Time-out: 240
v Queue Tags: [On]
v Boot Choice: [No]
v Format: [Format]
v Verify: [Verify]
v Restore defaults: (restores defaults)
g. Save the changes and select Symbios Configuration Utility again to verify
that you saved the changes.
7. Restart the server.
8. Load the Symbios driver, and restart the system if instructed to do so.
Installing and configuring the QLogic adapter card
Perform the following steps to install and configure the QLogic QLA1041 adapter
card.
Note: The parameter settings shown are an example. The settings for your
environment might be different.
1. Install the QLogic QLA1041 adapter card in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Alt+Q to get to the FAST!Util menu.
a. From the Configuration Settings menu, select Host Adapter Settings. Set
the following parameters:
v Host Adapter: Enabled
v Host Adapter BIOS: Disabled
v Host Adapter SCSI ID: 7
v PCI Bus direct memory access (DMA) Burst: Enabled
v Compact disc Boot: Disabled
v SCSI Bus Reset: Enabled
v SCSI Bus Reset Delay: 5
v Concurrent Command or Data: Enabled
v Drivers Load RISC Code: Enabled
v Adapter Configuration: Auto
b. Set the parameters in the SCSI Device Settings menu as follows:
v Disconnects OK: Yes
v Check Parity: Yes
v Enable LUNS: Yes
v Enable Devices: Yes
v Negotiate Wide: Yes
v Negotiate Sync: Yes
v Tagged Queueing: Yes
v Sync Offset: 8
v Sync Period: 12
v Exec Throttle: 16
c. Save the changes and select FAST!Util again to verify that you saved the
changes.
5. Restart the server.
6. Load the QLogic driver, and restart the system if instructed to do so.
Configuring for availability and recoverability
This section describes how to ensure optimum availability and recoverability when
you attach an ESS to a Windows NT host system. You must set the timeout value
associated with the supported host bus adapters to 240 seconds. The setting is
consistent with the configuration for IBM SSA adapters and disk subsystems when
attached to a Windows NT host system.
The host bus adapter uses the timeout parameter to bound its recovery actions and
responses to the disk subsystem. The value exists in different places in the system
configuration. You can retrieve and use it in different ways depending on the type of
host bus adapter. The following instructions tell you how to modify the value safely
in either the Windows NT registry or in the device adapter parameters.
Setting the TimeOutValue registry
The following instructions tell you how to set the timeout value registry:
1. From the Run menu or command prompt, type:
Regedt32.exe
2. Navigate to the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
3. Look for the value called TimeOutValue. If the value called TimeOutValue does
not exist, go to step 3a. If the TimeOutValue exists, go to step 4.
a. Click Edit → Add Value....
b. For ValueName, type TimeOutValue.
c. For data type, select REG_DWORD.
d. Click OK.
e. For data, type f0.
f. For radix, select Hex.
g. Click OK.
4. If the value exists and is less than 0x000000f0 (240 decimal), perform the
following steps to increase it to 0xf0.
a. Click TimeOutValue.
b. Click Edit → DWORD....
c. For data, click f0.
d. For radix, click hex.
e. Click OK.
5. Exit the Regedt32 program.
6. Restart your Windows NT server for the changes to take effect.
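The decision the procedure above walks through can be expressed as a small sketch: create TimeOutValue as 0xf0 (240 decimal) if it is absent, raise it to 0xf0 if it is smaller, and leave it alone otherwise. The function below is our own illustration of that logic, not a registry editor:

```python
# Sketch of the TimeOutValue rule described above. The input is the existing
# REG_DWORD value (or None if TimeOutValue does not exist); the output is the
# value the registry should hold after the procedure.

REQUIRED_TIMEOUT = 0xF0  # 240 decimal, in seconds

def desired_timeout_value(current):
    """Return the timeout the Disk service should use, per the procedure."""
    if current is None or current < REQUIRED_TIMEOUT:
        # Absent, or present but less than 0x000000f0: set it to 0xf0.
        return REQUIRED_TIMEOUT
    # Already 240 seconds or more: leave the existing value in place.
    return current
```

For example, a missing value or an existing value of 0x3C (60 seconds) both become 0xf0, while an existing 0x100 is left unchanged.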
Performing a FlashCopy from one volume to another volume
Perform the following steps to perform a FlashCopy from one Windows NT 4.0
volume to another volume. Before you perform the steps, you must log on with
administrator authority. The following steps assume you are performing the steps
from the host with the FlashCopy target.
1. From the task bar, click Start → Programs → Administrative Tools → Disk
Administrator.
2. The Disk Administrator error message window opens. Click OK.
3. From the Disk Administrator window, select the disk drive letter that is your
target.
4. From the menu bar, click Tools → Assign Drive Letter.
5. From the Assign Drive Letter window, click Do Not Assign a Drive Letter.
6. Click OK.
7. Perform the FlashCopy operation.
Note: If the ESS uses the volume serial numbers to do a FlashCopy on a
Windows NT host system, use the IBM Subsystem Device Driver to
obtain the volume serial numbers.
8. Go back to the Disk Administrator Window. From the task bar, click Start →
Programs → Administrative Tools → Disk Administrator and select the
FlashCopy target.
9. From the menu bar, click Tools → Assign Drive Letter.
10. From the Assign Drive Letter window, click the assigned drive letter.
11. Click OK.
Attaching with fibre-channel adapters
This section describes how to attach an ESS to a Windows NT host system with
the following fibre-channel adapters.
v QLogic QLA2100F adapter card
v QLogic QLA2200F adapter card
v Emulex LP8000 adapter card
This section also tells you how to install, download, and configure the adapter
cards.
For procedures about how to attach an ESS to a Windows NT host system with
SCSI adapters, see “Attaching with SCSI adapters” on page 121.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. Either you or an IBM SSR defines the fibre-channel host system with the
worldwide port name identifiers. For the list of worldwide port names see
“Appendix A. Locating the worldwide port name (WWPN)” on page 153.
3. Either you or an IBM SSR defines the fibre-port configuration if you did not do it
during the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
4. Either you or an IBM SSR configures the host system for the ESS by using the
instructions in your host system publications.
Notes:
1. Version 1.2.1 or later of the IBM Subsystem Device Driver supports the
Windows NT 4.0 host system in a clustering environment. To have failover
protection on an open system, the IBM Subsystem Device Driver requires a
minimum of two fibre-channel adapters. The maximum number of fibre-channel
adapters supported is 16 for a total of 16 fibre-channel ports.
See the following web site for the most current information about the IBM
Subsystem Device Driver:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates
2. To improve performance, IBM recommends that you do not map to the LUNs for the
target volumes of the Windows NT host until you need access to the data on
the target volume. Perform the LUN mapping after the Peer-to-Peer Remote
Copy operation and immediately before you need access to the data. You must
restart the host system before you can access the data on the target volume.
You can greatly reduce the time it takes for the host system to restart if you
perform the LUN mapping. Otherwise, the time to restart could take 10 minutes
per Peer-to-Peer Remote Copy target volume.
Installing the QLogic QLA2100F adapter card
This section tells you how to attach an ESS to a Windows NT host system with the
QLogic QLA2100F adapter card.
Note: The arbitrated-loop topology is the only topology available for the QLogic
QLA2100F adapter card.
Perform the following steps to install the QLogic QLA2100F adapter card:
1. Install the QLogic QLA2100F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the host system.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, select Host Adapter Settings.
6. From the Advanced Adapter Settings menu, press the Down Arrow to
highlight LUNs per target; then press Enter.
7. Use the Down Arrow to find and highlight 256. Press Enter.
8. Press Esc.
9. To save the changes, click Yes. Press Enter.
10. Restart the server.
Installing the QLogic QLA2200F adapter card
This section tells you how to attach an ESS to a Windows NT host system with the
QLogic QLA2200F adapter card.
Perform the following steps to install the QLogic QLA2200F adapter card:
1. Install the QLogic QLA2200F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the server.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, select Host Adapter Settings.
From the Host Adapter Settings menu, set the following parameters and
values:
a. Host adapter BIOS: Disabled
b. Frame size: 2048
c. Loop reset delay: 5 (minimum)
d. Adapter hard loop ID: Disabled
6. From the Advanced Adapter Settings menu, press the Down Arrow to
highlight LUNs per target. Press Enter. Set the parameters and values from
the Advanced Adapter Settings menu as follows:
a. Execution throttle: 100
b. Fast command posting: Enabled
c. >4 GB addressing: Disabled for 32 bit systems
d. LUNs per target: 0
e. Enable LIP reset: No
f. Enable LIP full login: No
Note: In a clustering environment, set Enable LIP full login to Yes.
g. Enable target reset: Yes
h. Login retry count: 20 (minimum)
i. Port down retry count: 20 (minimum)
j. Driver load RISC code: Enabled
k. Enable database updates: No
l. Disable database load: No
m. IOCB allocation: 256
n. Extended error logging: Disabled (might be enabled for debugging)
Note: The Enable LIP reset, Enable LIP full login, and Enable target reset
parameters control the behavior of the adapter when Windows NT
tries to do a SCSI bus reset. You must perform a target reset to
make cluster failovers work. Use the SCSI bus device reset option
to clear SCSI reservations.
7. Press Esc to return to the Configuration Settings menu.
8. From the Configuration Settings menu, scroll down to Extended Firmware
Settings. Press Enter.
9. From the Extended Firmware Settings menu, scroll down to Connection
Options to open the Option and Type of Connection window.
10. Select the option:
v 0: Loop only
v 1: Point-to-point (preferred setting)
v 2: Loop preferred (If you cannot use arbitrated loop, then default to
point-to-point)
v 3: Point-to-point, otherwise loop (If you cannot use point-to-point, default to
arbitrated loop).
Notes:
a. If you connect the ESS directly to the host system, the option you select
must match the port connections on the ESS.
b. If you connect through a switch, the options do not need to match the port
connections because the ESS is point-to-point.
c. The appropriate host bus adapter on the server must also support
point-to-point connection on a direct connection.
d. If you use adapter cards from different manufacturers, they will not function
properly in a direct point-to-point connection. This is not true if you connect
through a switch because the ESS is point-to-point.
11. Press Esc.
12. Save the changes. Highlight Yes.
13. Restart the server.
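The Connection Options codes in step 10 can be captured as a lookup table. The numeric codes and their meanings come from the FAST!Util menu above; the dictionary and helper name are our own sketch:

```python
# The four Connection Options codes from the QLA2200F Extended Firmware
# Settings menu, as described in the procedure above. Option 1 is the
# preferred setting for fabric attachment.

CONNECTION_OPTIONS = {
    0: "Loop only",
    1: "Point-to-point",
    2: "Loop preferred, otherwise point-to-point",
    3: "Point-to-point, otherwise loop",
}

def describe_connection_option(code):
    """Translate a Connection Options code into its topology description."""
    return CONNECTION_OPTIONS.get(code, "unknown option")
```

Note that for a direct connection the selected option must match the ESS port configuration, while behind a switch it does not, because the ESS side is point-to-point.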
Downloading the QLogic adapter driver
Perform the following steps to load the current driver onto the QLogic adapter card:
1. Go to the following Web site:
www.qlogic.com
2. From the home page, click Driver Download.
3. Click the Use QLogic Drivers button.
4. Click IBM Enterprise Subsystems Division approved drivers.
5. Click IBM Approved QLA22xx drivers.
6. Click Link to Driver (for Windows NT).
7. In the File Download window, click Save this Program to Disk.
You have the option to save the driver file to a floppy diskette or a directory on
your hard drive.
8. Click Save.
A window that shows the progress of the download is displayed.
9. When the download completes, click Close.
10. Go to the file directory where you stored the file.
11. Unzip the file by double clicking the icon.
When you double click the icon, a window displays.
12. Click Unzip.
When the unzip process completes, you should see a message that says, x
files unzipped successfully, where x equals the number of files you
unzipped. Click OK.
13. Click Close to close the window for unzipping the file.
Installing the QLogic adapter drivers
Perform the following steps to install the fibre-channel adapter drivers.
Note: If you are installing the fibre-channel adapter for the first time, you must
specify the correct topology. You must also select the appropriate device
mapping driver.
1. From your Windows NT desktop, double click the icon for My Computer.
2. Double click the icon for Control Panel.
3. Double click the icon for SCSI Adapters.
4. In the SCSI Adapters window, click the Drivers tab.
5. Click Add.
6. In the Install Drivers window, click Have Disk.
7. In the Install from Disk window, ensure the drive letter in the Copy
Manufacturer’s Files From field is the drive letter you specified to save the
2xxxxxxx.exe file in step 7 on page 129 in “Downloading the QLogic adapter
driver” on page 129.
8. Type the name of the current driver file after the drive letter prompt in the
Copy Manufacturer’s Files From field.
9. Click OK.
10. Click Cancel to exit.
11. Restart your host system.
Configuring the QLogic host adapter cards
To configure the QLogic QLA2100F or QLA2200F adapter card, use the ESS
StorWatch Specialist.
Installing the Emulex LP8000 adapter card
This section tells you how to attach an ESS to a Windows NT host system with the
Emulex LP8000 adapter card.
Note: If you use the Emulex LP8000 adapter with the McData ED-5000,
non-switched configurations are not supported. For fibre-channel connection
through the SAN Data Gateway, the ED-5000 is only supported on the
Emulex adapter.
Perform the following steps to install the Emulex LP8000 adapter card.
1. Turn off and unplug the computer.
2. Remove the computer case.
3. Remove the blank panel from an empty PCI bus slot.
4. Insert the host adapter board into the empty PCI bus slot. Press firmly until
seated.
5. Secure the mounting bracket for the adapter to the case with the panel screw.
6. Replace the computer case by tightening the screws on the case or use the
clamp to secure the cover.
Downloading the Emulex adapter driver
Perform the following steps to install the port driver.
1. Plug in and restart your host system.
2. Go to the following Web site:
www.emulex.com
3. From the Quick Links menu, click Documentation, Drivers and Software.
4. Click the host adapter type from the host adapter menu.
For example, click Emulex LP8000.
5. Click Drivers for Windows NT.
6. Click Specialized Drivers.
7. Click SCSI/ID multi-port xxxxx or SCSI port xxxxx, where xxxxx is the name
of the adapter driver.
8. Click Download Now.
9. From the File Download window, click the appropriate button and proceed as
indicated:
v Open this file from its current location
Go to step 10.
v Save this file to disk
Go to step 17.
10. In the Winzip window, click I agree.
11. In the WinZip Wizard - Welcome window, click Next.
12. In the WinZip Wizard - Select Zip File xxxxxxxx.zip window where
xxxxxxxx is the name of the file, highlight the file that you want to unzip.
13. Click Next.
14. In the WinZip Wizard - Unzip window, click Unzip now.
A window opens that indicates the progress of the download operation. When
the progress indicator window closes, the download is complete. When the
operation to unzip the file completes, a window opens to display the following
file names:
v Lpscsi
v Lputilnt
v Oemsetup
v Readme
v Txtsetup.oem
15. Double click Readme to get the instructions to install the fibre-channel adapter
driver. Print the Readme file.
16. In the WinZip Wizard - Unzip Complete window, click Close.
17. Ensure that the name of the file you want to download is displayed in the
window.
If the name of the file you want to download is not displayed in the window, go
to step 2 in “Downloading the Emulex adapter driver”.
18. Click Save to download and unzip the file to your hard drive.
A window opens that indicates the progress of the download operation. When
the progress indicator window closes, the download is complete.
Installing the Emulex adapter drivers
Perform the following steps to install the fibre-channel adapter drivers.
Note: If you are installing the fibre-channel adapter for the first time, you must
specify the correct topology. You must also select the appropriate device
mapping driver.
1. From your desktop, click Start → Settings.
2. Double click Control Panel.
3. Double click SCSI Adapters.
4. Click the Drivers tab.
5. Click Add to create a list of drivers.
A window opens that indicates the progress of the operation. When the
operation completes, the window closes and displays another window called
Install Driver.
6. From the Install Driver window, click Have Disk.
7. Enter the path to the driver file that you downloaded and click OK.
For example, if you downloaded the adapter driver file to a folder called
Emulex, type c:\emulex\emulex.zip.
8. To install the driver, highlight the line that lists the driver you want and click
OK.
Note: The driver affects every adapter in the system. If you have more than
one adapter that requires different parameter settings, you must change
the parameter settings with the port utility and restart your host system.
9. Click Yes to restart the host system.
10. After you restart your host system, click Start → Settings.
11. Double click Control Panel.
12. Double click SCSI Adapters.
13. Click the Drivers tab.
Verify that the Emulex SCSI driver is present and started.
14. Click the Devices tab.
Verify that the host adapter is on the list.
Parameter settings for the Emulex LP8000 on a Windows NT host system
See Table 24 for the recommended host bus adapter configuration file parameters for
an Emulex LP8000 adapter. The settings apply to an ESS Model F20 that is attached
through a switch (fabric, automap SCSI devices port driver) and to an ESS Model
F20 that is attached directly (arbitrated loop, automap SCSI devices port driver).
Table 24. Recommended configuration file parameters for the host bus adapters for the
Emulex LP8000 adapter on a Windows NT host system
Automatically map SCSI devices: Checked (enabled)
Query name server for all N-ports: Checked (enabled)
Allow multiple paths to SCSI targets: Checked (enabled)
Point-to-point: Not checked (disabled) for direct attach; not shown for the fabric attach
Register for state change: Checked (enabled)
Use report LUNs: Checked (enabled)
Use name server after RSCN: Checked (enabled)
LUN mapping: Checked (enabled)
Automatic LUN mapping: Checked (enabled)
Scan in device ID order: Not checked (disabled)
Enable class 2 for SCSI devices: Not checked (disabled)
Report unknown SCSI devices: Not checked (disabled)
Look for disappearing devices: Not checked (disabled)
Translate queue full to busy: Not checked (disabled)
Use bus reset status for retries: Not checked (disabled)
Retry unit attention: Not checked (disabled)
Retry PLOGI open failures: Not checked (disabled)
Maximum number of LUNs: Equal to or greater than the number of ESS LUNs available to the host bus adapter
Maximum queue depth: 8
Link Timer: 30 seconds
Retries: 64
E_D_TOV: 2000 milliseconds
AL_TOV: 15 milliseconds
Wait ready timer: 45 seconds
Retry timer: 2000 milliseconds
R_A_TOV: 2 seconds
ARB_TOV: 1000 milliseconds
Link Control: See note.
Topology: Point-to-point (fabric); Arbitrated loop (direct attachment)
Link speed: Auto
Note: Link control is not shown for direct attachment.
Configuring the ESS with the Emulex LP8000 host adapter card
To configure the Emulex LP8000 host adapter card, use the ESS Specialist.
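One row of Table 24 expresses a rule rather than a fixed value: Maximum number of LUNs must be equal to or greater than the number of ESS LUNs available to the host bus adapter. The sketch below illustrates that check together with a few fixed recommended values; the dictionary and function names are our own, not Emulex utility output:

```python
# A few fixed recommended values from Table 24, plus a check for the one rule
# that depends on the configuration (Maximum number of LUNs).

LP8000_RECOMMENDED = {
    "maximum_queue_depth": 8,
    "link_timer_seconds": 30,
    "retries": 64,
}

def max_luns_ok(configured_max_luns, ess_luns_available):
    """True if the configured maximum covers every ESS LUN the adapter can see."""
    return configured_max_luns >= ess_luns_available
```

For example, a maximum of 256 LUNs satisfies the rule when 128 ESS LUNs are available, while a maximum of 64 does not.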
Configuring for availability and recoverability
This section describes how to ensure optimum availability and recoverability when
you attach an ESS to a Windows NT host system. You must set the timeout value
associated with the supported host bus adapters to 240 seconds. The setting is
consistent with the configuration for IBM SSA adapters and disk subsystems when
attached to a Windows NT host system.
The host bus adapter uses the timeout parameter to bound its recovery actions and
responses to the disk subsystem. The value exists in different places in the system
configuration. You can retrieve and use it in different ways depending on the type of
host bus adapter. The following instructions tell you how to modify the value safely
in either the Windows NT registry or in the device adapter parameters.
Setting the TimeOutValue registry
Perform the following steps to set the timeout value registry:
1. From the Run menu or command prompt, type:
Regedt32.exe
2. Navigate to the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
3. Look for the value called TimeOutValue. If the value called TimeOutValue does
not exist, go to step 3a. If the TimeOutValue exists, go to step 4.
a. Click Edit → Add Value....
b. For ValueName, type TimeOutValue.
c. For data type, select REG_DWORD.
d. Click OK.
e. For data, type f0.
f. For radix, select Hex.
g. Click OK.
4. If the value exists and is less than 0x000000f0 (240 decimal), perform the
following steps to increase it to 0xf0.
a. Click TimeOutValue.
b. Click Edit → DWORD....
c. For data, click f0.
d. For radix, click hex.
e. Click OK.
5. Exit the Regedt32 program.
6. Restart your Windows NT server for the changes to take effect.
Verifying the host system is configured for storage
Perform the following steps to determine whether your Windows NT 4.0 host
system is configured for storage:
1. Partition the new drives using Disk Administrator.
2. From the Windows NT desktop, right-click Start.
3. Click Explore and verify that you can see the fibre-channel drives.
4. Select a large file (for example, a 9 MB file), and drag (copy) it to a
fibre-channel drive.
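Step 4 above is essentially a copy-and-verify test. A minimal scripted equivalent is sketched below; the paths are temporary stand-ins for a real fibre-channel drive letter, and the function name is our own:

```python
# Copy a file to a destination directory and confirm the copy is byte-for-byte
# identical, mimicking the manual drag-and-copy check in step 4 above.

import filecmp
import shutil
import tempfile
from pathlib import Path

def copy_and_verify(source, dest_dir):
    """Copy source into dest_dir and compare the two files' contents."""
    source = Path(source)
    target = Path(dest_dir) / source.name
    shutil.copy2(source, target)          # preserves timestamps as well
    return filecmp.cmp(source, target, shallow=False)
```

In practice you would point the destination at a directory on the new fibre-channel drive rather than a temporary directory.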
Performing a FlashCopy from one volume to another volume
Perform the following steps to perform a FlashCopy from one Windows NT 4.0
volume to another volume. Before you perform the steps, you must log on with
administrator authority. The following steps assume you are performing the steps
from the host with the FlashCopy target:
1. From the task bar, click Start → Programs → Administrative Tools → Disk
Administrator.
2. The Disk Administrator error message window opens. Click OK.
3. From the Disk Administrator window, select the disk drive letter that is your
target.
4. From the menu bar, click Tools → Assign Drive Letter.
5. From the Assign Drive Letter window, click Do Not Assign a Drive Letter.
6. Click OK.
7. Perform the FlashCopy operation.
Note: If the ESS uses the volume serial numbers to do a FlashCopy on a
Windows NT host system, use the IBM Subsystem Device Driver to
obtain the volume serial numbers.
8. Go back to the Disk Administrator window. From the task bar, click Start →
Programs → Administrative Tools → Disk Administrator and select the
FlashCopy target.
9. From the menu bar, click Tools → Assign Drive Letter.
10. From the Assign Drive Letter window, click the assigned drive letter.
11. Click OK.
For more information about performing a FlashCopy, see Implementing ESS Copy
Services on UNIX and Windows NT/2000.
Chapter 12. Attaching to a Windows 2000 host
This chapter tells you how to attach the ESS to a Windows 2000 host system with
SCSI and fibre-channel adapters.
Attaching with SCSI adapters
This section describes how to attach an ESS to a Windows 2000 host system with
SCSI adapters. For procedures about how to attach an ESS to a Windows 2000
host system with fibre-channel adapters, see “Attaching with fibre-channel adapters”
on page 142.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
For details about the release level for your operating system, see the following
Web site:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You assign the SCSI hosts to the SCSI ports on the ESS.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
3. You configure the host system for the ESS by using the instructions in your host
system publications.
Notes:
1. Version 1.3.0.0 of the IBM Subsystem Device Driver supports the Windows
2000 host system in a clustering environment. To have failover protection on an
open system, the IBM Subsystem Device Driver requires a minimum of two
adapters. You can run the Subsystem Device Driver with one SCSI adapter, but
you have no failover protection. The maximum number of adapters supported is
16 for a total of 32 SCSI ports.
See the following web site for the most current information about the IBM
Subsystem Device Driver:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates
2. To improve performance, IBM recommends that you do not map to the LUNs for the
target volumes of the Windows 2000 host until you need access to the data on
the target volume. Perform the LUN mapping after the Peer-to-Peer Remote
Copy operation and immediately before you need access to the data. You must
restart the host system before you can access the data on the target volume.
You can greatly reduce the time it takes for the host system to restart if you
perform the LUN mapping. Otherwise, the time to restart could take 10 minutes
per Peer-to-Peer Remote Copy target volume.
Attaching an ESS to a Windows 2000 host system
This section describes how to attach a Windows 2000 host system to an ESS with
the following adapter cards:
v Adaptec AHA-2944UW
v Symbios 8751D
v QLogic QLA1041
Installing and configuring the Adaptec AHA-2944UW adapter card
Note: The steps to install and configure adapter cards are examples. Your
configuration might be different.
Perform the following steps to install and configure the Adaptec AHA-2944UW
adapter card:
1. Install the Adaptec AHA-2944UW in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Ctrl+A to get to the SCSISelect menu and the list of adapter cards to
configure.
5. From the SCSISelect menu, select Configure/View Host Adapter Settings.
v Set the parameters on the Configure/View Host Adapter Settings panel as
follows:
– Host Adapter SCSI ID: 7
– SCSI Parity Checking: Enabled
– Host Adapter SCSI Termination: Automatic
6. Select SCSI Device Configuration
v Set the parameters on the SCSI Device Configuration panel as follows:
– Sync Transfer Rate (megabytes per second): 40.0
– Initiate Wide Negotiation: Yes
– Enable Disconnection: Yes
– Send Start Unit Command: No
– Enable Write Back Cache: No
– BIOS Multiple LUN Support: Yes
– Include in BIOS Scan: Yes
7. Select Advanced Configuration Options
v Set the parameters on the Advanced Configuration Options panel as follows:
– Reset SCSI BIOS at IC Int: Enabled
– Display Ctrl+A Message During BIOS: Enabled
– Extend BIOS translation for DOS drives > 1 GB: Enabled
– Verbose or Silent Mode: Verbose
– Host Adapter BIOS: Disabled:scan bus
– Support Removable Disks under Basic Input/Output System (BIOS) as
fixed disks: Disabled
– BIOS support for bootable CD-ROM: Disabled
– BIOS support for INT 13 extensions: Enabled
8. Save the changes and select SCSISelect again to verify that you saved the
changes.
9. Restart the server.
10. Load the Adaptec driver, and restart the system if instructed to do so.
Installing and configuring the Symbios 8751D adapter card
Perform the following steps to install and configure the Symbios 8751D adapter
card.
Note: The parameter settings shown are an example. The settings for your
environment might be different.
1. Install the Symbios 8751D in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Ctrl+C to get to the Symbios Configuration Utility menu.
5. From the Symbios Configuration Utility menu, select LSI Logic Host Bus
Adapters.
6. Set the parameters on the LSI Logic Host Bus Adapters panel as follows:
a. Press F2 at the first panel.
b. Select the Boot Adapter list option to display the boot adapter list. See
Figure 61 for an example of the boot adapter list.
Note: The boot adapter list shows only user-definable parameters.
Boot Order  [0]
Next Boot   [Off]

Figure 61. Example of boot adapter list for the Symbios 8751D adapter card for Windows 2000
c. Perform the following steps to change the BIOS settings:
1) Highlight Next Boot, and then click On to change the setting.
2) Restart the host.
3) Select the Symbios Configuration Utility again, and make the changes.
4) After you make the changes, highlight Next Boot, and then click Off to
change the setting.
5) Restart the host.
d. Set the parameters on the Global Properties panel as follows:
v Pause When Boot Alert Displayed: [No]
v Boot Information Display Mode: [Verbose]
v Negotiate With Devices: [Supported]
v Video Mode: [Color]
v Restore Defaults (restores defaults)
e. Set the parameters on the Adapters Properties panel as follows:
v SCSI Parity: [Yes]
v Host SCSI ID: [7]
v SCSI Bus Scan Order: [Low to High (0..Max)]
v Removable Media Support: [None]
Chapter 12. Attaching to a Windows 2000 host
139
v CHS Mapping: [SCSI Plug and Play Mapping]
v Spinup Delay (Secs): [2]
v Secondary Cluster Server: [No]
v Termination Control: [Auto]
v Restore Defaults: (restores defaults)
f. Set the parameters on the Device Properties panel as follows:
v MT or Sec: [20]
v Data Width: [16]
v Scan ID: [Yes]
v Scan LUNs >0: [Yes]
v Disconnect: [On]
v SCSI Time-out: 240
v Queue Tags: [On]
v Boot Choice: [No]
v Format: [Format]
v Verify: [Verify]
v Restore defaults: (restores defaults)
g. Save the changes and select Symbios Configuration Utility again to verify
that you saved the changes.
7. Restart the server.
8. Load the Symbios driver, and restart the system if instructed to do so.
Installing and configuring the QLogic adapter card
Perform the following steps to install and configure the QLogic QLA1041 adapter
card.
Note: The parameter settings shown are an example. The settings for your
environment might be different.
1. Install the QLogic QLA1041 adapter card in the server.
2. Connect the cable to the ESS port.
3. Start the server.
4. Press Alt+Q to get to the FAST!Util menu.
a. From the Configuration Settings menu, select Host Adapter Settings. Set
the following parameters:
v Host Adapter: Enabled
v Host Adapter BIOS: Disabled
v Host Adapter SCSI ID: 7
v PCI Bus direct memory access (DMA) Burst: Enabled
v Compact disc Boot: Disabled
v SCSI Bus Reset: Enabled
v SCSI Bus Reset Delay: 5
v Concurrent Command or Data: Enabled
v Drivers Load RISC Code: Enabled
v Adapter Configuration: Auto
b. Set the parameters in the SCSI Device Settings menu as follows:
v Disconnects OK: Yes
v Check Parity: Yes
v Enable LUNS: Yes
v Enable Devices: Yes
v Negotiate Wide: Yes
v Negotiate Sync: Yes
v Tagged Queueing: Yes
v Sync Offset: 8
v Sync Period: 12
v Exec Throttle: 16
c. Save the changes and select FAST!Util again to verify that you saved the
changes.
5. Restart the server.
6. Load the QLogic driver, and restart the system if instructed to do so.
Configuring for availability and recoverability
This section describes how to ensure optimum availability and recoverability when
you attach an ESS to a Windows 2000 host system. You must set the timeout value
associated with the supported host bus adapters to 240 seconds. The setting is
consistent with the configuration for IBM SSA adapters and disk subsystems when
attached to a Windows 2000 host system.
The host bus adapter uses the timeout parameter to bound its recovery actions and
responses to the disk subsystem. The value exists in different places in the system
configuration. You can retrieve and use it in different ways depending on the type of
host bus adapter. The following instructions tell you how to modify the value safely
in either the Windows 2000 registry or in the device adapter parameters.
Setting the TimeOutValue registry
The following instructions tell you how to set the TimeOutValue registry value:
1. From the Run menu or command prompt, type:
Regedt32.exe
2. Navigate to the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
3. Look for the value called TimeOutValue. If it does not exist, go to step 3a. If it
exists, go to step 4.
a. Click Edit → Add Value...
b. For ValueName, type TimeOutValue.
c. For data type, click REG_DWORD.
d. Click OK.
e. For data, type f0.
f. For radix, click Hex.
g. Click OK.
4. If the value exists and is less than 0x000000f0 (240 decimal), perform the
following steps to increase it to 0xf0.
a. Click TimeOutValue.
b. Click Edit → DWORD...
c. For data, type f0.
d. For radix, click Hex.
e. Click OK.
5. Exit the Regedt32 program.
6. Restart your Windows 2000 server for the changes to take effect.
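The registry change above can also be captured in a file that you import with Regedt32 before restarting. The following Python sketch is an illustration only (the script and file name are not part of this guide); it generates a .reg file that sets TimeOutValue under the Disk service key to 0x000000f0 (240 decimal), matching the procedure above.

```python
# Sketch: build a .reg file that sets the disk TimeOutValue to 0xf0 (240 seconds).
# The key path and value name come from the procedure above; everything else
# (function name, output file name) is an arbitrary choice for this example.

TIMEOUT_SECONDS = 240  # 0xf0 hexadecimal, as required for ESS attachment

def build_reg_file(timeout: int = TIMEOUT_SECONDS) -> str:
    """Return the text of a .reg file that sets TimeOutValue as a REG_DWORD."""
    return (
        "Windows Registry Editor Version 5.00\r\n"
        "\r\n"
        "[HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Disk]\r\n"
        f'"TimeOutValue"=dword:{timeout:08x}\r\n'
    )

if __name__ == "__main__":
    # Write the file, then import it with Regedt32/Regedit and restart the host.
    with open("timeout.reg", "w", newline="") as f:
        f.write(build_reg_file())
    print(build_reg_file())
```

Importing the generated file has the same effect as steps 3 and 4 above; the server must still be restarted for the change to take effect.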
Performing a FlashCopy from one volume to another volume
Perform the following steps to perform a basic FlashCopy from one Windows 2000
volume to another volume. Before you perform the steps, you must log on with
administrator authority. The following steps assume you are performing the steps
from the host with the FlashCopy target.
1. Perform the FlashCopy operation.
Note: If the ESS uses the volume serial numbers to do a FlashCopy on a
Windows 2000 host system, use the IBM Subsystem Device Driver to
obtain the volume serial numbers.
2. Restart the server that has the target volume.
3. From the taskbar, click Start → Programs → Administrative Tools → Computer
Management → Disk Management to launch the disk management function.
This function assigns the drive letter to the target if needed.
If the volume is in the basic mode, you are finished.
If the volume is in dynamic mode, perform the following steps:
1. Perform the FlashCopy operation.
Note: If the ESS uses the volume serial numbers to do a FlashCopy on a
Windows 2000 host system, use the IBM Subsystem Device Driver to
obtain the volume serial numbers.
2. Restart the host machine (target).
3. From the taskbar, click Start → Programs → Administrative Tools → Computer
Management → Disk Management to launch the disk management function.
4. Find the disk that is associated with your volume.
There are two panels for each disk. The panel on the left should read Dynamic
and Foreign. It is probable that no drive letter will be associated with that
volume.
5. Right click on the panel and select Import Foreign Disks. Select OK, then OK
again.
The volume now has a drive letter assigned to it. It is defined as Simple Layout
and Dynamic Type. You can read and write to that volume.
6. Run CHKDSK if requested by Windows 2000.
Attaching with fibre-channel adapters
This section tells you how to attach an ESS to a Windows 2000 host system with
the following fibre-channel adapters.
v QLogic QLA2100F adapter card
v QLogic QLA2200F adapter card
v Emulex LP8000 adapter card
This section also tells you how to install, download, and configure the adapter
cards.
For procedures that describe how to attach a Windows 2000 host system with SCSI
adapters, see “Attaching with SCSI adapters” on page 137.
Attachment requirements
This section lists the requirements for attaching the ESS to your host system:
v Check the LUN limitations for your host system; see Table 6 on page 11.
v Ensure that you have the documentation for your host system and the IBM
Enterprise Storage Server User’s Guide. The User’s Guide is on the compact
disc that you receive with the ESS.
v See the following Web site for details about the release level for your operating
system:
www.storage.ibm.com/hardsoft/products/ess/supserver.htm
Either you or an IBM service support representative (SSR) must perform the
following tasks to install and configure an ESS.
1. The IBM SSR installs the ESS by using the procedures in the IBM Enterprise
Storage Server Service Guide.
2. You or an IBM SSR defines the fibre-channel host system with the worldwide
port name identifiers. For the list of worldwide port names see “Appendix A.
Locating the worldwide port name (WWPN)” on page 153.
3. You or an IBM SSR defines the fibre-port configuration if you did not do it during
the installation of the ESS or fibre-channel adapters.
Note: Use the information on the logical configuration work sheet in the IBM
Enterprise Storage Server Configuration Planner that you should have
previously filled out.
4. Either you or an IBM SSR configures the host system for the ESS by using the
instructions in your host system publications.
Notes:
1. Version 1.3.0.0 of the IBM Subsystem Device Driver supports the Windows
2000 host system in a clustering environment. To have failover protection on an
open system, the IBM Subsystem Device Driver requires a minimum of two
fibre-channel adapters. The maximum number of fibre-channel adapters
supported is 16 for a total of 16 fibre-channel ports.
See the following web site for the most current information about the IBM
Subsystem Device Driver:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates
2. To improve performance, IBM recommends that you do not map the LUNs for
the target volumes to the Windows 2000 host until you need access to the data
on the target volume. Perform the LUN mapping after the Peer-to-Peer Remote
Copy operation and immediately before you need access to the data. You must
restart the host system before you can access the data on the target volume.
Deferring the LUN mapping in this way can greatly reduce the time it takes for
the host system to restart; otherwise, the restart could take 10 minutes per
PPRC target volume.
Installing the QLogic QLA2100F adapter card
This section tells you how to attach an ESS to a Windows 2000 host system with
the QLogic QLA2100F adapter card.
Note: The arbitrated-loop topology is the only topology available for the QLogic
QLA2100F adapter card.
Perform the following steps to install the QLogic QLA2100F adapter card:
1. Install the QLogic QLA2100F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the host system.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, select Host Adapter Settings.
6. From the Advanced Adapter Settings menu, press the Down Arrow to
highlight LUNs per target. Press Enter.
7. Use the Down Arrow to find and highlight 256. Press Enter.
8. Press Esc.
9. To save the changes, highlight Yes. Press Enter.
10. Restart the host system.
Installing the QLogic QLA2200F adapter card
This section tells you how to attach an ESS to a Windows 2000 host system with
the QLogic QLA2200F adapter card.
Perform the following steps to install the QLogic QLA2200F adapter card:
1. Install the QLogic QLA2200F adapter card in the host system.
2. Connect the cable to the ESS port.
3. Restart the host system.
4. Press Alt+Q to get to the FAST!Util menu.
5. From the Configuration Settings menu, select Host Adapter Settings.
From the Host Adapter Settings menu, set the following parameters and
values:
a. Host adapter BIOS: Disabled
b. Frame size: 2048
c. Loop reset delay: 5 (minimum)
d. Adapter hard loop ID: Disabled
6. From the Advanced Adapter Settings menu, press the Down Arrow to
highlight LUNs per target; then press Enter. Set the parameters and values
from the Advanced Adapter Settings menu as follows:
a. Execution throttle: 100
b. Fast command posting: Enabled
c. >4 GB addressing: Disabled for 32 bit systems
d. LUNs per target: 0
e. Enable LIP reset: No
f. Enable LIP full login: No
Note: In a clustering environment, set Enable LIP full login to Yes.
g. Enable target reset: Yes
h. Login retry count: 20 (minimum)
i. Port down retry count: 20 (minimum)
j. Driver load RISC code: Enabled
k. Enable database updates: No
l. Disable database load: No
m. IOCB allocation: 256
n. Extended error logging: Disabled (might be enabled for debugging)
Note: The Enable LIP reset, Enable LIP full login, and Enable target reset
parameters control the behavior of the adapter when Windows 2000
tries to do a SCSI bus reset. You must perform a target reset to
make cluster failovers work. Use the SCSI bus device reset option
to clear SCSI reservations. The SAN Data Gateway does not
support LIP reset, and full login is not necessary after the target
reset.
7. Press Esc to return to the Configuration Settings menu.
8. From the Configuration Settings menu, scroll down to the Extended
Firmware Settings menu. Press Enter.
9. From the Extended Firmware Settings menu, scroll down to Connection
Options to open the Option and Type of Connection window.
10. Select the option:
v 0: Loop only
v 1: Point-to-point
v 2: Loop preferred (If you cannot use arbitrated loop, then default to
point-to-point)
v 3: Point-to-point, otherwise loop (If you cannot use point-to-point, default to
arbitrated loop).
Notes:
a. If you connect the ESS directly to the host system, the option you select
must match the port connections on the ESS.
b. If you connect through a switch, the options do not need to match the port
connections because the ESS is point-to-point.
c. The appropriate host bus adapter on the server must also support
point-to-point connection on a direct connection.
d. If you use adapter cards from different manufacturers, they will not function
properly in a direct point-to-point connection. This is not true if you connect
through a switch because the ESS is point-to-point.
11. Press Esc.
12. Save the changes. Highlight Yes.
13. Restart the host system.
Downloading the QLogic adapter driver
Perform the following steps to load the current driver onto the QLogic adapter card.
1. Go to the following Web site:
www.qlogic.com
2. From the home page, click Driver Download.
3. Click Use Qlogic Drivers.
4. Click IBM Enterprise Subsystems Division approved drivers.
5. Click IBM Approved QLA22xx drivers.
6. Click Link to Driver for (Windows 2000).
7. In the File Download window, click Save this Program to Disk.
8. You have the option to save the driver file to a floppy diskette or a directory on
your hard drive. Click Save.
A window that shows the progress of the download is displayed.
9. When the download completes, click Close.
10. Go to the file directory where you stored the file.
11. Unzip the file by double clicking the icon.
When you double click the icon, a window opens.
12. Click Unzip.
When the unzip process completes, you should see a message that says, x
files unzipped successfully, where x equals the number of files you
unzipped. Click OK.
13. Click Close to close the window for unzipping the file.
Installing the QLogic adapter drivers
Perform the following steps to install the fibre-channel adapter drivers.
Note: If you are installing the fibre-channel adapter for the first time, you must
specify the correct topology. You must also select the appropriate device
mapping driver.
1. From your Windows 2000 desktop, double click the icon for My Computer.
2. Double click the icon for Control Panel.
3. Double click the icon for SCSI Adapters.
4. In the SCSI Adapters window, click Drivers.
5. Click Add.
6. In the Install Drivers window, click Have Disk.
7. In the Install from Disk window, ensure that the drive letter in the Copy
Manufacturer’s Files From field is the drive letter you specified when you
saved the 2xxxxxxx.exe file in step 7 of “Downloading the QLogic adapter
driver” on page 145.
8. Type the name of the current driver file after the drive letter prompt in the
Copy Manufacturer’s Files From field.
9. Click OK.
10. Click OK to exit.
11. Restart your host system.
Configuring the ESS with the QLogic QLA2100F or QLA2200F adapter
card
To configure the host adapter card, use the ESS Specialist.
Installing the Emulex LP8000 adapter card
This section tells you how to attach an ESS to a Windows 2000 host system with
the Emulex LP8000 adapter card. Single- and dual-port fibre-channel interfaces with
the Emulex LP8000 adapter card support the following public and private loop
modes:
v Target
v Public initiator
v Private initiator
v Target and public initiator
v Target and private initiator
Note: If you use the Emulex LP8000 adapter card with the McData ED-5000
switch, non-switched configurations are not supported. For fibre-channel
connection through the SAN Data Gateway, the ED-5000 is only supported
on the Emulex adapter.
Perform the following steps to install the Emulex LP8000 adapter card:
1. Turn off and unplug the computer.
2. Remove the computer case.
3. Remove the blank panel from an empty PCI bus slot.
4. Insert host adapter board into the empty PCI bus slot. Press firmly until seated.
5. Secure the mounting bracket for the adapter to the case with the panel screw.
6. Replace the computer case by tightening the screws on the case or use the
clamp to secure the cover.
Downloading the Emulex adapter driver
Perform the following steps to install the adapter driver.
1. Plug in and restart your host system.
2. Go to the following Web site:
www.emulex.com
3. From the Quick Links menu, click Documentation, Drivers and Software.
4. Click the host adapter type from the host adapter menu.
For example, click Emulex LP8000.
5. Click Drivers for Windows 2000.
6. Click Specialized Drivers.
7. Click SCSI/ID multi-port xxxxx or SCSI port xxxxx where xxxxx is the name
of the adapter driver.
8. Click Download Now.
9. From the File Download window, click the appropriate button and proceed as
indicated:
v Open this file from its current location
Go to step 10.
v Save this file to disk
Go to step 17 on page 148.
10. In the Winzip window, click I agree.
11. In the WinZip Wizard - Welcome window, click Next.
12. In the WinZip Wizard - Select Zip File xxxxxxxx.zip window, where
xxxxxxxx is the name of the file, highlight the file that you want to unzip.
13. Click Next.
14. In the WinZip Wizard - Unzip window, click Unzip now.
A window opens that indicates the progress of the download operation. When
progress indicator window closes, the download is complete. When the
operation to unzip the file completes, a window opens to display the following
file names:
v Lpscsi
v Lputilnt
v Oemsetup
v Readme
v Txtsetup.oem
15. Double click Readme to get the instructions to install the fibre-channel
adapter driver. Print the Readme file.
16. In the WinZip Wizard - Unzip Complete window, click Close.
17. Ensure that the name of the file you want to download is displayed in the
window.
If the name of the file you want to download is not displayed in the window, go
to step 2 in “Downloading the Emulex adapter driver” on page 147.
18. Click Save to download and unzip the file to your hard drive.
A window opens that indicates the progress of the download operation. When
the progress indicator window closes, the download is complete.
Installing the Emulex adapter drivers
Perform the following steps to install the fibre-channel adapter drivers.
Note: If you are installing the fibre-channel adapter for the first time, you must
specify the correct topology. You must also select the appropriate device
mapping driver.
1. From your desktop, click Start → Settings.
2. Double click Control Panel.
3. Double click SCSI Adapters.
4. Click the Drivers tab.
5. Click Add to create a list of drivers.
A window opens that indicates the progress of the operation. When the
operation completes, the window closes and displays another window called
Install Driver.
6. From the Install Driver window, click Have Disk.
7. Enter the path to the driver file that you downloaded and click OK.
For example, if you downloaded the adapter driver file to a folder called
Emulex, type c:\emulex\emulex.zip.
8. To install the driver, highlight the line that lists the driver you want and click
OK.
Note: The driver affects every adapter in the system. If you have more than
one adapter that requires different parameter settings, you must change
the parameter settings with the port utility and restart your host system.
9. Click Yes to restart the host system.
10. After you restart your host system, click Start → Settings.
11. Double click Control Panel.
12. Double click SCSI Adapters.
13. Click the Drivers tab.
Verify that the Emulex SCSI driver is present and started.
14. Click the Devices tab.
Verify that the host adapter is on the list.
Parameter settings for the Emulex LP8000 for a Windows 2000 host
system
See Table 25 for recommended host bus adapter configuration file parameters for
an Emulex LP8000 adapter. The settings are for an ESS model F20 that is attached
through a switch using the fabric, automap SCSI devices port driver, and an ESS
model F20 that is attached directly, using the arbitrated loop, automap SCSI
devices port driver.
Table 25. Recommended configuration file parameters for the host bus adapters for the Emulex LP8000 adapter on a
Windows 2000 host system

Parameters                              Recommended settings
Automatically map SCSI devices          Checked (enabled)
Query name server for all N-ports       Checked (enabled)
Allow multiple paths to SCSI targets    Checked (enabled)
Point-to-point                          v Not checked (disabled) for direct attach
Register for state change               Checked (enabled)
Use report LUNs                         Checked (enabled)
Use name server after RSCN              Checked (enabled)
LUN mapping                             Checked (enabled)
Automatic LUN mapping                   Checked (enabled)
Scan in device ID order                 Not checked (disabled)
Enable class 2 for SCSI devices         Not checked (disabled)
Report unknown SCSI devices             Not checked (disabled)
Look for disappearing devices           Not checked (disabled)
Translate queue full to busy            Not checked (disabled)
Use bus reset status for retries        Not checked (disabled)
Retry unit attention                    Not checked (disabled)
Retry PLOGI open failures               Not checked (disabled)
Maximum number of LUNs                  Equal to or greater than the number of the ESS
                                        LUNs available to the host bus adapter
Maximum queue depth                     8
Link Timer                              30 seconds
Retries                                 64
E_D_TOV                                 2000 milliseconds
AL_TOV                                  15 milliseconds
Wait ready timer                        45 seconds
Retry timer                             2000 milliseconds
R_A_TOV                                 2 seconds
ARB_TOV                                 1000 milliseconds
Link Control¹                           See note.
Topology                                v Not shown for the fabric attach
                                        v Point-to-point (fabric)
                                        v Arbitrated loop (direct attachment)
Link speed                              Auto

Note: ¹ Link control is not shown for direct attachment.
Configuring the ESS with the Emulex LP8000 host adapter card
To configure the Emulex LP8000 adapter card, use the ESS Specialist.
Configuring for availability and recoverability for a Windows 2000 host
system
This section describes how to ensure optimum availability and recoverability when
you attach an ESS to a Windows 2000 host system. You must set the timeout value
associated with the supported host bus adapters to 240 seconds. The setting is
consistent with the configuration for IBM SSA adapters and disk subsystems when
attached to a Windows 2000 host system.
The host bus adapter uses the timeout parameter to bound its recovery actions and
responses to the disk subsystem. The value exists in different places in the system
configuration. You can retrieve and use it in different ways depending on the type of
host bus adapter. The following instructions tell you how to modify the value safely
in either the Windows 2000 registry or in the device adapter parameters.
Setting the TimeOutValue registry
Perform the following steps to set the timeout value registry:
1. From the Run menu or command prompt, type:
Regedt32.exe
2. Navigate to the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
3. Look for the value called TimeOutValue. If it does not exist, go to step 3a. If it
exists, go to step 4.
a. Click Edit → Add Value...
b. For ValueName, type TimeOutValue.
c. For data type, click REG_DWORD.
d. Click OK.
e. For data, type f0.
f. For radix, click Hex.
g. Click OK.
4. If the value exists and is less than 0x000000f0 (240 decimal), perform the
following steps to increase it to 0xf0.
a. Click TimeOutValue.
b. Click Edit → DWORD...
c. For data, type f0.
d. For radix, click Hex.
e. Click OK.
5. Exit the Regedt32 program.
6. Restart your Windows 2000 server for the changes to take effect.
Verifying the host is configured for storage
Perform the following steps to determine whether or not your Windows 2000 host
system is configured for storage:
1. Partition new drives with the disk management function.
2. From the Windows 2000 desktop, right click Start.
3. Click Explore and verify that you can see the fibre-channel drives.
4. Select a large file (for example, 9 MB file), and drag (copy) it to a fibre-channel
drive.
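Step 4 above is a manual spot check. As a sketch only (not part of this guide; the function name, test-file name, and 9 MB size are our own choices mirroring the example above), the same write-and-read-back check can be scripted:

```python
import hashlib
import os
import tempfile

def verify_drive(directory: str, size_mb: int = 9) -> bool:
    """Write a size_mb MB test file into directory, read it back, and
    compare checksums to confirm the volume accepts I/O correctly."""
    data = os.urandom(size_mb * 1024 * 1024)
    target = os.path.join(directory, "ess_write_test.bin")
    try:
        with open(target, "wb") as f:
            f.write(data)
        with open(target, "rb") as f:
            readback = f.read()
        return hashlib.md5(readback).digest() == hashlib.md5(data).digest()
    finally:
        # Remove the test file whether or not the comparison succeeded.
        if os.path.exists(target):
            os.remove(target)

if __name__ == "__main__":
    # Point this at a directory on the fibre-channel drive you want to check.
    print(verify_drive(tempfile.gettempdir()))
```

A False result (or an I/O error) indicates the same problem the manual drag-and-copy test would reveal.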
Performing a FlashCopy from one volume to another volume
You can perform two types of FlashCopy from one Windows 2000 volume to
another volume:
1. Basic
2. Dynamic
Performing a basic FlashCopy
Perform the following steps to perform a basic FlashCopy from one Windows 2000
volume to another volume. Before you perform the steps, you must log on with
administrator authority. The following steps assume you perform the steps from the
host where the FlashCopy target is.
1. Perform the FlashCopy operation.
Note: If the ESS uses the volume serial numbers to do a FlashCopy on a
Windows 2000 host system, use the IBM Subsystem Device Driver to
obtain the volume serial numbers.
2. Restart the server that has the target volume.
3. From the taskbar, click Start → Settings → Control Panel.
4. From the Control Panel window, double click Administrative Tools.
5. From the Administrative Tools window, double click Computer Management.
6. From the Computer Management window, double click Disk Management to
launch Disk Management.
This assigns the drive letter to the target if needed.
If the volume is in the basic mode, you are finished.
Performing a dynamic FlashCopy
Perform the following steps to perform a dynamic FlashCopy from one Windows
2000 volume to another volume. Before you perform the steps, you must log on
with administrator authority. The following steps assume you perform the steps from
the host where the FlashCopy target is.
1. Perform the FlashCopy operation.
Note: If the ESS uses the volume serial numbers to do a FlashCopy on a
Windows 2000 host system, use the IBM Subsystem Device Driver to
obtain the volume serial numbers.
2. Restart the server that has the target volume.
3. From the taskbar, click Start → Settings → Control Panel.
4. From the Control Panel window, double click Administrative Tools.
5. From the Administrative Tools window, double click Computer Management.
6. From the Computer Management window, double click Disk Management to
launch Disk Management.
This assigns the drive letter to the target if needed.
7. Find the disk that is associated with your volume.
There are two panels for each disk. The panel on the left should read Dynamic
and Foreign. It is probable that a drive letter is not associated with that
volume.
8. Right click on that panel and select Import Foreign Disks.
9. Click OK, then OK again.
The volume now has a drive letter assigned to it. It is defined as Simple
Layout and Dynamic Type. You can read and write to that volume.
10. Run CHKDSK if requested by Windows 2000.
Appendix A. Locating the worldwide port name (WWPN)
This chapter tells you how to locate the WWPN value for a fibre-channel adapter on
the following host systems:
v Compaq
v Hewlett-Packard 9000
v IBM eServer AS/400 and iSeries
v IBM eServer NUMA-Q or xSeries 430
v IBM eServer RS/6000 and pSeries
v Novell NetWare
v Sun
v Windows NT 4.0
v Windows 2000
Fibre-channel port name identification
The WWPN consists of exactly 16 hexadecimal characters (0 - 9 and A - F). The
ESS uses it to uniquely identify the fibre-channel adapter card that is installed in
your host system. The ESS automatically finds the WWPN for your host
fibre-channel adapter when you attach your host system to the ESS.
Note: If your host uses more than one fibre-channel adapter to connect to your
ESS, you must add multiple entries to the host list for this host. You must
add one for each fibre-channel adapter. Each adapter will have its own
unique WWPN.
The format and content of the fibre-channel port identifier are determined by the
manufacturer of the link control facility for the applicable fibre-channel port. The
identifier is an eight-byte field, which the fibre-channel protocols use to uniquely
identify the fibre-channel port.
You can manually locate a unique worldwide port name for the ESS by performing
the steps in the following sections.
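Because the WWPN is exactly 16 hexadecimal characters while the host systems in the following sections display it in different styles (for example, dashed groups such as 1000-0000-c922-d469 on a Compaq host), it can help to normalize the displayed value before recording it. The following Python sketch is illustrative only; the function name is our own, not part of this guide.

```python
import re

def normalize_wwpn(raw: str) -> str:
    """Strip separators from a displayed WWPN and verify that exactly 16
    hexadecimal characters (0 - 9 and A - F) remain."""
    digits = re.sub(r"[^0-9A-Fa-f]", "", raw)
    if len(digits) != 16:
        raise ValueError(f"not a 16-hex-digit WWPN: {raw!r}")
    return digits.upper()

# A dash-separated display and a colon-separated display both reduce to the
# same bare 16-character identifier.
print(normalize_wwpn("1000-0000-c922-d469"))      # 10000000C922D469
print(normalize_wwpn("10:00:00:00:c9:22:d4:69"))  # 10000000C922D469
```

If your host uses more than one fibre-channel adapter, normalize and record each WWPN separately, since every adapter has its own unique value.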
Locating the WWPN for a Compaq host
To locate the WWPN for a Compaq host system, perform the following steps:
1. From the console prompt, type: P0>>>wwidmgr -show ada
See Figure 62 for an example of what displays when you type the
P0>>>wwidmgr -show command.
Probing timeout
item     adapter         WWN                   Cur. Topo   Next Topo
[ 0]     pga0.0.0.7.1    1000-0000-c922-d469   FABRIC      FABRIC
[ 1]     pgb0.0.0.8.1    2000-0000-c922-6a63   FABRIC      FABRIC
[9999]   All of the above.

Figure 62. Example of the output from the Compaq wwidmgr -show command
Figure 62. Example of the output from the Compaq wwidmgr -show command
You might receive the following errors:
© Copyright IBM Corp. 1999, 2001
153
Message:
wwidmgr available only prior to booting. Reinit system and try again.
Explanation:
Type P00>>>init wwidmgr again.
Message:
wwidmgr: No such command
Explanation:
Type P00>>>set mode diag wwidmgr
If the system is already running, you can find the WWPN in the log file
/var/adm/messages.
2. Type: #fgrep wwn /var/adm/messages
Figure 63 shows an example of the output when you type #fgrep wwn
/var/adm/messages. You can find the WWPN in the last column.
...
Nov 9  09:01:16 osplcpq-ds20 vmunix: KGPSA-BC : Driver Rev 1.21 : F/W Rev 2.22X1(1.13) : wwn 1000-0000-c922-d469
Nov 10 10:07:12 osplcpq-ds20 vmunix: KGPSA-BC : Driver Rev 1.21 : F/W Rev 2.22X1(1.13) : wwn 1000-0000-c922-d469
Nov 13 17:25:28 osplcpq-ds20 vmunix: KGPSA-BC : Driver Rev 1.21 : F/W Rev 2.22X1(1.13) : wwn 1000-0000-c922-d469
Nov 14 11:08:16 osplcpq-ds20 vmunix: KGPSA-BC : Driver Rev 1.21 : F/W Rev 2.22X1(1.13) : wwn 1000-0000-c922-d469
Nov 15 10:49:31 osplcpq-ds20 vmunix: KGPSA-BC : Driver Rev 1.21 : F/W Rev 2.22X1(1.13) : wwn 1000-0000-c922-d469
...

Figure 63. Example of the output from the Compaq #fgrep wwn /var/adm/messages command
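Rather than scanning the grep output by eye, the trailing wwn field can be extracted programmatically. This Python sketch is ours, not part of the guide; it pulls the unique WWPNs from /var/adm/messages-style lines such as those in Figure 63.

```python
import re

# Matches the trailing "wwn xxxx-xxxx-xxxx-xxxx" field in a vmunix driver message.
WWN_RE = re.compile(r"wwn\s+([0-9a-fA-F]{4}(?:-[0-9a-fA-F]{4}){3})")

def wwpns_from_messages(lines):
    """Return the unique WWPNs found in /var/adm/messages-style lines,
    in order of first appearance."""
    seen = []
    for line in lines:
        m = WWN_RE.search(line)
        if m and m.group(1) not in seen:
            seen.append(m.group(1))
    return seen

sample = [
    "Nov 9 09:01:16 osplcpq-ds20 vmunix: KGPSA-BC : Driver Rev 1.21 : "
    "F/W Rev 2.22X1(1.13) : wwn 1000-0000-c922-d469",
]
print(wwpns_from_messages(sample))  # ['1000-0000-c922-d469']
```

In Figure 63 every line reports the same adapter, so the repeated entries collapse to a single WWPN.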
Locating the WWPN for a Hewlett-Packard host
To locate the WWPN for a Hewlett-Packard host system, perform the following
steps:
1. Go to the root directory.
2. Type: ioscan -fn | more
3. Look under the description for the Fibre Channel Mass Storage adapter.
For example, look for the device path name /dev/td1.
4. Type: fcmsutil /dev/td1 where /dev/td1 is the path.
Locating the WWPN for an iSeries host
To locate the WWPN for an iSeries host system, perform the following steps:
1. On the iSeries Main Menu panel, type strsst.
2. On the Start Service Tools (STRSST) Sign On panel, type your service tools
user ID and password.
3. On the System Service Tools (SST) panel, type 1 to select Start a service tool.
4. On the Start a Service Tool panel, type 7 to select Hardware service manager.
5. On the Hardware Service Manager panel, type 1 to select Packaging hardware
resources (systems, frames, cards,...).
6. On the Packaging Hardware Resources panel, type 9 to select the System
Expansion unit.
7. On the Packaging Hardware Resources panel, type 8 to select Multiple Function
IOA.
8. On the Logical Resources Associated with a Packaging Resource panel, type 5
to select Multiple Function IOA.
9. On the Auxiliary Storage Hardware Resource Detail panel, locate the field name
for the Worldwide Port Name. The number in the right column is the WWPN.
Note: If you have exchanged a 2766 Fibre Channel IOA in the iSeries system, you
must update the worldwide port name of the new 2766 IOA in the IBM 2105
ESS disk unit subsystem. You can find the name in the port worldwide name
field on the iSeries system by displaying the detail of the 2766 IOA
Logical Hardware Resource information in Hardware Service Manager in
SST/DST.
Locating the WWPN for an IBM eServer xSeries or IBM NUMA-Q host
To locate the WWPN for an IBM eServer xSeries or IBM NUMA-Q host with an
IOC-0210-54 adapter, perform the following steps:
1. From the Enterprise Storage Specialist Welcome panel, click Storage
Allocation.
2. From the Storage Allocation Graphical View panel, click Open System
Storage.
3. From the Open System Storage panel, click Modify Host Systems.
4. In the Host Nickname field, type the nickname.
5. In the Host Name field, click either IBM NUMA Server (WinNt) or IBM NUMA
Server (UNIX) from the list.
6. Click the Down Arrow to the right of the Host Attachment field.
7. From the list, highlight then click Fibre-Channel Attached.
8. In the Hostname/IP Address field, type the hostname.
9. Click the Down Arrow to the right of the Worldwide Port Name field.
10. Select the worldwide port name from the list.
11. Click Perform Configuration Update.
Locating the WWPN for an IBM eServer RS/6000 and pSeries host
To locate the WWPN for an RS/6000 or pSeries host system, perform the following
steps:
1. Log in as root.
2. Type lscfg -vl fcsx, where x is the adapter number.
The network address is the fibre-channel adapter port WWPN value.
Note: The lscfg -vl fcsx ROS level identifies the fibre-channel adapter firmware
level.
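Because the WWPN appears in the Network Address field of the lscfg output, it
can also be captured in a script. A minimal sketch follows, parsing a sample
field line; the sample value shown is illustrative only, not output from a real
adapter.

```shell
# Strip the dot leaders from a captured "Network Address" line of
# lscfg -vl fcs0 output. The sample value is illustrative only.
sample='        Network Address.............10000000C922D469'
wwpn=$(printf '%s\n' "$sample" | sed 's/.*\.//')
echo "$wwpn"    # prints 10000000C922D469
```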
Locating the WWPN for a Linux host
To locate the WWPN for an Intel server that runs Red Hat Linux 7.1 or SuSE
Linux 7.1 with a QLogic adapter, perform the following steps:
1. Restart the server.
2. Press Alt+Q to get the FAST!Util menu.
If you have more than one fibre-channel adapter installed, all the fibre-channel
adapters display. Scroll down to the adapter you want. Press Enter.
3. From the FAST!Util menu, scroll down and select Select Host Adapter.
4. Scroll up and highlight Configuration Settings. Press Enter.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. Write down the host adapter name, for example: 200000E08B00C2D5.
Locating the WWPN for a Novell NetWare host
To locate the WWPN for a Novell NetWare host system with a QLogic adapter,
perform the following steps:
1. Restart the server.
2. Press Alt+Q to get the FAST!Util menu.
If you have more than one fibre-channel adapter installed, all the adapters
display. Scroll down to the adapter you want. Press Enter.
3. From the FAST!Util menu, scroll down and select Select Host Adapter.
4. Scroll up and highlight Configuration Settings. Press Enter.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. Write down the host adapter name, for example: 200000E08B00C2D5.
Locating the WWPN for a Sun host
Note: If you have multiple host adapters installed, you will see more than one
WWPN.
Perform the following steps to locate the WWPN for the following adapters:
v JNI PCI adapter
v JNI SBUS adapter
v QLogic QLA2200F adapter
v Emulex LP8000 adapter
1. After you install the adapter and you restart the host system, view the
/usr/adm/messages file.
2. Search for the line that contains the following phrase:
a. For the JNI SBUS adapter, search for fcawx: Fibre Channel WWNN, where x
is the adapter number (0, 1, and so on). You can find the WWPN on the
same line immediately after the WWNN.
b. For the JNI PCI adapter, search for fca-pcix: Fibre Channel WWNN, where x
is the adapter number (0, 1, and so on). You can find the WWPN on the
same line following the WWNN.
c. For the QLogic QLA2200F adapter, search for qla2200-hbax-adapter-portname,
where x is the adapter number (0, 1, and so on).
d. For the Emulex LP8000 adapter, search for lpfcx: Fibre Channel WWNN,
where x is the adapter number (0, 1, and so on).
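The searches in steps a through d can be combined into a single grep over the
messages file. The following is a sketch run against an inline sample line; the
sample text, hostname, and WWPN values are hypothetical, and real driver
messages differ in detail.

```shell
# Search captured /usr/adm/messages text for any of the four driver
# prefixes named above. The sample line is hypothetical.
sample='Jun  1 08:00:00 sunhost fcaw0: Fibre Channel WWNN 100000E069420000 WWPN 100000E069420001'
match=$(printf '%s\n' "$sample" | grep -E 'fcaw[0-9]|fca-pci[0-9]|qla2200-hba|lpfc[0-9]')
echo "$match"
```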
Locating the WWPN for a Windows NT host
To locate the WWPN for a Windows NT host system with an Emulex LP8000
adapter, perform the following steps:
1. Click Start → Programs → Emulex Configuration Tool
2. From the Emulex Configuration Tool menu in the Available Adapters window,
double click the adapter entry for which you want to display the WWPN
information.
To locate the WWPN for a Windows NT host system with a Qlogic adapter, perform
the following steps:
1. Restart the server.
2. Press Alt+Q to get the FAST!Util menu.
If you have more than one fibre-channel adapter installed, all the adapters
display. Scroll down to the adapter you want. Press Enter.
3. From the FAST!Util menu, scroll down and select Select Host Adapter.
4. Scroll up and highlight Configuration Settings. Press Enter.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. Write down the host adapter name, for example: 200000E08B00C2D5.
Locating the WWPN for a Windows 2000 host
To locate the WWPN for a Windows 2000 host system with a QLogic adapter,
perform the following steps:
1. Restart the server.
2. Press Alt+Q to get the FAST!Util menu.
If you have more than one fibre-channel adapter installed, all the fibre-channel
adapters display. Scroll down to the adapter you want. Press Enter.
3. From the FAST!Util menu, scroll down and select Select Host Adapter.
4. Scroll up and highlight Configuration Settings. Press Enter.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. Write down the host adapter name, for example: 200000E08B00C2D5.
To locate the WWPN for a Windows 2000 host system with an Emulex LP8000
adapter, perform the following steps:
1. Click Start → Programs → Emulex Configuration Tool
2. From the Emulex Configuration Tool menu in the Available Adapters window,
double click the adapter entry for which you want to display the WWPN
information.
Appendix B. Migrating from SCSI to fibre-channel
This chapter describes how to migrate disks or logical volumes within the ESS from
SCSI to fibre-channel for the following host systems:
v Hewlett-Packard
v RS/6000 (AIX)
v Windows NT
v Windows 2000
An experienced system administrator should perform the migration.
There are two methods for migrating:
v Nonconcurrent
v Concurrent
See the ESS Fibre-Channel Migration Scenarios white paper for information about
changing your host system attachment to the ESS from SCSI and SAN Data
Gateway to native fibre-channel attachment. The white paper is available from
the following ESS Web site:
www.storage.ibm.com/hardsoft/products/ess/refinfo.htm
or directly at:
www.storage.ibm.com/hardsoft/products/ess/support/essfcwp.pdf
See also the Implementing Fibre-Channel Attachment on the ESS Redbook. This
book helps you install, tailor, and configure fibre-channel attachment of
open-systems hosts to the ESS. It gives you a broad understanding of the
procedures involved, describes the prerequisites and requirements, and shows
you how to implement fibre-channel attachment. It also describes the steps you
must perform to migrate to direct fibre-channel attachment from native SCSI
adapters, and the steps to migrate from fibre-channel attachment through the
SAN Data Gateway (SDG).
Software requirements
You must have the following software before you migrate from SCSI adapters to
fibre-channel adapters:
v Internet browser (Netscape or Internet Explorer)
v SAN Data Gateway Explorer
v Disk Administrator
Preparing a host system to change from SCSI to fibre-channel
attachment
Before you migrate, you must have at least one ESS with SCSI adapters already
installed.
Before you can use ESS Specialist to change a host system attached to the ESS
from SCSI to fibre-channel, perform the following steps:
1. Upgrade the microcode and install the IBM Subsystem Device Driver (optional).
For more information about the IBM Subsystem Device Driver, see the Web
site:
www.ibm.com/storage/support/techsup/swtechsup.nsf/support/sddupdates/
2. Reset your host system or execute the appropriate command to initiate device
discovery.
Nonconcurrent migration
This section tells you how to migrate disks using a nonconcurrent method. Other
host systems attached to the same ESS can continue their work while this
procedure is performed.
Note: The migration procedures in this chapter have been tested successfully.
However, there are other ways you can migrate disks to fibre-channel.
The procedures are valid if the following assumptions are true:
v Your host system has disks that are located on an ESS that is in use. They are
attached to the host system with one SCSI interface.
v The host system has one or more fibre-channel adapters installed.
v The ESS has a fibre-channel adapter, feature code 3022, installed and the
interconnections between the host system and the adapter have been
established.
v All appropriate software prerequisites, drivers, and program temporary fixes have
been implemented.
Migrating from native SCSI to fibre-channel
The following procedure outlines the tasks involved in performing a nonconcurrent
migration from native SCSI to fibre-channel attachment.
1. Stop the applications that are using the disks.
2. Perform all operating system actions required to free the disks. For example,
a. Unmount file systems.
b. Vary off the volume group.
c. Export the volume group.
3. Unassign the disks using the ESS Specialist.
This step makes the volumes inaccessible from the host system. The disks are
still available inside the ESS and the data on these volumes is also retained.
4. Use the ESS Specialist to assign these disks to fibre-channel.
5. Perform the following steps to make the volumes usable by the operating
system:
a. Restart the host system.
b. Import the volume group.
c. Mount the file system.
Migrating on a Hewlett-Packard host
Perform the following steps to migrate from SCSI to fibre-channel on a
Hewlett-Packard host system.
1. Shut down databases or applications that use the disks that you are migrating.
2. Unmount the file systems (umount <file system>) with the System
Administration Manager (SAM) utilities.
3. Use the ESS Specialist to identify all the disks on the ESS that are assigned to
the affected volume group.
4. Deactivate the volume group using the System Administration Manager (SAM).
5. Export the volume group.
Enter a file name (full path) where you want to store the logical volume
information for the volume group that you are exporting. See Figure 64 for an
example of the information that is displayed.
6. Disconnect the SCSI cable and connect a fibre-channel cable to the host
system and to the ESS. You do not need to remove the SCSI cable.
7. Use the ESS Specialist to unassign all the LUNs on the ESS that were
assigned to the host system through the SCSI cable. Assign them to a
fibre-channel port. Use the worldwide port name to configure the ESS.
8. After the configuration completes successfully, you will be able to see those
disks the next time you check the disk devices.
9. Type sam at your host system command prompt. Click Disks and File
Systems. Press Enter.
10. Click Volume Groups. Use the Tab key to select an item from the list.
11. Click Actions and press Enter. From the list of options, click Import. Press
Enter.
The volume groups that are available to import display.
12. Click the volume group you exported to see all the disks associated with that
volume group.
13. Use the Tab key to go to the New Volume Group Name field.
14. Type the volume group name.
You can use the same name or any name you like.
15. Tab to the next field and type in the map file name that was created while
exporting the volume group. Figure 64 shows an example of the information
that is displayed.
Figure 64. SAM display of an export volume group
16. Use the Tab key or up and down arrows to select OK. Press Enter to import the
volume group.
See Figure 65 for an example of what is displayed by SAM when you import a
volume group.
Figure 65. SAM display of an import volume group
After the command completes successfully, the essvg volume group is active
as shown in Figure 65.
17. Mount all your file systems that were unmounted in the beginning of this
migration process.
18. Restart your databases and applications when all the file systems are
available.
Migrating on an IBM RS/6000 host
Perform the following steps to migrate from SCSI to fibre-channel on an IBM AIX
host system:
1. Shut down databases and applications that use the disks that you are
migrating.
2. Unmount the file systems.
3. Identify all the disks that are assigned to the affected volume group using the
lsvg -p <volumegroup name> command. For example, type:
lsvg -p <volume group> | grep hdisk | cut -f1 -d" " > /tmp/disk1
4. Execute the script shown in Figure 66 on page 163.
for i in `cat /tmp/disk1`
do
  SN=`lscfg -vl $i | grep Serial`
  echo $i, $SN >> /tmp/output
done
Figure 66. Sample script to get the hdisk number and the serial number
See Figure 67 for an example of the output for the script.
hdisk101, Serial Number...............017FCA49
hdisk102, Serial Number...............018FCA49
hdisk103, Serial Number...............019FCA49
hdisk104, Serial Number...............01AFCA49
hdisk105, Serial Number...............01BFCA49
hdisk106, Serial Number...............01CFCA49
hdisk107, Serial Number...............01DFCA49
hdisk108, Serial Number...............01EFCA49
hdisk109, Serial Number...............01FFCA49
hdisk110, Serial Number...............734FCA49
hdisk111, Serial Number...............735FCA49
hdisk112, Serial Number...............736FCA49
hdisk113, Serial Number...............634FCA49
hdisk114, Serial Number...............635FCA49
hdisk115, Serial Number...............636FCA49
hdisk116, Serial Number...............534FCA49
hdisk117, Serial Number...............535FCA49
hdisk118, Serial Number...............536FCA49
hdisk119, Serial Number...............434FCA49
hdisk120, Serial Number...............435FCA49
hdisk121, Serial Number...............436FCA49
hdisk122, Serial Number...............334FCA49
Figure 67. Example list of the hdisk and serial numbers on the ESS
5. Vary off the volume group by typing varyoffvg <volume group name>.
6. Export the volume group (exportvg <volume group name>).
7. Use the ESS Specialist to unassign all disks (LUNs) on the ESS that were
listed when you ran the script in step 4 on page 162.
8. Delete all disks that belong to the volume group that you just exported by
typing rmdev -l <hdisk name> -d for each disk.
9. Assign all those disks documented in step 3 on page 162 to the host system
that needs access to them. Use the worldwide port name to identify the host
bus adapter.
10. Run the cfgmgr command at the AIX command prompt. Execute the function
to install or configure devices that were added after IPL using SMIT, or restart
the host system.
11. After the command completes successfully or after a restart, check to see that
the disks are available that were part of the volume group. Compare to the
document created by step 4 on page 162.
12. When you see that all disks are available, you can import the volume group
by typing importvg -y <volumegroup> <hdisk name>.
13. At the command prompt, verify that all disks are available (imported) to use in
the volume group (lsvg -p <volumegroup>).
14. Mount all file systems that were unmounted in step 2 on page 162.
15. Start your application or databases.
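Step 11 compares the disks that reappear after cfgmgr with the serial-number
list captured by the script in Figure 66. One way to make that comparison
independent of discovery order is to sort both lists before comparing. The
following sketch uses inline sample data; the serial values are only
illustrative.

```shell
# Compare pre- and post-migration serial-number lists regardless of the
# order in which the disks were discovered. Sample data only.
before='017FCA49
018FCA49
019FCA49'
after='019FCA49
017FCA49
018FCA49'
if [ "$(printf '%s\n' "$before" | sort)" = "$(printf '%s\n' "$after" | sort)" ]
then
  result=match
else
  result=mismatch
fi
echo "$result"    # prints match
```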
Migrating on a Windows NT or Windows 2000 host system
This section tells you how to migrate from SCSI to fibre-channel adapters on a
Windows NT or Windows 2000 host system. Before you migrate, ensure that the
following prerequisites are complete:
v Your host system has disks or logical volumes that are located on an ESS that is
already in use.
v The disk or logical volumes are attached to the host system with two or more
SCSI interfaces.
v The IBM Subsystem Device Driver is installed and running properly.
v The host system has one or more fibre-channel adapters installed, including
feature code 3022.
v All appropriate software prerequisites, drivers, and program temporary fixes have
been implemented.
Figure 68 shows an example of a setup between the Windows NT or Windows
2000 host and the ESS before you migrate from SCSI to fibre-channel.
Figure 68. SCSI setup between the Windows NT or Windows 2000 host system and the ESS
You can view the volumes using one of three methods:
v Internet browser
v SAN Gateway Explorer
v Disk Administrator
Perform the following steps to view the setup of the volumes that are attached to
SCSI adapters on the host system:
1. From your desktop, double click My Computer.
2. Double click Control panel.
3. Double click SCSI adapters.
Figure 69 on page 165 shows an example of how volumes are attached to SCSI
adapters on the host. In the example, disks were already labeled and drive letters
were previously assigned.
Figure 69. Initial setup of volumes attached to SCSI adapters on the host
See Figure 70 for an example of how this information looks in the Disk
Administrator window.
Figure 70. Disk Administrator panel showing the initial setup
Figure 70 on page 165 shows the volumes (only 5 of the 10 are shown here
because of the size limit of the Disk Administrator window). In this example,
the volumes are labeled with their logical drive letter and a three-character
volume identifier.
Table 26 shows the relationship between the volumes and their mapping.
Table 26. Volume mapping before migration

Volume ID   SCSI port on   SCSI target ID   Disk Administrator     Disk Administrator
            the ESS        and LUN          logical drive          disk number
                                            assignment
251         A              6,0              E                      1
252         A              6,1              F                      2
253         A              6,2              G                      3
254         A              6,3              H                      4
255         A              6,4              I                      5
00D         B              6,0              J                      6
00E         B              6,1              K                      7
00F         B              6,2              L                      8
010         B              6,3              M                      9
011         B              6,4              N                      10
Perform the following steps to map the disks:
1. Shut down any databases or applications that use the disks that you are
migrating.
2. Unassign the volumes from the SCSI host.
a. Go to the ESS Specialist.
b. Click Storage Allocation.
c. Click Open System Storage.
d. Click Modify Volume Assignments.
e. Select the volumes from the list.
f. From the Action menu, select Unassign selected volume(s).
g. Click Perform Configuration Update. Click OK.
3. Assign the volumes to the fibre-channel host.
a. Click Modify Volume Assignments.
b. Select the volumes from the list.
c. From the Action menu, select Assign selected volume(s) to target host.
d. Select the fibre-channel host adapter to which you want to assign the
volumes.
e. Click Perform Configuration Update.
f. Restart the host system for the changes to take effect.
g. Open the SAN Gateway Explorer to view the mapped disks.
Concurrent migration
This section describes the process of migrating disks or volumes using a concurrent
method. Before you migrate, ensure that you meet the following prerequisites:
v Your host system has disks or logical volumes that are located on an ESS that is
already in use.
v The disk or logical volumes are attached to the host system with two or more
SCSI interfaces.
v The IBM Subsystem Device Driver is installed and running properly.
Notes:
1. If you use the IBM Subsystem Device Driver for AIX, you cannot perform a
concurrent migration because it does not support the simultaneous
attachment to LUNs through SCSI and fibre-channel paths.
2. You cannot use the IBM Subsystem Device Driver with IBM NUMA-Q,
Hewlett-Packard, Sun, or Novell NetWare host systems.
v The host system has one or more fibre-channel adapters installed, including
feature code 3022.
v All appropriate software prerequisites, drivers, and program temporary fixes have
been implemented.
Perform the following steps to migrate disks or volumes using a concurrent method.
1. Test the connectivity on both SCSI paths.
2. Remove the disks from one of the SCSI ports on the ESS.
3. Assign the disks to the worldwide port name of one of the fibre-channel host
bus adapters.
4. Test the system to ensure that you have connectivity on the SCSI path and the
fibre-channel path.
5. Remove the disks from the second SCSI port on the ESS.
6. Assign the disks to the worldwide port name of the second fibre-channel host
bus adapter.
7. Test the system to ensure that you have connectivity on both fibre-channel
paths.
Appendix C. Migrating from the IBM SAN Data Gateway to
fibre-channel attachment
This chapter describes how to migrate a Storage Area Network (SAN) Data
Gateway to a fibre-channel attachment. You must use a bridge or the SAN Data
Gateway to connect between the fibre-channel interface and the SCSI, SSA, or
other interfaces.
Overview of the IBM SAN Data Gateway
The SAN Data Gateway is a protocol converter between fibre-channel interfaces
and SCSI interfaces. It supports the following tape drives:
v IBM MP3570
v IBM Magstar 3590
It also supports the following tape libraries:
v IBM Magstar MP3575
v IBM Magstar 3494
v IBM 3502 DLT
The SAN Data Gateway also supports disk subsystems on Intel-based host
systems that run Windows NT and on UNIX-based machines. For the latest list of
operating systems and storage products that IBM supports, visit the following
IBM SAN Data Gateway Web site:
www.ibm.com/storage/SANGateway/
The SAN Data Gateway is a fully scalable product with up to three fibre-channel
and four ultra-SCSI differential interfaces for disk attachment and tape storage
attachment. Each fibre-channel interface supports dual or single shortwave ports
and single longwave ports. The following is a list of details about the interfaces on a
SAN Data Gateway.
v Fibre-channel
– Supports both loop (private and public) and point-to-point topologies
– Supports distances between the node or switch of up to 500 m (50-micron
fiber) for shortwave and up to 10 km for longwave
v SCSI
– Has automatic speed negotiation capability for wide or narrow bus widths and
standard, fast, or ultra speeds
– Supports up to 15 target IDs and up to 32 LUNs per ID (subject to an overall
total of 255 devices)
– Supports cable lengths up to 25 m
Note: Although the SAN Data Gateway has a limitation of 255 LUNs, you must
consider the limitations of the operating system, adapter, and storage
subsystem. Whichever is the lowest dictates the maximum number of LUNs
that you can use. Table 27 shows an example of a configuration.
Table 27. LUN limitations for various components

System component            LUN limit
Windows NT 4.0 with SP3     120 (15 target IDs times 8 LUNs)
Host bus adapter            256
ESS                         960 (15 target IDs times 64 LUNs)
SAN Data Gateway            255
Table 27 on page 169 shows a maximum LUN limit of 120 because that is the
lowest number supported by one of the components.
However, fixes are available to eliminate some of these limitations. For
example, Windows NT Service Pack 4 and later support up to 256 devices.
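The rule that the lowest component limit dictates the usable LUN count can be
computed directly. The following sketch applies it to the values from Table 27:

```shell
# The usable LUN count is the minimum across all components in the path.
# Values from Table 27: OS 120, host bus adapter 256, ESS 960, gateway 255.
limit=$(printf '%s\n' 120 256 960 255 | sort -n | head -n 1)
echo "$limit"    # prints 120
```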
The following shows the details of the Ethernet port and the service port:
Ethernet
10BASE-T port for out-band management (using StorWatch SAN Data
Gateway Specialist)
Service port
9-pin D-shell connector for local service, configuration, and diagnostics
The SAN Data Gateway supports channel zoning and LUN masking between fibre
channel ports and SCSI ports. Channel zoning and masking enable you to specify
which fibre-channel hosts can connect to the LUNs that are defined on SCSI ports.
This is important if you do not want multiple host systems accessing the same
LUNs.
Migrating volumes from the SAN Data Gateway to native fibre-channel
The steps to migrate from a SAN Data Gateway to fibre-channel are similar to the
steps to migrate from native SCSI to fibre-channel.
Perform the following steps on the ESS to migrate volumes from the SAN Data
Gateway to native fibre channel:
1. Use the ESS Specialist to define the new fibre-channel host bus adapter to the
ESS.
2. Assign the volumes to the newly defined host. In practice, this means assigning
the volumes to the worldwide port name of the host adapter.
3. Unassign the volumes from the SCSI host.
This procedure assumes that there is a spare slot in the ESS host bays for the
fibre-channel adapter to coexist with the SCSI adapter. If the SCSI adapter
must be removed before you can install the fibre-channel adapter, you must list
the volume assignments before you remove the SCSI adapter.
Statement of Limited Warranty
Part 1 – General Terms
International Business Machines Corporation
Armonk, New York, 10504
This Statement of Limited Warranty includes Part 1 - General Terms and Part 2 -
Country or region-unique Terms. The terms of Part 2 may replace or modify those
of Part 1. The warranties provided by IBM in this Statement of Limited Warranty
apply only to Machines you purchase for your use, and not for resale, from IBM or
your reseller. The term ″Machine″ means an IBM machine, its features,
conversions, upgrades, elements, or accessories, or any combination of them. The
term ″Machine″ does not include any software programs, whether pre-loaded with
the Machine, installed subsequently or otherwise. Unless IBM specifies otherwise,
the following warranties apply only in the country or region where you acquire the
Machine. Nothing in this Statement of Warranty affects any statutory rights of
consumers that cannot be waived or limited by contract. If you have any questions,
contact IBM or your reseller.
Machine: IBM 2105 (Models E10, E20, F10, and F20) TotalStorage Enterprise
Storage Server (ESS)
Warranty Period: Three Years *
*Contact your place of purchase for warranty service information. Some IBM
Machines are eligible for On-site warranty service depending on the country or
region where service is performed.
The IBM Warranty for Machines
IBM warrants that each Machine 1) is free from defects in materials and
workmanship and 2) conforms to IBM’s Official Published Specifications
(″Specifications″). The warranty period for a Machine is a specified, fixed period
commencing on its Date of Installation. The date on your sales receipt is the Date
of Installation, unless IBM or your reseller informs you otherwise.
During the warranty period IBM or your reseller, if approved by IBM to provide
warranty service, will provide repair and exchange service for the Machine, without
charge, under the type of service designated for the Machine and will manage and
install engineering changes that apply to the Machine.
If a Machine does not function as warranted during the warranty period, and IBM or
your reseller are unable to either 1) make it do so or 2) replace it with one that is at
least functionally equivalent, you may return it to your place of purchase and your
money will be refunded. The replacement may not be new, but will be in good
working order.
Extent of Warranty
The warranty does not cover the repair or exchange of a Machine resulting from
misuse, accident, modification, unsuitable physical or operating environment,
improper maintenance by you, or failure caused by a product for which IBM is not
responsible. The warranty is voided by removal or alteration of Machine or parts
identification labels.
THESE WARRANTIES ARE YOUR EXCLUSIVE WARRANTIES AND REPLACE
ALL OTHER WARRANTIES OR CONDITIONS, EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OR
CONDITIONS OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THESE WARRANTIES GIVE YOU SPECIFIC LEGAL RIGHTS AND
YOU MAY ALSO HAVE OTHER RIGHTS WHICH VARY FROM JURISDICTION TO
JURISDICTION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR
LIMITATION OF EXPRESS OR IMPLIED WARRANTIES, SO THE ABOVE
EXCLUSION OR LIMITATION MAY NOT APPLY TO YOU. IN THAT EVENT, SUCH
WARRANTIES ARE LIMITED IN DURATION TO THE WARRANTY PERIOD. NO
WARRANTIES APPLY AFTER THAT PERIOD.
Items Not Covered by Warranty
IBM does not warrant uninterrupted or error-free operation of a Machine.
Unless specified otherwise, IBM provides non-IBM machines WITHOUT
WARRANTIES OF ANY KIND.
Any technical or other support provided for a Machine under warranty, such as
assistance via telephone with ″how-to″ questions and those regarding Machine
setup and installation, will be provided WITHOUT WARRANTIES OF ANY KIND.
Warranty Service
To obtain warranty service for the Machine, contact your reseller or IBM. In the
United States, call IBM at 1-800-IBM-SERV (426-7378). In Canada, call IBM at
1-800-465-6666. You may be required to present proof of purchase.
IBM or your reseller provides certain types of repair and exchange service, either at
your location or at a service center, to keep Machines in, or restore them to,
conformance with their Specifications. IBM or your reseller will inform you of the
available types of service for a Machine based on its country or region of
installation. IBM may repair the failing Machine or exchange it at its discretion.
When warranty service involves the exchange of a Machine or part, the item IBM or
your reseller replaces becomes its property and the replacement becomes yours.
You represent that all removed items are genuine and unaltered. The replacement
may not be new, but will be in good working order and at least functionally
equivalent to the item replaced. The replacement assumes the warranty service
status of the replaced item.
Any feature, conversion, or upgrade IBM or your reseller services must be installed
on a Machine which is 1) for certain Machines, the designated, serial-numbered
Machine and 2) at an engineering-change level compatible with the feature,
conversion, or upgrade. Many features, conversions, or upgrades involve the
removal of parts and their return to IBM. A part that replaces a removed part will
assume the warranty service status of the removed part.
Before IBM or your reseller exchanges a Machine or part, you agree to remove all
features, parts, options, alterations, and attachments not under warranty service.
You also agree to
1. ensure that the Machine is free of any legal obligations or restrictions that
prevent its exchange;
2. obtain authorization from the owner to have IBM or your reseller service a
Machine that you do not own; and
3. where applicable, before service is provided
a. follow the problem determination, problem analysis, and service request
procedures that IBM or your reseller provides,
b. secure all programs, data, and funds contained in a Machine,
c. provide IBM or your reseller with sufficient, free, and safe access to your
facilities to permit them to fulfill their obligations, and
d. inform IBM or your reseller of changes in a Machine’s location.
IBM is responsible for loss of, or damage to, your Machine while it is 1) in IBM’s
possession or 2) in transit in those cases where IBM is responsible for the
transportation charges.
Neither IBM nor your reseller is responsible for any of your confidential, proprietary
or personal information contained in a Machine which you return to IBM or your
reseller for any reason. You should remove all such information from the Machine
prior to its return.
Production Status
Each IBM Machine is manufactured from new parts, or new and used parts. In
some cases, the Machine may not be new and may have been previously installed.
Regardless of the Machine’s production status, IBM’s appropriate warranty terms
apply.
Limitation of Liability
Circumstances may arise where, because of a default on IBM’s part or other
liability, you are entitled to recover damages from IBM. In each such instance,
regardless of the basis on which you are entitled to claim damages from IBM
(including fundamental breach, negligence, misrepresentation, or other contract or
tort claim), IBM is liable for no more than
1. damages for bodily injury (including death) and damage to real property and
tangible personal property; and
2. the amount of any other actual direct damages, up to the greater of U.S.
$100,000 (or equivalent in local currency) or the charges (if recurring, 12
months’ charges apply) for the Machine that is the subject of the claim.
This limit also applies to IBM’s suppliers and your reseller. It is the maximum for
which IBM, its suppliers, and your reseller are collectively responsible.
UNDER NO CIRCUMSTANCES IS IBM LIABLE FOR ANY OF THE FOLLOWING:
1) THIRD-PARTY CLAIMS AGAINST YOU FOR DAMAGES (OTHER THAN
THOSE UNDER THE FIRST ITEM LISTED ABOVE); 2) LOSS OF, OR DAMAGE
TO, YOUR RECORDS OR DATA; OR 3) SPECIAL, INCIDENTAL, OR INDIRECT
DAMAGES OR FOR ANY ECONOMIC CONSEQUENTIAL DAMAGES
(INCLUDING LOST PROFITS OR SAVINGS), EVEN IF IBM, ITS SUPPLIERS OR
YOUR RESELLER IS INFORMED OF THEIR POSSIBILITY. SOME
JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF
INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION
OR EXCLUSION MAY NOT APPLY TO YOU.
Statement of Limited Warranty
Part 2 - Country or region-unique Terms
ASIA PACIFIC
AUSTRALIA: The IBM Warranty for Machines: The following paragraph is added
to this Section: The warranties specified in this Section are in addition to any rights
you may have under the Trade Practices Act 1974 or other legislation and are only
limited to the extent permitted by the applicable legislation.
Extent of Warranty: The following replaces the first and second sentences of this
Section: The warranty does not cover the repair or exchange of a Machine resulting
from misuse, accident, modification, unsuitable physical or operating environment,
operation in other than the Specified Operating Environment, improper maintenance
by you, or failure caused by a product for which IBM is not responsible.
Limitation of Liability: The following is added to this Section: Where IBM is in
breach of a condition or warranty implied by the Trade Practices Act 1974, IBM’s
liability is limited to the repair or replacement of the goods or the supply of
equivalent goods. Where that condition or warranty relates to right to sell, quiet
possession or clear title, or the goods are of a kind ordinarily acquired for personal,
domestic or household use or consumption, then none of the limitations in this
paragraph apply.
PEOPLE’S REPUBLIC OF CHINA: Governing Law: The following is added to this
Statement: The laws of the State of New York govern this Statement.
INDIA: Limitation of Liability: The following replaces items 1 and 2 of this Section:
1. liability for bodily injury (including death) or damage to real property and tangible
personal property will be limited to that caused by IBM’s negligence; 2. as to any
other actual damage arising in any situation involving nonperformance by IBM
pursuant to, or in any way related to the subject of this Statement of Limited
Warranty, IBM’s liability will be limited to the charge paid by you for the individual
Machine that is the subject of the claim.
NEW ZEALAND: The IBM Warranty for Machines: The following paragraph is
added to this Section: The warranties specified in this Section are in addition to any
rights you may have under the Consumer Guarantees Act 1993 or other legislation
which cannot be excluded or limited. The Consumer Guarantees Act 1993 will not
apply in respect of any goods which IBM provides, if you require the goods for the
purposes of a business as defined in that Act.
Limitation of Liability: The following is added to this Section: Where Machines are
not acquired for the purposes of a business as defined in the Consumer
Guarantees Act 1993, the limitations in this Section are subject to the limitations in
that Act.
EUROPE, MIDDLE EAST, AFRICA (EMEA)
The following terms apply to all EMEA countries or regions.
The terms of this Statement of Limited Warranty apply to Machines purchased from
an IBM reseller. If you purchased this Machine from IBM, the terms and conditions
of the applicable IBM agreement prevail over this warranty statement.
Warranty Service
If you purchased an IBM Machine in Austria, Belgium, Denmark, Estonia, Finland,
France, Germany, Greece, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg,
Netherlands, Norway, Portugal, Spain, Sweden, Switzerland or United Kingdom, you
may obtain warranty service for that Machine in any of those countries or regions
from either (1) an IBM reseller approved to perform warranty service or (2) from
IBM.
If you purchased an IBM Personal Computer Machine in Albania, Armenia, Belarus,
Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Georgia, Hungary,
Kazakhstan, Kirghizia, Federal Republic of Yugoslavia, Former Yugoslav Republic of
Macedonia (FYROM), Moldova, Poland, Romania, Russia, Slovak Republic,
Slovenia, or Ukraine, you may obtain warranty service for that Machine in any of
those countries or regions from either (1) an IBM reseller approved to perform
warranty service or (2) from IBM.
The applicable laws, Country or region-unique terms and competent court for this
Statement are those of the country or region in which the warranty service is being
provided. However, the laws of Austria govern this Statement if the warranty service
is provided in Albania, Armenia, Belarus, Bosnia and Herzegovina, Bulgaria,
Croatia, Czech Republic, Federal Republic of Yugoslavia, Georgia, Hungary,
Kazakhstan, Kirghizia, Former Yugoslav Republic of Macedonia (FYROM), Moldova,
Poland, Romania, Russia, Slovak Republic, Slovenia, and Ukraine.
The following terms apply to the country or region specified:
EGYPT: Limitation of Liability: The following replaces item 2 in this Section: 2. as
to any other actual direct damages, IBM’s liability will be limited to the total amount
you paid for the Machine that is the subject of the claim.
Applicability of suppliers and resellers (unchanged).
FRANCE: Limitation of Liability: The following replaces the second sentence of
the first paragraph of this Section:
In such instances, regardless of the basis on which you are entitled to claim
damages from IBM, IBM is liable for no more than: (items 1 and 2 unchanged).
GERMANY: The IBM Warranty for Machines: The following replaces the first
sentence of the first paragraph of this Section:
The warranty for an IBM Machine covers the functionality of the Machine for its
normal use and the Machine’s conformity to its Specifications.
The following paragraphs are added to this Section:
The minimum warranty period for Machines is six months.
In case IBM or your reseller are unable to repair an IBM Machine, you can
alternatively ask for a partial refund as far as justified by the reduced value of the
unrepaired Machine or ask for a cancellation of the respective agreement for such
Machine and get your money refunded.
Extent of Warranty: The second paragraph does not apply.
Warranty Service: The following is added to this Section: During the warranty
period, transportation for delivery of the failing Machine to IBM will be at IBM’s
expense.
Production Status: The following paragraph replaces this Section: Each Machine
is newly manufactured. It may incorporate in addition to new parts, reused parts as
well.
Limitation of Liability: The following is added to this Section:
The limitations and exclusions specified in the Statement of Limited Warranty will
not apply to damages caused by IBM with fraud or gross negligence and for
express warranty.
In item 2, replace "U.S. $100,000" with "1,000,000 DM."
The following sentence is added to the end of the first paragraph of item 2:
IBM’s liability under this item is limited to the violation of essential contractual terms
in cases of ordinary negligence.
IRELAND: Extent of Warranty: The following is added to this Section:
Except as expressly provided in these terms and conditions, all statutory conditions,
including all warranties implied, but without prejudice to the generality of the
foregoing all warranties implied by the Sale of Goods Act 1893 or the Sale of
Goods and Supply of Services Act 1980 are hereby excluded.
Limitation of Liability: The following replaces items one and two of the first
paragraph of this Section:
1. death or personal injury or physical damage to your real property solely caused
by IBM’s negligence; and 2. the amount of any other actual direct damages, up to
the greater of Irish Pounds 75,000 or 125 percent of the charges (if recurring, the
12 months’ charges apply) for the Machine that is the subject of the claim or which
otherwise gives rise to the claim.
Applicability of suppliers and resellers (unchanged).
The following paragraph is added at the end of this Section:
IBM’s entire liability and your sole remedy, whether in contract or in tort, in respect
of any default shall be limited to damages.
ITALY: Limitation of Liability: The following replaces the second sentence in the
first paragraph:
In each such instance unless otherwise provided by mandatory law, IBM is liable for
no more than: (item 1 unchanged) 2) as to any other actual damage arising in all
situations involving nonperformance by IBM pursuant to, or in any way related to
the subject matter of this Statement of Warranty, IBM’s liability, will be limited to the
total amount you paid for the Machine that is the subject of the claim.
Applicability of suppliers and resellers (unchanged).
The following replaces the second paragraph of this Section:
Unless otherwise provided by mandatory law, IBM and your reseller are not liable
for any of the following: (items 1 and 2 unchanged) 3) indirect damages, even if
IBM or your reseller is informed of their possibility.
SOUTH AFRICA, NAMIBIA, BOTSWANA, LESOTHO AND SWAZILAND:
Limitation of Liability: The following is added to this Section:
IBM’s entire liability to you for actual damages arising in all situations involving
nonperformance by IBM in respect of the subject matter of this Statement of
Warranty will be limited to the charge paid by you for the individual Machine that is
the subject of your claim from IBM.
TURKIYE: Production Status: The following replaces this Section:
IBM fulfills customer orders for IBM Machines as newly manufactured in accordance
with IBM’s production standards.
UNITED KINGDOM: Limitation of Liability: The following replaces items 1 and 2
of the first paragraph of this Section:
1. death or personal injury or physical damage to your real property solely caused
by IBM’s negligence;
2. the amount of any other actual direct damages or loss, up to the greater of
Pounds Sterling 150,000 or 125 percent of the charges (if recurring, the 12 months’
charges apply) for the Machine that is the subject of the claim or which otherwise
gives rise to the claim;
The following item is added to this paragraph:
3. breach of IBM’s obligations implied by Section 12 of the Sale of Goods Act 1979
or Section 2 of the Supply of Goods and Services Act 1982.
Applicability of suppliers and resellers (unchanged).
The following is added to the end of this Section:
IBM’s entire liability and your sole remedy, whether in contract or in tort, in respect
of any default will be limited to damages.
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may be
used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR
A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publications. IBM may make improvements
and/or changes in the product(s) and/or program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those
products, their published announcements or other publicly available sources. IBM
has not tested those products and cannot confirm the accuracy of performance,
compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those
products.
© Copyright IBM Corp. 1999, 2001
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
Trademarks
The following terms are trademarks of the International Business Machines
Corporation in the United States, other countries, or both:
AIX
AS/400
DFSMS/MVS
Eserver
Enterprise Storage Server
ES/9000
ESCON
FICON
FlashCopy
HACMP/6000
IBM
eServer
MVS/ESA
Netfinity
NetVista
NUMA-Q
Operating System/400
OS/390
OS/400
RS/6000
S/390
Seascape
SNAP/SHOT
SP
StorWatch
System/360
System/370
System/390
TotalStorage
Versatile Storage Server
VM/ESA
VSE/ESA
Microsoft and Windows NT are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in
the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product, and service names may be trademarks or service marks
of others.
Electronic emission notices
This section contains the electronic emission notices or statements for the United
States and other countries.
Federal Communications Commission (FCC) statement
This equipment has been tested and found to comply with the limits for a Class A
digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide
reasonable protection against harmful interference when the equipment is operated
in a commercial environment. This equipment generates, uses, and can radiate
radio frequency energy and, if not installed and used in accordance with the
instruction manual, might cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful
interference, in which case the user will be required to correct the interference at
his own expense.
Properly shielded and grounded cables and connectors must be used to meet FCC
emission limits. IBM is not responsible for any radio or television interference
caused by using other than recommended cables and connectors, or by
unauthorized changes or modifications to this equipment. Unauthorized changes or
modifications could void the user's authority to operate the equipment.
This device complies with Part 15 of the FCC Rules. Operation is subject to the
following two conditions: (1) this device might not cause harmful interference, and
(2) this device must accept any interference received, including interference that
might cause undesired operation.
Industry Canada compliance statement
This Class A digital apparatus complies with Canadian ICES-003.
Cet appareil numérique de la classe A est conforme à la norme NMB-003 du
Canada.
European community compliance statement
This product is in conformity with the protection requirements of EC Council
Directive 89/336/EEC on the approximation of the laws of the Member States
relating to electromagnetic compatibility. IBM cannot accept responsibility for any
failure to satisfy the protection requirements resulting from a nonrecommended
modification of the product, including the fitting of non-IBM option cards.
Germany only
Zulassungsbescheinigung laut Gesetz ueber die elektromagnetische
Vertraeglichkeit von Geraeten (EMVG) vom 30. August 1995.
Dieses Geraet ist berechtigt, in Uebereinstimmung mit dem deutschen EMVG das
EG-Konformitaetszeichen - CE - zu fuehren.
Der Aussteller der Konformitaetserklaerung ist die IBM Deutschland.
Informationen in Hinsicht EMVG Paragraph 3 Abs. (2) 2:
Das Geraet erfuellt die Schutzanforderungen nach EN 50082-1 und
EN 55022 Klasse A.
EN 55022 Klasse A Geraete beduerfen folgender Hinweise:
Nach dem EMVG:
"Geraete duerfen an Orten, fuer die sie nicht ausreichend entstoert
sind, nur mit besonderer Genehmigung des Bundesministeriums
fuer Post und Telekommunikation oder des Bundesamtes fuer Post und
Telekommunikation
betrieben werden. Die Genehmigung wird erteilt, wenn keine
elektromagnetischen Stoerungen zu erwarten sind." (Auszug aus dem
EMVG, Paragraph 3, Abs.4)
Dieses Genehmigungsverfahren ist nach Paragraph 9 EMVG in Verbindung
mit der entsprechenden Kostenverordnung (Amtsblatt 14/93)
kostenpflichtig.
Nach der EN 55022:
"Dies ist eine Einrichtung der Klasse A. Diese Einrichtung kann im
Wohnbereich Funkstoerungen verursachen; in diesem Fall kann vom
Betreiber verlangt werden, angemessene Massnahmen durchzufuehren
und dafuer aufzukommen."
Anmerkung:
Um die Einhaltung des EMVG sicherzustellen, sind die Geraete wie in den
Handbuechern angegeben zu installieren und zu betreiben.
Japanese Voluntary Control Council for Interference (VCCI) class A
statement
Korean government Ministry of Communication (MOC) statement
Please note that this device has been approved for business purpose with regard to
electromagnetic interference. If you find this is not suitable for your use, you may
exchange it for a nonbusiness purpose one.
Taiwan class A compliance statement
IBM agreement for licensed internal code
Read Before Using
IMPORTANT
YOU ACCEPT THE TERMS OF THIS IBM LICENSE AGREEMENT FOR
MACHINE CODE BY YOUR USE OF THE HARDWARE PRODUCT OR
MACHINE CODE. PLEASE READ THE AGREEMENT CONTAINED IN THIS
BOOK BEFORE USING THE HARDWARE PRODUCT. SEE “IBM agreement
for licensed internal code”.
You accept the terms of this Agreement (Form Z125-4144) by your initial use of a machine that
contains IBM Licensed Internal Code (called “Code”). These terms apply to Code
used by certain machines IBM or your reseller specifies (called “Specific
Machines”). International Business Machines Corporation or one of its subsidiaries
(“IBM”) owns copyrights in Code or has the right to license Code. IBM or a third
party owns all copies of Code, including all copies made from them.
If you are the rightful possessor of a Specific Machine, IBM grants you a license to
use the Code (or any replacement IBM provides) on, or in conjunction with, only the
Specific Machine for which the Code is provided. IBM licenses the Code to only one
rightful possessor at a time.
Under each license, IBM authorizes you to do only the following:
1. execute the Code to enable the Specific Machine to function according to its
Official Published Specifications (called “Specifications”);
2. make a backup or archival copy of the Code (unless IBM makes one available
for your use), provided you reproduce the copyright notice and any other legend
of ownership on the copy. You may use the copy only to replace the original,
when necessary; and
3. execute and display the Code as necessary to maintain the Specific Machine.
You agree to acquire any replacement for, or additional copy of, Code directly from
IBM in accordance with IBM’s standard policies and practices. You also agree to
use that Code under these terms.
You may transfer possession of the Code to another party only with the transfer of
the Specific Machine. If you do so, you must 1) destroy all your copies of the Code
that were not provided by IBM, 2) either give the other party all your IBM-provided
copies of the Code or destroy them, and 3) notify the other party of these terms.
IBM licenses the other party when it accepts these terms. These terms apply to all
Code you acquire from any source.
Your license terminates when you no longer rightfully possess the Specific Machine.
Actions you must not take
You agree to use the Code only as authorized above. You must not do, for
example, any of the following:
1. Otherwise copy, display, transfer, adapt, modify, or distribute the Code
(electronically or otherwise), except as IBM may authorize in the Specific
Machine’s Specifications or in writing to you;
2. Reverse assemble, reverse compile, or otherwise translate the Code unless
expressly permitted by applicable law without the possibility of contractual
waiver;
3. Sublicense or assign the license for the Code; or
4. Lease the Code or any copy of it.
Glossary
This glossary includes terms for the IBM
TotalStorage Enterprise Storage Server (ESS) and
other Seascape solution products.
This glossary includes selected terms and
definitions from:
v The American National Standard Dictionary for
Information Systems, ANSI X3.172–1990,
copyright 1990 by the American National
Standards Institute (ANSI), 11 West 42nd
Street, New York, New York 10036. Definitions
derived from this book have the symbol (A)
after the definition.
v The Information Technology Vocabulary
developed by Subcommittee 1, Joint Technical
Committee 1, of the International Organization
for Standardization and the International
Electrotechnical Commission (ISO/IEC
JTC1/SC1). Definitions derived from this book
have the symbol (I) after the definition.
Definitions taken from draft international
standards, committee drafts, and working
papers being developed by ISO/IEC JTC1/SC1
have the symbol (T) after the definition,
indicating that final agreement has not been
reached among the participating National
Bodies of SC1.
This glossary uses the following cross-reference
form:
See
This refers the reader to one of three
kinds of related information:
v A related term
v A term that is the expanded form of an
abbreviation or acronym
v A synonym or more preferred term
|
|
|
|
|
|
|
|
alert. A message or log that a storage facility
generates as the result of error event collection and
analysis. An alert indicates that a service action is
required.
allegiance. In Enterprise Systems Architecture/390, a
relationship that is created between a device and one or
more channel paths during the processing of certain
conditions. See implicit allegiance, contingent
allegiance, and reserved allegiance.
allocated storage. On an ESS, the space allocated to
volumes, but not yet assigned. See assigned storage.
American National Standards Institute (ANSI). An
organization of producers, consumers, and general
interest groups that establishes the procedures by which
accredited organizations create and maintain voluntary
industry standards in the United States. (A)
|
|
|
|
|
ANSI. See American National Standards Institute.
APAR. See authorized program analysis report.
arbitrated loop. For fibre-channel connections, a
topology that enables the interconnection of a set of
nodes. See point-to-point connection and switched
fabric.
access. (1) To obtain the use of a computer resource.
(2) In computer security, a specific type of interaction
between a subject and an object that results in flow of
information from one to the other.
access-any mode. One of the two access modes that
can be set for the ESS during initial configuration. It
enables all fibre-channel-attached host systems with no
defined access profile to access all logical volumes on
the ESS. With a profile defined in ESS Specialist for a
particular host, that host has access only to volumes
that are assigned to the WWPN for that host. See
pseudo-host and worldwide port name.
© Copyright IBM Corp. 1999, 2001
Anonymous. The label in ESS Specialist on an icon
representing all connections using fibre-channel
adapters between the ESS and hosts that are not
completely defined to the ESS. See anonymous host,
pseudo-host, and access-any mode.
| anonymous host. Synonym for “pseudo-host” (in
| contrast to the Anonymous label that appears on some
| pseudo-host icons. See Anonymous and pseudo-host.
A
|
|
|
|
|
|
|
|
active Copy Services server. The Copy Services
server that manages the Copy Services domain. Either
the primary or the backup Copy Services server can be
the active Copy Services server. The backup Copy
Services server is available to become the active Copy
Services server if the primary Copy Services server
fails. See backup Copy Services server and primary
Copy Services server.
array. An ordered collection, or group, of physical
devices (disk drive modules) that are used to define
logical volumes or devices. More specifically, regarding
the ESS, an array is a group of disks designated by the
user to be managed by the RAID-5 technique. See
redundant array of inexpensive disks.
|
|
|
|
ASCII. American Standard Code for Information
Interchange. An ANSI standard (X3.4–1977) for
assignment of 7-bit numeric codes (plus 1 bit for parity)
to represent alphabetic and numeric characters and
185
| common symbols. Some organizations, including IBM,
| have used the parity bit to expand the basic code set.
assigned storage. On an ESS, the space allocated to
a volume and assigned to a port.
instructions or data. The cache memory is typically
smaller and faster than the primary memory or storage
medium. In addition to residing in cache memory, the
same data also resides on the storage devices in the
storage facility.
authorized program analysis report (APAR). A
report of a problem caused by a suspected defect in a
current, unaltered release of a program.
cache miss. An event that occurs when a read
operation is sent to the cluster, but the data is not found
in cache. The opposite of cache hit.
availability. The degree to which a system or resource
is capable of performing its normal function. See data
availability.
B
|
|
|
|
|
|
|
|
|
|
backup Copy Services server. One of two Copy
Services servers in a Copy Services domain. The other
Copy Services server is the primary Copy Services
server. The backup Copy Services server is available to
become the active Copy Services server if the primary
Copy Services server fails. A Copy Services server is
software that runs in one of the two clusters of an ESS,
and manages data-copy operations for that Copy
Services server group. See primary Copy Services
server and active Copy Services server.
bay. Physical space on an ESS used for installing
SCSI, ESCON, and fibre-channel host adapter cards.
The ESS has four bays, two in each cluster. See
service boundary.
|
|
|
|
bit. (1) A binary digit. (2) The storage medium required
to store a single binary digit. (3) Either of the digits 0 or
1 when used in the binary numeration system. (T) See
byte.
|
|
|
|
block. A group of consecutive bytes used as the basic
storage unit in fixed-block architecture (FBA). All blocks
on the storage device are the same size (fixed size).
See fixed-block architecture and data record.
byte. (1) A group of eight adjacent binary digits that
represent one EBCDIC character. (2) The storage
medium required to store eight bits. See bit.
C
cache. A buffer storage that contains frequently
accessed instructions and data, thereby reducing
access time.
cache fast write. A form of the fast-write operation in
which the subsystem writes the data directly to cache
where it is available for later destaging.
cache hit. An event that occurs when a read operation
is sent to the cluster, and the requested data is found in
cache. The opposite of cache miss.
cache memory. Memory, typically volatile memory,
that a subsystem uses to improve access times to
186
ESS Host Systems Attachment Guide
|
|
|
|
|
|
|
|
call home. A communication link established between
the ESS and a service provider. The ESS can use this
link to place a call to IBM or to another service provider
when it requires service. With access to the machine,
service personnel can perform service tasks, such as
viewing error logs and problem logs or initiating trace
and dump retrievals. See heartbeat and remote
technical assistance information network.
cascading. (1) Connecting network controllers to each
other in a succession of levels, to concentrate many
more lines than a single level permits. (2) In
high-availability cluster multiprocessing (HACMP),
cascading pertains to a cluster configuration in which
the cluster node with the highest priority for a particular
resource acquires the resource if the primary node fails.
The cluster node relinquishes the resource to the
primary node upon reintegration of the primary node
into the cluster.
| catcher. A server that service personnel use to collect
| and retain status data that an ESS sends to it.
CCR. See channel-command retry.
CCW. See channel command word.
CD-ROM. See compact disc, read-only memory.
CEC. See computer-electronic complex.
channel. In Enterprise Systems Architecture/390, the
part of a channel subsystem that manages a single I/O
interface between a channel subsystem and a set of
control units.
channel command retry (CCR). In Enterprise
Systems Architecture/390, the protocol used between a
channel and a control unit that enables the control unit
to request that the channel reissue the current
command.
channel command word (CCW). In Enterprise
Systems Architecture/390, a data structure that specifies
an I/O operation to the channel subsystem.
channel path. In Enterprise Systems Architecture/390,
the interconnection between a channel and its
associated control units.
concurrent maintenance. Service that is performed
on a unit while it is operational.
channel subsystem. In Enterprise Systems
Architecture/390, the part of a host computer that
manages I/O communication between the program and
any attached control units.
concurrent media maintenance. Service performed
on a disk drive module (DDM) without losing access to
the data.
channel-subsystem image. In Enterprise Systems
Architecture/390, the logical functions that a system
requires to perform the function of a channel
subsystem. With ESCON multiple image facility (EMIF),
one channel subsystem image exists in the channel
subsystem for each logical partition (LPAR). Each image
appears to be an independent channel subsystem
program, but all images share a common set of
hardware facilities.
CKD. See count key data.
CLI. See command-line interface.
configure. To define the logical and physical
configuration of the input/output (I/O) subsystem through
the user interface provided for this function on the
storage facility.
consistent copy. A copy of a data entity (a logical
volume, for example) that contains the contents of the
entire data entity at a single instant in time.
console. A user interface to a server, such as one provided by a personal computer. See IBM TotalStorage ESS Master Console.
cluster. (1) A partition in the ESS capable of
performing all ESS functions. With two clusters in the
ESS, any operational cluster can take over the
processing of a failing cluster. (2) On an AIX platform, a
group of nodes within a complex.
contingent allegiance. In Enterprise Systems
Architecture/390, a relationship that is created in a
control unit between a device and a channel when
unit-check status is accepted by the channel. The
allegiance causes the control unit to guarantee access;
the control unit does not present the busy status to the
device. This enables the channel to retrieve sense data
that is associated with the unit-check status on the
channel path associated with the allegiance.
cluster processor complex (CPC). The unit within a
cluster that provides the management function for the
storage server. It consists of cluster processors, cluster
memory, and related logic.
control unit (CU). (1) A device that coordinates and
controls the operation of one or more input/output
devices, and synchronizes the operation of such
devices with the operation of the system as a whole. (2)
In Enterprise Systems Architecture/390, a storage
server with ESCON, FICON, or OEMI interfaces. The
control unit adapts a native device interface to an I/O
interface supported by an ESA/390 host system. On an
ESS, the control unit would be the parts of the storage
server that support the attachment of emulated CKD
devices over ESCON, FICON, or OEMI interfaces. See
cluster.
command-line interface (CLI). (1) An interface
provided by an operating system that defines a set of
commands and enables a user (or a script-like
language) to issue these commands by typing text in
response to the command prompt (for example, DOS
commands, UNIX shell commands). (2) An optional ESS
software that enables a user to issue commands to and
retrieve information from the Copy Services server.
compact disc, read-only memory (CD-ROM).
High-capacity read-only memory in the form of an
optically read compact disc.
compression. (1) The process of eliminating gaps,
empty fields, redundancies, and unnecessary data to
shorten the length of records or blocks. (2) Any
encoding that reduces the number of bits used to
represent a given message or record.
control-unit image. In Enterprise Systems
Architecture/390, a logical subsystem that is accessed
through an ESCON or FICON I/O interface. One or
more control-unit images exist in each control unit. Each
image appears as an independent control unit, but all
control-unit images share a common set of hardware
facilities. The ESS can emulate 3990-3, TPF, 3990-6, or
2105 control units.
computer-electronic complex (CEC). The set of
hardware facilities associated with a host computer.
Concurrent Copy. A facility on a storage server that
enables a program to make a backup of a data set
while the logical volume remains available for
subsequent processing. The data in the backup copy is
frozen at the point in time that the server responds to
the request.
concurrent installation of licensed internal code.
Process of installing licensed internal code on an ESS
while applications continue to run.
control-unit initiated reconfiguration (CUIR). A
software mechanism used by the ESS to request that
an operating system verify that one or more subsystem
resources can be taken off-line for service. The ESS
can use this process to automatically vary channel
paths offline and online to facilitate bay service or
concurrent code installation. Depending on the
operating system, support for this process may be
model-dependent, may depend on the IBM Subsystem
Device Driver, or may not exist.
Glossary
187
Coordinated Universal Time (UTC). The international standard of time that is kept by atomic clocks around the world.
cylinder. A unit of storage on a CKD device. A cylinder
has a fixed number of tracks.
D
Copy Services client. Software that runs on each ESS cluster in the Copy Services server group and that performs the following functions:
• Communicates configuration, status, and connectivity information to the Copy Services server.
• Performs data-copy functions on behalf of the Copy Services server.
Copy Services server group. A collection of user-designated ESS clusters participating in Copy Services functions managed by a designated active Copy Services server. A Copy Services server group is also called a Copy Services domain.
DA. See device adapter and SSA adapter.
daisy chain. See serial connection.
DASD. See direct access storage device.
DASD fast write (DFW). Caching of active write data by a storage server by journaling the data in nonvolatile storage, avoiding exposure to data loss.
data availability. The degree to which data is available when needed, typically measured as a percentage of time that the system would be capable of responding to any data request (for example, 99.999% available).
count field. The first field of a count key data (CKD)
record. This eight-byte field contains a four-byte track
address (CCHH). It defines the cylinder and head that
are associated with the track, and a one-byte record
number (R) that identifies the record on the track. It
defines a one-byte key length that specifies the length
of the record’s key field (0 means no key field). It
defines a two-byte data length that specifies the length
of the record’s data field (0 means no data field). Only
the end-of-file record has a data length of zero.
count key data (CKD). In Enterprise Systems
Architecture/390, a data-record format employing
self-defining record formats in which each record is
represented by up to three fields: a count area identifying the record and specifying its format; an optional key area that can be used to identify the data area contents; and an optional data area that typically
would contain the user data for the record. For CKD
records on the ESS, the logical volume size is defined
in terms of the device emulation mode (3390 or 3380
track format). The count field is always 8 bytes long and
contains the lengths of the key and data fields, the key
field has a length of 0 to 255 bytes, and the data field
has a length of 0 to 65 535 or the maximum that will fit
on the track. Typically, customer data appears in the
data field. The use of the key field is dependent on the
software managing the storage. See data record.
CPC. See cluster processor complex.
CRC. See cyclic redundancy check.
Data Facility Storage Management Subsystem. An
operating environment that helps automate and
centralize the management of storage. To manage
storage, DFSMS provides the storage administrator with
control over data class, storage class, management
class, storage group, and automatic class selection
routine definitions.
data field. The optional third field of a count key data
(CKD) record. The count field specifies the length of the
data field. The data field contains data that the program
writes.
data record. The basic unit of S/390 and zSeries
storage on an ESS, also known as a count-key-data
(CKD) record. Data records are stored on a track. The
records are sequentially numbered starting with 0. The
first record, R0, is typically called the track descriptor
record and contains data normally used by the
operating system to manage the track. See
count-key-data and fixed-block architecture.
customer console. See console and IBM TotalStorage ESS Master Console.
data sharing. The ability of homogeneous or divergent
host systems to concurrently utilize data that they store
on one or more storage devices. The storage facility
enables configured storage to be accessible to any, or
all, attached host systems. To use this capability, the
host program must be designed to support data that it is
sharing.
CUT. See Coordinated Universal Time.
DDM. See disk drive module.
cyclic redundancy check (CRC). A redundancy
check in which the check key is generated by a cyclic
algorithm. (T)
DDM group. See disk drive module group.
CU. See control unit.
CUIR. See control-unit initiated reconfiguration.
data compression. A technique or algorithm used to
encode data such that the encoded result can be stored
in less space than the original data. The original data
can be recovered from the encoded result through a
reverse technique or reverse algorithm. See
compression.
dedicated storage. Storage within a storage facility
that is configured such that a single host system has
exclusive access to the storage.
demote. To remove a logical data unit from cache
memory. A subsystem demotes a data unit in order to
make room for other logical data units in the cache. It
might also demote a data unit because the logical data
unit is not valid. A subsystem must destage logical data
units with active write units before they can be demoted.
DNS. See domain name system.
domain. (1) That part of a computer network in which
the data processing resources are under common
control. (2) In TCP/IP, the naming system used in
hierarchical networks. (3) A Copy Services server group,
in other words, the set of clusters designated by the
user to be managed by a particular Copy Services
server.
destaging. (1) Movement of data from an online or higher priority device to an offline or lower priority device. (2) In the ESS, movement of data from cache to disk: the ESS stages incoming data into cache and then destages it to disk.
device. In Enterprise Systems Architecture/390, a disk
drive.
device adapter (DA). A physical component of the
ESS that provides communication between the clusters
and the storage devices. The ESS has eight device
adapters that it deploys in pairs, one from each cluster.
DA pairing enables the ESS to access any disk drive
from either of two paths, providing fault tolerance and
enhanced availability.
device number. In Enterprise Systems
Architecture/390, a four-hexadecimal-character identifier,
for example 13A0, that the systems administrator
associates with a device to facilitate communication
between the program and the host operator. The device
number is associated with a subchannel.
device sparing. A subsystem function that
automatically copies data from a failing DDM to a spare
DDM. The subsystem maintains data access during the
process.
direct access storage device (DASD). (1) A mass
storage medium on which a computer stores data. (2) A
disk device.
disk drive. Standard term for a disk-based nonvolatile
storage medium. The ESS uses hard disk drives as the
primary nonvolatile storage media to store host data.
disk drive module (DDM). A field replaceable unit that consists of a single disk drive and its associated packaging.
disk drive module group. In the ESS, a group of
eight disk drive modules (DDMs) contained in an 8-pack
and installed as a unit.
domain name system (DNS). In TCP/IP, the server
program that supplies name-to-address translation by
mapping domain names to internet addresses. The
address of a DNS server is the internet address of the
server that hosts the DNS software for the network.
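The name-to-address translation that a DNS server supplies can be exercised from any attached host through the standard resolver; this is a generic sketch, and the helper name is mine.

```python
import socket

def resolve(name):
    """Ask the host's configured resolver to translate a domain name
    to an IPv4 address; returns None if the name cannot be resolved."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# "localhost" resolves without contacting a remote DNS server,
# typically to 127.0.0.1.
print(resolve("localhost"))
```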
drawer. A unit that contains multiple DDMs and
provides power, cooling, and related interconnection
logic to make the DDMs accessible to attached host
systems.
drive. (1) A peripheral device, especially one that has
addressed storage media. See disk drive module. (2)
The mechanism used to seek, read, and write
information on a storage medium.
device address. In Enterprise Systems
Architecture/390, the field of an ESCON or FICON
device-level frame that selects a specific device on a
control-unit image.
device interface card. A physical subunit of a storage
cluster that provides the communication with the
attached DDMs.
disk group. In the ESS, a collection of seven or eight
disk drives in the same SSA loop and set up by the
ESS to be available to be assigned as a RAID-5 rank.
You can format a disk group as CKD or FB, and as
RAID or non-RAID, or leave it unassigned.
duplex. (1) Regarding ESS Copy Services, the state
of a volume pair after PPRC has completed the copy
operation and the volume pair is synchronized. (2) In
general, pertaining to a communication mode in which
data can be sent and received at the same time.
dynamic sparing. The ability of a storage server to
move data from a failing disk drive module (DDM) to a
spare DDM while maintaining storage functions.
E
E10. The forerunner of the F10 model of the ESS. See F10.
E20. The forerunner of the F20 model of the ESS. See F20.
EBCDIC. See extended binary-coded decimal
interchange code.
EC. See engineering change.
ECKD. See extended count key data.
electrostatic discharge (ESD). An undesirable
discharge of static electricity that can damage
equipment and degrade electrical circuitry.
emergency power off (EPO). A means of turning off
power during an emergency, usually a switch.
EMIF. See ESCON multiple image facility.
enclosure. A unit that houses the components of a
storage subsystem, such as a control unit, disk drives,
and power source.
end of file. A coded character recorded on a data
medium to indicate the end of the medium. On a CKD
direct access storage device, the subsystem indicates
the end of a file by including a record with a data length
of zero.
engineering change (EC). An update to a machine,
part, or program.
ESD. See electrostatic discharge.
eserver. See IBM Eserver.
Enterprise Systems Connection (ESCON). (1) An
ESA/390 and zSeries computer peripheral interface.
The I/O interface uses ESA/390 logical protocols over a
serial interface that configures attached units to a
communication fabric. (2) A set of IBM products and
services that provide a dynamically connected
environment within an enterprise.
EPO. See emergency power off.
ERP. See error recovery procedure.
error recovery procedure (ERP). Procedures
designed to help isolate and, where possible, to recover
from errors in equipment. The procedures are often
used in conjunction with programs that record
information on machine malfunctions.
ESS Specialist. See IBM TotalStorage Enterprise
Storage Server Specialist.
ESS Copy Services. See IBM TotalStorage Enterprise
Storage Server Copy Services.
ESS Master Console. See IBM TotalStorage ESS
Master Console.
ESSNet. See IBM TotalStorage Enterprise Storage
Server Network.
Expert. See IBM StorWatch Enterprise Storage Server
Expert.
Extended Remote Copy (XRC). A function of a
storage server that assists a control program to
maintain a consistent copy of a logical volume on
another storage facility. All modifications of the primary
logical volume by any attached host are presented in
order to a single host. The host then makes these
modifications on the secondary logical volume.
ESCD. See ESCON director.
ESCON. See Enterprise Systems Connection.
ESCON director (ESCD). An I/O interface switch that
provides for the interconnection of multiple ESCON
interfaces in a distributed-star topology.
ESCON host systems. S/390 or zSeries hosts that
attach to the ESS with an ESCON adapter. Such host
systems run the MVS, VM, VSE, or TPF operating
systems.
ESCON multiple image facility (EMIF). In Enterprise Systems Architecture/390, a function that enables LPARs to share an ESCON channel path by providing each LPAR with its own channel-subsystem image.
extended binary-coded decimal interchange code
(EBCDIC). A coding scheme developed by IBM used
to represent various alphabetic, numeric, and special
symbols with a coded character set of 256 eight-bit
codes.
extended count key data (ECKD). An extension of
the CKD architecture.
ESA/390. See Enterprise Systems Architecture/390.
ESCON channel. An S/390 or zSeries channel that
supports ESCON protocols.
ESS. See IBM TotalStorage Enterprise Storage Server.
ESS Expert. See IBM StorWatch Enterprise Storage
Server Expert.
| Enterprise Storage Server. See IBM TotalStorage
| Enterprise Storage Server.
Enterprise Systems Architecture/390® (ESA/390) and
z/Architecture. IBM architectures for mainframe
computers and peripherals. Processor systems that
follow the ESA/390 architecture include the ES/9000®
family, while the IBM Eserver zSeries server uses the
z/Architecture.
EsconNet. In ESS Specialist, the label on a
pseudo-host icon representing a host connection that
uses the ESCON protocol and that is not completely
defined on the ESS. See pseudo-host and access-any
mode.
extent. A continuous space on a disk that is occupied
by or reserved for a particular data set, data space, or
file. The unit of increment is a track. See multiple
allegiance and parallel access volumes.
F
F10. A model of the ESS featuring a single-phase
power supply. It has fewer expansion capabilities than
the Model F20.
F20. A model of the ESS featuring a three-phase
power supply. It has more expansion capabilities than
the Model F10, including the ability to support a
separate expansion enclosure.
fabric. In fibre-channel technology, a routing structure, such as a switch, that receives addressed information and routes it to the appropriate destination. A fabric can
consist of more than one switch. When multiple
fibre-channel switches are interconnected, they are said
to be cascaded.
failback. Cluster recovery from failover following repair. See failover.
failover. On the ESS, the process of transferring all
control of a storage facility to a single cluster when the
other cluster in the storage facility fails.
fast write. A write operation at cache speed that does
not require immediate transfer of data to a disk drive.
The subsystem writes the data directly to cache, to
nonvolatile storage, or to both. The data is then
available for destaging. A fast-write operation reduces
the time an application must wait for the I/O operation to
complete.
FICON. See fibre-channel connection.
FIFO. See first-in-first-out.
firewall. A protection against unauthorized connection
to a computer or a data storage system. The protection
is usually in the form of software on a gateway server
that grants access to users who meet authorization
criteria.
FC-AL. See Fibre Channel-Arbitrated Loop.
FCP. See fibre-channel protocol.
FCS. See fibre-channel standard.
feature code. A code that identifies a particular
orderable option and that is used by service personnel
to process hardware and software orders. Individual
optional features are each identified by a unique feature
code.
fibre channel (FC). A data-transmission architecture
based on the ANSI fibre-channel standard, which
supports full-duplex communication. The ESS supports
data transmission over fiber-optic cable through its
fibre-channel adapters. See fibre-channel protocol and
fibre-channel standard.
Fibre Channel-Arbitrated Loop (FC-AL). An
implementation of the fibre-channel standard that uses a
ring topology for the communication fabric. Refer to
American National Standards Institute (ANSI)
X3T11/93-275. In this topology, two or more
fibre-channel end points are interconnected through a
looped interface. The ESS supports this topology.
fibre-channel connection (FICON). A fibre-channel communications protocol designed for IBM mainframe computers and peripherals.
fibre-channel protocol (FCP). For fibre-channel
communication, the protocol has five layers. The layers
define how fibre-channel ports interact through their
physical links to communicate with other ports.
FiconNet. In ESS Specialist, the label on a
pseudo-host icon representing a host connection that
uses the FICON protocol and that is not completely
defined on the ESS. See pseudo-host and access-any
mode.
field replaceable unit (FRU). An assembly that is
replaced in its entirety when any one of its components
fails. In some cases, a field replaceable unit may
contain other field replaceable units.
FBA. See fixed-block architecture.
fibre-channel standard (FCS). An ANSI standard for
a computer peripheral interface. The I/O interface
defines a protocol for communication over a serial
interface that configures attached units to a
communication fabric. The protocol has two layers. The
physical layer defines basic interconnection protocols. The
upper layer supports one or more logical protocols.
Refer to American National Standards Institute (ANSI)
X3.230-199x.
first-in-first-out (FIFO). A queuing technique in which
the next item to be retrieved is the item that has been in
the queue for the longest time. (A)
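The FIFO discipline defined above, and its LIFO counterpart defined under "last-in first-out" later in this glossary, differ only in which end of the queue is served; a minimal contrast in Python:

```python
from collections import deque

items = ["first", "second", "third"]

# FIFO: the next item retrieved is the one that has been queued longest.
fifo = deque(items)
print(fifo.popleft())  # → first

# LIFO: the next item retrieved is the one most recently queued.
lifo = list(items)
print(lifo.pop())      # → third
```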
fixed-block architecture (FBA). An architecture for
logical devices that specifies the format of and access
mechanisms for the logical data units on the device.
The logical data unit is a block. All blocks on the device
are the same size (fixed size). The subsystem can
access them independently.
fixed-block device. An architecture for logical devices
that specifies the format of the logical data units on the
device. The logical data unit is a block. All blocks on the
device are the same size (fixed size); the subsystem
can access them independently. This is the required
format of the logical data units for host systems that
attach with a Small Computer System Interface (SCSI)
or fibre-channel interface. See fibre-channel, Small
Computer System Interface and SCSI-FCP.
FlashCopy. An optional feature for the ESS that can
make an instant copy of data, that is, a point-in-time
copy of a volume.
FRU. See field replaceable unit.
full duplex. See duplex.
G
GB. See gigabyte.
gigabyte (GB). A gigabyte of storage is 10⁹ bytes. A gigabyte of memory is 2³⁰ bytes.
hop. Interswitch connection. A hop count is the
number of connections that a particular block of data
traverses between source and destination. For example,
data traveling from one hub over a wire to another hub
traverses one hop.
GDPS. Geographically Dispersed Parallel Sysplex, an S/390 multi-site application availability solution.
group. See disk drive module group or Copy Services
server group.
host adapter (HA). A physical subunit of a storage
server that provides the ability to attach to one or more
host I/O interfaces. The Enterprise Storage Server has
four HA bays, two in each cluster. Each bay supports up
to four host adapters.
H
HA. See host adapter.
host processor. A processor that controls all or part of
a user application network. In a network, the processing
unit in which the data communication access method
resides. See host system.
HACMP. Software that provides host clustering, so that
a failure of one host is recovered by moving jobs to
other hosts within the cluster; named for high-availability
cluster multiprocessing.
host system. (1) A computer system that is connected
to the ESS. The ESS supports both mainframe (S/390
or zSeries) and open-systems hosts. S/390
or zSeries hosts are connected to the ESS through
ESCON or FICON interfaces. Open-systems hosts are
connected to the ESS by SCSI or fibre-channel
interfaces. (2) The data processing system to which a
network is connected and with which the system can
communicate. (3) The controlling or highest level
system in a data communication configuration.
hard disk drive (HDD). (1) A storage medium within a
storage server used to maintain information that the
storage server requires. (2) A mass storage medium for
computers that is typically available as a fixed disk
(such as the disks used in system units of personal
computers or in drives that are external to a personal
computer) or a removable cartridge.
Hardware Service Manager (HSM). An option selected from System Service Tools or Dedicated Service Tools on the AS/400 or iSeries host that enables the user to display and work with system hardware resources, and to debug input-output processors (IOP), input-output adapters (IOA), and devices.
HDA. See head and disk assembly.
HDD. See hard disk drive.
hdisk. An AIX term for storage space.
head and disk assembly (HDA). The portion of an HDD associated with the medium and the read/write head.
heartbeat. A status report sent at regular intervals from the ESS. The service provider uses this report to monitor the health of the call home process. See call home, heartbeat call home record, and remote technical assistance information network.
heartbeat call home record. Machine operating and service information sent to a service machine. These records might include such information as feature code information and product logical configuration information.
High Speed Link (HSL). Bus technology for input-output tower attachment on the iSeries host.
home address. A nine-byte field at the beginning of a track that contains information that identifies the physical track and its association with a cylinder.
hot plug. Pertaining to the ability to add or remove a hardware facility or resource to a unit while power is on.
HSL. See High Speed Link.
I
IBM Eserver. The brand name for a series of server products that are optimized for e-commerce. The products include the iSeries, pSeries, xSeries, and zSeries.
IBM product engineering (PE). The third level of IBM service support. Product engineering is composed of IBM engineers who have experience in supporting a product or who are knowledgeable about the product.
IBM StorWatch Enterprise Storage Server Expert (ESS Expert). The software that gathers performance data from the ESS and presents it through a Web browser.
IBM TotalStorage Enterprise Storage Server (ESS).
A member of the Seascape® product family of storage
servers and attached storage devices (disk drive
modules). The ESS provides for high-performance,
fault-tolerant storage and management of enterprise
data, providing access through multiple concurrent
operating systems and communication protocols. High
performance is provided by four symmetric
multiprocessors, integrated caching, RAID support for
the disk drive modules, and disk access through a
high-speed serial storage architecture (SSA) interface.
IBM TotalStorage Enterprise Storage Server Specialist (ESS Specialist). Software with a Web-browser interface for configuring the ESS.
IBM TotalStorage Enterprise Storage Server Copy
Services (ESS Copy Services). Software with a
Web-browser interface for configuring, managing, and
monitoring the data-copy functions of FlashCopy and
PPRC.
IBM TotalStorage Enterprise Storage Server Network
(ESSNet). A private network providing Web browser
access to the ESS. IBM installs the ESSNet software on
an IBM workstation called the IBM TotalStorage ESS
Master Console, supplied with the first ESS delivery.
IBM TotalStorage ESS Master Console (ESS Master
Console). An IBM workstation (formerly named the
ESSNet console and hereafter referred to simply as the
ESS Master Console) that IBM installs to provide the
ESSNet facility when they install your ESS. It includes a
Web browser that provides links to the ESS user
interface, including ESS Specialist and ESS Copy
Services.
invalidate. To remove a logical data unit from cache
memory because the subsystem cannot support continued access to the logical data unit on the device. This removal may be
the result of a failure within the storage server or a
storage device that is associated with the device.
I/O. See input/output.
I/O interface. An interface that enables a host to
perform read and write operations with its associated
peripheral devices.
identifier (ID). A unique name or address that
identifies things such as programs, devices, or systems.
implicit allegiance. In Enterprise Systems
Architecture/390, a relationship that a control unit
creates between a device and a channel path when the
device accepts a read or write operation. The control
unit guarantees access to the channel program over the
set of channel paths that it associates with the
allegiance.
I/O adapter (IOA). Input-output adapter on the PCI
bus.
I/O device. An addressable read and write unit, such
as a disk drive device, magnetic tape device, or printer.
ID. See identifier.
IML. See initial microprogram load.
Internet Protocol (IP). In the Internet suite of protocols, a protocol without connections that routes data through a network or interconnecting networks and acts as an intermediary between the higher protocol layers and the physical network. The IP acronym is the IP in TCP/IP. See Transmission Control Protocol/Internet Protocol.
I/O Priority Queueing. Facility provided by the
Workload Manager of OS/390 and supported by the
ESS that enables the systems administrator to set
priorities for queueing I/Os from different system
images. See multiple allegiance and parallel access
volume.
I/O processor (IOP). Controls input-output adapters
and other devices.
IP. See Internet Protocol.
initial microprogram load (IML). To load and initiate
microcode or firmware that controls a hardware entity
such as a processor or a storage server.
initial program load (IPL). To load and initiate the
software, typically an operating system that controls a
host computer.
initiator. A SCSI device that communicates with and
controls one or more targets. An initiator is typically an
I/O adapter on a host computer. A SCSI initiator is
analogous to an S/390 channel. A SCSI logical unit is
analogous to an S/390 device. See target.
i-node. The internal structure in an AIX operating
system that describes the individual files in the
operating system. It contains the code, type, location,
and owner of a file.
input/output (I/O). Pertaining to (a) input, output, or
both or (b) a device, process, or channel involved in
data input, data output, or both.
IPL. See initial program load.
iSeries. An IBM Eserver product that emphasizes
integration.
J
Java virtual machine (JVM). A software
implementation of a central processing unit (CPU) that
runs compiled Java code (applets and applications).
JVM. See Java virtual machine.
K
KB. See kilobyte.
key field. The second (optional) field of a CKD record.
The key length is specified in the count field. The key
length determines the field length. The program writes
the data in the key field and uses the key field to identify
or locate a given record. The subsystem does not use
the key field.
kilobyte (KB). (1) For processor storage, real and
virtual storage, and channel volume, 2^10 or 1024 bytes.
(2) For disk storage capacity and communications
volume, 1000 bytes.
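The two senses of the unit differ by about 2.4 percent at the kilobyte
scale and more at larger scales; a small sketch (illustrative Python,
not from this guide) makes the arithmetic concrete.

```python
KIB = 2 ** 10   # kilobyte in the processor-storage sense: 1024 bytes
KB = 10 ** 3    # kilobyte in the disk-capacity sense: 1000 bytes
MIB = 2 ** 20   # megabyte, storage sense: 1 048 576 bytes
MB = 10 ** 6    # megabyte, disk sense: 1 000 000 bytes

# A capacity quoted in decimal megabytes holds fewer binary megabytes:
print(MB / MIB)   # roughly 0.954
```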
local area network (LAN). A computer network
located on a user’s premises within a limited geographic
area.
Korn shell. Interactive command interpreter and a
command programming language.
local e-mail. An e-mail configuration option for storage
servers that are connected to a host-system network
that does not have a domain name system (DNS)
server.
KPOH. See thousands of power-on hours.
logical address. On an ESCON or FICON interface,
the portion of a source or destination address in a frame
used to select a specific channel-subsystem or
control-unit image.
L
LAN. See local area network.
logical block address (LBA). The address assigned
by the ESS to a sector of a disk.
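As an illustration of linear sector numbering (an assumption for the
sketch; the entry itself defines only the address), a classic
cylinder/head/sector geometry maps to an LBA as follows.

```python
# Hypothetical sketch: linear block addresses number sectors 0, 1, 2, ...
# across a classic cylinder/head/sector (CHS) disk geometry.
def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    # CHS sectors are conventionally numbered from 1, hence the "- 1".
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + sector - 1)

assert chs_to_lba(0, 0, 1, heads_per_cylinder=16, sectors_per_track=63) == 0
assert chs_to_lba(0, 1, 1, heads_per_cylinder=16, sectors_per_track=63) == 63
```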
last-in first-out (LIFO). A queuing technique in which
the next item to be retrieved is the item most recently
placed in the queue. (A)
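A minimal sketch of the technique, using a Python list as the
illustration (not code from this guide):

```python
# A list used as a LIFO queue (a stack): append places an item,
# pop retrieves the most recently placed item first.
queue = []
queue.append("first")
queue.append("second")
queue.append("third")

assert queue.pop() == "third"    # last in, first out
assert queue.pop() == "second"
assert queue.pop() == "first"
```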
logical control unit (LCU). See control-unit image.
LBA. See logical block address.
logical data unit. A unit of storage that is accessible
on a given device.
LCU. See logical control unit.
logical device. The facilities of a storage server (such
as the ESS) associated with the processing of I/O
operations directed to a single host-accessible emulated
I/O device. The associated storage is referred to as a
logical volume. The logical device is mapped to one or
more host-addressable units, such as a device on an
S/390 I/O interface or a logical unit on a SCSI I/O
interface, such that the host initiating I/O operations to
the I/O-addressable unit interacts with the storage on
the associated logical device.
least recently used (LRU). (1) The algorithm used to
identify and make available the cache space that
contains the least-recently used data. (2) A policy for a
caching algorithm that chooses to remove from cache
the item that has the longest elapsed time since its last
access.
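Sense (2) of the entry, the cache-eviction policy, can be sketched as
follows. This is an illustrative Python sketch, not the ESS cache
implementation.

```python
from collections import OrderedDict

# Minimal LRU policy: the item with the longest elapsed time since its
# last access is the one removed when the cache is full.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Here a cache of capacity two holds two items; inserting a third evicts
whichever of the two was least recently accessed.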
LED. See light-emitting diode.
LIC. See licensed internal code.
logical partition (LPAR). A set of functions that create
the programming environment that is defined by the
ESA/390 architecture. ESA/390 architecture uses this
term when more than one LPAR is established on a
processor. An LPAR is conceptually similar to a virtual
machine environment except that the LPAR is a function
of the processor. Also the LPAR does not depend on an
operating system to create the virtual machine
environment.
licensed internal code (LIC). Microcode that IBM
does not sell as part of a machine, but licenses to the
customer. LIC is implemented in a part of storage that is
not addressable by user programs. Some IBM products
use it to implement functions as an alternate to
hard-wired circuitry.
LIFO. See last-in first-out.
light-emitting diode (LED). A semiconductor chip that
gives off visible or infrared light when activated.
link address. On an ESCON or FICON interface, the
portion of a source or destination address in a frame
that ESCON or FICON uses to route a frame through
an ESCON or FICON director. ESCON or FICON
associates the link address with a specific switch port
that is on the ESCON or FICON director. Equivalently, it
associates the link address with the channel-subsystem
or control unit link-level functions that are attached to
the switch port.
link-level facility. The ESCON or FICON hardware
and logical functions of a control unit or channel
subsystem that allow communication over an ESCON or
FICON write interface and an ESCON or FICON read
interface.
logical path. For Copy Services, a relationship
between a source logical subsystem and target logical
subsystem that is created over a physical path through
the interconnection fabric used for Copy Services
functions.
logical subsystem (LSS). Pertaining to the ESS, a
construct that consists of a group of up to 256 logical
devices. An ESS can have up to 16 CKD-formatted
logical subsystems (4096 CKD logical devices) and also
up to 16 fixed-block (FB) logical subsystems (4096 FB
logical devices). The logical subsystem facilitates
configuration of the ESS and may have other
implications relative to the operation of certain functions.
There is a one-to-one mapping between a CKD logical
subsystem and an S/390 control-unit image.
For S/390 or zSeries hosts, a logical subsystem
represents a logical control unit (LCU). Each control-unit
image is associated with only one logical subsystem.
See control-unit image.
mainframe. A computer, usually in a computer center,
with extensive capabilities and resources to which other
computers may be connected so that they can share
facilities. (T)
logical unit. The open-systems term for a logical disk
drive.
logical unit number (LUN). A SCSI term for a unique
number used on a SCSI bus to differentiate between up
to eight separate devices, each of which is a logical
unit.
logical volume. The storage medium associated with
a logical disk drive. A logical volume typically resides on
one or more storage devices. The ESS administrator
defines this unit of storage. The logical volume, when
residing on a RAID-5 array, is spread over 6 +P or 7 +P
drives, where P is parity. A logical volume can also
reside on a non-RAID storage device. See count key
data and fixed block address.
logical volume manager (LVM). A set of system
commands, library routines, and other tools that allow
the user to establish and control logical volume storage.
The LVM maps data between the logical view of storage
space and the physical disk drive module (DDM).
longitudinal redundancy check (LRC). A method of
error-checking during data transfer that involves
checking parity on a row of binary digits that are
members of a set that forms a matrix. Longitudinal
redundancy check is also called a longitudinal parity
check.
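Viewing each byte as one row of the matrix, the check character holds
the column-wise parity; a minimal sketch (illustrative Python, not from
this guide):

```python
# Longitudinal redundancy check: XOR of all rows yields a check byte
# whose bits are the even parity of each bit column.
def lrc(data: bytes) -> int:
    check = 0
    for byte in data:
        check ^= byte          # XOR accumulates column-wise parity
    return check

block = b"ESS"
check_byte = lrc(block)
# Appending the check byte makes the whole row set XOR to zero:
assert lrc(block + bytes([check_byte])) == 0
```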
longwave laser adapter. A connector used between
host and the ESS to support longwave fibre-channel
communication.
loop. The physical connection between a pair of
device adapters in the ESS. See device adapter.
LPAR. See logical partition.
LRC. See longitudinal redundancy check.
LRU. See least recently used.
LSS. See logical subsystem.
LUN. See logical unit number.
LVM. See logical volume manager.
M
machine level control (MLC). A database that
contains the EC level and configuration of products in
the field.
machine reported product data (MRPD). Product
data gathered by a machine and sent to a destination
such as an IBM support server or RETAIN. These
records might include such information as feature code
information and product logical configuration
information.
maintenance analysis procedure (MAP). A hardware
maintenance document that gives an IBM service
representative a step-by-step procedure for tracing a
symptom to the cause of a failure.
management information base (MIB). (1) A schema
for defining a tree structure that identifies and defines
certain objects that can be passed between units using
an SNMP protocol. The objects passed typically contain
certain information about the product such as the
physical or logical characteristics of the product. (2)
Shorthand for referring to the MIB-based record of a
network device. Information about a managed device is
defined and stored in the management information base
(MIB) of the device. Each ESS has a MIB. SNMP-based
network management software uses the record to
identify the device. See simple network management
protocol.
MAP. See maintenance analysis procedure.
Master Console. See IBM TotalStorage ESS Master
Console.
MB. See megabyte.
MCA. See Micro Channel architecture.
mean time between failures (MTBF). (1) A projection
of the time that an individual unit remains functional.
The time is based on averaging the performance, or
projected performance, of a population of statistically
independent units. The units operate under a set of
conditions or assumptions. (2) For a stated period in the
life of a functional unit, the mean value of the lengths of
time between consecutive failures under stated
conditions. (I) (A)
medium. For a storage facility, the disk surface on
which data is stored.
megabyte (MB). (1) For processor storage, real and
virtual storage, and channel volume, 2^20 or 1 048 576
bytes. (2) For disk storage capacity and
communications volume, 1 000 000 bytes.
MES. See miscellaneous equipment specification.
MIB. See management information base.
Micro Channel architecture (MCA). The rules that
define how subsystems and adapters use the Micro
Channel bus in a computer. The architecture defines the
services that each subsystem can or must provide.
Microsoft Internet Explorer (MSIE). Web browser
software manufactured by Microsoft.
nonremovable medium. A recording medium that
cannot be added to or removed from a storage device.
MIH. See missing-interrupt handler.
mirrored pair. Two units that contain the same data.
The system refers to them as one entity.
nonretentive data. Data that the control program can
easily recreate in the event it is lost. The control
program may cache nonretentive write data in volatile
memory.
mirroring. In host systems, the process of writing the
same data to two disk units within the same auxiliary
storage pool at the same time.
nonvolatile storage (NVS). (1) Typically refers to
nonvolatile memory on a processor rather than to a
nonvolatile disk storage device. On a storage facility,
nonvolatile storage is used to store active write data to
avoid data loss in the event of a power loss. (2) A
storage device whose contents are not lost when power
is cut off.
miscellaneous equipment specification (MES). IBM
field-installed change to a machine.
missing-interrupt handler (MIH). An MVS and
MVS/XA facility that tracks I/O interrupts. MIH informs
the operator and creates a record whenever an
expected interrupt fails to occur before a specified
elapsed time is exceeded.
NVS. See nonvolatile storage.
MLC. See machine level control.
O
mobile service terminal (MoST). The mobile terminal
used by service personnel.
octet. In Internet Protocol (IP) addressing, one of the
four parts of a 32-bit integer presented in dotted decimal
notation. Dotted decimal notation consists of four 8-bit
numbers written in base 10. For example, 9.113.76.250
is an IP address containing the octets 9, 113, 76, and
250.
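The relationship between the four octets and the underlying 32-bit
integer can be sketched as follows (illustrative Python, using the
address from the entry):

```python
# Pack a dotted decimal address into its 32-bit integer, and back.
def to_int(dotted: str) -> int:
    value = 0
    for octet in dotted.split("."):
        value = (value << 8) | int(octet)   # each octet is 8 bits
    return value

def to_dotted(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF)
                    for shift in (24, 16, 8, 0))

addr = to_int("9.113.76.250")
assert to_dotted(addr) == "9.113.76.250"    # round trip preserved
```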
Model 100. A 2105 Model 100, often simply referred to
as a Mod 100, is an expansion enclosure for the ESS.
See 2105 and
MoST. See mobile service terminal.
OEMI. See original equipment manufacturer’s
information.
MRPD. See machine reported product data.
MSIE. See Microsoft Internet Explorer.
MTBF. See mean time between failures.
multiple allegiance. An ESS hardware function that is
independent of software support. This function enables
multiple system images to concurrently access the
same logical volume on the ESS as long as the system
images are accessing different extents. See extent and
parallel access volumes.
multiple virtual storage (MVS). Implies MVS/390,
MVS/XA, MVS/ESA, and the MVS element of the
OS/390 operating system.
open system. A system whose characteristics comply
with standards made available throughout the industry
and that therefore can be connected to other systems
complying with the same standards. Applied to the ESS,
such systems are those hosts that connect to the ESS
through SCSI or SCSI-FCP adapters. See Small
Computer System Interface and SCSI-FCP.
organizationally unique identifier (OUI). An
IEEE-standards number that identifies an organization
with a 24-bit globally unique assigned number
referenced by various standards. OUI is used in the
family of 802 LAN standards, such as Ethernet and
Token Ring.
MVS. See multiple virtual storage.
N
Netfinity. Obsolete brand name of an IBM
Intel-processor-based server.
Netscape Navigator. Web browser software
manufactured by Netscape.
node. The unit that is connected in a fibre-channel
network. An ESS is a node in a fibre-channel network.
non-RAID. A disk drive set up independently of other
disk drives and not set up as part of a disk drive module
group to store data using the redundant array of disks
(RAID) data-striping methodology.
original equipment manufacturer’s information
(OEMI). A reference to an IBM guideline for a
computer peripheral interface. The interface uses
ESA/390 logical protocols over an I/O interface that
configures attached units in a multidrop bus topology.
OUI. See organizationally unique identifier.
P
panel. The formatted display of information that
appears on a display screen.
power-on self test (POST). A diagnostic test run by
servers or computers when they are turned on.
parallel access volume (PAV). An advanced function
of the ESS that enables OS/390 and z/OS systems to
issue concurrent I/O requests against a CKD logical
volume by associating multiple devices of a single
control-unit image with a single logical device. Up to 8
device addresses can be assigned to a parallel access
volume. PAV enables two or more concurrent writes to
the same logical volume, as long as the writes are not
to the same extents. See extent, I/O Priority Queueing,
and multiple allegiance.
PPRC. See Peer-to-Peer Remote Copy.
predictable write. A write operation that can be cached
without knowledge of the existing format on the
medium. All writes on FBA DASD devices are
predictable. On CKD DASD devices, a write is
predictable if it does a format write for the first data
record on the track.
parity. A data checking scheme used in a computer
system to ensure the integrity of the data. The RAID
implementation uses parity to recreate data if a disk
drive fails.
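A minimal sketch of the recreate-on-failure idea (illustrative Python;
the actual RAID implementation runs in ESS hardware and microcode):

```python
# RAID-style parity: the parity strip is the XOR of the data strips,
# so any one lost strip can be recreated from the survivors.
def xor_strips(strips):
    result = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            result[i] ^= b
    return bytes(result)

data = [b"ABCD", b"EFGH", b"IJKL"]   # data strips on three drives
parity = xor_strips(data)            # parity strip on a fourth drive

# Drive 1 fails; its strip is rebuilt from the parity and the others:
rebuilt = xor_strips([data[0], data[2], parity])
assert rebuilt == b"EFGH"
```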
primary Copy Services server. One of two Copy
Services servers in a Copy Services domain. The
primary Copy Services server is the active Copy
Services server until it fails; it is then replaced by the
backup Copy Services server. A Copy Services server is
software that runs in one of the two clusters of an ESS
and performs data-copy operations within that group.
See active Copy Services server and backup Copy
Services server.
path group. The ESA/390 term for a set of channel
paths that are defined to a control unit as being
associated with a single logical partition (LPAR). The
channel paths are in a group state and are online to the
host. See logical partition.
product engineering. See IBM product engineering.
path group identifier. The ESA/390 term for the
identifier that uniquely identifies a given logical partition
(LPAR). The path group identifier is used in
communication between the LPAR program and a
device. The identifier associates the path group with
one or more channel paths, thereby defining these
paths to the control unit as being associated with the
same LPAR.
program. On a computer, a generic term for software
that controls the operation of the computer. Typically,
the program is a logical assemblage of software
modules that perform multiple related tasks.
program-controlled interruption. An interruption that
occurs when an I/O channel fetches a channel
command word with the program-controlled interruption
flag on.
PAV. See parallel access volume.
PCI. See peripheral component interconnect.
program temporary fix (PTF). A temporary solution or
bypass of a problem diagnosed by IBM in a current
unaltered release of a program.
PE. See IBM product engineering.
Peer-to-Peer Remote Copy (PPRC). A function of a
storage server that maintains a consistent copy of a
logical volume on the same storage server or on
another storage server. All modifications that any
attached host performs on the primary logical volume
are also performed on the secondary logical volume.
peripheral component interconnect (PCI). An
architecture for a system bus and associated protocols
that supports attachments of adapter cards to a system
backplane.
physical path. A single path through the I/O
interconnection fabric that attaches two units. For Copy
Services, this is the path from a host adapter on one
ESS (through cabling and switches) to a host adapter
on another ESS.
point-to-point connection. For fibre-channel
connections, a topology that enables the direct
interconnection of ports. See arbitrated loop and
switched fabric.
POST. See power-on self test.
promote. To add a logical data unit to cache memory.
protected volume. An AS/400 term for a disk storage
device that is protected from data loss by RAID
techniques. An AS/400 host does not mirror a volume
configured as a protected volume, while it does mirror
all volumes configured as unprotected volumes. The
ESS, however, can be configured to indicate that an
AS/400 volume is protected or unprotected and give it
RAID protection in either case.
pSeries. An IBM eServer product that emphasizes
performance.
pseudo-host. A host connection that is not explicitly
defined to the ESS and that has access to at least one
volume that is configured on the ESS. The FiconNet
pseudo-host icon represents the FICON protocol. The
EsconNet pseudo-host icon represents the ESCON
protocol. The pseudo-host icon labelled “Anonymous”
represents hosts connected through the SCSI-FCP
protocol. Anonymous host is a commonly used synonym
for pseudo-host. The ESS adds a pseudo-host icon only
when the ESS is set to access-any mode. See
access-any mode.
S
PTF. See program temporary fix.
PV Links. Short for Physical Volume Links, an
alternate pathing solution from Hewlett-Packard
providing for multiple paths to a volume, as well as
static load balancing.
R
rack. See enclosure.
S/390 and zSeries. IBM enterprise servers based on
Enterprise Systems Architecture/390 (ESA/390) and
z/Architecture, respectively. “S/390” is a shortened form
of the original name “System/390”.
RAID. See redundant array of inexpensive disks and
array. RAID also is expanded to redundant array of
independent disks.
SAID. See system adapter identification number.
SAM. See sequential access method.
RAID 5. A type of RAID that optimizes cost-effective
performance through data striping. RAID 5 provides
fault tolerance for up to two failed disk drives by
distributing parity across all of the drives in the array
plus one parity disk drive. The ESS automatically
reserves spare disk drives when it assigns arrays to a
device adapter pair (DA pair). See device adapter.
random access. A mode of accessing data on a
medium in a manner that requires the storage device to
access nonconsecutive storage locations on the
medium.
redundant array of inexpensive disks (RAID). A
methodology of grouping disk drives for managing disk
storage to insulate data from a failing disk drive.
remote technical assistance information network
(RETAIN). The initial service tracking system for IBM
service support, which captures heartbeat and
call-home records. See support catcher and support
catcher telephone number.
REQ/ACK. See request for acknowledgement and
acknowledgement.
request for acknowledgement and
acknowledgement (REQ/ACK). A cycle of
communication between two data transport devices for
the purpose of verifying the connection, which starts
with a request for acknowledgement from one of the
devices and ends with an acknowledgement from the
second device.
reserved allegiance. In Enterprise Systems
Architecture/390, a relationship that is created in a
control unit between a device and a channel path when
a Sense Reserve command is completed by the device.
The allegiance causes the control unit to guarantee
access (busy status is not presented) to the device.
Access is over the set of channel paths that are
associated with the allegiance; access is for one or
more channel programs, until the allegiance ends.
RETAIN. See remote technical assistance information
network.
R0. See track-descriptor record.
S/390 and zSeries storage. Storage arrays and
logical volumes that are defined in the ESS as
connected to S/390 and zSeries servers. This term is
synonymous with count-key-data (CKD) storage.
SAN. See storage area network.
SBCON. See Single-Byte Command Code Sets
Connection.
screen. The physical surface of a display device upon
which information is shown to users.
SCSI. See Small Computer System Interface.
SCSI device. A disk drive connected to a host through
an I/O interface using the SCSI protocol. A SCSI
device is either an initiator or a target. See initiator and
Small Computer System Interface.
SCSI host systems. Host systems that are attached
to the ESS with a SCSI interface. Such host systems
run the UNIX, OS/400, Windows NT, Windows 2000, or
Novell NetWare operating systems.
SCSI ID. A unique identifier assigned to a SCSI device
that is used in protocols on the SCSI interface to
identify or select the device. The number of data bits on
the SCSI bus determines the number of available SCSI
IDs. A wide interface has 16 bits, with 16 possible IDs.
SCSI-FCP. Short for SCSI-to-fibre-channel protocol, a
protocol used to transport data between a SCSI adapter
on an open-systems host and a fibre-channel adapter
on an ESS. See fibre-channel protocol and Small
Computer System Interface.
Seascape architecture. A storage system architecture
developed by IBM for open-systems servers and S/390
and zSeries host systems. It provides storage solutions
that integrate software, storage management, and
technology for disk, tape, and optical storage.
serial connection. A method of device interconnection
for determining interrupt priority by connecting the
interrupt sources serially.
self-timed interface (STI). An interface that has one
or more conductors that transmit information serially
between two interconnected units without requiring any
clock signals to recover the data. The interface performs
clock recovery independently on each serial data stream
and uses information in the data stream to determine
character boundaries and inter-conductor
synchronization.
sequential access. A mode of accessing data on a
medium in a manner that requires the storage device to
access consecutive storage locations on the medium.
sequential access method (SAM). An access method
for storing, deleting, or retrieving data in a continuous
sequence based on the logical order of the records in
the file.
SIM. See service-information message.
simplex volume. A volume that is not part of a
FlashCopy, XRC, or PPRC volume pair.
Single-Byte Command Code Sets Connection
(SBCON). The ANSI standard for the ESCON or
FICON I/O interface.
serial storage architecture (SSA). An IBM standard
for a computer peripheral interface. The interface uses a
SCSI logical protocol over a serial interface that
configures attached targets and initiators in a ring
topology. See SSA adapter.
Small Computer System Interface (SCSI). (1) An
ANSI standard for a logical interface to computer
peripherals and for a computer peripheral interface. The
interface uses a SCSI logical protocol over an I/O
interface that configures attached initiators and targets
in a multidrop bus topology. (2) A standard hardware
interface that enables a variety of peripheral devices to
communicate with one another.
server. (1) A type of host that provides certain services
to other hosts that are referred to as clients. (2) A
functional unit that provides services to one or more
clients over a network.
service boundary. A category that identifies a group
of components that are unavailable for use when one of
the components of the group is being serviced. Service
boundaries are provided on the ESS, for example, in
each host bay and in each cluster.
SMIT. See System Management Interface Tool.
SMP. See symmetric multi-processor.
SNMP. See simple network management protocol.
service information message (SIM). A message sent
by a storage server to service personnel through an
S/390 operating system.
service personnel. A generalization referring to
individuals or companies authorized to service the ESS.
The terms “service provider”, “service representative”,
and “IBM service support representative (SSR)” refer to
types of service personnel. See service support
representative.
service processor. A dedicated processing unit used
to service a storage facility.
software transparency. Criteria applied to a
processing environment that states that changes do not
require modifications to the host software in order to
continue to provide an existing function.
shortwave laser adapter. A connector used between
host and ESS to support shortwave fibre-channel
communication.
spare. A disk drive on the ESS that can replace a
failed disk drive. A spare can be predesignated to allow
automatic dynamic sparing. Any data preexisting on a
disk drive that is invoked as a spare is destroyed by the
dynamic sparing copy process.
spatial reuse. A feature of serial storage architecture
that enables a device adapter loop to support many
simultaneous read/write operations. See serial storage
architecture.
service support representative (SSR). Individuals or
a company authorized to service the ESS. This term
also refers to a service provider, a service
representative, or an IBM service support representative
(SSR). An IBM SSR installs the ESS.
shared storage. Storage within an ESS that is
configured so that multiple homogeneous or divergent
hosts can concurrently access the storage. The storage
has a uniform appearance to all hosts. The host
programs that access the storage must have a common
model for the information on a storage device. The
programs must be designed to handle the effects of
concurrent access.
Simple Network Management Protocol (SNMP). In
the Internet suite of protocols, a network management
protocol that is used to monitor routers and attached
networks. SNMP is an application layer protocol.
Information on devices managed is defined and stored
in the application’s Management Information Base
(MIB). See management information base.
Specialist. See IBM TotalStorage Enterprise Storage
Server Specialist.
SSA. See serial storage architecture.
SSA adapter. A physical adapter based on serial
storage architecture. SSA adapters connect disk drive
modules to ESS clusters. See serial storage
architecture.
SSID. See subsystem identifier.
SSR. See service support representative.
stacked status. In Enterprise Systems
Architecture/390, the condition when the control unit is
holding status for the channel, and the channel
responded with the stack-status control the last time the
control unit attempted to present the status.
synchronous write. A write operation whose
completion is indicated after the data has been stored
on a storage device.
stage operation. The operation of reading data from
the physical disk drive into the cache.
System/390. See S/390.
staging. To move data from an offline or low-priority
device back to an online or higher priority device,
usually on demand of the system or on request of the
user.
system adapter identification number (SAID).
System Management Interface Tool (SMIT). An
interface tool of the AIX operating system for performing
installation, maintenance, configuration, and diagnostic
tasks.
STI. See self-timed interface.
storage area network. A network that connects a
company’s heterogeneous storage resources.
System Modification Program (SMP). A program
used to install software and software changes on MVS
systems.
storage complex. Multiple storage facilities.
storage device. A physical unit that provides a
mechanism to store data on a given medium such that it
can be subsequently retrieved. See disk drive module.
T
storage facility. (1) A physical unit that consists of a
storage server integrated with one or more storage
devices to provide storage capability to a host computer.
(2) A storage server and its attached storage devices.
target. A SCSI device that acts as a slave to an
initiator and consists of a set of one or more logical
units, each with an assigned logical unit number (LUN).
The logical units on the target are typically I/O devices.
A SCSI target is analogous to an S/390 control unit. A
SCSI initiator is analogous to an S/390 channel. A SCSI
logical unit is analogous to an S/390 device. See Small
Computer System Interface.
storage server. A physical unit that manages attached
storage devices and provides an interface between
them and a host computer by providing the function of
one or more logical subsystems. The storage server can
provide functions that are not provided by the storage
device. The storage server has one or more clusters.
striping. A technique that distributes data in bit, byte,
multi-byte, record, or block increments across multiple
disk drives.
subchannel. A logical function of a channel subsystem
associated with the management of a single device.
subsystem identifier (SSID). A number that uniquely
identifies a logical subsystem within a computer
installation.
support catcher. A server to which a machine sends
a trace or a dump package.
support catcher telephone number. The telephone
number that connects the support catcher server to the
ESS to receive a trace or dump package. See support
catcher. See remote technical assistance information
network.
switched fabric. One of three fibre-channel
connection topologies supported by the ESS. See
arbitrated loop and point-to-point.
symmetric multi-processor (SMP). An
implementation of a multi-processor computer consisting
of several identical processors configured in a way that
any subset of the set of processors is capable of
continuing the operation of the computer. The ESS
contains four processors set up in SMP mode.
TAP. See Telocator Alphanumeric Protocol.
TB. See terabyte.
TCP/IP. See Transmission Control Protocol/Internet
Protocol.
Telocator Alphanumeric Protocol (TAP). An industry
standard protocol for the input of paging requests.
terabyte (TB). (1) Nominally, 1 000 000 000 000
bytes, which is accurate when speaking of bandwidth
and disk storage capacity. (2) For ESS cache memory,
processor storage, real and virtual storage, a terabyte
refers to 2^40 or 1 099 511 627 776 bytes.
thousands of power-on hours (KPOH). A unit of time
used to measure the mean time between failures
(MTBF).
time sharing option (TSO). An operating system
option that provides interactive time sharing from remote
terminals.
TPF. See transaction processing facility.
track. A unit of storage on a CKD device that can be
formatted to contain a number of data records. See
home address, track-descriptor record, and data record.
track-descriptor record (R0). A special record on a
track that follows the home address. The control
program uses it to maintain certain information about
the track. The record has a count field with a key length
of zero, a data length of 8, and a record number of 0.
This record is sometimes referred to as R0.
transaction processing facility (TPF). A
high-availability, high-performance IBM operating
system, designed to support real-time,
transaction-driven applications. The specialized
architecture of TPF is intended to optimize system
efficiency, reliability, and responsiveness for data
communication and database processing. TPF provides
real-time inquiry and updates to a large, centralized
database, where message length is relatively short in
both directions, and response time is generally less than
three seconds. Formerly known as the Airline Control
Program/Transaction Processing Facility (ACP/TPF).
Transmission Control Protocol/Internet Protocol
(TCP/IP). (1) Together, the Transmission Control
Protocol and the Internet Protocol provide end-to-end
connections between applications over interconnected
networks of different types. (2) The suite of transport
and application protocols that run over the Internet
Protocol. See Internet Protocol.
transparency. See software transparency.
TSO. See time sharing option.
U
UFS. UNIX filing system.
ultra-SCSI. An enhanced Small Computer System
Interface.
unit address. The ESA/390 term for the address
associated with a device on a given control unit. On
ESCON or FICON interfaces, the unit address is the
same as the device address. On OEMI interfaces, the
unit address specifies a control unit and device pair on
the interface.
unprotected volume. An AS/400 term that indicates
that the AS/400 host recognizes the volume as an
unprotected device, even though the storage resides on
a RAID array and is therefore fault tolerant by definition.
The data in an unprotected volume can be mirrored.
Also referred to as an unprotected device.
upper-layer protocol. The layer of the Internet
Protocol (IP) that supports one or more logical protocols
(for example, a SCSI-command protocol and an
ESA/390 command protocol). Refer to ANSI
X3.230-199x.
V
virtual machine (VM). A virtual data processing
machine that appears to be for the exclusive use of a
particular user, but whose functions are accomplished
by sharing the resources of a real data processing
system.
vital product data (VPD). Information that uniquely
defines the system, hardware, software, and microcode
elements of a processing system.
VM. See virtual machine.
volume. In Enterprise Systems Architecture/390, the
information recorded on a single unit of recording
medium. Indirectly, it can refer to the unit of recording
medium itself. On a nonremovable-medium storage
device, the term can also indirectly refer to the storage
device associated with the volume. When multiple
volumes are stored on a single storage medium
transparently to the program, the volumes can be
referred to as logical volumes.
VPD. See vital product data.
W
Web Copy Services. See IBM TotalStorage Enterprise
Storage Server Copy Services.
worldwide node name (WWNN). A unique 64-bit
identifier for a host containing a fibre-channel port. See
worldwide port name.
worldwide port name (WWPN). A unique 64-bit
identifier associated with a fibre-channel adapter port. It
is assigned in an implementation- and
protocol-independent manner.
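A 64-bit worldwide name is conventionally displayed as sixteen hexadecimal digits, often with a colon between bytes. The following small sketch (Python, illustrative only; the sample value is hypothetical and not taken from this guide) shows that conventional rendering:

```python
def format_wwpn(wwpn: int) -> str:
    """Render a 64-bit worldwide port name as colon-separated hex bytes."""
    raw = f"{wwpn:016x}"                        # 16 hex digits, zero-padded
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

# Hypothetical example value, for illustration of the format only.
print(format_wwpn(0x10000000C9201234))          # 10:00:00:00:c9:20:12:34
```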
write hit. A write operation in which the requested
data is in the cache.
write penalty. The performance impact of a classical
RAID 5 write operation.
WWPN. See worldwide port name.
UTC. See Coordinated Universal Time.
utility device. The ESA/390 term for the device used
with the Extended Remote Copy facility to access
information that describes the modifications performed
on the primary copy.
X
XRC. See Extended Remote Copy.
xSeries. An IBM Eserver product that emphasizes
architecture.
Z
zSeries. An IBM Eserver product that emphasizes
near-zero downtime.
zSeries storage. See S/390 and zSeries storage.
Glossary
Numerics
2105. The machine number for the IBM Enterprise
Storage Server (ESS). 2105-100 is an ESS expansion
enclosure typically referred to as the Model 100. See
IBM TotalStorage Enterprise Storage Server and Model
100.
3390. The machine number of an IBM disk storage
system. The ESS, when interfaced to IBM S/390 or
zSeries hosts, is set up to appear as one or more 3390
devices, with a choice of 3390-2, 3390-3, or 3390-9
track formats.
3990. The machine number of an IBM control unit.
7133. The machine number of an IBM disk storage
system. The Model D40 and 020 drawers of the 7133
can be installed in the 2105-100 expansion enclosure of
the ESS.
8-pack. See disk drive module group.
ESS Host Systems Attachment Guide
Index
Numerics
2000 host system, Windows
attaching with SCSI adapters 137
fibre-channel attachment 142
migrating from SCSI to fibre-channel 164
2105
host attachment package
RS/6000 or pSeries 58, 61
host install script file
Hewlett-Packard host system 40, 42
8751D adapter card
configuring 123, 139
installing 123, 139
9032 Model 5 support 70
9337 subsystem emulation 46
A
about this guide xv
Adaptec adapter card
configuring
Novell NetWare host system 91
Windows 2000 host system 138
Windows NT host system 122
installing
Novell NetWare host system 91
Windows 2000 host system 138
Windows NT host system 122
adapter
6501 for AS/400 SCSI attachment 46
card for NUMA-Q, installing 56
driver, loading 85
Symbios 8751D
installing and configuring 123, 139
adapter card
Adaptec AHA-2944UW
installing and configuring 122
driver, downloading 145
driver, loading 129
Qlogic QLA1041
installing and configuring 124
QLogic QLA1041
installing and configuring 140
adapter cards
Adaptec AHA-2944
installing and configuring 138
Adaptec AHA-2944UW
installing and configuring 91
driver, loading 96, 111
ESCON host 70
QLogic QLA1041
installing and configuring 93
adapter drivers
installing the JNI PCI fibre channel 108
installing the JNI SBUS fibre channel 109
AdvFS file system, configuring for Compaq 20
affinity for LUNs 11, 15
agreement for licensed internal code 183
AIX host system, migrating from SCSI to
fibre-channel 162
arbitrated-loop topology
description 14
illustration of 14
AS/400
9337 subsystem emulation 46
attaching the IBM ESS 45
recommended configurations 46
AS/400 and iSeries
support for SCSI attachment 4
attaching
AS/400 host system 45
Compaq host system 19, 27
ESCON adapter 72
ESS
SCSI host systems 7
Hewlett-Packard host system 39, 41
iSeries host system 48
Linux host system 83
Microsoft
Windows 2000 host systems 137
Windows NT host system 121
multiple host systems
hardware requirements 64
software requirements 64
multiple RS/6000 or pSeries hosts without
HACMP/6000 64
Novell NetWare host system 91, 94
NUMA-Q host system 55
pSeries host system 57, 60, 64
RS/6000 host system 57, 60, 64
S/390 host system 69
Sun host system 100
Windows 2000 host system 137, 142
Windows NT host system 121, 126
with HACMP/6000 67
zSeries host system 69
attachment
checking SCSI 10
package
RS/6000 and pSeries host system, before you
install 61
RS/6000 and pSeries host system, installing 58
package for
RS/6000 and pSeries host systems 58
package for RS/6000 and pSeries host systems 61
problems, solving SCSI 10
procedures for multiple RS/6000 and pSeries hosts
without HACMP/6000 65
requirements
AS/400 host system 45
Compaq 19
Hewlett-Packard host system 39, 41
iSeries host system 48
Novell NetWare host system 91
NUMA-Q host system 55
attachment (continued)
requirements (continued)
pSeries host system 57, 61
RS/6000 host system 57, 61
Sun host system 99, 105
Windows 2000 host system 137, 143
Windows NT 83
Windows NT host system 121, 126
xSeries host system 55
audience of this guide xv
availability
configuring for 125, 133, 141, 150
continuous 2
B
battery disposal xiii
bridge feature, ESCON director for S/390 and
zSeries 70
C
cable
connecting the SCSI 9
distances for S/390 and zSeries, ESCON 70
ESCON host 70
interconnection for SCSI 7
lengths
for S/390 and zSeries 71
for SCSI 9
specifications for S/390 and zSeries, ESCON 70
Canadian compliance statement 181
caution notice xiii
changes, summary of xv
changing the Sun system kernel 105
channel
directors for S/390 and zSeries 71
extenders for S/390 and zSeries 71
checking
S/390 and zSeries attachment 72
SCSI attachment 10
class A compliance statement, Taiwan 182
command, port identification for S/390 and zSeries
TSO 71
communications statement 181
Compaq host system
attachment requirements 19
console device check 19
fibre-channel attachment requirements 27
fibre-channel attachment to an ESS 27
initializing disk drives 20
SCSI attachment requirements 27
SCSI attachment to an ESS 19
Tru64 UNIX
Version 4.0x 19
WWPN 153
compliance statement
German 181
radio frequency energy 180
Taiwan class A 182
concurrent, migration 166
configuring
Adaptec AHA-2944UW adapter card 122, 138
AdvFS file system for Compaq 20
AS/400 46
Compaq host system 19, 21
ESS devices with multiple paths per LUN 59, 63
for availability and recoverability for a Windows 2000
operating system 141, 150
for availability and recoverability for a Windows NT
operating system 125, 133
HACMP/6000 67
IOC-0210-54 adapter card 56
iSeries 50
limitations, SCSI host systems 11
MC/ServiceGuard on an HP-UX 11.00 with the
ESS 41, 43
QLogic adapter card 86, 97, 124, 130, 140, 146
QLogic QLA1041 adapter card 93
Sun host system device drivers 100
Symbios 8751D adapter card 123, 139
VSS devices with multiple paths per LUN 59, 63
connecting
fibre-channel 13
SCSI cables
picture of 10
procedure 9
console, device check for Compaq 19
controller
images for S/390 and zSeries 70
S/390 and zSeries 70
D
danger notice xiii
data
restoring 66
saving 66
transferring 71
devices
checking
Compaq 19
configuring for multiple paths per LUN 63
configuring to mount automatically for Compaq 21
recognition for Compaq 20
special files for Compaq 20
directors and channel extenders for S/390 and
zSeries 71
disk drives for Compaq, initializing 20
disposal, product xiii
documents, ordering xvii
downloading
Emulex LP8000 fibre-channel adapter driver 106,
131, 147, 148
JNI PCI fibre-channel adapter driver 108
JNI SBUS fibre-channel adapter driver 109
downloading QLogic
fibre-channel adapter driver 129
downloading the current fibre-channel adapter
driver 145
drivers
Compaq, initializing 20
drivers (continued)
installing
JNI PCI fibre-channel adapter 108
JNI SBUS fibre-channel adapter 109
Linux, installing 86
Novell NetWare, installing 97
Sun, installing 111
Windows 2000, installing 146
Windows NT, installing 130
E
edition notice ii
electronic emission notices 180
electrostatic discharge (ESD) sensitive components,
handling 10
emulation
9337 subsystem 46
UNIX 60
Emulex LP8000 adapter card
downloading 131, 147, 148
Sun host system 106
installing
Sun host system 106
Windows 2000 host system 146
Windows NT host system 130
enclosure
expansion 3
Enterprise Storage Server. See ESS. 2
environmental notices xiii
ESCON
attaching to a S/390 and zSeries host system 69
cable distances for S/390 and zSeries host
system 70
cabling specifications for S/390 and zSeries 70
controller images and controls 70
director’s FICON bridge feature, support for 9032
Model 5, for S/390 and zSeries 70
host adapters for 70
host cables 70
host systems 5
ESCON distances 70
ESD 10
ESS
attaching to a Linux host system 83
attaching to a Novell NetWare host system 91, 94
configuration
verifying for a pSeries host system 63
verifying for an RS/6000 63
verifying for an RS/6000 and pSeries host
system 59
emulation of a 9337 subsystem 46
host systems supported by 3
overview 2
publications xvi
restoring data on the 66
saving data 66
ESS, configuring MC/ServiceGuard 41, 43
European Community Compliance statement 181
expansion enclosure 3
extenders for S/390 and zSeries
channels 71
directors 71
F
Federal Communications Commission (FCC)
statement 181
fibre-channel
adapter 15
adapter driver, downloading 145
adapter driver, loading 85, 96, 111, 129
adapter drivers
installing the JNI PCI 108
installing the JNI SBUS 109
AIX host system
migrating from SCSI 162
arbitrated loop 14, 15
attachment to a Compaq host system 27
attachment to a NUMA-Q host system 55
attachment to a Windows 2000 host system 142
attachment to a Windows NT host system 126
cable 15
connection 13
Hewlett-Packard host system
migrating from SCSI 160
host systems 4
loop initialization 15
LUN access modes 16
LUN affinity 15
migrating from native SCSI 160
migrating from native SCSI to a Hewlett-Packard
host system 160
migrating from native SCSI to a Windows 2000 host
system 164
migrating from native SCSI to a Windows NT host
system 164
migrating from native SCSI to an AIX host
system 162
port name identification 153
ports 14
storage area networks (SANs) 17
targets and LUNs 15
topologies 13
Windows 2000 host system
migrating from SCSI 164
Windows NT host system
migrating from SCSI 164
FICON
bridge feature 70
host systems 6
migrating from a bridge 75
migrating from ESCON 72
figures, list of ix
files, device special for Compaq 20
FlashCopy
restrictions for open system hosts 16
restrictions for open-systems hosts 11
Windows NT host system 126, 142
G
German compliance statement 181
glossary 185
groups, path for S/390 and zSeries 71
H
HACMP/6000
attaching an ESS to multiple RS/6000 pSeries hosts
without 64
configuring for high-availability 67
handling electrostatic discharge (ESD) sensitive
components 10
hardware requirements for attaching multiple host
systems 64
Hewlett-Packard host system
attaching the ESS 39, 41
configuring MC/ServiceGuard 41
configuring MC/ServiceGuard on an 43
installing the 2105 host install script 40, 42
locating the WWPN 154
migrating from SCSI to fibre-channel 160
high-availability (HACMP/6000), configuring RS/6000
and pSeries for 67
host attachment
package
RS/6000 and pSeries host system, installing 58
package for RS/6000 and pSeries host systems 58,
62
host system
attaching the IBM ESS 7
configuration limitations 11
limitations of SCSI 11
host systems
attaching multiple 64
attaching the ESS
AS/400 45
Hewlett-Packard 39, 41
iSeries host system 48
pSeries 57, 60
RS/6000 57, 60
Sun 99, 104
fibre channel 4
migrating from SCSI to fibre-channel
Hewlett-Packard 160
Windows 2000 164
Windows NT 164
S/390 and zSeries 5
SCSI 4
software requirements for attaching multiple 64
supported by the ESS 3
without HACMP/6000
attaching an ESS to multiple RS/6000 or
pSeries 64
I
I/O queuing for SCSI, initiators 9
IBM
AS/400 host system, attaching the IBM ESS 45
IBM (continued)
iSeries host system, attaching the ESS 48
pSeries host system, attaching the ESS 57
RS/6000 host system, attaching the ESS 57
IBM Subsystem Device Driver
installing 102, 117
Web site xxii
image for S/390 and zSeries 70
Industry Canada Compliance statement 181
initializing disk drives, for Compaq 20
initiator
I/O queuing for SCSI 9
installation package for an RS/6000 and pSeries host
system
fibre-channel 62
SCSI 58
installing
2105 host attachment package for RS/6000 and
pSeries host systems 58, 61
2105 host install script for Hewlett-Packard 40, 42
Adaptec AHA-2944UW adapter card 91, 122, 138
Compaq Tru64 UNIX Version 4.0x 19
drivers for Linux 86
drivers for Novell NetWare 97
drivers for Sun 111
drivers for Windows 2000 146
drivers for Windows NT 130
Emulex LP8000 adapter card 106, 130, 146
fibre-channel adapter drivers 86, 97, 111, 130, 146
host attachment package for an RS/6000 and
pSeries 58
host attachment package for RS/6000 and pSeries
host systems 58, 61, 62
IBM Subsystem Device Driver 102, 117
IOC-0210-54 adapter card for NUMA-Q 56
JNI PCI adapter card 108
JNI PCI adapter driver
fibre-channel 108
JNI SBUS adapter card 109
JNI SBUS adapter driver
fibre-channel 109
QLogic QLA1041 adapter card 93, 124, 140
QLogic QLA2100F adapter card 94, 95, 127, 143
QLogic QLA2200F adapter card 84, 94, 95, 110,
128, 144
QLogic QLA2300F adapter card 84
Symbios 8751D adapter card 123, 139
interconnection, cabling 7
introduction 1
IOC-0210-54 adapter card for NUMA-Q, installing 56
iSeries host system
attaching the ESS 48
locating the WWPN 154
recommended configurations 50
J
Japanese Voluntary Control Council for Interference
(VCCI) statement 182
JNI PCI adapter card
downloading
Sun host system 108
installing
Sun host system 108
JNI PCI fibre-channel adapter driver, installing 108
JNI SBUS adapter card
downloading
Sun host system 109
installing
Sun host system 109
JNI SBUS fibre-channel adapter driver, installing 109
K
Korean government Ministry of Communication (MOC)
statement 182
L
lengths
cable 71
cable for SCSI 9
licensed internal code (LIC)
agreement 183
limitations, SCSI host systems configuration 11
limited warranty statement 171
Linux host
host system, locating the WWPN 156
Linux operating system
attaching the IBM ESS 83
loading the current fibre-channel adapter driver 85, 96,
111
locating the WWPN
Compaq host 153
Hewlett-Packard host 154
iSeries host 154
Linux host 156
Novell NetWare host 156
NUMA-Q host 155
RS/6000 and pSeries host 155
Sun host 156
Windows 2000 host 157
Windows NT host 157
logical
paths, for S/390 and zSeries 71
loop
arbitrated 14
LUN
access modes 11
affinity for fibre-channel 15
affinity for SCSI 11
configuring ESS devices with multiple paths 63
configuring VSS and ESS devices with multiple
paths 59
configuring VSS devices with multiple paths 63
targets 11
M
manuals, ordering xvii
mapping hardware
for a Sun host system 100
MC/ServiceGuard on an Hewlett-Packard host system,
configuring 41, 43
Microsoft Windows 2000 operating systems, attaching
hosts with the 137
Microsoft Windows NT operating systems, attaching
hosts with the 121
migrating
ESCON to native FICON 72
FICON bridge to native FICON 75
SAN Data Gateway to fibre-channel 169
SCSI to fibre-channel
AIX host system 162
Hewlett-Packard host system 160
overview 160
preparing your host system 159
software requirements 159
Windows 2000 host system 164
Windows NT host system 164
migration
concurrent 166
nonconcurrent 160
missing-interrupt handler
setting 6
multiple host systems 64
multiple paths per LUN, configuring VSS and ESS
devices 59, 63
N
nonconcurrent migration 160
notices
caution xiii
danger xiii
edition ii
electronic emission 180
environmental xiii
European community 181
FCC statement 181
German 181
Industry Canada 181
Japanese 182
Korean 182
licensed internal code 183
notices statement 179
safety xiii
Taiwan 182
Novell NetWare
host system, locating the WWPN 156
Novell NetWare operating system
attaching the IBM ESS 91, 94
NT
fibre-channel attachment 126
operating systems
attaching hosts with Microsoft Windows 121
NT host system, migrating from SCSI to
fibre-channel 164
NUMA-Q host system
attachment requirements 55
configuring the IOC-0210-54 adapter card 56
NUMA-Q host system (continued)
installing the IOC-0210-54 adapter card 56
locating the WWPN 155
performing a fibre-channel attachment 55
system requirements 56
O
open-systems hosts
fibre-channel 4
SCSI 4
operating system
AIX 4
attaching hosts with the Microsoft Windows NT 121
attaching the IBM ESS to a Linux 83
attaching the IBM ESS to a Novell NetWare 91, 94
device recognition for Compaq 20
HP-UX 4
Microsoft Windows 2000 4
Microsoft Windows NT 4
Novell NetWare 4
OpenVMS 4
OS/400 4
ptx 4
Solaris 4
Tru64 UNIX 4
ordering publications xvii
overview
ESS 2
Q
P
package
RS/6000 and pSeries
before you install 61
installing 58, 61, 62
replacing a prior version of the installation 58, 62
package for RS/6000 and pSeries 58
parameters
Sun host system 103
maxphys 104
sd_io_time 104
sd_max_throttle 104
sd_retry_count 104
path group
S/390 and zSeries 71
paths
logical for S/390 and zSeries 71
per LUN configuring VSS and ESS devices 59, 63
types for S/390 and zSeries 71
PCI fibre-channel adapter driver, installing the JNI 108
performing
fibre-channel attachment to a Compaq host
system 27
FlashCopy
Windows NT host system 126, 142
performing a fibre-channel attachment
ESS
NUMA-Q host system 55
Windows 2000 host system 142
Windows NT host system 126
performing a SCSI attachment to an IBM ESS to a
Compaq host system 19
picture
of the expansion enclosure 3
of the Models E10, E20, F10, and F20 base
enclosure; front and rear views 2
point-to-point topology
description 13
illustration of 13
port identification for S/390 and zSeries TSO
commands 71
port name identification for fibre-channel 153
PPRC
restrictions for open system hosts 16
restrictions for open-systems hosts 11
preface. See About this guide xv
preparing a host system to change from SCSI to
fibre-channel attachment 159
problems, solving SCSI attachment 10
procedures, attachment 65
product
disposal xiii
recycling xiii
publications
ESS xvi
library xvi
ordering xvii
related xviii
QLA1041 adapter cards
configuring 93
installing 93
QLA2100F adapter card
configuring 97, 130, 146
installing 94, 95, 127, 143, 144
QLA2200F adapter card
configuring 86, 97, 130, 146
installing 84, 94, 95, 110, 127, 128, 143, 144
QLA2300F adapter card
configuring 86
installing 84
QLogic adapter cards
configuring 124, 140
installing 124, 140
QLogic QLA2100F adapter card
installing 94, 95, 127, 143, 144
QLogic QLA2200F adapter card
installing 84, 94, 95, 110, 127, 128, 143, 144
QLogic QLA2300F adapter card
installing 84
queuing for SCSI, initiators 9
R
radio frequency energy compliance statement 180
recommended configurations for the AS/400 46
recommended configurations for the iSeries 50
recoverability
configuring for 125, 133, 141, 150
recycling, product xiii
registry, setting the TimeOutValue for Windows
2000 141, 150
registry, setting the TimeOutValue for Windows
NT 125, 134
related publications xviii
replacing a prior version of the installation package for
an RS/6000 and pSeries host system 58, 62
requirements
for attaching multiple host systems
hardware 64
software 64
for Hewlett-Packard, attachment 39, 41
for Novell NetWare, attachment 91
for NUMA-Q, attachment 55
for RS/6000 and pSeries
fibre-channel attachment 61
for RS/6000 and pSeries attachment 57
for the AS/400
SCSI attachment 45
for the iSeries
SCSI attachment 48
for Windows 2000, attachment 137, 143
for Windows NT, attachment 83, 121, 126
restoring data on ESS 66
restrictions for open system hosts
FlashCopy 16
PPRC 16
RS/6000 and pSeries
support for SCSI attachment 4
RS/6000 and pSeries host
locating the WWPN 155
RS/6000 and pSeries host system
attaching the ESS
SCSI 57
attaching the IBM ESS
fibre-channel 60
configuring devices with multiple paths per LUN 59,
63
installing the 2105 host attachment package 58, 62
replacing a prior version of the 2105 host attachment
package 58, 62
UNIX emulation 60
verifying the configuration 59, 63
RS/6000 pSeries hosts without HACMP/6000, attaching
an ESS to multiple RISC 64
S
S/390 and zSeries
cable lengths 71
checking the attachment 72
controller images and controls 70
data transfer 71
directors and channel extenders 71
ESCON cabling specifications 70
ESCON host cables 70
host adapters for ESCON 70
host systems 5
hosts, attaching 69
logical paths 71
S/390 and zSeries (continued)
operating system 5
path groups 71
path types 71
port identification 71
support for 9032 Model 5 ESCON director FICON
bridge feature 70
TSO commands, port identification for 71
safety notices xiii
SAN Data Gateway
migration considerations 170
overview 169
SAN Data Gateway Web site xxii
saving data on the ESS 66
SBUS fibre-channel adapter driver, installing the
JNI 109
script for Hewlett-Packard, instructions for installing the
2105 host install 40, 42
SCSI
attachment, checking 10
attachment problems, solving 10
attachment to an IBM ESS to a Compaq host
system 19
cables, connecting 9
host systems 4
attaching the IBM ESS 7
limitations 11
SCSI to fibre-channel
migrating from 160
on a Hewlett-Packard host system, migrating 160
on a Windows NT host system, migrating 164
on an AIX host system, migrating 162
sd_max_throttle, parameter 104
server
restoring data on the ESS 66
saving data on the ESS 66
server Web site xxii
setting the parameters for a Sun host system 103, 118
setting the TimeOutValue registry for Windows
2000 141, 150
setting the TimeOutValue registry for Windows
NT 125, 134
sites, Web browser xxii
software
open-systems hosts 4
requirements for attaching multiple host systems 64
S/390 and zSeries 5
solving SCSI attachment problems 10
special files for Compaq, device 20
specifications, ESCON cabling for S/390 and
zSeries 70
statement
of compliance
Canada 181
European 181
Federal Communications Commission 181
Japan 182
Korean government Ministry of Communication
(MOC) 182
Taiwan 182
statement of limited warranty 171
storage area networks (SANs), fibre-channel 17
storage server
restoring data 66
saving data 66
Subsystem Device Driver
installing 102
Subsystem Device Driver (SDD)
installing 117
Web site xxii
subsystem emulation, 9337 46
summary of changes xv
Sun host
locating the WWPN 156
Sun host system
attaching the ESS 99, 104
kernel, changing 105
mapping hardware 100
setting parameters 103, 118
support
for 9032 Model 5 ESCON director FICON bridge
feature, for S/390 and zSeries 70
switched-fabric topology
description 14
illustration of 14
Symbios 8751D adapter card
configuring 123, 139
installing 123, 139
systems
attaching hosts with the Microsoft Windows
2000 137
attaching hosts with the Microsoft Windows NT 121
attaching the ESS to an IBM RS/6000 and pSeries
host 60
attaching the ESS to an RS/6000 and pSeries
host 57
attaching the IBM ESS to SCSI host 7
hardware requirements for attaching multiple
host 64
S/390 and zSeries host 5
SCSI host 4
software requirements for attaching multiple host 64
T
tables, list of xi
Taiwan class A compliance statement 182
targets and LUNs 11, 15
TimeOutValue registry for Windows 2000, setting 141,
150
TimeOutValue registry for Windows NT, setting 125,
134
topology
arbitrated loop 14
fibre-channel 13
point-to-point 13
trademarks 180
transfer, data for S/390 and zSeries 71
Tru64 UNIX
Version 4.0x 19
TSO commands, port identification for S/390 and
zSeries 71
U
unit
controls, for S/390 and zSeries 70
images, for S/390 and zSeries 70
UNIX
emulation 60
V
verifying
ESS configuration for an RS/6000 and pSeries host
system 59, 63
VSS devices
configuring with multiple paths per LUN 59, 63
W
warranty
limited 171
Web site
Copy Services xxii
ESS publications xxii
host systems supported by the ESS xxii
IBM storage servers xxii
IBM Subsystem Device Driver xxii
SAN Data Gateway xxii
Windows 2000 host
host system, locating the WWPN 157
Windows 2000 host system
attaching to an ESS 137
attaching with fibre-channel adapters 142
attaching with SCSI adapters 137
configuring for availability and recoverability 141, 150
fibre-channel attachment 142
migrating from SCSI to fibre-channel 164
performing a SCSI attachment 138
Windows NT host
locating the WWPN 157
Windows NT host system
attaching hosts 121
attaching to an ESS 121
configuring for availability and recoverability 125, 133
fibre-channel attachment 126
migrating from SCSI to fibre-channel 164
worldwide port name 153
locating
for a Compaq host 153
for a Hewlett-Packard host 154
for a Linux host 156
for a Novell NetWare host 156
for a NUMA-Q host 155
for a RS/6000 and pSeries host 155
for a Sun host 156
for a Windows 2000 host 157
for a Windows NT host 157
for an iSeries host 154
WWPN. See worldwide port name 153
Z
zSeries. See S/390 and zSeries 5
Readers’ comments — we would like to hear from you
IBM TotalStorage™ Enterprise Storage Server™
Host Systems Attachment Guide
2105 Models E10, E20, F10, and F20
Publication No. SC26-7296-05
Overall, how satisfied are you with the information in this book?

Overall satisfaction: [ ] Very Satisfied  [ ] Satisfied  [ ] Neutral  [ ] Dissatisfied  [ ] Very Dissatisfied

How satisfied are you that the information in this book is
(rate each item: Very Satisfied, Satisfied, Neutral, Dissatisfied, or Very Dissatisfied):

Accurate
Complete
Easy to find
Easy to understand
Well organized
Applicable to your tasks
Please tell us how we can improve this book:
Thank you for your responses. May we contact you?
[ ] Yes  [ ] No
When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any
way it believes appropriate without incurring any obligation to you.
Name
Company or Organization
Phone No.
Address
NO POSTAGE
NECESSARY
IF MAILED IN THE
UNITED STATES
BUSINESS REPLY MAIL
FIRST-CLASS MAIL PERMIT NO. 40 ARMONK, NEW YORK
POSTAGE WILL BE PAID BY ADDRESSEE
International Business Machines Corporation
RCF Processing Department
G26/050
5600 Cottle Road
San Jose, CA 95193-0001
Printed in the United States of America on recycled
paper containing 10% recovered post-consumer fiber.
SC26-7296-05
Spine information: IBM TotalStorage™ Enterprise Storage Server™ ESS Host Systems Attachment Guide