
Tivoli Storage Manager for Windows®
Version 6.1
Administrator’s Guide
SC23-9773-01
Note
Before using this information and the product it supports, read the information in “Notices” on page 951.
This edition applies to Version 6.1 of IBM Tivoli Storage Manager and to all subsequent releases and modifications
until otherwise indicated in new editions or technical newsletters.
© Copyright International Business Machines Corporation 1993, 2009.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.

Contents

Preface
    Who should read this guide
    Publications
        Tivoli Storage Manager publications
        Related hardware publications
    Support information
        Getting technical training
        Searching knowledge bases
        Contacting IBM Software Support
    Conventions used in this guide

New for IBM Tivoli Storage Manager Version 6.1
    New for the server in Version 6.1.2
        Enabled functions
        Licensing changes
        ACSLS functionality for 64-bit Windows systems
        PREVIEW parameter for DSMSERV INSERTDB
    New for the server in Version 6.1.0
        Disabled functions in 6.1.0 and 6.1.1
        Changes to the Version 6.1 Administration Center
        Data deduplication
        Storage devices
        Disaster recovery manager support for active-data pools
        EXPIRE INVENTORY command enhancements
        No-query restore changes
        Server database
        Support for NetApp SnapMirror to Tape feature
        Reporting and monitoring feature
        ODBC driver support
        Backup sets and client node enhancements

Part 1. Tivoli Storage Manager basics

Chapter 1. Tivoli Storage Manager overview
    How client data is stored
        Data-protection options
        Data movement to server storage
        Consolidation of backed-up client data
    How the server manages storage
        Device support
        Data migration through the storage hierarchy
        Removal of expired data

Chapter 2. Tivoli Storage Manager concepts
    Interfaces to Tivoli Storage Manager
    Server options
    Storage configuration and management
        Hard disk devices
        Removable media devices
        Migrating data from disk to tape
        Managing storage pools and volumes
    Windows cluster environments
    Management of client operations
        Managing client nodes
        Managing client data with policies
        Schedules for client operations
    Server maintenance
        Server-operation management
        Server script automation
        Database and recovery-log management
        Sources of information about the server
    Tivoli Storage Manager server networks
        Exporting and importing data
    Protecting Tivoli Storage Manager and client data
        Protecting the server
    Managing servers with the Administration Center
        Using the Administration Center
        Functions not in the Administration Center
        Protecting the Administration Center

Chapter 3. Configuring the server
    Initial configuration overview
        Standard configuration
        Minimal configuration
        Stopping the initial configuration
    Performing the initial configuration
        Initial configuration environment wizard and tasks
        Server initialization wizard
        Device configuration wizard
        Client node configuration wizard
        Media labeling wizard
    Default configuration results
        Data management policy objects
        Storage device and media policy objects
        Objects for Tivoli Storage Manager clients
    Verifying the initial configuration
        Performing pre-backup tasks for remote clients
        Backing up a client
        Restoring client files or directories
        Archiving and retrieving files
    Getting started with administrative tasks
        Managing Tivoli Storage Manager servers
        Installing and configuring backup-archive clients
        Working with schedules on network clients
        Setting client and server communications options
        Registering additional administrators
        Changing administrator passwords

Part 2. Configuring and managing server storage

Chapter 4. Storage device concepts
    Road map for key device-related task information
    Tivoli Storage Manager storage devices
    Tivoli Storage Manager storage objects
        Libraries
        Drives
        Device class
        Library, drive, and device-class objects
        Storage pools and storage-pool volumes
        Data movers
        Paths
        Server objects
    Tivoli Storage Manager volumes
        Volume inventory for an automated library
    Device configurations
        Devices on local area networks
        Devices on storage area networks
        LAN-free data movement
        Network-attached storage
        Mixed device types in libraries
    Removable media mounts and dismounts
    How Tivoli Storage Manager uses and reuses removable media
    Required definitions for storage devices
        Example: Mapping devices to device classes
        Example: Mapping storage pools to device classes and devices
    Planning for server storage
    Server options that affect storage operations

Chapter 5. Magnetic disk devices
    Requirements for disk subsystems
    Random access and sequential access disk devices
    Configuring random access volumes on disk devices
    Configuring FILE sequential volumes on disk devices
    Varying disk volumes online or offline
    Cache copies for files stored on disk
    Freeing space on disk
    Scratch FILE volumes
    Volume history file and volume reuse

Chapter 6. Using devices with the server system
    Attaching a manual drive
    Attaching an automated library device
    Device alias names
        Obtaining device alias names
    Selecting a device driver
        Drivers for IBM devices
        Drivers for non-IBM devices
        Installing device drivers for IBM 3494 libraries
        Installing the Tivoli Storage Manager device driver
        Uninstalling the Tivoli Storage Manager device driver
        Windows device drivers
        Creating a file to list devices and their attributes
        Controlling devices with the Tivoli Storage Manager device driver
    Installing the Centera SDK for Centera shared libraries

Chapter 7. Configuring storage devices
    Device configuration overview
        Windows device configuration wizard
        Manually configuring devices
    Configuring devices using Tivoli Storage Manager commands
        Defining Tivoli Storage Manager storage objects with commands
        Determining backup strategies
        Determining the media and device type for client backups
    Configuring IBM 3494 libraries
        Categories in an IBM 3494 library
        Configuring an IBM 3494 library for use by one server
        Sharing an IBM 3494 library among servers
        Migrating a shared IBM 3494 library to a library manager
        Sharing an IBM 3494 library by static partitioning of drives
    ACSLS-managed libraries
        Configuring an ACSLS-managed library
        Configuring an ACSLS library with a single drive device type
        Configuring an ACSLS library with multiple drive device type
        Setting up an ACSLS library manager server
        Setting up an ACSLS library client server
        Checking in and labeling ACSLS library volumes
    Configuring Tivoli Storage Manager servers to share SAN-connected devices
        Setting up server communications
        Setting up the library manager server
        Setting up the library client servers
    Configuring Tivoli Storage Manager for LAN-free data movement
        Validating your LAN-free configuration
    Configuring Tivoli Storage Manager for NDMP operations
    Troubleshooting device configuration
        Displaying device information
        Displaying the event log to find device errors
        Troubleshooting problems with devices
        Impact of device changes on the SAN
    Defining devices and paths
        Defining libraries
        Defining drives
        Defining data movers
        Defining paths
    Increased block size for writing to tape

Chapter 8. Managing removable media operations
    Defining volumes
    Managing volumes
        Partially-written volumes
        Volume inventory for automated libraries
        Changing the status of database-backup and database-export volumes
    Preparing media for automated libraries
        Labeling media
        Checking media into automated library devices
        Write-once, read-many (WORM) tape media
    Managing media in automated libraries
        Changing the status of automated library volumes
        Removing volumes from automated libraries
        Returning partially-written volumes to automated libraries
        Returning reclaimed volumes to a library (Windows)
        Auditing volume inventories in libraries
        Adding scratch volumes to automated library devices
        Setting up volume overflow locations for automated libraries
        Modifying volume access modes
        Shared libraries
        Category numbers for IBM 3494 libraries
        Media reuse in automated libraries
    Labeling media for manual libraries
    Media management in manual libraries
    Tivoli Storage Manager server requests
        Starting the administrative client as a server console monitor
        Displaying information about volumes that are currently mounted
        Displaying information about mount requests that are pending
        Replying to mount requests
        Canceling mount requests
        Responding to requests for volume checkin
        Dismounting idle volumes
        Dismounting volumes from stand-alone removable-file devices
        Obtaining tape alert messages
    Tape rotation
    Labeling volumes using commands
    Using removable media managers
        Tivoli Storage Manager media-manager support
        Setting up Tivoli Storage Manager to use RSM
        Using external media managers to control media
        Removing devices from media-manager control
        Troubleshooting database errors
    Managing libraries
        Obtaining information about libraries
        Updating automated libraries
        Deleting libraries
    Managing drives
        Requesting information about drives
        Updating drives
        Using drive encryption
        Replacement of tape and optical drives
        Cleaning drives
        Deleting drives
    Managing paths
        Obtaining information about paths
        Updating paths
        Deleting paths
    Managing data movers
        Obtaining information about data movers
        Updating data movers
        Deleting data movers
    Managing disks
        Obtaining information about disks
        Updating disks
        Deleting disks

Chapter 9. Using NDMP for operations with NAS file servers
    NDMP requirements
        Interfaces for NDMP operations
        Data formats for NDMP backup operations
    NDMP operations management
        Managing NAS file server nodes
        Managing data movers used in NDMP operations
        Dedicating a Tivoli Storage Manager drive to NDMP operations
        Storage pool management for NDMP operations
        Managing table of contents
    Configuring Tivoli Storage Manager for NDMP operations
        Configuring Tivoli Storage Manager policy for NDMP operations
        Tape libraries and drives for NDMP operations
        Attaching tape library robotics for NAS-attached libraries
        Registering NAS nodes with the Tivoli Storage Manager server
        Defining a data mover for the NAS file server
        Defining tape drives and paths for NDMP operations
        Labeling and checking tapes into the library
        Scheduling NDMP operations
        Defining virtual file spaces
        Tape-to-tape copy to back up data
        Tape-to-tape copy to move data
    Backing up and restoring NAS file servers using NDMP
        NAS file servers; backups to a single Tivoli Storage Manager server
        Performing NDMP filer to Tivoli Storage Manager server backups
    File-level backup and restore for NDMP operations
        Interfaces for file-level restore
        International characters for NetApp file servers
        File level restore from a directory-level backup image
    Directory-level backup and restore
        Directory-level backup and restore for NDMP operations
        Backing up and restoring with snapshots
    Backup and restore using NetApp SnapMirror to Tape feature
    NDMP backup operations using Celerra file server integrated checkpoints

Chapter 10. Defining device classes
    Sequential-access device types
    Defining tape and optical device classes
        Specifying the estimated capacity of tape and optical volumes
        Specifying recording formats for tape and optical media
        Associating library objects with device classes
        Controlling media-mount operations for tape and optical devices
        Write-once, read-many (WORM) devices
        Defining 3592 device classes
        Device classes for devices supported by operating-system drivers
    Defining device classes for removable media devices
    Defining sequential-access disk (FILE) device classes
        Concurrent access to FILE volumes
        Mitigating performance degradation when backing up or archiving to FILE volumes
        Specifying directories in FILE device-class definitions
        Controlling the size of FILE volumes
        Controlling the number of concurrently open FILE volumes
    Defining LTO device classes
        Mixing LTO drives and media in a library
        Mount limits in LTO mixed-media environments
        Encrypting data using LTO generation 4 drives
    Defining SERVER device classes
        Controlling the size of files created on a target server
        Controlling the number of simultaneous sessions between source and target servers
        Controlling the amount of time a SERVER volume remains mounted
    Defining device classes for StorageTek VolSafe devices
    Defining device classes for CENTERA devices
        Server operations not supported by Centera
        Controlling the number of concurrently open mount points for Centera devices
    Obtaining information about device classes
    How Tivoli Storage Manager fills volumes
        Data compression
        Tape volume capacity and data compression

Chapter 11. Managing storage pools and volumes
    Storage pools
        Primary storage pools
        Copy storage pools
        Active-data pools
        Example: Setting up server storage
        Defining storage pools
        Task tips for storage pools
    Storage pool volumes
        Random-access storage pool volumes
        Sequential-access storage pool volumes
        Preparing volumes for random-access storage pools
        Preparing volumes for sequential-access storage pools
        Updating storage pool volumes
        Access modes for storage pool volumes
    Storage pool hierarchies
        Setting up a storage pool hierarchy
        How the server groups files before storing
        Where the server stores files
        Example: How the server determines where to store files in a hierarchy
        Backing up the data in a storage hierarchy
        Staging client data from disk to tape
    Migrating files in a storage pool hierarchy
        Migrating disk storage pools
        Migrating sequential-access storage pools
        The effect of migration on copy storage pools and active-data pools
    Caching in disk storage pools
        How the server removes cached files
        Effect of caching on storage pool statistics
    Deduplicating data
        Data deduplication overview
        Planning for deduplication
        Setting up storage pools for deduplication
        Controlling duplicate-identification processing
        Displaying statistics about deduplication
        Effects on deduplication when moving or copying data
        Improving performance when reading from deduplicated storage pools
    Writing data simultaneously to primary, copy, and active-data pools
        Simultaneous-write overview
        How simultaneous write works
        Implementing simultaneous write
        Example: Making simultaneous write part of a backup strategy
    Keeping client files together using collocation
        The effects of collocation on operations
        How the server selects volumes with collocation enabled
        How the server selects volumes with collocation disabled
        Collocation on or off settings
        Collocation of copy storage pools and active-data pools
        Planning for and enabling collocation
    Reclaiming space in sequential-access storage pools
        How Tivoli Storage Manager reclamation works
        Reclamation thresholds
        Reclaiming volumes with the most reclaimable space
        Starting reclamation manually or in a schedule
        Optimizing drive usage using multiple concurrent reclamation processes
        Reclaiming volumes in a storage pool with one drive
        Reducing the time to reclaim tape volumes with high capacity
        Reclamation of write-once, read-many (WORM) media
        Controlling reclamation of virtual volumes
        Reclaiming copy storage pools and active-data pools
        How collocation affects reclamation
    Estimating space needs for storage pools
        Estimating space requirements in random-access storage pools
        Estimating space needs in sequential-access storage pools
    Monitoring storage-pool and volume usage
        Monitoring space available in a storage pool
        Monitoring the use of storage pool volumes
        Monitoring migration processes
        Monitoring the use of cache space on disk storage
        Obtaining information about the use of storage space
    Moving data from one volume to another volume
        Data movement within the same storage pool
        Data movement to a different storage pool
        Data movement from off-site volumes in copy storage pools or active-data pools
        Moving data
    Moving data belonging to a client node
        Moving data in all file spaces belonging to one or more nodes
        Moving data in selected file spaces belonging to a single node
        Obtaining information about data-movement processes
        Troubleshooting incomplete data-movement operations
    Renaming storage pools
    Defining copy storage pools and active-data pools
        Example: Defining a copy storage pool
        Properties of primary, copy, and active-data pools
    Deleting storage pools
    Deleting storage pool volumes
        Deleting empty storage pool volumes
        Deleting storage pool volumes that contain data

Part 3. Managing client operations

Chapter 12. Adding client nodes
    Overview of clients and servers as nodes
    Installing client node software
    Registering nodes with the server
        Accepting default closed registration or enabling open registration
        Registering nodes with the IBM Tivoli Storage Manager Client Node Configuration wizard
        Registering nodes with client options sets
        Registering a network-attached storage file server as a node
        Registering a source server as a node on a target server
        Registering an API to the server
    Connecting nodes with the server
        Required client options
        UNIX and Linux client options
    Updating the password for scheduling operations
    Creating or updating a client options file
        Using a text editor to create or configure a client options file
        Using the client configuration wizard to create or update a client options file
        Using the Client Options File wizard (Windows 32-bit clients) to create or update a client options file
        Using the Remote Client Configuration wizard (networked Windows 32-bit clients)
    Comparing network-attached nodes to local nodes
    Adding clients through the administrative command line client
        Enabling open registration
        Example: registering three client nodes using the administrative command line

Chapter 13. Managing client nodes
    Managing client node registration techniques
    Managing nodes
        Managing client nodes across a firewall
        Updating client node information
        Renaming client nodes
        Locking and unlocking client nodes
        Deleting client nodes
        Consolidating multiple clients under a single client node name
        Displaying information about client nodes
        Overview of remote access to web backup-archive clients
        Managing client access authority levels
    Managing file spaces
        Defining client nodes and file spaces
        Supporting Unicode-enabled clients
        Displaying information about file spaces
        Moving data for a client node
        Deleting file spaces
    Managing client option files
        Creating client option sets on the server
        Managing client option sets
    Managing IBM Tivoli Storage Manager sessions
        Displaying information about IBM Tivoli Storage Manager sessions
        Canceling an IBM Tivoli Storage Manager session
        When a client session is automatically canceled
        Disabling or enabling access to the server
        Managing client restartable restore sessions
    Managing IBM Tivoli Storage Manager security
        Securing the server console
        Administrative authority and privilege classes
        Managing access to the server and clients
        Managing IBM Tivoli Storage Manager administrators
        Managing levels of administrative authority
        Managing passwords and login procedures

Chapter 14. Implementing policies for client data
    Basic policy planning
        Reviewing the standard policy
        Getting users started
        Changing policy
        File expiration and expiration processing
    Client operations controlled by policy
        Backup and restore
        Archive and retrieve
        Client migration and recall
    The parts of a policy
        Relationships among clients, storage, and policy
    More on management classes
        Contents of a management class
        Default management classes
        The include-exclude list
        How files and directories are associated with a management class
    How IBM Tivoli Storage Manager selects files for policy operations
        Incremental backup
        Selective backup
        Logical volume backup
        Archive
        Automatic migration from a client node
    How client migration works with backup and archive
    Creating your own policies
        Example: sample policy objects
        Defining and updating a policy domain
        Defining and updating a policy set
        Defining and updating a management class
        Defining and updating a backup copy group
        Defining and updating an archive copy group
        Assigning a default management class
        Validating and activating a policy set
    Assigning client nodes to a policy domain
    Running expiration processing to delete expired files
        Running expiration processing automatically
        Using commands and scheduling to control expiration processing
        Additional expiration processing with disaster recovery manager
    Protection and expiration of archive data
        Data retention protection
        Deletion hold
    Protecting data using the NetApp SnapLock licensed feature
        Reclamation and the SnapLock feature
        Set up SnapLock volumes as Tivoli Storage Manager WORM FILE volumes
    Policy configuration scenarios
        Configuring policy for direct-to-tape backups
        Configuring policy for Tivoli Storage Manager application clients
        Policy for logical volume backups
        Configuring policy for NDMP operations
        Configuring policy for LAN-free data movement
        Policy for IBM Tivoli Storage Manager servers as clients
        Setting policy to enable point-in-time restore for clients
    Distributing policy using enterprise configuration
    Querying policy
        Querying copy groups
        Querying management classes
        Querying policy sets
        Querying policy domains
    Deleting policy
        Deleting copy groups
        Deleting management classes
        Deleting policy sets
        Deleting policy domains

Chapter 15. Managing data for client nodes
    Validating a node’s data
        Performance considerations for data validation
        Validating a node’s data during a client session
    Securing client and server communications
        Setting up SSL
    Encrypting data on tape
        Choosing an encryption method
        Changing your encryption method and hardware configuration
    Securing sensitive client data
        Setting up shredding
        Ensuring that shredding is enforced
    Creating and using client backup sets
        Generating client backup sets on the server
        Restoring backup sets from a backup-archive client
        Moving backup sets to other servers
        Managing client backup sets
    Enabling clients to use subfile backup
        Setting up clients to use subfile backup
        Managing subfile backups
    Optimizing restore operations for clients
        Environment considerations
        Restoring entire file systems
        Restoring parts of file systems
        Restoring databases for applications
        Restoring files to a point-in-time
        Concepts for client restore operations
    Managing archive data
        Archive operations overview
        Managing storage usage for archives

Chapter 16. Scheduling operations for client nodes
    Prerequisites to scheduling operations
    Scheduling a client operation
        Creating Tivoli Storage Manager schedules
        Associating client nodes with schedules
        Starting the scheduler on the clients
        Displaying schedule information
        Checking the status of scheduled operations
    Creating schedules for running command files
    Updating the client options file to automatically generate a new password
    Configuring the scheduler to run under the site-server account
    Overview of the Tivoli Storage Manager scheduler running as a Windows service

Chapter 17. Managing schedules for client nodes
    Managing IBM Tivoli Storage Manager schedules
        Adding new schedules
        Copying existing schedules
        Modifying schedules
        Deleting schedules
        Displaying information about schedules
    Managing node associations with schedules
        Adding new nodes to existing schedules
        Moving nodes from one schedule to another
        Displaying nodes associated with schedules
        Removing nodes from schedules
    Managing event records
        Displaying information about scheduled events
        Managing event records in the server database
    Managing the throughput of scheduled operations
        Modifying the default scheduling mode
        Specifying the schedule period for incremental backup operations
        Balancing the scheduled workload for the server
        Controlling how often client nodes contact the server
    Specifying one-time actions for client nodes
        Determining how long the one-time schedule remains active

Part 4. Maintaining the server

Chapter 18. Managing server operations
    Licensing IBM Tivoli Storage Manager
        Registering licensed features
        Monitoring licenses
    Working with the IBM Tivoli Storage Manager Server and Active Directory
        Configuring the Active Directory schema
    Starting the Tivoli Storage Manager server
        Starting the server on Windows
        Stand-alone mode for server startup
        Starting the IBM Tivoli Storage Manager server as a service
        Starting the IBM Tivoli Storage Manager Server Console
    Halting the server
    Moving Tivoli Storage Manager
    Date and time on the server
    Starting the Tivoli Storage Manager device driver
    Stopping the Tivoli Storage Manager device driver
    Managing server processes
        Requesting information about server processes
        Canceling server processes
        Preemption of client or server operations
    Setting the server name
    Adding or updating server options
        Adding or updating a server option without restarting the server
    Getting help on commands and error messages

Chapter 19. Automating server operations
    Automating a basic administrative command schedule
        Defining the schedule
        Verifying the schedule
    Tailoring schedules
        Using classic and enhanced command schedules
    Copying schedules
    Deleting schedules
    Managing scheduled event records
        Querying events
        Removing event records from the database
    IBM Tivoli Storage Manager server scripts
        Defining a server script
        Managing server scripts
        Running a server script
    Using macros
        Writing commands in a macro
        Writing comments in a macro
        Using continuation characters
        Using substitution variables in a macro
        Running a macro
        Command processing in a macro

Chapter 20. Managing the database and recovery log
    Database and recovery log overview
        Database
        Recovery log
        Where to locate the database and log directories
    Estimating database space requirements
    Estimating recovery log space requirements
        Active log space
        Archive log space
        Archive failover log space
    Monitoring the database and recovery log
    Increasing the size of the database
    Reducing the size of the database
    Increasing the size of the active log
    Backing up the database
        Preparing the system for database backups
        Scheduling database backups
        Backing up the database manually
    Restoring the database
    Moving the database and recovery logs on a server
        Moving both the database and recovery logs
        Moving only the database
        Moving only the active log
        Moving only the archive log
        Moving only the archive failover log
    Adding optional logs after server initialization
    Transaction processing
        Files moved as a group between client and server

Chapter 21. Monitoring the Tivoli Storage Manager server
    Using IBM Tivoli Storage Manager queries to display information
        Requesting information about IBM Tivoli Storage Manager definitions
        Requesting information about client sessions
        Requesting information about server processes
        Requesting information about server settings
        Querying server options
        Querying the system
    Using SQL to query the IBM Tivoli Storage Manager database
        Using SELECT commands
        Using SELECT commands in IBM Tivoli Storage Manager scripts
        Querying the SQL activity summary table
        Creating output for use by another application
    Using the IBM Tivoli Storage Manager activity log
        Requesting information from the activity log
        Setting a retention period for the activity log
        Setting a size limit for the activity log
    Logging IBM Tivoli Storage Manager events to receivers
        Enabling and disabling events
        Beginning and ending event logging
        Logging events to the IBM Tivoli Storage Manager server console and activity log
        Logging events to a file exit and a user exit
        Logging events to the Tivoli Enterprise Console
        Logging events to an SNMP manager
        Logging events to the Windows event log
        Enterprise event logging: logging events to another server
        Querying event logging
        User exit and file exit receivers
    Monitoring errors and diagnosing problems
    Monitoring IBM Tivoli Storage Manager accounting records
    Daily monitoring scenario
    Tivoli Storage Manager reporting and monitoring
        Client activity reports
        Server trend reports
        Monitoring workspaces
        Running the Tivoli Storage Manager client and server reports
        Monitoring Tivoli Storage Manager real-time data
        Modifying the IBM Tivoli Monitoring environment file

Chapter 22. Managing a network of Tivoli Storage Manager servers
    Concepts for managing server networks
        Enterprise configuration
        Command routing
        Central monitoring
        Data storage on another server
        Examples: management of multiple Tivoli Storage Manager servers
    Enterprise-administration planning
    Setting up communications among servers
        Setting up communications for enterprise configuration and enterprise event logging
        Setting up communications for command routing
        Updating and deleting servers
    Setting up enterprise configurations
        Enterprise configuration scenario
        Creating the default profile on a configuration manager
        Creating and changing configuration profiles
        Getting information about profiles
    Subscribing to a profile
        Refreshing configuration information
        Managing problems with configuration refresh
    Returning managed objects to local control
    Setting up administrators for the servers
    Managing problems with synchronization of profiles
    Switching a managed server to a different configuration manager
    Deleting subscribers from a configuration manager
    Renaming a managed server
    Performing tasks on multiple servers
        Working with multiple servers using the Administration Center
        Routing commands
        Setting up server groups
        Querying server availability
    Using virtual volumes to store data on another server
        Setting up source and target servers for virtual volumes
        Performing operations at the source server
        Reconciling virtual volumes and archive files

Chapter 23. Exporting and importing data
    Reviewing data that can be exported and imported
        Exporting restrictions
        Deciding what information to export
        Deciding when to export
    Exporting data directly to another server
        Options to consider before exporting
        Preparing to export to another server for immediate import
        Monitoring the server-to-server export process
        Exporting administrator information to another server
        Exporting client node information to another server
        Exporting policy information to another server
        Exporting server data to another server
    Exporting and importing data using sequential media volumes
        Using preview before exporting or importing data
        Planning for sequential media used to export data
        Exporting tasks
        Importing data from sequential media volumes
        Monitoring export and import processes
    Exporting and importing data from virtual volumes

Part 5. Protecting the server

Chapter 24. Protecting and recovering your server
    Levels of protection
    Storage pool protection overview
        Storage pool restore processing
        Marking volumes as destroyed
    Database and recovery log protection overview
        Types of database restores
        Active log mirroring
    Snapshot database backup
    Backing up storage pools
        Scheduling storage pool backups
        Scenario: scheduling a backup with one copy storage pool
        Backing up data in a Centera storage pool
        Simultaneous writing to copy storage pools
        Using multiple copy storage pools and active-data pools
        Delaying reuse of volumes for recovery purposes
    Backing up the database
        Defining device classes for backups
        Estimating the size of the active log
        Scheduling database backups
        Saving the volume history file
        Saving the device configuration file
        Saving the server options and database and recovery log information
        Running full and incremental backups
        Running snapshot database backups
    Recovering the server using database and storage pool backups
        Restoring a database to a point in time
        Restoring a database to its most current state
        Restoring storage pools
    Restoring storage pool volumes
        Volume restoration
        Fixing an incomplete volume restoration
    Auditing storage pool volumes
        Storage pool volume audit
        Data validation during audit volume processing
        Auditing a disk storage pool volume
        Auditing multiple volumes in a sequential access storage pool
        Auditing a single volume in a sequential access storage pool
        Auditing volumes by date written
        Auditing volumes in a specific storage pool
        Scheduling volume audits
    Fixing damaged files
        Ensuring the integrity of files
        Restoring damaged files
    Backup and recovery scenarios
        Protecting the database and storage pools
        Recovering to a point-in-time from a disaster
        Recovering a lost or damaged storage pool volume
    Restoring a library manager database
    Restoring a library client database

Chapter 25. Using disaster recovery manager
    Querying defaults for the disaster recovery plan file
        Specifying defaults for the disaster recovery plan file
        Specifying defaults for offsite recovery media management
    Specifying recovery instructions for your site
    Specifying information about your server and client node machines
    Specifying recovery media for client machines
    Creating and storing the disaster recovery plan
        Storing the disaster recovery plan locally
        Storing the disaster recovery plan on a target server
        Disaster recovery plan environmental considerations
    Managing disaster recovery plan files stored on target servers
        Displaying information about recovery plan files
        Displaying the contents of a recovery plan file
        Restoring a recovery plan file
        Expiring recovery plan files automatically
        Deleting recovery plan files manually
    Moving backup media
        Moving copy storage pool and active-data pool volumes off-site
        Moving copy storage pool and active-data pool volumes on-site
    Summary of disaster recovery manager daily tasks
    Staying prepared for a disaster
    Recovering from a disaster
        Server recovery scenario
        Client recovery scenario
    Recovering with different hardware at the recovery site
        Automated SCSI library at the original and recovery sites
        Automated SCSI library at the original site and a manual SCSI library at the recovery site
        Managing copy storage pool volumes and active-data pool volumes at the recovery site
    Disaster recovery manager checklist
    The disaster recovery plan file
        Breaking out a disaster recovery plan file
        Structure of the disaster recovery plan file
        Example disaster recovery plan file

Part 6. Appendixes

Appendix A. Comparing Tivoli Storage Manager and Tivoli Storage Manager Express
    Key terminology changes
    Configuration objects migrated from Tivoli Storage Manager Express
    Resources for more information
    System status
    Automatic backups
    Restore and manual backup
    Copy backups to media
    Libraries, drives, and tapes
    Backup server settings
    Computers and applications
    Reports

Appendix B. Configuring clusters
    Cluster nodes
    MSCS virtual servers
        Planning for cluster hardware and software configuration
    SCSI tape failover
        Setting up SCSI failover
        Shared SCSI bus termination
    Fibre tape failover
    Configuration considerations
    Cluster configuration planning
        Setting up MSCS clusters with Tivoli Storage Manager
        Setting up Tivoli Storage Manager clusters with VCS
        Clustering configuration worksheet
    Administrator’s tasks for cluster creation
        Adding a node to an existing cluster
        Migrating an existing Tivoli Storage Manager server into a cluster
        Adding a Tivoli Storage Manager server with backup and restore
        Managing Tivoli Storage Manager on a cluster
        Managing tape failover in a cluster

Appendix C. External media management interface description
    CreateProcess call
    Processing during server initialization
    Processing for mount requests
    Processing for release requests
    Processing for batch requests
    Error handling
    Begin batch request
    End batch request
    Volume query request
    Initialization requests
    Volume eject request
    Volume release request
    Volume mount request
    Volume dismount request

Appendix D. User exit and file exit receivers
    Sample user-exit declarations
    Sample user exit program
    Readable text file exit (FILETEXTEXIT) format

Appendix E. Configuring Secure Sockets Layer for the Integrated Solutions Console
    Enabling SSL for the Integrated Solutions Console official certificates
        Creating the SSL server key file
        Creating the SSL client key file
        Creating the SSL server trust file
        Creating the client trust file
        Creating the JACL script in <iscroot>\AppServer\bin
        Modifying the wsadmin.properties file to reflect the correct SOAP port
        Running wsadmin on the JACL script
        Modifying the configservice.properties file
        Modifying the web.xml file
        Stopping the ISC_Portal
        Modifying the soap.client.props file
        Starting the ISC_Portal
        Confirming your SSL setup
    Setting up LDAP over SSL

Appendix F. Configuring Active Directory
    Overview: using Tivoli Storage Manager with Active Directory
    Configuring Active Directory
        Active Directory configuration for a Windows server
        Performing the one-time configuration
        Configuring each Tivoli Storage Manager server instance
    Storage and replication impact

Appendix G. Accessibility features for Tivoli Storage Manager

Notices
    Trademarks

Glossary

Index

Preface
IBM® Tivoli® Storage Manager is a client/server program that provides storage
management solutions to customers in a multi-vendor computer environment. IBM
Tivoli Storage Manager provides an automated, centrally scheduled,
policy-managed backup, archive, and space-management facility for file servers
and workstations.
Who should read this guide
This guide is intended for anyone who is registered as an administrator for Tivoli
Storage Manager. A single administrator can manage Tivoli Storage Manager, or
several people can share administrative responsibilities.
You should be familiar with the operating system on which the server resides and
the communication protocols required for the client/server environment. You also
need to understand the storage management practices of your organization, such
as how you are currently backing up workstation files and how you are using
storage devices.
Publications
Tivoli Storage Manager publications and other related publications are available
online.
You can search all publications in the Tivoli Storage Manager Information Center:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6.
You can download PDF versions of publications from the Tivoli Storage Manager
Information Center or from the IBM Publications Center at
http://www.ibm.com/shop/publications/order/.
You can also order some related publications from the IBM Publications Center
Web site. The Web site provides information for ordering publications from
countries other than the United States. In the United States, you can order
publications by calling 800-879-2755.
Tivoli Storage Manager publications
Publications are available for the server, storage agent, client, and Data Protection.

Table 1. Tivoli Storage Manager server publications

Publication title                                                                          Order number
IBM Tivoli Storage Manager Messages                                                        GC23-9787
IBM Tivoli Storage Manager Performance Tuning Guide                                        GC23-9788
IBM Tivoli Storage Manager Problem Determination Guide                                     GC23-9789
IBM Tivoli Storage Manager for AIX Installation Guide                                      GC23-9781
IBM Tivoli Storage Manager for AIX Administrator’s Guide                                   SC23-9769
IBM Tivoli Storage Manager for AIX Administrator’s Reference                               SC23-9775
IBM Tivoli Storage Manager for HP-UX Installation Guide                                    GC23-9782
IBM Tivoli Storage Manager for HP-UX Administrator’s Guide                                 SC23-9770
IBM Tivoli Storage Manager for HP-UX Administrator’s Reference                             SC23-9776
IBM Tivoli Storage Manager for Linux Installation Guide                                    GC23-9783
IBM Tivoli Storage Manager for Linux Administrator’s Guide                                 SC23-9771
IBM Tivoli Storage Manager for Linux Administrator’s Reference                             SC23-9777
IBM Tivoli Storage Manager for Sun Solaris Installation Guide                              GC23-9784
IBM Tivoli Storage Manager for Sun Solaris Administrator’s Guide                           SC23-9772
IBM Tivoli Storage Manager for Sun Solaris Administrator’s Reference                       SC23-9778
IBM Tivoli Storage Manager for Windows Installation Guide                                  GC23-9785
IBM Tivoli Storage Manager for Windows Administrator’s Guide                               SC23-9773
IBM Tivoli Storage Manager for Windows Administrator’s Reference                           SC23-9779
IBM Tivoli Storage Manager Server Upgrade Guide                                            SC23-9554
IBM Tivoli Storage Manager for System Backup and Recovery Installation and User’s Guide    SC32-6543
Table 2. Tivoli Storage Manager storage agent publications

Publication title                                                      Order number
IBM Tivoli Storage Manager for SAN for AIX Storage Agent User’s Guide  SC23-9797
IBM Tivoli Storage Manager for SAN for HP-UX Storage Agent User’s
Guide                                                                  SC23-9798
IBM Tivoli Storage Manager for SAN for Linux Storage Agent User’s
Guide                                                                  SC23-9799
IBM Tivoli Storage Manager for SAN for Sun Solaris Storage Agent
User’s Guide                                                           SC23-9800
IBM Tivoli Storage Manager for SAN for Windows Storage Agent User’s
Guide                                                                  SC23-9553
Table 3. Tivoli Storage Manager client publications

Publication title                                                      Order number
IBM Tivoli Storage Manager for UNIX and Linux: Backup-Archive Clients
Installation and User’s Guide                                          SC23-9791
IBM Tivoli Storage Manager for Windows: Backup-Archive Clients
Installation and User’s Guide                                          SC23-9792
IBM Tivoli Storage Manager for Space Management for UNIX and Linux:
User’s Guide                                                           SC23-9794
IBM Tivoli Storage Manager for HSM for Windows Administration Guide    SC23-9795
IBM Tivoli Storage Manager Using the Application Program Interface     SC23-9793
Program Directory for IBM Tivoli Storage Manager z/OS Edition
Backup-Archive Client                                                  GI11-8912
Program Directory for IBM Tivoli Storage Manager z/OS Edition
Application Program Interface                                          GI11-8911
Table 4. Tivoli Storage Manager Data Protection publications

Publication title                                                      Order number
IBM Tivoli Storage Manager for Advanced Copy Services: Data Protection
for Snapshot Devices Installation and User’s Guide                     SC33-8331
IBM Tivoli Storage Manager for Databases: Data Protection for
Microsoft SQL Server Installation and User’s Guide                     SC32-9059
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle
for UNIX and Linux Installation and User’s Guide                       SC32-9064
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle
for Windows Installation and User’s Guide                              SC32-9065
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data
Protection for SAP Installation and User’s Guide for DB2               SC33-6341
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data
Protection for SAP Installation and User’s Guide for Oracle            SC33-6340
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino
for UNIX, Linux, and OS/400 Installation and User’s Guide              SC32-9056
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino
for Windows Installation and User’s Guide                              SC32-9057
IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft
Exchange Server Installation and User’s Guide                          SC23-9796
Program Directory for IBM Tivoli Storage Manager for Mail (Data
Protection for Lotus Domino)                                           GI11-8909
Related hardware publications
The following table lists publications for related IBM hardware products.
For additional information on hardware, see the resource library for tape products
at http://www.ibm.com/systems/storage/tape/library.html.
Title                                                                  Order Number
IBM TotalStorage 3494 Tape Library Introduction and Planning Guide     GA32-0448
IBM TotalStorage 3494 Tape Library Operator Guide                      GA32-0449
IBM 3490E Model E01 and E11 User’s Guide                               GA32-0298
IBM Tape Device Drivers Installation and User’s Guide                  GC27-2130
IBM TotalStorage Enterprise Tape System 3590 Operator Guide            GA32-0330
IBM TotalStorage Enterprise Tape System 3592 Operator Guide            GA32-0465
Support information
You can find support information for IBM products from a variety of sources.
Getting technical training
Information about Tivoli technical training courses is available online.
Go to http://www.ibm.com/software/tivoli/education/.
Searching knowledge bases
If you have a problem with Tivoli Storage Manager, there are several knowledge
bases that you can search.
You can begin with the Tivoli Storage Manager Information Center at
http://publib.boulder.ibm.com/infocenter/tsminfo/v6. From this Web site, you
can search all Tivoli Storage Manager publications.
Searching the Internet
If you cannot find an answer to your question in the Tivoli Storage Manager
information center, search the Internet for the latest, most complete information
that might help you resolve your problem.
To search multiple Internet resources, go to the support Web site for Tivoli Storage
Manager at http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html. From there, you can search a variety of resources
including:
v IBM technotes
v IBM downloads
v IBM Redbooks®
If you still cannot find the solution to the problem, you can search forums and
newsgroups on the Internet for the latest information that might help you resolve
your problem. To share your experiences and learn from others in the user
community, go to the Tivoli Storage Manager wiki at http://www.ibm.com/
developerworks/wikis/display/tivolistoragemanager/Home.
Using IBM Support Assistant
At no additional cost, you can install the IBM Support Assistant, a stand-alone
application, on any workstation. You can then enhance the application by
installing product-specific plug-in modules for the IBM products that you use.
The IBM Support Assistant helps you gather support information when you need
to open a problem management record (PMR), which you can then use to track the
problem. The product-specific plug-in modules provide you with the following
resources:
v Support links
v Education links
v Ability to submit problem management reports
For more information, see the IBM Support Assistant Web site at
http://www.ibm.com/software/support/isa/.
Finding product fixes
A product fix to resolve your problem might be available from the IBM Software
Support Web site.
You can determine what fixes are available by checking the Web site:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/
tivoli/products/storage-mgr/product-links.html.
2. Click the Support Pages link for your Tivoli Storage Manager product.
3. Click Download, and then click Fixes by version.
Getting e-mail notification of product fixes
You can get notifications about fixes and other news about IBM products.
To receive weekly e-mail notifications about fixes and other news about IBM
products, follow these steps:
1. From the support page for any IBM product, click My support in the
upper-right corner of the page.
2. If you have already registered, skip to the next step. If you have not registered,
click Register in the upper-right corner of the support page to establish your
user ID and password.
3. Sign in to My support.
4. On the My support page, click Edit profiles in the left navigation pane, and
scroll to Select Mail Preferences. Select a product family and check the
appropriate boxes for the type of information you want.
5. Click Submit.
6. For e-mail notification for other products, repeat steps 4 and 5.
Contacting IBM Software Support
You can contact IBM Software Support if you have an active IBM software
maintenance contract and if you are authorized to submit problems to IBM.
Before you contact IBM Software Support, follow these steps:
1. Set up a software maintenance contract.
2. Determine the business impact of your problem.
3. Describe your problem and gather background information.
Then see “Submit the problem to IBM Software Support” on page xix for
information on contacting IBM Software Support.
Setting up a software maintenance contract
Set up a software maintenance contract. The type of contract that you need
depends on the type of product you have.
v For IBM distributed software products (including, but not limited to, Tivoli,
Lotus®, and Rational® products, as well as IBM DB2® and IBM WebSphere®
products that run on Microsoft® Windows® or UNIX® operating systems), enroll
in IBM Passport Advantage® in one of the following ways:
– Online: Go to the Passport Advantage Web page at http://www.ibm.com/
software/lotus/passportadvantage/, click How to enroll, and follow the
instructions.
– By Phone: For the phone number to call in your country, go to the IBM
Software Support Handbook Web page at http://www14.software.ibm.com/
webapp/set2/sas/f/handbook/home.html and click Contacts.
v For server software products, you can purchase a software maintenance
agreement by working directly with an IBM sales representative or an IBM
Business Partner. For more information about support for server software
products, go to the IBM Technical support advantage Web page at
http://www.ibm.com/servers/.
If you are not sure what type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States. For a list of telephone
numbers of people who provide support for your location, go to the Software
Support Handbook page at http://www14.software.ibm.com/webapp/set2/sas/f/
handbook/home.html.
Determine the business impact
When you report a problem to IBM, you are asked to supply a severity level.
Therefore, you need to understand and assess the business impact of the problem
you are reporting.
Severity 1
Critical business impact: You are unable to use the program,
resulting in a critical impact on operations. This condition
requires an immediate solution.
Severity 2
Significant business impact: The program is usable but is
severely limited.
Severity 3
Some business impact: The program is usable with less
significant features (not critical to operations) unavailable.
Severity 4
Minimal business impact: The problem causes little impact on
operations, or a reasonable circumvention to the problem has
been implemented.
Describe the problem and gather background information
When explaining a problem to IBM, it is helpful to be as specific as possible.
Include all relevant background information so that IBM Software Support
specialists can help you solve the problem efficiently.
To save time, know the answers to these questions:
v What software versions were you running when the problem occurred?
v Do you have logs, traces, and messages that are related to the problem
symptoms? IBM Software Support is likely to ask for this information.
v Can the problem be recreated? If so, what steps led to the failure?
v Have any changes been made to the system? For example, hardware, operating
system, networking software, and so on.
v Are you currently using a workaround for this problem? If so, be prepared to
explain it when you report the problem.
Submit the problem to IBM Software Support
You can submit the problem to IBM Software Support online or by phone.
Online
Go to the IBM Software Support Web site at http://www.ibm.com/
software/support/probsub.html. Enter your information into the
appropriate problem submission tool.
By phone
For the phone number to call in your country, go to the contacts page of
the IBM Software Support Handbook at http://www14.software.ibm.com/
webapp/set2/sas/f/handbook/home.html.
If the problem that you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. If a workaround is
possible, IBM Software Support provides one for you to implement until the APAR
is resolved and a fix is delivered. IBM publishes resolved APARs on the Tivoli
Storage Manager product support Web site at http://www.ibm.com/software/
sysmgmt/products/support/IBMTivoliStorageManager.html, so that users who
experience the same problem can benefit from the same resolutions.
Conventions used in this guide
v Command to be entered on the Windows command line:
> dsmadmc
v Command to be entered on the command line of an administrative client:
query devclass
In the usage and descriptions for administrative commands, the term characters
corresponds to the number of bytes available to store an item. For languages in
which it takes a single byte to represent a displayable character, the character to
byte ratio is 1 to 1. However, for DBCS and other multi-byte languages, the
reference to characters refers only to the number of bytes available for the item and
may represent fewer actual characters.
New for IBM Tivoli Storage Manager Version 6.1
Many features in the Tivoli Storage Manager Version 6.1 server are new for
previous Tivoli Storage Manager users.
New for the server in Version 6.1.2
Server fix pack 6.1.2 contains several new features, in addition to fixes for
problems.
Enabled functions
Functions that were disabled in Tivoli Storage Manager V6.1.0 and V6.1.1 are now
enabled in Version 6.1.2.
Until Tivoli Storage Manager V6.1.2, a database that contained backup sets or
tables of contents (TOCs) could not be upgraded to V6. These restrictions no
longer exist.
In addition, the following commands have been enabled in Version 6.1.2 (a brief
usage sketch follows the list):
v BACKUP NAS client command if the TOC parameter specifies PREFERRED or
YES
v BACKUP NODE if the TOC parameter specifies PREFERRED or YES
v DEFINE BACKUPSET
v GENERATE BACKUPSET
v GENERATE BACKUPSETTOC
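For example, after moving to 6.1.2 you might again generate a backup set or
request a NAS backup with a table of contents. This is only an illustrative
sketch; the node names (CLIENT1, NASNODE1), backup set prefix (WEEKLY), file
space (/vol/vol0), and device class (DLTCLASS) are examples, not values defined
by the product:
   generate backupset client1 weekly * devclass=dltclass
   backup node nasnode1 /vol/vol0 toc=yes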
Licensing changes
Following the release of Tivoli Storage Manager Version 6.1.2, Tivoli Storage
Manager Version 6.1.0 will no longer be available for download or purchase. Due
to this unique circumstance, certain 6.1.2 packages will be available with a license
module. See the following information for details on how this situation affects your
environment.
Existing Version 6.1.0 and 6.1.1 users
If you have installed version 6.1.0 and are using a version 6.1.0 license, you
can download the 6.1.2 package from the Service FTP site. You can install
the 6.1.2 package using the instructions in Installing a Tivoli Storage
Manager fix pack.
Version 5 users
If you have not yet installed a version of the V6.1 server, when you
upgrade, you must upgrade directly to version 6.1.2. Version 6.1.2 is
available with a license module from Passport Advantage or from your
Tivoli Storage Manager sales representative. You can upgrade from V5 to
V6.1.2 using the instructions in Upgrading the server.
New users
Version 6.1.2 is available from Passport Advantage or from your Tivoli
Storage Manager sales representative. You can install version 6.1.2 using
the instructions in Installing Tivoli Storage Manager.
ACSLS functionality for 64-bit Windows systems
Tivoli Storage Manager Version 6.1.0 requires the installation of StorageTek Library
Attach software to utilize Sun StorageTek Automated Cartridge System Library
Software (ACSLS) functions for the Windows operating system.
Support for ACSLS library functions is now available for both 32-bit and 64-bit
Windows operating systems in fix pack level 6.1.2.
PREVIEW parameter for DSMSERV INSERTDB
A PREVIEW parameter is available for the DSMSERV INSERTDB utility in Tivoli
Storage Manager fix pack level 6.1.2. The DSMSERV INSERTDB utility is used only
as part of the process for upgrading a V5 Tivoli Storage Manager server to V6.1.
When you use the PREVIEW=YES parameter, the operation includes all the steps
of the process, except for the actual insertion of data into the new database.
When you preview the insertion operation, you can quickly verify that the source
database is readable. You can also identify any data constraint violations before
you run the actual upgrade process for your server.
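For example, a media-method upgrade might first be previewed as follows. This is
an illustrative sketch only; the manifest file name is an example:
   dsmserv insertdb manifest=dbupgrade.manifest preview=yes
The utility reads the extracted database but stops short of inserting any data,
reporting readability problems and constraint violations that it finds.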
New for the server in Version 6.1.0
Tivoli Storage Manager server version 6.1.0 contains many new features and
changes.
Disabled functions in 6.1.0 and 6.1.1
Some functions have been disabled in Tivoli Storage Manager 6.1.0 and 6.1.1.
Note: The restrictions described here have been removed in Tivoli Storage Manager V6.1.2.
If your server is at the V6.1.0 or V6.1.1 level, migrate to V6.1.2 to enable these functions.
A database containing backup sets or tables of contents (TOCs) cannot be
upgraded to V6.1.0 or 6.1.1. The database upgrade utilities check for defined
backup sets and existing TOCs. If either exists, the upgrade stops and a message is
issued saying that the upgrade is not possible at the time. In addition, any
operation on a V6.1 server that tries to create or load a TOC fails.
When support is restored by a future V6.1 fix pack, the database upgrade and all
backup set and TOC operations will be fully enabled.
In the meantime, the following commands have been disabled:
v BACKUP NAS client command if the TOC parameter specifies PREFERRED or
YES
v BACKUP NODE if the TOC parameter specifies PREFERRED or YES
v DEFINE BACKUPSET
v GENERATE BACKUPSET
v GENERATE BACKUPSETTOC
Changes to the Version 6.1 Administration Center
Many features in the Tivoli Storage Manager Administration Center Version 6.1 are
new for previous users.
Updated Integrated Solutions Console
In V6.1, the Administration Center is hosted by the IBM Integrated Solutions
Console (ISC) Advanced Edition Version 7.1. After the Integrated Solutions
Console installation completes, open a Web browser and enter the following URL,
which displays the logon screen for the Integrated Solutions Console:
https://local_host:9043/ibm/console. This screen indicates a successful
installation of the Integrated Solutions Console.
To learn about console updates:
1. Start the ISC.
2. Click Help in the ISC banner.
3. In the Help navigation tree, click Console Updates.
WebSphere Windows service
In V6.1, the WebSphere Windows service is named TSM Administration Center TsmAC.
Identify managing servers
The table of servers that is the hub of the enterprise-management work page has a
column that identifies the managing server, if one exists, for each listed server. By
sorting or filtering on the column, you can display the set of servers that are
managed by a given server.
Hover help for table links
The Administration Center typically displays Tivoli Storage Manager objects in a
table. In V6.1, when the cursor hovers over an object image, hover-help text is
displayed. The hover help identifies the default action that results when you click
the link that is associated with the object.
Links to information about server messages and Administration
Center messages
When a problem or issue occurs with the server or Administration Center, you are
immediately notified and provided with a brief message about the problem or
issue. The message number is also provided. In V6.1, you can obtain detailed
information about a message by clicking the link that is associated with the
message number. The information is displayed in a new browser window.
Maintenance script enhancements
Tivoli Storage Manager uses a maintenance script to perform scheduled
maintenance tasks. In V6.1, you can generate a maintenance script in one of two
styles: predefined and custom.
A predefined maintenance script is one that is generated through a wizard. This
script contains standard commands that cannot be altered. A predefined script can
only be modified in the wizard.
A custom maintenance script is created using the Administration Center
maintenance script editor. To have more control of your maintenance tasks, you
can modify the commands that you specify. You can also use the editor to update
your custom maintenance script.
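As an illustration, a custom maintenance script is simply a sequence of ordinary
server commands. The following sketch assumes example object names (the
BACKUPPOOL and COPYPOOL storage pools and the DBBACK device class);
substitute the objects defined at your site:
   /* Example custom maintenance script (names are illustrative) */
   backup stgpool backuppool copypool maxprocess=2 wait=yes
   backup db devclass=dbback type=full wait=yes
   expire inventory resource=2 wait=yes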
Client nodes and backup sets enhancements
The redesigned Administration Center displays information about backup sets,
client nodes, and client-node groups in one portlet. The design includes search
functions that you can use to find and display information more quickly. When
you select a client node, a summary panel is displayed with the current operation
status, server actions, and client-node actions.
The work item Client nodes and backup sets appears in the ISC navigation tree.
Session and process information available in the health monitor
The Administration Center health monitor now includes information about server
processes and sessions. The information is also available in the properties
notebooks for servers.
Centralized server-connection management
In V6.1, server-connection tasks, such as adding a server connection, changing a
password, and creating a server instance, are consolidated in a single location: the
Manage Servers work item, located in the ISC navigation tree.
With actions available in this work item, you can quickly upload server-connection
information to the Administration Center using an XML file. This file can
optionally include a set of server credentials for multiple servers. To help create an
XML file, you can download a list of server connections, without the credential
information.
Changes to management-class activation
In V6.1, Tivoli Storage Manager no longer activates changes to existing
management classes automatically. You must activate the changes manually. Before
the changes take effect, they are validated. Results of the validation are displayed.
You or another administrator can review them, and then either confirm or cancel
the activation.
Because changes are manually activated, you can prepare the management class in
advance and activate the changes at an appropriate time.
Data deduplication
Data deduplication is a method of eliminating redundant data in sequential-access
disk (FILE) primary, copy, and active-data storage pools. One unique instance of
the data is retained on storage media, and redundant data is replaced with a
pointer to the unique data copy. The goal of deduplication is to reduce the overall
amount of time that is required to retrieve data by letting you store more data on
disk, rather than on tape.
Data deduplication in Tivoli Storage Manager is a two-phase process. In the first
phase, duplicate data is identified. During the second phase, duplicate data is
removed by certain server processes, such as reclamation processing of
storage-pool volumes. By default, a duplicate-identification process begins
automatically after you define a storage pool for deduplication. (If you specify a
duplicate-identification process when you update a storage pool, it also starts
automatically.) Because duplicate identification requires extra disk I/O and CPU
resources, Tivoli Storage Manager lets you control when identification begins as
well as the number and duration of processes.
You can deduplicate any type of data except encrypted data. You can deduplicate
client backup and archive data, Tivoli Data Protection data, and so on. Tivoli
Storage Manager can deduplicate whole files as well as files that are members of
an aggregate. You can deduplicate data that has already been stored. No additional
backup, archive, or migration is required.
For optimal efficiency when deduplicating, upgrade to the version 6.1
backup-archive client.
Restriction: You can use the data-deduplication feature with Tivoli Storage
Manager Extended Edition only.
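As a sketch of how these controls fit together, the following commands define a
deduplicated FILE storage pool and then adjust duplicate identification. The
pool name (FILEPOOL) and device class (FILECLASS) are illustrative examples:
   define stgpool filepool fileclass maxscratch=100 deduplicate=yes
   identify duplicates filepool numprocess=2 duration=60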
Related tasks
“Deduplicating data” on page 319
Data deduplication eliminates redundant data in sequential-access disk (FILE)
primary, copy, and active-data storage pools. One unique instance of the data is
retained on storage media, and redundant data is replaced with a pointer to a
unique data copy.
Storage devices
New device support and other changes to storage devices are available in Tivoli
Storage Manager Version 6.1.
ACSLS functionality for Windows systems
Tivoli Storage Manager Version 6.1.0 requires the installation of StorageTek Library
Attach software to utilize Sun StorageTek Automated Cartridge System Library
Software (ACSLS) functions for the Windows operating system.
Support for ACSLS library functions is only available on 32-bit Windows operating
systems in version 6.1.0.
Support for HP and Quantum DAT160 drives and media
With Tivoli Storage Manager, you can now use HP and Quantum DAT160 (DDS6)
tape drives and media. New recording formats are available for the 4MM device
type.
Support for Sun StorageTek T10000 drives, T10000B drives, and
T10000 media
With Tivoli Storage Manager, you can now use Sun StorageTek T10000 drives,
T10000B drives, and T10000 media. New recording formats are available for the
ECARTRIDGE device type. Tivoli Storage Manager supports Volsafe media with
the Sun StorageTek T10000 and T10000B drives.
Disaster recovery manager support for active-data pools
To restore your client systems more quickly and efficiently, you can now use
active-data pools in your recovery plans and procedures.
Active-data pools are storage pools that contain only active versions of client
backup data. As with copy storage pool volumes, disaster recovery manager lets you:
v Specify the names of active-data pool volumes to be managed by the disaster
recovery manager.
v Recycle on-site and off-site active-data pool volumes according to server policies
and processes.
v Include active-data pool volumes in the scripts, macros, and documentation that
is part of the recovery plan file.
v Track and manage active-data pool media as required by your operations.
By default, active-data pools are not eligible for processing at the time of
installation. Copy storage pools, on the other hand, are processed at installation
time even if you have not explicitly specified a copy storage pool or pools to be
managed.
Related tasks
Chapter 25, “Using disaster recovery manager,” on page 815
You can use the disaster recovery manager (DRM) function to prepare a plan that
can help you to recover your applications if a disaster occurs.
EXPIRE INVENTORY command enhancements
The EXPIRE INVENTORY command is now enhanced with new functionality.
The additional parameters that you can now use are NODE, DOMAIN, TYPE,
DURATION, and RESOURCE. You can use these parameters to target specific
client nodes and domains, and also to determine the type of data to be processed.
You can use the RESOURCE parameter to specify the number of parallel processes
that you want to run within the single EXPIRE INVENTORY process. You can run
up to ten threads at one time, but if you are processing only one node, only one
thread is used.
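For example, the following command expires only backup data for two nodes in the
STANDARD domain, runs four parallel threads, and stops after 60 minutes. The
node names are illustrative:
   expire inventory node=acct01,acct02 domain=standard type=backup duration=60 resource=4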
No-query restore changes
The no-query restore (NQR) function and the internal algorithms responsible for
NQR were changed to take advantage of DB2 capabilities and to improve
performance.
The NQR function has been rewritten to resolve a performance problem
encountered when restoring a small number of objects for a client file system with
a large number of backup objects spread across a large number of Tivoli Storage
Manager server storage pool volumes. NQR performance is now comparable to
that of the classic restore under these conditions. NQR now performs a volume
determination phase that must be completed before any objects are restored from
DISK, FILE, or tape storage volumes.
Server database
Tivoli Storage Manager version 6.1 provides a new server database. Advantages
include automatic statistics collection and database reorganization, full-function
SQL queries, and elimination of the need for offline audits of the database.
Upgrading to V6.1 requires that data in a current Tivoli Storage Manager server
database be extracted and then inserted into the new database structure. Tivoli
Storage Manager provides utilities to perform the process.
Support for NetApp SnapMirror to Tape feature
With Tivoli Storage Manager you can create SnapMirror to Tape images of file
systems on NetApp file servers.
SnapMirror to Tape provides an alternative method for backing up very large
NetApp file systems. Because this backup method has limitations, use it only
when copying very large NetApp file systems to secondary storage for disaster
recovery purposes.
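For example, a SnapMirror to Tape backup is requested through the BACKUP NODE
command by specifying the backup type. The node and file system names here are
illustrative:
   backup node netapp1 /vol/vol1 type=snapmirror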
Related concepts
“Backup and restore using NetApp SnapMirror to Tape feature” on page 248
You can back up very large NetApp file systems using the NetApp SnapMirror to
Tape feature. Using a block-level copy of data for backup, the SnapMirror to Tape
method is faster than a traditional Network Data Management Protocol (NDMP)
full backup and can be used when NDMP full backups are impractical.
Reporting and monitoring feature
The reporting and monitoring feature uses a combination of the Tivoli Common
Reporting tool, IBM Tivoli Monitoring, and the IBM Tivoli Data Warehouse to offer
you reports and real-time monitoring information about Tivoli Storage Manager
servers and client activity.
Related concepts
“Tivoli Storage Manager reporting and monitoring” on page 665
The IBM Tivoli Storage Manager reporting and monitoring feature uses a
combination of reporting and monitoring components to offer you historical
reports and real-time monitoring information for the IBM Tivoli Storage Manager
servers and clients.
ODBC driver support
Tivoli Storage Manager Version 6.1 uses the DB2® open database connectivity
(ODBC) driver to query the database and display the results.
The Tivoli Storage Manager ODBC driver is no longer supported with the server.
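For example, a standard SQL SELECT statement can be issued from the
administrative command-line client. The administrator ID and password are
illustrative:
   dsmadmc -id=admin -password=secret "select node_name, platform_name from nodes"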
Related tasks
“Using SQL to query the IBM Tivoli Storage Manager database” on page 636
You can use a standard SQL SELECT statement to get information from the
database.
Backup sets and client node enhancements
The Administration Center now displays the backup sets, client nodes, and client
node groups from one portlet.
You can view all of the client nodes, view them by server, or search for a node
from the three available client node tabs. The All Client Nodes tab lists all of the
nodes and has a Filter feature to help in your search. The filter works differently
from the other table filters in the Administration Center: you do not have to
press the Enter key to get results. The search starts when you enter a text
character in the filter field, and as you add characters, the results are
filtered further.
When you select a client node, a summary panel is displayed with the current
operation status, server actions, and client node actions. You can also access the
Server Actions by right-clicking on the selected row.
The Search tab lets you refine your search parameters to include the server name,
the client node name, policy domain name, and other fields that are available.
In the Client Node Groups section, you can find a client node group from the All
Client Node Groups tab or from the By Server tab. You can use the filter and the
right-click menu on these pages also.
Backup sets are found in the Backup Set Collections section. Search a server by
selecting it and clicking Update Table.
Part 1. Tivoli Storage Manager basics
Chapter 1. Tivoli Storage Manager overview
IBM Tivoli Storage Manager is an enterprise-wide storage management application.
It provides automated storage management services to workstations, personal
computers, and file servers from a variety of vendors, with a variety of operating
systems.
Tivoli Storage Manager includes the following components:
Server
Server program
The server program provides backup, archive, and space
management services to the clients.
You can set up multiple servers in your enterprise network to
balance storage, processor, and network resources.
Administrative interface
The administrative interface allows administrators to control and
monitor server activities, define management policies for clients,
and set up schedules to provide services to clients at regular
intervals. Administrative interfaces available include a
command-line administrative client and a Web browser interface
called the Administration Center. Tivoli Storage Manager allows
you to manage and control multiple servers from a single interface
that runs in a Web browser.
The Tivoli Storage Manager server for Windows also includes the
Tivoli Storage Manager Management Console (Tivoli Storage
Manager Console), which is a Microsoft Management Console
(MMC) snap-in.
Server database and recovery log
The Tivoli Storage Manager server uses a database to track
information about server storage, clients, client data, policy, and
schedules. The server uses the recovery log as a scratch pad for the
database, recording information about client and server actions
while the actions are being performed.
Server storage
The server can write data to hard disk drives, disk arrays and
subsystems, stand-alone tape drives, tape libraries, and other forms
of random- and sequential-access storage. The media that the
server uses are grouped into storage pools.
The storage devices can be connected directly to the server, or
connected via local area network (LAN) or storage area network
(SAN).
Client Nodes
A client node can be a workstation, a personal computer, a file server, or
even another Tivoli Storage Manager server. The client node has IBM Tivoli
Storage Manager client software installed and is registered with the server.
Network-attached storage (NAS) file servers can also be client nodes, but
when using NDMP, they do not have Tivoli Storage Manager client
software installed.
Backup-archive client
The backup-archive client allows users to maintain backup versions
of files, which they can restore if the original files are lost or
damaged. Users can also archive files for long-term storage and
retrieve the archived files when necessary. Users themselves or
administrators can register workstations and file servers as client
nodes with a Tivoli Storage Manager server.
The storage agent is an optional component that may also be
installed on a system that is a client node. The storage agent
enables LAN-free data movement for client operations and is
supported on a number of operating systems.
Network-attached storage file server (using NDMP)
The server can use the Network Data Management Protocol
(NDMP) to back up and restore file systems stored on a
network-attached storage (NAS) file server. The data on the NAS
file server is backed up to a tape library. No Tivoli Storage
Manager software needs to be installed on the NAS file server. A
NAS file server can also be backed up over the LAN to a Tivoli
Storage Manager server. See Chapter 9, “Using NDMP for
operations with NAS file servers,” on page 219 for more
information, including supported NAS file servers.
Application client
Application clients allow users to perform online backups of data
for applications such as database programs. After the application
program initiates a backup or restore, the application client acts as
the interface to Tivoli Storage Manager. The Tivoli Storage Manager
server then applies its storage management functions to the data.
The application client can perform its functions while application
users are working, with minimal disruption.
The following products provide application clients for use with the
Tivoli Storage Manager server:
v Tivoli Storage Manager for Application Servers
v Tivoli Storage Manager for Databases
v Tivoli Storage Manager for Enterprise Resource Planning
v Tivoli Storage Manager for Mail
Also available is Tivoli Storage Manager for Hardware, which
works with the backup-archive client and the API to help eliminate
backup-related performance effects.
Application program interface (API)
The API allows you to enhance existing applications to use the
backup, archive, restore, and retrieve services that Tivoli Storage
Manager provides. Tivoli Storage Manager API clients can register
as client nodes with a Tivoli Storage Manager server.
Tivoli Storage Manager for Space Management
Tivoli Storage Manager for Space Management provides space
management services for workstations on some platforms. The space
management function is essentially a more automated version of archive.
Tivoli Storage Manager for Space Management automatically migrates files
that are less frequently used to server storage, freeing space on the
workstation. The migrated files are also called space-managed files.
Users can recall space-managed files automatically simply by accessing
them as they normally would from the workstation. Tivoli Storage
Manager for Space Management is also known as the space manager client,
or the hierarchical storage management (HSM) client.
Storage agents
The storage agent is an optional component that may be installed on a
system that is also a client node. The storage agent enables LAN-free data
movement for client operations.
The storage agent is available for use with backup-archive clients and
application clients on a number of operating systems. The Tivoli Storage
Manager for Storage Area Networks product includes the storage agent.
For information about supported operating systems for clients, see the IBM Tivoli
Storage Manager Web site at http://www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
Client programs such as the backup-archive client and the HSM client (space
manager) are installed on systems that are connected through a LAN and are
registered as client nodes. From these client nodes, users can back up, archive, or
migrate files to the server.
The following sections present key concepts and information about IBM Tivoli
Storage Manager. The sections describe how Tivoli Storage Manager manages client
files based on information provided in administrator-defined policies, and manages
devices and media based on information provided in administrator-defined Tivoli
Storage Manager storage objects.
The final section gives an overview of tasks for the administrator of the server,
including options for configuring the server and how to maintain the server.
Concepts:
“How client data is stored”
“How the server manages storage” on page 15
How client data is stored
Tivoli Storage Manager policies are rules that determine how the client data is
stored and managed. The rules include where the data is initially stored, how
many backup versions are kept, how long archive copies are kept, and so on.
You can have multiple policies and assign the different policies as needed to
specific clients, or even to specific files. Policy assigns a location in server storage
where data is initially stored. Server storage is divided into storage pools that are
groups of storage volumes.
Server storage can include hard disk, optical, and tape volumes.
When you install Tivoli Storage Manager, you have a default policy that you can
use. For details about this default policy, see “Reviewing the standard policy” on
page 456. You can modify this policy and define additional policies.
Clients use Tivoli Storage Manager to store data for any of the following purposes:
Backup and restore
The backup process copies data from client workstations to server storage
to ensure against loss of data that is regularly changed. The server retains
versions of a file according to policy, and replaces older versions of the file
with newer versions. Policy includes the number of versions and the
retention time for versions.
A client can restore the most recent version of a file, or can restore earlier
versions.
Archive and retrieve
The archive process copies data from client workstations to server storage
for long-term storage. The process can optionally delete the archived files
from the client workstations. The server retains archive copies according to
the policy for archive retention time. A client can retrieve an archived copy
of a file.
Instant archive and rapid recovery
Instant archive is the creation of a complete set of backed-up files for a
client. The set of files is called a backup set. A backup set is created on the
server from the most recently backed-up files that are already stored in
server storage for the client. Policy for the backup set consists of the
retention time that you choose when you create the backup set.
You can copy a backup set onto compatible portable media, which can
then be taken directly to the client for rapid recovery without the use of a
network and without having to communicate with the Tivoli Storage
Manager server.
Migration and recall
Migration, a function of the Tivoli Storage Manager for Space Management
program, frees up client storage space by copying files from workstations
to server storage. On the client, the Tivoli Storage Manager for Space
Management program replaces the original file with a stub file that points
to the original in server storage. Files are recalled to the workstations when
needed.
This process is also called hierarchical storage management (HSM). Once
configured, the process is transparent to the users. Files are migrated and
recalled automatically.
Policy determines when files are considered for automatic migration. On
the UNIX or Linux® systems that support the Tivoli Storage Manager for
Space Management program, policies determine whether files must be
backed up to the server before being migrated. Space management is also
integrated with backup. If the file to be backed up is already migrated to
server storage, the file is backed up from there.
Figure 1 on page 7 shows how policy is part of the Tivoli Storage Manager process
for storing client data.
Figure 1. How IBM Tivoli Storage Manager Controls Backup, Archive, and Migration Processes
The steps in the process are as follows:
1. A client initiates a backup, archive, or migration operation. The file involved
   in the operation is bound to a management class. The management class is
   either the default or one specified for the file in client options (the client’s
   include-exclude list).
2. If the file is a candidate for backup, archive, or migration based on
   information in the management class, the client sends the file and file
   information to the server.
3. The server checks the management class that is bound to the file to
   determine the destination, the name of the Tivoli Storage Manager storage pool
   where the server initially stores the file. For backed-up and archived files,
   destinations are assigned in the backup and archive copy groups, which are
   within management classes. For space-managed files, destinations are assigned
   in the management class itself.
   The storage pool can be a group of disk volumes, tape volumes, or optical
   volumes.
4. The server stores the file in the storage pool that is identified as the storage
   destination.
   The Tivoli Storage Manager server saves information in its database about each
   file that it backs up, archives, or migrates.
If you set up server storage in a hierarchy, Tivoli Storage Manager can later
migrate the file to a storage pool different from the one where the file was
initially stored. For example, you may want to set up server storage so that
Tivoli Storage Manager migrates files from a disk storage pool to tape volumes
in a tape storage pool.
Files remain in server storage until they expire and expiration processing occurs, or
until they are deleted from server storage. A file expires because of criteria that are
set in policy. For example, the criteria include the number of versions allowed for a
file and the number of days that have elapsed since a file was deleted from the
client’s file system. If data retention protection is activated, an archive object
cannot be inadvertently deleted.
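The policy objects described above map directly to administrative commands. The
following sketch, using illustrative names only (ENGDOM, ENGSET, ENGCLASS, and
an existing storage pool named ENGPOOL), defines a policy domain whose backup
copy group keeps up to three versions of each file, and then activates it:
   define domain engdom
   define policyset engdom engset
   define mgmtclass engdom engset engclass
   define copygroup engdom engset engclass type=backup destination=engpool verexists=3
   assign defmgmtclass engdom engset engclass
   activate policyset engdom engset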
For information on assigning storage destinations in copy groups and management
classes, and on binding management classes to client files, see Chapter 14,
“Implementing policies for client data,” on page 455.
For information on managing the database, see Chapter 20, “Managing the
database and recovery log,” on page 611.
For information about storage pools and storage pool volumes, see Chapter 11,
“Managing storage pools and volumes,” on page 275.
For information about event-based policy, deletion hold, and data retention
protection, see Chapter 14, “Implementing policies for client data,” on page 455.
Data-protection options
Tivoli Storage Manager provides a variety of backup and archive operations,
allowing you to select the right protection for the situation.
Table 5 shows some examples of the protection options.
Table 5. Examples of meeting your goals with Tivoli Storage Manager

For this goal: Back up files that are on a user’s workstation, and have the ability to restore individual files.
Do this: Use the backup-archive client to perform incremental backups or selective backups.

For this goal: Back up a file server, and have the ability to restore individual files.
Do this: Use the backup-archive client to perform incremental backups or selective backups. If the file server is a network-attached storage file server that is supported, you can have the server use NDMP to perform image backups. This support is available in the Tivoli Storage Manager Extended Edition product.

For this goal: Make restore media portable, or make restores easier to perform remotely.
Do this: Use the backup-archive client to perform incremental backups, and then generate backup sets by using the Tivoli Storage Manager server.

For this goal: Provide the ability to more easily restore the entire contents of a single logical volume, instead of restoring individual files.
Do this: Use the backup-archive client to perform logical volume backups (also called image backups).

For this goal: Set up records retention to meet legal or other long-term storage needs.
Do this: Use the backup-archive client to occasionally perform archiving. To ensure that the archiving occurs at the required intervals, use central scheduling.

For this goal: Create an archive for a backup-archive client, from data that is already stored for backup.
Do this: Use the backup-archive client to perform incremental backups, and then generate a backup set by using the Tivoli Storage Manager server. This is also called instant archive.

For this goal: Provide the ability to restore data to a point in time.
Do this: Use the backup-archive client to regularly perform incremental backups (either manually or automatically through schedules). Then do one of the following:
v Set up policy to ensure that data is preserved in server storage long enough to provide the required service level. See “Setting policy to enable point-in-time restore for clients” on page 505 for details.
v Create backup sets for the backup-archive client on a regular basis. Set the retention time to provide the required service level. See “Creating and using client backup sets” on page 522 for details.

For this goal: Save a set of files and directories before making significant changes to them.
Do this: Use the backup-archive client to archive the set of files and directories. If this kind of protection is needed regularly, consider creating backup sets from backup data already stored for the client. Using backup sets instead of frequent archive operations can reduce the amount of metadata that must be stored in the server’s database.

For this goal: Manage a set of related files, which are not in the same file system, with the same backup, restore, and server policies.
Do this: Use the backup group command on the backup-archive client to create a logical grouping of a set of files, which can be from one or more physical file systems. The group backup process creates a virtual file space in server storage to manage the files, because the files might not be from one file system on the client. Actions such as policy binding, migration, expiration, and export are applied to the group as a whole. See the Backup-Archive Clients Installation and User’s Guide for details.

For this goal: Back up data for an application that runs continuously, such as a database application (for example, DB2 or Oracle) or a mail application (Lotus Domino).
Do this: Use the appropriate application client. For example, use Tivoli Storage Manager for Mail to protect the Lotus Domino application.

For this goal: Exploit disk hardware capable of data snapshots.
Do this: Use the appropriate component in the Tivoli Storage Manager for Hardware product, such as System Storage Archive Manager for IBM Enterprise Storage Server for DB2.

For this goal: Make backups transparent to end users.
Do this: Use the backup-archive client with centrally scheduled backups that run during off-shift hours. Monitor the schedule results.

For this goal: Reduce the load on the LAN by moving backup data over your SAN.
Do this: Use LAN-free data movement or, for supported network-attached storage (NAS) file servers, use NDMP operations.
Schedule the backups of client data to help enforce the data management policy
that you establish. If you schedule the backups, rather than rely on the clients to
perform the backups, the policy that you establish is followed more consistently.
See Chapter 16, “Scheduling operations for client nodes,” on page 545.
The standard backup method that Tivoli Storage Manager uses is called progressive
incremental backup. It is a unique and efficient method for backup. See “Progressive
incremental backups” on page 13.
Table 6 summarizes the client operations that are available. In all cases, the server
tracks the location of the backup data in its database. Policy that you set
determines how the backup data is managed.
Table 6. Summary of client operations

Progressive incremental backup
  Description: The standard method of backup used by Tivoli Storage Manager.
  After the first, full backup of a client system, incremental backups are done.
  Incremental backup by date is also available.
  Usage: Helps ensure complete, effective, policy-based backup of data.
  Eliminates the need to retransmit backup data that has not been changed during
  successive backup operations. No additional full backups of a client are
  required after the first backup.
  Restore options: The user can restore just the version of the file that is
  needed. Tivoli Storage Manager does not need to restore a base file followed by
  incremental backups. This means reduced time and fewer tape mounts, as well as
  less data transmitted over the network.
  For more information: See “Incremental backup” on page 470 and the
  Backup-Archive Clients Installation and User’s Guide.

Selective backup
  Description: Backup of files that are selected by the user, regardless of
  whether the files have changed since the last backup.
  Usage: Allows users to protect a subset of their data independent of the
  normal incremental backup process.
  Restore options: The user can restore just the version of the file that is
  needed. Tivoli Storage Manager does not need to restore a base file followed by
  incremental backups. This means reduced time and fewer tape mounts, as well as
  less data transmitted over the network.
  For more information: See “Selective backup” on page 472 and the
  Backup-Archive Clients Installation and User’s Guide.

Adaptive subfile backup
  Description: A backup method that backs up only the parts of a file that have
  changed since the last backup. The server stores the base file (the complete
  initial backup of the file) and subsequent subfiles (the changed parts) that
  depend on the base file. The process works with either the standard
  progressive incremental backup or with selective backup. Applicable to clients
  on Windows systems.
  Usage: Maintains backups of data while minimizing connect time and data
  transmission for the backup of mobile and remote users.
  Restore options: The base file plus a maximum of one subfile is restored to
  the client.
  For more information: See “Enabling clients to use subfile backup” on page 531
  and the Backup-Archive Clients Installation and User’s Guide.

Journal-based backup
  Description: Aids all types of backups (progressive incremental backup,
  selective backup, adaptive subfile backup) by basing the backups on a list of
  changed files. The list is maintained on the client by the journal engine
  service of IBM Tivoli Storage Manager. Applicable to clients on AIX and
  Windows systems, except Windows 2003 64-bit IA64.
  Usage: Reduces the amount of time required for backup. The files eligible for
  backup are known before the backup operation begins.
  Restore options: Journal-based backup has no effect on how files are restored;
  this depends on the type of backup performed.
  For more information: See the Backup-Archive Clients Installation and User’s
  Guide.

Image backup
  Description: Full volume backup. Nondisruptive, on-line backup is possible for
  Windows clients by using the Tivoli Storage Manager snapshot function.
  Usage: Allows backup of an entire file system or raw volume as a single
  object. Can be selected by backup-archive clients on UNIX, Linux, and Windows
  systems.
  Restore options: The entire image is restored.
  For more information: See “Policy for logical volume backups” on page 501 and
  the Backup-Archive Clients Installation and User’s Guide.

Image backup with differential backups
  Description: Full volume backup, which can be followed by subsequent
  differential backups.
  Usage: Used only for the image backups of NAS file servers, performed by the
  server using NDMP operations.
  Restore options: The full image backup plus a maximum of one differential
  backup are restored.
  For more information: See Chapter 9, “Using NDMP for operations with NAS file
  servers,” on page 219.

Backup using hardware snapshot capabilities
  Description: A method of backup that exploits the capabilities of IBM
  Enterprise Storage Server FlashCopy and EMC TimeFinder to make copies of
  volumes used by database servers. The Tivoli Storage Manager for Hardware
  product then uses the volume copies to back up the database volumes.
  Usage: Implements high-efficiency backup and recovery of business-critical
  applications while virtually eliminating backup-related downtime or user
  disruption on the database server.
  Restore options: Details depend on the hardware.
  For more information: See the documentation for IBM Tivoli Storage Manager for
  hardware components.

Group backup
  Description: A method that backs up files that you specify as a named group.
  The files can be from one or more file spaces. The backup can be a full or a
  differential backup. Applicable to clients on UNIX and Linux systems.
  Usage: Creates a consistent point-in-time backup of a group of related files.
  The files can reside in different file spaces on the client. All objects in
  the group are assigned to the same management class. The server manages the
  group as a single logical entity, and stores the files in a virtual file space
  in server storage. A group can be included in a backup set.
  Restore options: The user can select to restore the entire group or just
  selected members of the group. The user can restore just the version of the
  file that is needed.
  For more information: See the Backup-Archive Clients Installation and User’s
  Guide.

Archive
  Description: The process creates a copy of files and stores them for a
  specific time.
  Usage: Use for maintaining copies of vital records for legal or historical
  purposes. Note: If you need to frequently create archives for the same data,
  consider using instant archive (backup sets) instead. Frequent archive
  operations can create a large amount of metadata in the server database,
  resulting in increased database growth and decreased performance for server
  operations such as expiration. Frequently, you can achieve the same objectives
  with incremental backup or backup sets. Although the archive function is a
  powerful way to store inactive data with fixed retention, it should not be
  used on a frequent and large scale basis as the primary backup method.
  Restore options: The selected version of the file is retrieved on request.
  For more information: See “Archive” on page 473 and the Backup-Archive Clients
  Installation and User’s Guide.

Instant archive
  Description: The process creates a backup set of the most recent versions of
  the files for the client, using files already in server storage from earlier
  backup operations.
  Usage: Use when portability of the recovery media or rapid recovery of a
  backup-archive client is important. Also use for efficient archiving.
  Restore options: The files are restored directly from the backup set. The
  backup set resides on media that can be mounted on the client system, such as
  a CD, a tape drive, or a file system. The Tivoli Storage Manager server does
  not have to be contacted for the restore process, so the process does not use
  the network or the server.
  For more information: See “Creating and using client backup sets” on page 522.
Progressive incremental backups
The terms differential and incremental are often used to describe backups. The
standard method of backup used by Tivoli Storage Manager is progressive
incremental.
The terms differential and incremental have the following meanings:
v A differential backup backs up files that have changed since the last full backup.
– If a file changes after the full backup, the changed file is backed up again by
every subsequent differential backup.
– All files are backed up at the next full backup.
v An incremental backup backs up only files that have changed since the last
backup, whether that backup was a full backup or another incremental backup.
– If a file changes after the full backup, the changed file is backed up only by
the next incremental backup, not by all subsequent incremental backups.
– If a file has not changed since the last backup, the file is not backed up.
Tivoli Storage Manager takes incremental backup one step further. After the initial
full backup of a client, no additional full backups are necessary because the server,
using its database, keeps track of whether files need to be backed up. Only files
that change are backed up, and then entire files are backed up, so that the server
does not need to reference base versions of the files. This means savings in
resources, including the network and storage.
If you choose, you can force full backup by using the selective backup function of
a client in addition to the incremental backup function. You can also choose to use
adaptive subfile backup, in which the server stores the base file (the complete
initial backup of the file) and subsequent subfiles (the changed parts) that depend
on the base file.
Backup methods are summarized in Table 6 on page 10.
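For example, from a backup-archive client command line you can run a progressive incremental backup, or force a backup of specific files with a selective backup; the path shown is illustrative:
dsmc incremental
dsmc selective c:\data\* -subdir=yes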
Storage-pool and server-database backups
Tivoli Storage Manager protects client data by letting you back up storage pools
and the database.
You can back up client backup, archive, and space-managed data in primary
storage pools to copy storage pools. You can also copy active versions of client
backup data from primary storage pools to active-data pools. The server can
automatically access copy storage pools and active-data pools to retrieve data. See
“Storage pool protection overview” on page 770.
You can also back up the server’s database. The database is key to the server’s
ability to track client data in server storage. See “Database and recovery log
protection overview” on page 772.
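For example, assuming a primary storage pool named BACKUPPOOL, a copy storage pool named COPYPOOL, and a device class named TAPECLASS (the copy pool and device class names are illustrative), the following commands back up the storage pool and then the database:
backup stgpool backuppool copypool
backup db devclass=tapeclass type=full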
These backups can become part of a disaster recovery plan, created automatically
by the disaster recovery manager. See:
Chapter 25, “Using disaster recovery manager,” on page 815
Data movement to server storage
Tivoli Storage Manager provides several methods for sending client data to server
storage.
In many configurations, the Tivoli Storage Manager client sends its data to the
server over the LAN. The server then transfers the data to a device that is attached
to the server. You can also use storage agents that are installed on client nodes to
send data over a SAN. This minimizes use of the LAN and the use of the
computing resources of both the client and the server. For details, see:
“LAN-free data movement” on page 88
For network-attached storage, use NDMP operations to avoid data movement over
the LAN. For details, see “NDMP backup operations” on page 91.
Consolidation of backed-up client data
By grouping the backed-up data for a client, you can minimize the number of
media mounts required for client recovery.
The server offers you methods for doing this:
Collocation
The server can keep each client’s files on a minimal number of volumes
within a storage pool. Because client files are consolidated, restoring
collocated files requires fewer media mounts. However, backing up files
from different clients requires more mounts.
You can have the server collocate client data when the data is initially
stored in server storage. If you have a storage hierarchy, you can also have
the data collocated when the server migrates the data from the initial
storage pool to the next storage pool in the storage hierarchy.
Another choice you have is the level of collocation. You can collocate by
client, by file space per client, or by group. Your selection depends on the
size of the file spaces being stored and the restore requirements.
See “Keeping client files together using collocation” on page 340.
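For example, to collocate data in a tape storage pool by client node or by group (the pool name is illustrative):
update stgpool tapepool collocate=node
update stgpool tapepool collocate=group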
Active-data pools
Active-data pools are storage pools that contain only the active versions of
client backup data. Archive data and data migrated by hierarchical storage management (HSM) clients are not allowed in active-data pools.
Active-data pools can be associated with three types of devices:
sequential-access disk (FILE), removable media (tape or optical), or
sequential-access volumes on another Tivoli Storage Manager server. There
are three types of active-data pool, each of which has distinct advantages.
For example, an active-data pool associated with sequential-access disk is
particularly well-suited for fast restores of client data because tapes do not
have to be mounted and because the server does not have to position past
inactive files.
For more information, see “Backing up storage pools” on page 774.
Backup set creation
You can generate a backup set for each backup-archive client. A backup set
contains all active backed-up files that currently exist for that client in
server storage. The process is also called instant archive.
The backup set is portable and is retained for the time that you specify.
Creation of the backup set consumes more media because it is a copy in
addition to the backups that are already stored.
See “Creating and using client backup sets” on page 522.
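For example, the following command generates a backup set of all file spaces for a node and retains it for 90 days; the node, backup set, and device class names are illustrative:
generate backupset node1 node1_weekly * devclass=fileclass retention=90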
Moving data for a client node
You can consolidate data for a client node by moving the data within
server storage. You can move it to a different storage pool, or to other
volumes in the same storage pool.
See “Moving data belonging to a client node” on page 386.
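For example, the following command consolidates a node’s data onto volumes in another storage pool; the node and pool names are illustrative:
move nodedata node1 fromstgpool=tapepool tostgpool=newtapepool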
How the server manages storage
Through the server, you manage the devices and media used to store client data.
The server integrates the management of storage with the policies that you define
for managing client data.
Device support
With Tivoli Storage Manager, you can use a variety of devices for server storage.
Tivoli Storage Manager can use direct-attached storage devices as well as
network-attached storage devices.
See the current list on the IBM Tivoli Storage Manager Web site at
http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html.
The IBM Tivoli Storage Manager Management Console includes a device
configuration wizard that allows you to perform simple drag-and-drop device
configuration. However, a few devices cannot be configured with the wizard. In
troubleshooting situations, or if you are using Tivoli Storage Manager commands
to configure devices, you need to understand Tivoli Storage Manager storage
device concepts.
Tivoli Storage Manager represents physical storage devices and media with the
following administrator-defined objects:
Library
A library is one or more drives (and possibly robotic devices) with similar
media mounting requirements.
Drive
Each drive represents a drive mechanism in a tape or optical device.
Data mover
A data mover represents a device that accepts requests from Tivoli Storage
Manager to transfer data on behalf of the server. Data movers transfer data
between storage devices.
Path
A path represents how a source accesses a destination. For example, the
source can be a server, and the destination can be a tape drive. A path
defines the one-to-one relationship between a source and a destination.
Data may flow from the source to the destination, and back.
Device class
Each device is associated with a device class that specifies the device type
and how the device manages its media.
Storage pools and volumes
A storage pool is a named collection of volumes that have the same media
type. A storage pool is associated with a device class. A storage pool
volume is associated with a specific storage pool.
For example, an LTO tape storage pool contains only LTO tape volumes.
For details about device concepts, see Chapter 4, “Storage device concepts,” on
page 75.
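As an illustration of how these objects fit together, the following sequence defines a SCSI library, a path to it, a drive and its path, a device class, and a storage pool. The library, drive, server, and device names are assumptions for a Windows system:
define library autolib libtype=scsi
define path server1 autolib srctype=server desttype=library device=lb0.0.0.2
define drive autolib drive1
define path server1 drive1 srctype=server desttype=drive library=autolib device=mt0.1.0.2
define devclass ltoclass devtype=lto library=autolib
define stgpool ltopool ltoclass maxscratch=50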
Data migration through the storage hierarchy
You can organize the server’s storage pools into one or more hierarchical
structures. This storage hierarchy allows flexibility in a number of ways. For
example, you can set policy to have clients send their backup data to disks for
faster backup operations, then later have the server automatically migrate the data
to tape.
See “Storage pool hierarchies” on page 296.
Removal of expired data
A policy that you define controls when client data automatically expires from the
Tivoli Storage Manager server. The expiration process is how the server
implements the policy.
For example, you have a backup policy that specifies that three versions of a file be
kept. File A is created on the client, and backed up. Over time, the user changes
file A, and three versions of the file are backed up to the server. Then the user
changes file A again. When the next incremental backup occurs, a fourth version of
file A is stored, and the oldest of the four versions is eligible for expiration.
To remove data that is eligible for expiration, a server expiration process marks
data as expired and deletes metadata for the expired data from the database. The
space occupied by the expired data is then available for new data.
You control the frequency of the expiration process by using a server option, or
you can start the expiration processing by command or scheduled command.
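For example, you can start expiration manually with the EXPIRE INVENTORY command, or set the EXPINTERVAL option in the server options file; the 12-hour interval shown is illustrative:
expire inventory wait=yes
expinterval 12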
See “Running expiration processing to delete expired files” on page 490.
Media reuse by reclamation
As server policies automatically expire data, the media where the data is stored
accumulates unused space. The Tivoli Storage Manager server implements a
process, called reclamation, which allows you to reuse media without traditional
tape rotation.
Reclamation is a server process that automatically defragments media by
consolidating unexpired data onto other media when the free space on media
reaches a defined level. The reclaimed media can then be used again by the server.
Reclaiming media allows the automated circulation of media through the storage
management process. Use of reclamation can help minimize the number of media
that you need to have available.
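For example, the following command makes volumes in a tape storage pool eligible for reclamation when 60 percent or more of their space is reclaimable; the pool name and threshold are illustrative:
update stgpool tapepool reclaim=60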
Chapter 2. Tivoli Storage Manager concepts
The server comes with many defaults so that you can begin using its services
immediately. The amount and importance of the data protected by Tivoli Storage
Manager, your business process requirements, and other factors make it likely that
you need to adjust and customize the server’s behavior.
Your changing storage needs and client requirements can mean ongoing
configuration changes and monitoring. The server’s capabilities are extensively
described in this guide. To get an introduction to the tasks available to an
administrator of Tivoli Storage Manager, read the following sections:
Administrative Tasks:
“Interfaces to Tivoli Storage Manager”
“Storage configuration and management” on page 20
“Management of client operations” on page 24
“Server maintenance” on page 28
“Protecting the server” on page 32
“Managing servers with the Administration Center” on page 33
Interfaces to Tivoli Storage Manager
Tivoli Storage Manager has several types of interfaces that allow you to work with
many different applications.
The following interfaces are provided:
v Graphical user interfaces
For the clients, there are graphical user interfaces for the backup-archive client
and the space manager client (if installed, on supported operating systems). For
information about using the interfaces, see the online information or the
Installation Guide.
Special interfaces for the Windows server include:
– The IBM Tivoli Storage Manager for Windows program folder.
– The IBM Tivoli Storage Manager Management Console, selected from the IBM
Tivoli Storage Manager program folder or the desktop. The IBM Tivoli
Storage Manager Console is a Microsoft Management Console snap-in that
provides:
- Wizards to assist with Tivoli Storage Manager administration and
configuration tasks
- A Windows-style tree view of the storage management resource network
- Network scan utilities that can be used to locate Tivoli Storage Manager
client nodes and server nodes for remote management
- A net send feature that can be used to notify operators of Tivoli Storage
Manager mount requests and status messages
v Web interfaces for server administration and for the backup-archive client
The Administration Center allows you to access Tivoli Storage Manager server
functions from any workstation using a supported Web browser. The interface
also allows Web access to the command line. See “Managing servers with the
Administration Center” on page 33 for more information.
The Web backup-archive client (Web client) allows an authorized user to
remotely access a client to run backup, archive, restore, and retrieve processes.
The Web browser must have the appropriate support for Java™. See the
Backup-Archive Clients Installation and User’s Guide for requirements.
v The command-line interface
For information about using the command-line interface of the administrative
client, see the Administrator’s Reference. For information about using the
command-line interface of the backup-archive client or other clients, see the
user’s guide for that client.
v The application program interface
For more information, see the IBM Tivoli Storage Manager Using the Application
Program Interface.
v Access to information in the server’s database using standard SQL SELECT
statements. See “Using SQL to query the IBM Tivoli Storage Manager database”
on page 636.
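For example, the following query lists registered nodes and their platforms, using standard columns of the server’s NODES table:
select node_name, platform_name from nodes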
Server options
Server options let you customize the server and its operations.
Server options can affect the following:
v Server communications
v Storage
v Database and recovery log operations
v Client transaction performance
Server options are in the server options file. Some options can be changed and
made active immediately by using the SETOPT command. Most server options are
changed by editing the server options file and then halting and restarting the
server to make the changes active. See the Administrator’s Reference for details about
the server options file and reference information for all server options.
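For example, the RESTOREINTERVAL option can be updated without restarting the server; the value, in minutes, is illustrative:
setopt restoreinterval 1440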
You can also change the options through the IBM Tivoli Storage Manager Console.
See the Installation Guide for information about the IBM Tivoli Storage Manager
Console.
Storage configuration and management
Configuring and managing storage for efficiency and capacity are important tasks
for an administrator.
The server uses its storage for the data it manages for clients. The storage can be a
combination of devices.
v Disk
v Tape drives that are either manually operated or automated
v Optical drives
v Other drives that use removable media
Devices can be locally attached, or accessible through a SAN. Key decisions in
configuring and managing the storage include:
v Selecting the devices and media that will form the server storage. This includes
deciding whether library devices will be shared among Tivoli Storage Manager
servers.
v Designing the storage hierarchy for efficient backups and optimal storage usage.
v Using product features that allow the server to provide services to clients while
minimizing traffic on the communications network:
– LAN-free data movement
– Data movement using NDMP to protect data on network-attached storage
(NAS) file servers when backing up to libraries directly attached to the NAS
file servers
v Using the Tivoli Storage Manager product to help you to manage the drives and
media, or using an external media manager to do the management outside of
the Tivoli Storage Manager product.
For an introduction to key storage concepts, see Chapter 4, “Storage device
concepts,” on page 75.
Hard disk devices
Hard disk devices can be used with Tivoli Storage Manager for storing the
database and recovery log or client data that is backed up, archived, or migrated
from client nodes.
The server can store data on hard disk by using random-access volumes (device
type of DISK) or sequential-access volumes (device type of FILE).
The Tivoli Storage Manager product allows you to exploit disk storage in ways
that other products do not. You can have multiple client nodes back up to the
same disk storage pool at the same time, and still keep the data for the different
client nodes separate. Other products also allow you to back up different systems
at the same time, but only by interleaving the data for the systems, leading to
slower restore processes.
If you have enough disk storage space, data can remain on disk permanently or
temporarily. Restore performance from disk can be very fast compared to tape.
You can have the server later move the data from disk to tape; this is called
migration through the storage hierarchy. Other advantages to this later move to
tape include:
v Ability to collocate data for clients as the data is moved to tape
v Streaming operation of tape drives, leading to better tape drive performance
v More efficient use of tape drives by spreading out the times when the drives are
in use
For information about storage hierarchy and setting up storage pools on disk
devices, see:
Chapter 5, “Magnetic disk devices,” on page 103 and “Storage pool hierarchies”
on page 296
Removable media devices
Removable media devices can be used with Tivoli Storage Manager for storage of
client data that is backed up, archived, or migrated from client nodes; storage of
database backups; and the exporting, that is, moving, of data to another server.
The following topics provide an overview of how to use removable media devices
with Tivoli Storage Manager.
For guidance and scenarios on configuring your tape devices, see:
Chapter 7, “Configuring storage devices,” on page 121
Device classes
A device class represents a set of storage devices with similar availability,
performance, and storage characteristics.
You must define device classes for the drives available to the Tivoli Storage
Manager server. You specify a device class when you define a storage pool so that
the storage pool is associated with drives.
For more information about defining device classes, see Chapter 10, “Defining
device classes,” on page 251.
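For example, the following command defines a device class for sequential-access disk volumes; the directory, mount limit, and capacity are illustrative:
define devclass fileclass devtype=file directory=d:\tsmdata mountlimit=20 maxcapacity=4g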
Removable media operations
Routine removable media operations include preparing and controlling media for
reuse, ensuring that sufficient media are available, and mounting volumes in
response to server requests, for manually operated drives. Removable media
operations also include managing libraries and drives.
For information about removable media operations, see:
Chapter 8, “Managing removable media operations,” on page 173
Migrating data from disk to tape
After you set up disk and tape storage pools, you can configure the server so that
client data can be migrated to tape. By migrating data to tape from a disk storage
pool, you can verify that tape devices are properly set up.
Migration requires tape mounts. The mount messages are directed to the console
message queue and to any administrative client that has been started with either
the mount mode or console mode option. To have the server migrate data from
BACKUPPOOL to AUTOPOOL and from ARCHIVEPOOL to TAPEPOOL, issue the
following commands:
update stgpool backuppool nextstgpool=autopool
update stgpool archivepool nextstgpool=tapepool
The server can perform migration as needed, based on migration thresholds that
you set for the storage pools. Because migration from a disk to a tape storage pool
uses resources such as drives and operators, you might want to control when
migration occurs. To do so, you can use the MIGRATE STGPOOL command:
migrate stgpool backuppool
To migrate from a disk storage pool to a tape storage pool, devices must be
allocated and tapes must be mounted. For these reasons, you may want to ensure
that migration occurs at a time that is best for your situation. You can control
when migration occurs by using migration thresholds.
Setting the low migration threshold to 0 empties the disk storage pool every time
migration occurs, which you might not want. Normally, you might keep the low
threshold at 40% and vary the high threshold from as high as 90% to as low
as 50%.
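For example, to start migration when BACKUPPOOL reaches 80% utilization and stop when it drains to 40%:
update stgpool backuppool highmig=80 lowmig=40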
See “Migrating disk storage pools” on page 308 and the Administrator’s Reference
for more information.
Managing storage pools and volumes
Backed-up, archived, and space-managed files are stored in groups of volumes that
are called storage pools. Because each storage pool is assigned to a device class,
you can logically group your storage devices to meet your storage-management
needs.
The following are other examples of what you can control for a storage pool:
Collocation
The server can keep each client’s files on a minimal number of volumes
within a storage pool. Because client files are consolidated, restoring
collocated files requires fewer media mounts. However, backing up files
from different clients requires more mounts.
Reclamation
Files on sequential access volumes may expire, move, or be deleted. The
reclamation process consolidates the active, unexpired data on many
volumes onto fewer volumes. The original volumes can then be reused for
new data, making more efficient use of media.
Storage pool backup
Client backup, archive, and space-managed data in primary storage pools
can be backed up to copy storage pools for disaster recovery purposes. As
client data is written to the primary storage pools, it can also be
simultaneously written to copy storage pools.
Copy active data
The active versions of client backup data can be copied to active-data
pools. Active-data pools provide a number of benefits. For example, if the
device type associated with an active-data pool is sequential-access disk
(FILE), you can eliminate the need for disk staging pools. Restoring client
data is faster because FILE volumes are not physically mounted, and the
server does not need to position past inactive files that do not need to be
restored.
An active-data pool that uses removable media, such as tape or optical, lets
you reduce the number of volumes for onsite and offsite storage. (Like
volumes in copy storage pools, volumes in active-data pools can be moved
offsite for protection in case of disaster.) If you vault data electronically to
a remote location, a SERVER-type active-data pool lets you save bandwidth
by copying and restoring only active data.
As backup client data is written to primary storage pools, the active
versions can be simultaneously written to active-data pools.
Cache
When the server migrates files from disk storage pools, duplicate copies of
the files can remain in cache (disk storage) for faster retrieval. Cached files
are deleted only when space is needed. However, client backup operations
that use the disk storage pool may have poorer performance.
You can establish a hierarchy of storage pools. The hierarchy can be based on the
speed or the cost of the devices associated with the pools. Tivoli Storage Manager
migrates client files through this hierarchy to ensure the most efficient use of a
server’s storage devices.
You manage storage volumes by defining, updating, and deleting volumes, and by
monitoring the use of server storage. You can also move files within and across
storage pools to optimize the use of server storage.
For more information about storage pools and volumes and taking advantage of
storage pool features, see Chapter 11, “Managing storage pools and volumes,” on
page 275.
Windows cluster environments
A Windows cluster environment is a configuration of independent computing
systems. The systems are connected to the same disk subsystem and provide a
high-availability solution that minimizes or eliminates many potential sources of
downtime.
Tivoli Storage Manager is a cluster-aware application and can be configured as a
Microsoft Cluster Server (MSCS) virtual server. MSCS is software that helps
configure, monitor, and control applications and hardware components that are
deployed on a Windows cluster. The administrator uses the MSCS Cluster
Administrator interface and Tivoli Storage Manager to designate cluster
arrangements and define the failover pattern.
Tivoli Storage Manager can also support SCSI tape failover. (However, MSCS does
not support the failover of tape devices, so it cannot be used to configure the SCSI
tape failover.) After the configuration has been set up, it can be monitored through
MSCS and the Cluster Administrator interface.
For more information about configuring and managing clusters, see Appendix B,
“Configuring clusters,” on page 895.
Management of client operations
Because the key task of the server is to provide services to clients, many of the
server administrator’s tasks deal with client operations.
Tasks include the following:
v Registering clients and customizing client operations
v Ensuring that client operations meet security requirements
v Providing required levels of service by customizing policies
v Automating protection by using schedules
After you have created schedules, you manage and coordinate those schedules.
Your tasks include the following:
v Verify that the schedules ran successfully.
v Determine how long Tivoli Storage Manager retains information about schedule
results (event records) in the database.
v Balance the workload on the server so that all scheduled operations complete.
For more information about client operations, see the following sections:
v For setting up an include-exclude list for clients, see “Getting users started” on
page 457.
v For automating client operations, see Chapter 16, “Scheduling operations for
client nodes,” on page 545.
v For running the scheduler on a client system, see the user’s guide for the client.
v For setting up policy domains and management classes, see Chapter 14,
“Implementing policies for client data,” on page 455.
For more information about these tasks, see Chapter 17, “Managing schedules for
“client nodes,” on page 553.
Managing client nodes
A basic administrative task is adding client nodes and giving the systems that the
nodes represent access to the services and resources of the Tivoli Storage Manager
server.
The Tivoli Storage Manager server supports a variety of client nodes. You can
register the following types of clients and servers as client nodes:
v Tivoli Storage Manager backup-archive client
v Application clients that provide data protection through one of the following
products: Tivoli Storage Manager for Application Servers, Tivoli Storage
Manager for Databases, Tivoli Storage Manager for Enterprise Resource
Planning, or Tivoli Storage Manager for Mail.
v Tivoli Storage Manager for Space Management client (called space manager
client or HSM client)
v A NAS file server for which the Tivoli Storage Manager server uses NDMP for
backup and restore operations
v Tivoli Storage Manager source server (registered as a node on a target server)
When you register clients, you have choices to make about the following:
v Whether the client should compress files before sending them to the server for
backup
v Whether the client node ID has the authority to delete its files from server
storage
v Whether an administrator ID that matches the client ID is created, for remote
client operations
Other important tasks include the following:
Controlling client options from the server
Client options on client systems allow users to customize backup, archive,
and space management operations, as well as schedules for these
operations. On most client systems, the options are in a file called dsm.opt.
In some cases, you may need or want to provide the clients with options to
use. To help users get started, or to control what users back up, you can
define sets of client options for clients to use. Client options sets are
defined in the server database and are used by the clients that you
designate.
Among the options that can be in a client option set are the include and
exclude options. These options control which files are considered for the
client operations.
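As a sketch (the option set name and the excluded directory are illustrative), you can define an option set, add an exclude option to it, and assign it when you register a node:
define cloptset windows_defaults description="Options for Windows workstations"
define clientopt windows_defaults inclexcl "exclude c:\temp\...\*"
register node node1 secretpw cloptset=windows_defaults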
For more information, see:
v Chapter 12, “Adding client nodes,” on page 399
v Chapter 13, “Managing client nodes,” on page 411
Allowing subfile backups
For mobile and remote users, you want to minimize the data sent over the
network, as well as the time that they are connected to the network. You
can set the server to allow a client node to back up changed portions of
files that have been previously backed up, rather than entire files. The
portion of the file that is backed up is called a subfile.
For more information, see Chapter 15, “Managing data for client nodes,”
on page 513.
Creating backup sets for client nodes
You can perform an instant archive for a client by creating a backup set. A
backup set copies a client node’s active, backed-up files from server
storage onto sequential media. If the sequential media can be read by a
device available to the client system, you can restore the backup set
directly to the client system without using the network. The server tracks
backup sets that you create and retains the backup sets for the time you
specify.
For more information, see Chapter 15, “Managing data for client nodes,”
on page 513.
For more information on managing client nodes, see the Backup-Archive Clients
Installation and User’s Guide.
Security management
Tivoli Storage Manager includes security features for user registration and
passwords. Also included are features that can help ensure security when clients
connect to the server across a firewall.
Registration for clients can be closed or open. With closed registration, a user with
administrator authority must register all clients. With open registration, clients can
register themselves at first contact with the server. See “Registering nodes with the
server” on page 400.
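For example, to enforce closed registration and then register a node yourself (the node name, password, and domain assignment are illustrative):
set registration closed
register node workstation5 n0depw domain=standard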
You can ensure that only authorized administrators and client nodes are
communicating with the server by requiring the use of passwords. You can also set
the following requirements for passwords:
v Number of characters in a password.
v Expiration time.
v A limit on the number of consecutive, invalid password attempts. When the
client exceeds the limit, Tivoli Storage Manager locks the client node from access
to the server.
See “Managing passwords and login procedures” on page 450.
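For example, the following commands set sample values for these requirements; the values shown are illustrative:
set minpwlength 8
set passexp 90
set invalidpwlimit 4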
You can control the authority of administrators. An organization may name a
single administrator or may distribute the workload among a number of
administrators and grant them different levels of authority. For details, see
“Managing levels of administrative authority” on page 448.
For better security when clients connect across a firewall, you can control whether
clients can initiate contact with the server for scheduled operations. See
“Managing client nodes across a firewall” on page 412 for details.
Several server options allow you to keep client and administrative traffic on
separate server ports.
For additional ways to manage security, see “Managing IBM Tivoli Storage
Manager security” on page 443.
Managing client data with policies
As the administrator, you define the rules for client backup, archive, and migration
operations, based on user or business requirements.
The rules are called policies. Policies identify:
v The criteria for backup, archive, and migration of client data
v Where the client data is initially stored
v How the data is managed by the server (how many backup versions are kept,
for how long)
In Tivoli Storage Manager, you define policies by defining policy domains, policy
sets, management classes, and backup and archive copy groups. When you install
Tivoli Storage Manager, you have a default policy that consists of a single policy
domain named STANDARD.
The default policy provides basic backup protection for end-user workstations. To
provide different levels of service for different clients, you can add to the default
policy or create new policy. For example, because of business needs, file servers are
likely to require a policy different from policy for users’ workstations. Protecting
data for applications such as Lotus Domino also may require a unique policy.
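As a sketch of a policy for file servers (all names and values are illustrative), you can define a domain, policy set, management class, and backup copy group, then activate the policy set:
define domain fsdomain
define policyset fsdomain fspolicy
define mgmtclass fsdomain fspolicy fsclass
define copygroup fsdomain fspolicy fsclass type=backup destination=backuppool verexists=5
assign defmgmtclass fsdomain fspolicy fsclass
activate policyset fsdomain fspolicy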
For more information about the default policy and establishing and managing new
policies, see Chapter 14, “Implementing policies for client data,” on page 455.
Schedules for client operations
Scheduling client operations can mean better protection for data, because
operations can occur consistently without user intervention.
Scheduling also can mean better utilization of resources such as the network.
Client backups that are scheduled at times of lower usage can minimize the impact
on user operations on a network.
You can automate operations for clients by using schedules. Tivoli Storage
Manager provides a central scheduling facility. You can also use operating system
utilities or other scheduling tools to schedule Tivoli Storage Manager operations.
With Tivoli Storage Manager schedules, you can perform the operations for a client
immediately or schedule the operations to occur at regular intervals.
The key objects that interact are:
Include-exclude options on each client
The include-exclude options determine which files are backed up,
archived, or space-managed, and determine management classes,
encryption, and the type of backup for files.
The client can specify a management class for a file or group of files, or
can use the default management class for the policy domain. The client
specifies a management class by using an INCLUDE option in the client’s
include-exclude list or file. You can have central control of client options
such as INCLUDE and EXCLUDE by defining client option sets on the
server. When you register a client, you can specify a client option set for
that client to use. See “Managing client option files” on page 436 for
details.
Association defined between client and schedule
Associations determine which schedules are run for a client.
Clients are assigned to a policy domain when they are registered. To
automate client operations, you define schedules for a domain. Then you
define associations between schedules and clients in the same domain.
Schedule
The schedule determines when a client operation automatically occurs.
Schedules that can automate client operations are associated with a policy
domain.
The scheduled client operations are called events. The Tivoli Storage
Manager server stores information about events in its database. For
example, you can query the server to determine which scheduled events
completed successfully and which failed.
Management class
The management class determines where client files are initially stored and
how they are managed.
The management class contains information that determines how Tivoli
Storage Manager handles files that clients back up, archive, or migrate. For
example, the management class contains the backup copy group and the
archive copy group. Each copy group points to a destination, a storage pool
where files are first stored when they are backed up or archived.
For a schedule to work on a particular client, the client machine must be turned
on. The client either must be running the client scheduler or must allow the client
acceptor daemon to start the scheduler when needed.
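For example, the following commands define a nightly incremental backup schedule in the STANDARD domain and associate two nodes with it; the schedule and node names are illustrative:
define schedule standard daily_incr action=incremental starttime=21:00 duration=2 durunits=hours
define association standard daily_incr node1,node2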
Server maintenance
If you manage more than one server, you can ensure that the multiple servers are
consistently managed by using the enterprise management functions of Tivoli
Storage Manager.
You can set up one server as the configuration manager and have other servers
obtain configuration information from it.
To keep the server running well, you can perform these tasks:
v Managing server operations, such as controlling client access to the server
v Automating repetitive administrative tasks
v Monitoring and adjusting space for the database and the recovery log
v Monitoring the status of the server, server storage, and clients
Server-operation management
When managing your server operations, you can choose from a variety of
associated tasks.
Some of the more common tasks that you can perform to manage your server
operations are shown in the following list:
v Start and stop the server.
v Allow and suspend client sessions with the server.
v Query, cancel, and preempt server processes such as backing up the server
database.
v Customize server options.
Other tasks that are needed less frequently include:
v Maintain compliance with the license agreement.
v Move the server.
See “Licensing IBM Tivoli Storage Manager” on page 571. For suggestions about
the day-to-day tasks required to administer the server, see Chapter 18, “Managing
server operations,” on page 571.
Server script automation
Repetitive, manual tasks associated with managing the server can be automated
through Tivoli Storage Manager schedules and scripts. Using schedules and scripts
can minimize the daily tasks for administrators.
You can define schedules for the automatic processing of most administrative
commands. For example, a schedule can run the command to back up the server’s
database every day.
Tivoli Storage Manager server scripts allow you to combine administrative
commands with return code checking and processing. The server comes with
scripts that you can use to do routine tasks, or you can define your own. The
scripts typically combine several administrative commands with return code
checking, or run a complex SQL SELECT command.
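As a sketch (the script, schedule, pool, and device class names are illustrative), you can define a script that backs up a storage pool and the database, then schedule it to run nightly:
define script daily_maint "backup stgpool backuppool copypool wait=yes" desc="Nightly maintenance"
update script daily_maint "backup db devclass=tapeclass type=full" line=5
define schedule run_maint type=administrative cmd="run daily_maint" active=yes starttime=23:00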
For more information about automating Tivoli Storage Manager operations, see
Chapter 19, “Automating server operations,” on page 589.
Modifying a maintenance script
You can modify your maintenance script to add, remove, or reposition commands.
If you have a predefined maintenance script, you can add or remove commands
by using the maintenance script wizard. If you have a custom maintenance script,
you can add, remove, or reposition commands. Both methods can be accessed
through the same process. If you want to convert your predefined maintenance
script to a custom maintenance script, select a server with the predefined script,
click Select Action → Convert to Custom Maintenance Script.
Perform the following tasks to modify a maintenance script:
1. Click Server Maintenance in the navigation tree.
2. Select a server that has either Predefined or Custom designated in the
Maintenance Script column.
3. Click Select Action → Modify Maintenance Script. If you are modifying a
predefined maintenance script, the maintenance script wizard opens your script
for you to modify. If you are modifying a custom maintenance script, the
maintenance script editor opens your script so that you can modify it.
Database and recovery-log management
The Tivoli Storage Manager database contains information about registered client
nodes, policies, schedules, and the client data in storage pools. The database is key
to the operation of the server.
The information about the client data, also called metadata, includes the file name,
file size, file owner, management class, copy group, and location of the file in
server storage. The server records changes made to the database (database
transactions) in its recovery log. The recovery log is used to maintain the database
in a transactionally consistent state, and to maintain consistency across server
startup operations.
For more information about the Tivoli Storage Manager database and recovery log
and about the tasks associated with them, see Chapter 20, “Managing the database
and recovery log,” on page 611.
Sources of information about the server
Tivoli Storage Manager provides you with many sources of information about
server and client status and activity, the state of the server’s database and storage,
and resource usage. By monitoring selected information, you can provide reliable
services to users while making the best use of available resources.
Daily checks of some indicators are suggested. The Administration Center includes
a health monitor, which presents a view of the overall status of multiple servers
and their storage devices. From the health monitor, you can link to details for a
server, including a summary of the results of client schedules and a summary of
the availability of storage devices. See “Managing servers with the Administration
Center” on page 33.
You can use Tivoli Storage Manager queries and SQL queries to get information
about the server. You can also set up automatic logging of information about Tivoli
Storage Manager clients and server events.
See the following sections for more information about these tasks.
v Chapter 21, “Monitoring the Tivoli Storage Manager server,” on page 631
v “Using SQL to query the IBM Tivoli Storage Manager database” on page 636
v “Logging IBM Tivoli Storage Manager events to receivers” on page 644
v “Daily monitoring scenario” on page 664
Tivoli Storage Manager server networks
You might have a number of Tivoli Storage Manager servers in your network, at
the same or different locations.
Some examples of different configurations are:
v Your users are scattered across many locations, so you have located Tivoli
Storage Manager servers close to the users to manage network bandwidth
limitations.
v You have set up multiple servers to provide services to different organizations at
one location.
v You have multiple servers on your network to make disaster recovery easier.
Servers connected to a network can be centrally managed. Tivoli Storage Manager
provides functions to help you configure, manage, and monitor the servers. An
administrator working at one Tivoli Storage Manager server can work with servers
at other locations around the world.
When you have a network of Tivoli Storage Manager servers, you can simplify
configuration and management of the servers by using enterprise administration
functions. You can do the following:
v Designate one server as a configuration manager that distributes configuration
information such as policy to other servers. See “Setting up enterprise
configurations” on page 703.
v Route commands to multiple servers while logged on to one server. See
“Routing commands” on page 725.
v Log events such as error messages to one server. This allows you to monitor
many servers and clients from a single server. See “Enterprise event logging:
logging events to another server” on page 656.
v Store data for one Tivoli Storage Manager server in the storage of another Tivoli
Storage Manager server. The storage is called server-to-server virtual volumes.
See “Using virtual volumes to store data on another server” on page 730 for
details.
v Share an automated library among Tivoli Storage Manager servers. See “Devices
on storage area networks” on page 86.
v Store a recovery plan file for one server on another server, when using disaster
recovery manager. You can also back up the server database and storage pools to
another server. See Chapter 25, “Using disaster recovery manager,” on page 815
for details.
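For example, after the servers are defined to each other, you can route a command by prefixing it with the server names; the names are illustrative:
server1,server2: query stgpool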
Exporting and importing data
As conditions change, you can move data from one server to another by using
export and import processes.
For example, you may need to balance workload among servers by moving client
nodes from one server to another. The following methods are available:
v You can export part or all of a server’s data to sequential media, such as tape or
a file on hard disk. You can then take the media to another server and import
the data to that server.
v You can export part or all of a server’s data and import the data directly to
another server, if server-to-server communications are set up.
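For example, both methods might look like the following; the node, device class, and target server names are illustrative:
export node node1 filedata=all devclass=tapeclass
export node node1 filedata=all toserver=server2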
For more information about moving data between servers, see Chapter 23,
“Exporting and importing data,” on page 737.
Protecting Tivoli Storage Manager and client data
The database, recovery log, and storage pools are critical to the operation of the
server and must be properly protected.
If the database or recovery log is unusable, the entire server is unavailable. If a
database is lost and cannot be recovered, the backup, archive, and space-managed
data for that server is lost. If a storage pool volume is lost and cannot be
recovered, the data on the volume is also lost.
IBM Tivoli Storage Manager provides a number of ways to protect your data,
including backing up your storage pools and database. For example, you can
define schedules so that the following operations occur:
v After the initial full backup of your storage pools, incremental storage pool
backups are done nightly.
v Full database backups are done weekly.
v Incremental database backups are done nightly.
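For example, an administrative schedule for the nightly storage pool backup might look like this; the schedule and pool names are illustrative:
define schedule nightly_stgpool type=administrative cmd="backup stgpool backuppool copypool" active=yes starttime=22:00 period=1 perunits=days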
In addition, disaster recovery manager (DRM), an optional feature of Tivoli Storage
Manager, can assist you in many of the tasks that are associated with protecting
and recovering your data. For details, see:
Chapter 25, “Using disaster recovery manager,” on page 815
Protecting the server
Tivoli Storage Manager provides a number of ways to protect and recover your
server from media failure or from the loss of the Tivoli Storage Manager database
or storage pools.
Recovery is based on the following preventive measures:
v Mirroring, by which the server maintains a copy of the active log
v Periodic backup of the database
v Periodic backup of the storage pools
v Audit of storage pools for damaged files, and recovery of damaged files when
necessary
v Backup of the device configuration and volume history files
v Validation of the data in storage pools, using cyclic redundancy checking
For information about protecting the server with these measures, see Chapter 24,
“Protecting and recovering your server,” on page 769.
You can also create a maintenance script to perform database and storage pool
backups through the Server Maintenance work item in the Administration Center.
See “Managing servers with the Administration Center” on page 33 for details.
In addition to taking these actions, you can prepare a disaster recovery plan to
guide you through the recovery process by using the disaster recovery manager,
which is available with Tivoli Storage Manager Extended Edition. The disaster
recovery manager (DRM) assists you in the automatic preparation of a disaster
recovery plan. You can use the disaster recovery plan as a guide for disaster
recovery as well as for audit purposes to certify the recoverability of the Tivoli
Storage Manager server.
The disaster recovery methods of DRM are based on taking the following
measures:
v Sending server backup volumes offsite or to another Tivoli Storage Manager
server
v Creating the disaster recovery plan file for the Tivoli Storage Manager server
v Storing client machine information
v Defining and tracking client recovery media
For more information about protecting your server and for details about recovering
from a disaster, see Chapter 24, “Protecting and recovering your server,” on page
769.
Managing servers with the Administration Center
The Administration Center is a Web-based interface for centrally configuring and
managing IBM Tivoli Storage Manager servers. It provides wizards to help guide
you through common configuration tasks. Properties notebooks allow you to
modify settings and perform advanced management tasks.
The Administration Center is installed as an IBM Integrated Solutions Console
component. The Integrated Solutions Console allows you to install components
provided by multiple IBM applications, and access them from a single interface.
For Administration Center system requirements, see the following Web site:
http://www.ibm.com/support/docview.wss?uid=swg21328445.
Using the Administration Center
The Administration Center is installed as a component of the IBM Integrated
Solutions Console. You can use the Administration Center to centrally configure
and manage your IBM Tivoli Storage Manager environment.
Introduction
Basic items (for example, server maintenance, storage devices, and so on) are listed
in the navigation tree on the left side of the Integrated Solutions Console. When
you click on an item, a work page containing one or more portlets (for example,
the Servers portlet) is displayed in the work area on the right side of the interface.
You use portlets to perform individual tasks, such as creating storage pools.
Each time you click an item in the navigation tree, a new work page is opened.
This allows you to open the same item for more than one server. To navigate
among open items, use the page bar at the top of the work area.
Many portlets contain tables. These tables display objects like servers, policy
domains, or reports. There are two ways to work with table objects. For any table
object, you can do the following:
1. Click its radio button or check box in the Select column.
2. Click Select Action to display the table action list.
3. Select an action from the list to perform it.
For some table objects, you can also click the object name to open a portlet or work
page pertaining to it. In most cases, a properties notebook portlet is opened. This
provides a fast way to work with table objects.
Fields marked with an asterisk and highlighted in yellow require an entry or
selection. However, if you have the Google search bar installed in your browser,
some fields can display bright yellow, whether they are required or not. To get
help at any time, click the context-sensitive help button in the title bar of a
portlet, properties notebook, and so on.
If you want more space in the work area, you can hide the navigation tree.
Do not use the Back, Forward and Refresh buttons in your browser. Doing so can
cause unexpected results. Using your keyboard’s Enter key can also cause
unexpected results. Use the controls in the Administration Center interface instead.
Sample task
This simple task will help familiarize you with Administration Center controls.
Suppose you want to create a new client node and add it to the STANDARD
policy domain associated with a particular server.
1. If you have not already done so, access the Administration Center by entering
the following address in a supported Web browser: https://
workstation_name:9043/ibm/console. The workstation_name is the network name
or IP address of the machine on which you installed the Administration Center.
The default Web administration port (HTTPS) is 9043. To get started, log in
using the Integrated Solutions Console user ID and password that you created
during the installation. Save this password in a safe location because you need
it not only to log in but also to uninstall the Administration Center.
2. Click Tivoli Storage Manager, and then click Policy Domains in the navigation
tree. The Policy Domains work page is displayed with a table that lists the
servers that are accessible from the Administration Center. The table also lists
the policy domains defined for each server.
3. In the Server Name column of the Policy Domains table, click the name of the
server with the STANDARD domain to which you want to add a client node. A
portlet is displayed with a table that lists the policy domains created for that
server.
4. In the Domain Name column of the server’s Policy Domains table, click the
STANDARD policy domain. The STANDARD Properties portlet is displayed.
5. In the domain’s properties portlet, click Client Nodes. A table is displayed
listing all the nodes assigned to the STANDARD policy domain.
6. In the client nodes table, click Select Action, and then select Create a Client
Node. The Create Client Node wizard is displayed.
7. Follow the instructions in the wizard. After you complete the wizard, the name
of the new client node is displayed in the Client Nodes table for the
STANDARD policy domain.
Starting and stopping the Administration Center
You can start and stop the Tivoli Storage Manager Administration Center server by
using the supplied commands.
In the following task descriptions, <tsm_home> is the root directory for your
Integrated Solutions Console installation and <iscadmin> and <iscpass> are a valid
ISC user ID and password.
For Windows, the <tsm_home> default location is C:\Program Files\tivoli\tsm. To
start the Administration Center from a command line, go to the
<tsm_home>\AC\ISCW61\profiles\TsmAC\bin directory or a subdirectory of the Tivoli
Storage Manager installation directory and issue the following command:
startServer.bat tsmServer
To stop the Windows Administration Center from a command line, go to the
<tsm_home>\AC\ISCW61\profiles\TsmAC\bin directory or a subdirectory of the Tivoli
Storage Manager installation directory and issue the following command:
stopServer.bat tsmServer -username iscadmin -password iscpass
To stop the server, you must specify a user ID and the password for that user ID.
If you issue the stopServer.bat tsmServer command without them, you are
prompted to enter them.
Functions not in the Administration Center
The Administration Center offers the functions of most administrative commands,
as well as unique functions such as the health monitor and wizards to help you
perform complex tasks. However, some Tivoli Storage Manager functions are
limited or not supported in the Administration Center.
The following table shows commands that are supported with some restrictions or
not yet supported in the Administration Center. Use the command line if the
command or command parameter that you need is not available in the
Administration Center.
Command
Supported in the Administration Center
ACCEPT DATE
No
AUDIT LIBRARY
Support added in Version 5.4
AUDIT LICENSES
No
AUDIT VOLUME
Support added in Version 5.4
BEGIN EVENTLOGGING
No
CANCEL EXPIRATION
No
CANCEL MOUNT
No
CANCEL RESTORE
No
CLEAN DRIVE
Support added in Version 6.1
CONVERT ARCHIVE
No
COPY DOMAIN
No
COPY MGMTCLASS
No
COPY POLICYSET
No
COPY PROFILE
No
COPY SCHEDULE
No
COPY SCRIPT
No
COPY SERVERGROUP
No
DEFINE COPYGROUP
TYPE=ARCHIVE
Supported except for these parameters:
v RETINIT
v RETMIN
These parameters are needed only to support IBM
Total Storage Archive Manager.
DEFINE EVENTSERVER
No
DEFINE NODEGROUP
Support added in Version 6.1
DEFINE NODEGROUPMEMBER
Support added in Version 6.1
DEFINE SPACETRIGGER
This command is supported for databases and
recovery logs, but not for storage pools, in Version
5.4 and 5.5.
In Version 6.1, this command is supported for
database and recovery logs and also for storage
pools.
DEFINE STGPOOL
Supported except for the RECLAMATIONTYPE
parameter
This parameter is needed only for support of EMC
Centera devices.
DELETE DATAMOVER
No
DELETE DISK
No
DELETE EVENT
No
DELETE EVENTSERVER
No
DELETE SUBSCRIBER
No
DISABLE EVENTS
No
DISMOUNT DEVICE
No
DISPLAY OBJNAME
No
ENABLE EVENTS
No
Event logging commands (BEGIN
EVENTLOGGING, END
EVENTLOGGING, ENABLE
EVENTS, DISABLE EVENTS)
No
MOVE GRPMEMBER
No
MOVE MEDIA
Support added in Version 5.4
MOVE NODEDATA
Support added in Version 5.4
QUERY AUDITOCCUPANCY
No
QUERY ENABLED
No
QUERY EVENTRULES
No
QUERY EVENTSERVER
No
QUERY LICENSE
No
QUERY MEDIA
Support added in Version 5.4
QUERY NASBACKUP
No
QUERY NODEDATA
Support added in Version 5.4
QUERY RESTORE
No
QUERY SYSTEM
No
QUERY TAPEALERTMSG
No
RECONCILE VOLUMES
No
REGISTER LICENSE
No
RENAME FILESPACE
No
RESTORE STGPOOL
No
RESTORE VOLUME
Yes, except use the command line to restore
random-access storage pool volumes.
SET ACCOUNTING
No
SET ACTLOGRETENTION
No
Some SNMP options can be viewed in the
interface, in a server’s properties notebook.
SET
No
ARCHIVERETENTIONPROTECTION
38
SET CLIENTACTDURATION
No
SET CONTEXTMESSAGING
No
IBM Tivoli Storage Manager for Windows: Administrator’s Guide
Command
Supported in the Administration Center
SET DBREPORTMODE
No
SET EVENTRETENTION
No
SET LICENSEAUDITPERIOD
No
SET MAXCMDRETRIES
No
SET MAXSCHEDSESSIONS
No
SET QUERYSCHEDPERIOD
No
SET RANDOMIZE
No
SET RETRYPERIOD
No
SET SCHEDMODES
No
SET SERVERNAME
No
SET SUBFILE
No
SET SUMMARYRETENTION
No
SET TAPEALERTMSG
No
SET TOCLOADRETENTION
No
SETOPT
Only the following server options can be modified
using the Administration Center:
v EXPINTERVAL
v RESTOREINTERVAL
UPDATE DISK
No
UPDATE DRIVE (FILE type)
No
UPDATE LIBRARY (FILE type)
No
UPDATE POLICYSET
No
VALIDATE LANFREE
Use the Enable LAN-free Data Movement wizard
to get this function.
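For example, one of these unsupported commands can be issued from the
administrative command-line client. In this sketch, the administrator ID and
password are placeholders:
dsmadmc -id=<admin_id> -password=<admin_password> query system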
Protecting the Administration Center
The Administration Center is installed as an Integrated Solutions Console (ISC)
plug-in. To protect your Administration Center configuration settings, use the
Tivoli Storage Manager backup-archive client to back up the ISC.
Backing up the Administration Center
To back up the Integrated Solutions Console (ISC), the Tivoli Storage Manager
backup-archive client must be installed on the ISC system and configured to back
up to a Tivoli Storage Manager server. For more information, see the Backup-Archive
Clients Installation and User’s Guide.
To back up the Administration Center, perform the following steps:
1. Stop the ISC. See “Starting and stopping the Administration Center” on page 36
for the command syntax.
2. Using the IBM Tivoli Storage Manager backup-archive client, back up the entire
Integrated Solutions Console installation directory. For example: back up
C:\Program Files\Tivoli\TSM\AC\ISCW61.
3. Start the ISC. See “Starting and stopping the Administration Center” on page
36 for the command syntax.
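For reference, step 2 can be performed with the backup-archive command-line
client. This is a minimal sketch that assumes the default installation path shown
above and a client that is already configured to your server:
rem Run after stopping the ISC (step 1); the path is the default install location
dsmc selective "C:\Program Files\Tivoli\TSM\AC\ISCW61\*" -subdir=yes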
Restoring the Administration Center
To restore the Integrated Solutions Console, the Tivoli Storage Manager
backup-archive client must be installed on the ISC system and configured to
restore from the Tivoli Storage Manager server that was used to back up the ISC.
To restore the Administration Center, perform the following steps:
1. If necessary, restore the operating system and reinstall the Tivoli Storage
Manager backup-archive client.
2. Reinstall the Integrated Solutions Console and the Administration Center. For
more information, see the Installation Guide.
3. Stop the ISC. See “Starting and stopping the Administration Center” on page 36
for the command syntax.
4. Use the Tivoli Storage Manager backup-archive client to restore the ISC to the
same location where it was originally installed.
5. Start the ISC. See “Starting and stopping the Administration Center” on page
36 for the command syntax.
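For reference, step 4 can be performed with the backup-archive command-line
client. This sketch assumes the ISC was backed up from the default location as
shown earlier:
rem Run while the ISC is stopped (step 3); -replace=all overwrites reinstalled files
dsmc restore "C:\Program Files\Tivoli\TSM\AC\ISCW61\*" -subdir=yes -replace=all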
Chapter 3. Configuring the server
You can use the Tivoli Storage Manager Console to help you configure the server
on Windows systems. Each time you add a new Tivoli Storage Manager server
instance, one or more wizards are presented to help you with configuration tasks.
You can choose from two wizard-based configuration paths:
Standard configuration
Choose this option to initialize and configure a server. A series of wizards
is presented in sequence to guide you through the initial configuration
process. This is the recommended configuration path for setting up a
functional production environment.
Minimal configuration
Choose this option to quickly initialize a Tivoli Storage Manager server
instance and perform a test backup of data located on the Tivoli Storage
Manager server machine. This configuration allows you to quickly evaluate
basic function.
While all Tivoli Storage Manager configuration and management tasks can also be
performed using the command-line interface, the wizards are the preferred method
for initial configuration. You can return to individual wizards after the initial
configuration to update settings and perform management tasks. Refer to the
Installation Guide for more information on configuration and management wizards.
This chapter contains an overview of the wizard-based initial configuration process
and instructions for performing the initial configuration.
Initial configuration overview
You can configure the Tivoli Storage Manager server for Windows using either a
standard or minimal configuration.
Although the wizards simplify the configuration process by hiding some of the
detail, a certain amount of Tivoli Storage Manager knowledge is still required to
create and maintain a typically complex storage management environment. If you
are not familiar with IBM Tivoli Storage Manager functions and concepts, you
should refer to Chapter 1, “Tivoli Storage Manager overview,” on page 3 before
you begin.
The initial configuration process configures a single server. If you have purchased
the Enterprise Administration feature and plan to configure a network of servers,
you must perform additional tasks. For details, see Chapter 22, “Managing a
network of Tivoli Storage Manager servers,” on page 689.
Standard configuration
During the standard configuration process, wizards help you perform the
commonly required tasks.
These include the following:
v Analyze drive performance to determine best location for Tivoli Storage
Manager server components
v Initialize the Tivoli Storage Manager server
v Apply Tivoli Storage Manager licenses
v Configure Tivoli Storage Manager to access storage devices
v Prepare media for use with Tivoli Storage Manager
v Register Tivoli Storage Manager client nodes
v Define schedules to automate Tivoli Storage Manager client tasks
Additional configuration wizards can help you perform the following optional
tasks:
v Configure Tivoli Storage Manager for use in a Microsoft Cluster Server (MSCS)
environment (Refer to Appendix B, “Configuring clusters,” on page 895.)
v Configure Tivoli Storage Manager for use in a Windows Active Directory
environment (Refer to Appendix F, “Configuring Active Directory,” on page 941
for more information.)
v Create a remote Tivoli Storage Manager for Windows client configuration
package (Refer to “Installing clients using shared resources” on page 68.)
The standard initial configuration process does not include all IBM Tivoli Storage
Manager features, but it does produce a functional Tivoli Storage Manager system
that can be further customized and tuned. The default settings suggested by the
wizards are appropriate for use in many cases.
Minimal configuration
During the minimal configuration process, a wizard helps you initialize a Tivoli
Storage Manager server instance. Open client registration is enabled, so Tivoli
Storage Manager client nodes can automatically register themselves with the
server.
The following objects are also created on the server machine:
v A client options file
If a Tivoli Storage Manager client is not installed locally, the required directory
structure will be created. If a client options file already exists, it will be backed
up before the new file is created. TCP/IP communication is enabled for the
client and server.
v A File device
A file device is drive space designated for use as a virtual storage device.
Standard files are used to represent individual media volumes. Data is written to
file volumes sequentially, as if they were tape volumes. When a new file volume
is required, a 25MB file is automatically created. When file volumes are emptied,
they are automatically deleted. Because the minimal configuration option does
not provide for storage device configuration, default backup and archive storage
pools are configured to send their data to the file device.
Stopping the initial configuration
You can click Cancel to exit any wizard panel. A window appears, asking if you
want to mark the current wizard task as complete.
You can click Yes to continue to the next wizard, or No to exit the initial
configuration process. However, canceling during initial configuration can produce
unexpected results. The preferred method is to complete the entire wizard
sequence, and then restart an individual wizard to make any configuration
changes.
Performing the initial configuration
If you intend to configure IBM Tivoli Storage Manager for use in a Microsoft
Cluster Server (MSCS) environment, there are certain tasks that you must complete
before you begin the initial configuration of the Tivoli Storage Manager server.
Before continuing with this section, refer to Appendix B, “Configuring clusters,” on
page 895.
See Chapter 7, “Configuring storage devices,” on page 121 for information on
device configuration for Windows Server 2003.
After you have installed IBM Tivoli Storage Manager, do the following:
1. Double-click the Tivoli Storage Manager Management Console icon on the
desktop. The Tivoli Storage Manager Console window opens.
Figure 2. Tivoli Storage Manager Console – Welcome
2. Expand the IBM Tivoli Storage Manager tree in the left pane until the local
machine name is displayed.
3. Right-click the local machine name and select Add a New Tivoli Storage
Manager Server.
The Initial Configuration Task List is displayed.
Figure 3. Tivoli Storage Manager Console – Welcome
4. Select Standard configuration or Minimal configuration and click Start. For
more information about configuration options, refer to “Initial configuration
overview” on page 41.
v If you selected Standard configuration, refer to “Initial configuration
environment wizard and tasks” for instructions.
v If you selected Minimal configuration, refer to “Server initialization wizard”
on page 47 for instructions.
Note: If a Tivoli Storage Manager server instance already exists on the local
machine, you will be prompted to confirm that you want to create and configure a
new server instance. Be careful to create only the server instances you require. In
most cases, only one server instance is necessary.
Initial configuration environment wizard and tasks
The Initial Configuration Environment Wizard is the first wizard in the standard
configuration sequence.
Figure 4. Initial configuration – environment wizard
The information you provide in this wizard will be used to customize upcoming
wizards to reflect your preferences and storage environment.
This wizard consists of a Welcome page and a series of input pages that help you
perform the following tasks:
First Input Page
Choose whether configuration tips are automatically displayed during the
initial configuration process. This additional information can be helpful for
new Tivoli Storage Manager users.
Second Input Page
Choose to configure Tivoli Storage Manager in a standalone or network
environment. Table 7 describes these environments.
Table 7. Standalone vs. network environment

Standalone
    A Tivoli Storage Manager backup-archive client and Tivoli Storage
    Manager server are installed on the same machine to provide storage
    management for only that machine. There are no network-connected
    Tivoli Storage Manager clients. Client-server communication will be
    automatically configured.

Network
    A Tivoli Storage Manager server is installed. The backup-archive client
    is optionally installed on the same machine. You are licensed to install
    network-connected Tivoli Storage Manager clients on remote machines.
    You must configure communications between the remote clients and
    the server.
Server initialization wizard
The Server Initialization Wizard is the only wizard that appears during the
minimal configuration process. It also appears as part of the standard configuration
wizard sequence.
Figure 5. Initial configuration – server initialization wizard
This wizard consists of a Welcome page and a series of input pages that help you
perform the following tasks:
First Input Page
Choose a directory to store files that are unique to the Tivoli Storage
Manager server instance you are currently configuring. Enter the location
of the initial-disk storage pool volume.
Second Input Page
Enter the locations of the directories to be used by the database. Each
location must be on a separate line, and the directories must be empty.
Third Input Page
Enter the directories to be used by the logs.
Fourth Input Page
Choose a name and password for the Tivoli Storage Manager server. Some
Tivoli Storage Manager features require a server password.
The database and log directory names are limited to the following characters:
A-Z    Any letter, A through Z
0-9    Any number, 0 through 9
_      Underscore
.      Period
-      Hyphen
+      Plus
&      Ampersand
If a Microsoft cluster server is detected during the standard configuration process,
you will be prompted to configure IBM Tivoli Storage Manager for use in a
clustered environment. Select Yes to start the Cluster Configuration Wizard. Before
you set up a cluster for use with Tivoli Storage Manager, you will need to do some
planning and ensure that your hardware is supported. For a detailed overview and
task instructions, refer to Appendix B, “Configuring clusters,” on page 895.
Note: The minimal configuration process does not support cluster configuration.
When you complete the Server Initialization Wizard, Tivoli Storage Manager does
the following:
v Initializes the server database and logs.
v Creates two default schedules: DAILY_INCR and WEEKLY_INCR. You can use
the Scheduling Wizard to work with these schedules or create others.
v Registers a local administrative client with the server. This client is used to
provide access to the administrative Web interface and server command-line
interface. The client is named admin, and its default password is admin. To
ensure system security, it is recommended that you change this password.
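For example, you can change this password from the server command line (the
new password shown is a placeholder):
update admin admin <new_password>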
Initialization results are recorded in the initserv.log file in the server directory. If
you have problems starting the server after initialization, check this log file for
error statements. If you contact technical support for help, you may be asked to
provide this file.
If you are performing a minimal configuration, refer to the Installation Guide for
instructions about how to test backup and archive function.
Device configuration wizard
The Device Configuration Wizard automatically detects storage devices that are
attached to the Tivoli Storage Manager server. Use this wizard to select the devices
that you want to use with Tivoli Storage Manager, and to configure device sharing
if required.
The Device Configuration Wizard consists of a Welcome page and input pages that
help you perform the following tasks:
v Select the storage devices you want to use with Tivoli Storage Manager and
define them to Tivoli Storage Manager.
v Manually associate drives with libraries, if required.
v Specify SCSI element number order for manually associated drives.
v Configure device sharing, if required.
v Manually add virtual or undetected devices.
Figure 6. Initial configuration – device configuration wizard
v The left wizard pane displays a tree-view of devices connected to the Tivoli
Storage Manager server machine. Tivoli Storage Manager device names are used
to identify devices. Libraries and drives can only be detected if your hardware
supports this function.
v The right pane displays basic and detailed information about the device selected
in the tree-view. If the device is a type that can be shared, the Sharing tab
displays any Tivoli Storage Manager components that will share the device.
You can perform the following tasks with the device configuration wizard:
Manually associating drives
Any drive listed as Unknown must be manually associated with a library.
For example, drives attached to a Fibre Channel Switch or a SAN cannot
be automatically associated. Tivoli Storage Manager can determine that the
library contains a certain number of drives but cannot acquire their
element numbers or addresses. The correct names for these drives will
appear at the bottom of the tree as standalone drives. Drag and drop the
unknown drive on the correct library. To use a library with Tivoli Storage
Manager, any of its drives displayed as Unknown must be replaced with a
valid drive name.
Note: If you manually associate more than one drive with the same library,
you must order the drives according to element number. If you do not
arrange the drives correctly, Tivoli Storage Manager will not work as
expected. To determine the element number for a drive, select the drive
and click the Detailed tab in the right wizard pane. Use the element
number lookup tool to determine the correct position of the drive. If your
drive is not listed, refer to the manufacturer’s documentation.
Setting up device sharing
To set up device sharing, click the Sharing tab and click the Components
button. The Device Sharing dialog is displayed. Follow the directions in
this dialog.
Adding virtual or undetected devices
Click the New button to add File-type devices and drives or libraries
accessed through an NDMP file server.
To define a device, select its check box. Any device with an open check box can be
defined to the Tivoli Storage Manager server. A library check box that is partially
filled indicates that some of the drives associated with that library have not been
selected for use with Tivoli Storage Manager.
Note: A solid green check box indicates that the device has been previously
defined to Tivoli Storage Manager. Previously defined devices cannot be
manipulated or removed using the wizard. You can use the administrative Web
interface or server command line to perform this task.
The libraries and drives you define to Tivoli Storage Manager will be available to
store data.
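The definitions that the wizard creates correspond to server commands. The
following is an illustrative sketch only; the server, library, drive, and device
names are hypothetical and are normally determined by the wizard:
define library lib1 libtype=scsi
define path server1 lib1 srctype=server desttype=library device=lb0.0.0.2
define drive lib1 drive1
define path server1 drive1 srctype=server desttype=drive library=lib1 device=mt0.1.0.2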
Client node configuration wizard
The Client Node Configuration Wizard allows you to add and register the client
nodes that will back up data to the server instance that you are configuring.
The Client Node Configuration Wizard consists of a Welcome page and several
input pages that help you perform the following tasks:
v Register client nodes with the Tivoli Storage Manager server. You can add nodes
individually, or detect and register multiple clients at once.
v Associate registered nodes with storage pools by adding the clients to a new or
existing policy domain.
v Arrange the storage pool hierarchy to meet your storage needs.
The wizard also allows you to specify how the backup data for these clients will
be stored, by associating client nodes with storage pools. See “Storage pools
overview” on page 52.
Figure 7. Initial configuration – client node configuration wizard
v The left pane displays two default Tivoli Storage Manager storage pools
(DISKPOOL and BACKUPPOOL).
If you used the Device Configuration Wizard to define any storage devices to
Tivoli Storage Manager, storage pools associated with those devices were
automatically generated, and will also be displayed here.
v The right pane displays client nodes associated with the storage pool selected in
the left pane.
To register new client nodes, you must provide client node names and passwords.
You can also change storage policy settings by adding or modifying policy
domains. Tivoli Storage Manager storage policy determines how many copies of
backed up files are maintained, and how long individual copies of files are
retained in storage.
Note: You should consider using this wizard to register any remote client nodes
now, even if you have not yet installed Tivoli Storage Manager client code on those
machines. After you complete the initial server configuration, you can install the
client code remotely and configure the client nodes to transfer data to this server.
See “Installing clients using shared resources” on page 68 for more information.
Client nodes you have registered can be configured to back up data to this Tivoli
Storage Manager server instance. The backup data will be managed according to
way you set up the client’s associated storage pool hierarchy.
Storage pools overview
Tivoli Storage Manager uses a logical construct called a storage pool to represent
storage resources. Different storage pools are used to route client data to different
kinds of storage resources. Storage pools can be arranged in a hierarchy, with one
pointing to another, to allow for migration of data from one type of storage to
another.
Tivoli Storage Manager provides a default storage pool named DISKPOOL, which
represents random-access storage space on the hard drive of the Tivoli Storage
Manager server machine. During server initialization, Tivoli Storage Manager
created one volume (representing a discrete amount of allocated space) in this
storage pool. By default, this volume was configured to grow dynamically. You can
add more volumes to expand this storage pool as required.
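For example, a volume can be added to DISKPOOL from the server command
line (the file name and size here are illustrative):
define volume diskpool c:\tsmdata\disk2.dsm formatsize=100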
Tivoli Storage Manager also provides three other default storage pools, which are
all set up to point to DISKPOOL. These three storage pools correspond to the three
ways Tivoli Storage Manager manages client data: backup, archive, and
space-management. The Client Node Configuration Wizard allows you to work
with the backup storage pool, BACKUPPOOL.
By default, data for any client nodes you associate with BACKUPPOOL will be
immediately transferred to DISKPOOL. You can store the data in DISKPOOL
indefinitely, or just use DISKPOOL as a temporary cache and then migrate the data
to any other storage devices represented in the storage pool hierarchy.
See “Arranging the storage-pool hierarchy” on page 54.
For more information, and to configure additional storage pools, refer to
Chapter 11, “Managing storage pools and volumes,” on page 275.
Registering client nodes
You can register client nodes with the Client Node Configuration Wizard.
To register client nodes individually, complete the following steps:
1. Click the Add button.
The Properties dialog appears, with the Node information tab selected.
Figure 8. Properties for node - node information
2. Enter the node name and password information.
3. Consider your storage policy needs.
By default, the new client node will be associated with the STANDARD storage
policy domain. BACKUPPOOL is the default backup storage pool for this
domain. You can associate the new client node with a different storage pool by
clicking New to create a new policy domain, or Edit to modify the existing
policy domain.
Managing multiple policy domains can significantly increase your
administrative overhead, so you should create only the domains you require.
For more information, refer to the chapter on implementing policies for client
data in the Administrator’s Guide.
To detect and register multiple client nodes at once, return to the main wizard
panel and click the Advanced button. Follow the instructions in the Properties
dialog. You can add clients from a text file, or choose from computers detected in
your Windows domain. The Tivoli Storage Manager console directory contains a
file named sample_import_nodes.txt, which defines the format required to import
client nodes.
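Registering a node through the wizard is equivalent to the REGISTER NODE
server command. For example (the node name and password are placeholders):
register node client1 <password> domain=standard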
To modify Tivoli Storage Manager client node information, select a client node
name from the right wizard pane and click the Edit button. To delete a client node
you just added, select the client node name and click the Delete button.
Note: You cannot use the wizard to delete a client that was previously defined to
the server. You can use the administrative Web interface or server command line to
perform this task.
Arranging the storage-pool hierarchy
By default, new client nodes send backup data to BACKUPPOOL, which
immediately migrates the data to DISKPOOL. You can point BACKUPPOOL at any
other displayed storage pool to route data there instead.
A storage pool can migrate data to one other storage pool. Multiple storage pools
can be set up to migrate data to the same storage pool. To see which clients are
associated with a storage pool, select a storage pool in the left wizard pane. Any
client nodes associated with that pool are displayed in the right pane.
Note: In a standalone server configuration, it is generally more efficient to back up
data directly to tape. However, in a network configuration, consider arranging
your storage pools so that client data is backed up to disk and later migrated to
tape.
To back up client data directly to tape:
1. Associate clients with BACKUPPOOL.
2. Drop BACKUPPOOL on a tape storage pool (for example, 8MMPOOL1).
To back up client data to disk, for migration to tape:
1. Associate clients with BACKUPPOOL.
2. Drop BACKUPPOOL on DISKPOOL. (This is the default setting.)
3. Drop DISKPOOL on a tape storage pool.
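In server terms, dropping one storage pool on another sets the NEXTSTGPOOL
attribute of the source pool. For example, the disk-to-tape arrangement above
corresponds to commands like these (the tape pool name is taken from the
example):
update stgpool backuppool nextstgpool=diskpool
update stgpool diskpool nextstgpool=8mmpool1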
Media labeling wizard
Storage media must be labeled and checked in to Tivoli Storage Manager before it
can be used. Media labels are written at the start of each volume to uniquely
identify that volume to Tivoli Storage Manager. The Media Labeling Wizard
appears only when attached storage devices have been defined to Tivoli Storage
Manager.
Slightly different versions of the wizard will appear for automated and manual
storage devices. This section describes the media labeling and check-in process for
automated library devices.
The Media Labeling Wizard consists of a Welcome page and a series of input pages
that help you perform the following tasks:
First Input Page
Select the devices that contain the media you want to label.
Second Input Page
Select and label specific media.
Third Input Page
Check in labeled media to Tivoli Storage Manager.
Figure 9. Initial configuration – media labeling wizard (1)
v The left pane displays devices and drives recognized by Tivoli Storage Manager.
v The right pane displays information about any device or drive selected in the
left pane.
To select a device and any associated drives, check the box next to the device
or drive name.
When the check-in process has completed, media will be available for use by Tivoli
Storage Manager. By default, media volumes will be checked in with scratch status.
For more information, refer to Chapter 8, “Managing removable media operations,”
on page 173.
Selecting and labeling media
You can specify volumes and labels to use with the Media Labeling Wizard.
Figure 10. Initial configuration – media labeling wizard
To select and label media, do the following:
1. Check the box next to the media you want to label.
2. Check Overwrite existing label if necessary, and select from the other available
labeling options.
3. Click the Label Now button.
The Tivoli Storage Manager Media Labeling dialog appears.
4. Enter a label for the media.
The Media Labeling Wizard supports labels up to six characters long.
5. Click OK.
The Tivoli Storage Manager Media Labeling Monitor dialog appears. Status is
displayed and updated throughout the labeling process. When the labeling
process is complete, the OK button becomes active. The amount of time this
takes can depend on the storage hardware and type of media you are using.
6. Click OK.
The new label should appear in the left pane.
7. After you have finished labeling media, click Next.
The Media Check-in dialog appears.
Checking in media
Labeled media must be checked in before you can use it. The Media Labeling
Wizard allows you to check in media.
Figure 11. Initial configuration – media labeling wizard
This window appears if you have labeled media using the previous window.
v Click the Check-in now button to check in labeled media to Tivoli Storage
Manager. Media volumes from all of the storage devices you selected in the first
media labeling dialog are eligible for check-in. All labeled media not previously
checked in to this server will automatically be checked in at this time.
A dialog appears, describing the check-in process. Check-in runs as a
background process, and media will not be available for use until the process
completes. Depending on your storage hardware, and the amount of media
being checked in, this process can take some time. To monitor the check-in
process, complete the initial configuration and do the following:
1. From the Tivoli Storage Manager Console, expand the tree for the Tivoli
Storage Manager server you are configuring.
2. Expand Reports and click Monitor.
3. Click the Start button to monitor server processes in real time.
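The labeling and check-in steps that the wizard performs correspond to the
LABEL LIBVOLUME server command. As a sketch (the library name is
hypothetical, and barcode labeling assumes your library has a barcode reader):
label libvolume lib1 search=yes labelsource=barcode checkin=scratch overwrite=yes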
Default configuration results
After the Initial Configuration completes, you are prompted to verify your
configuration.
Figure 12. Initial configuration – completed
If you have installed a local backup-archive client, click Yes to immediately start
the client. Click No if you have not installed the client code locally, or if you plan
to verify your configuration by backing up remotely installed clients.
Note: Click the Tivoli Storage Manager Backup Client icon on your desktop to
start the local backup-archive client at any time.
You can use the Tivoli Storage Manager Console to perform a variety of
administrative tasks, including issuing commands and monitoring server processes.
You can also access the individual wizards you used during the initial
configuration process from this interface. Additional wizards are also available.
The Tivoli Storage Manager configuration wizards simplify the setup process by
hiding some of the detail. For the ongoing management of your Tivoli Storage
Manager system, it can be helpful to understand the default configuration that has
been created for you.
Your environment might differ somewhat from the one described in this section,
depending on the choices you made during the initial configuration process. All of
these default settings can be modified, and new policy objects can be created.
Data management policy objects
Tivoli Storage Manager provides data management policy objects to specify how
files are backed up, archived, migrated from client node storage, and managed in
server storage.
Table 8 lists them. For more information, refer to Chapter 14, “Implementing
policies for client data,” on page 455.
Table 8. Default data management policy objects

Policy Domain: STANDARD
    By default, any clients or schedules you created were added to this
    domain. The domain contains one policy set.

Policy Set: STANDARD
    This policy set is ACTIVE. It contains one management class.

Management Class: STANDARD
    This management class contains a backup copy group and an archive
    copy group.

Copy Group (Backup): STANDARD
    This copy group stores one active and one inactive version of existing
    files. The inactive version will be kept for 30 days. Stores one inactive
    version of deleted files for 60 days. Points to BACKUPPOOL.

Copy Group (Archive): STANDARD
    This copy group stores one active and one inactive version of existing
    files. The inactive version will be kept for 30 days. Stores one inactive
    version of deleted files for 60 days. Points to ARCHIVEPOOL.
Storage device and media policy objects
Tivoli Storage Manager provides default storage-device and media-policy objects to
specify how data is stored.
Table 9 lists them. For more information, refer to Chapter 11, “Managing storage
pools and volumes,” on page 275.
Table 9. Default storage device and media policy objects

Storage Pool (Backup): BACKUPPOOL
    This storage pool points to DISKPOOL. No volumes are defined, so data
    will migrate immediately. You might have used the Client Node
    Configuration Wizard to point BACKUPPOOL directly at a removable
    media device.

Storage Pool (Archive): ARCHIVEPOOL
    This storage pool points to DISKPOOL. No volumes are defined, so data
    will migrate immediately.

Storage Pool (Disk): DISKPOOL (initial volume is named disk1.dsm)
    This storage pool consists of a 4MB volume created in the tsmdata
    directory. You might have used the Client Node Configuration Wizard
    to point DISKPOOL directly at a removable media device. If so, data
    will begin to migrate from DISKPOOL to the device when DISKPOOL
    reaches 90% of capacity. Migration will continue until DISKPOOL
    reaches 70% of capacity.
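The DISKPOOL migration thresholds in Table 9 correspond to the HIGHMIG and
LOWMIG storage pool parameters, which can be adjusted from the server
command line. For example:
update stgpool diskpool highmig=90 lowmig=70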
Tivoli Storage Manager library, drive, storage pool, and path objects will have been
created for any storage libraries or drives you defined using the Device
Configuration Wizard. Tivoli Storage Manager volumes will have been created for
any media you labeled using the Media Labeling Wizard. If you used the Client
Node Configuration Wizard to associate a Tivoli Storage Manager client with
SAN-attached disk, a Tivoli Storage Manager disk object was also created.
Objects for Tivoli Storage Manager clients
Tivoli Storage Manager provides default client objects to manage client schedules
and operations.
Table 10 lists them.
Table 10. Default client objects

Tivoli Storage Manager Client (Local Administrative): ADMIN
    This client is registered with the Tivoli Storage Manager server by
    default. It provides access to the administrative Web interface and
    server command-line interface. The default password is ADMIN. To
    ensure system security, it is recommended that you change the
    password. During the standard configuration process, you are also
    prompted to create at least one local backup-archive client with the
    same name as the local machine.

Client Schedule (Daily): DAILY_INCR
    This schedule is defined in the STANDARD policy domain, so only
    clients associated with that domain can use it. You can use the
    Scheduling Wizard to associate clients with this schedule. You must
    also install and start the client scheduler service on each client node.
    The schedule runs a daily incremental backup at the same time you
    initially configured Tivoli Storage Manager. The schedule has a window
    of 2 hours, and a priority of 5.

Client Schedule (Weekly): WEEKLY_INCR
    This schedule is defined in the STANDARD policy domain, so only
    clients associated with that domain can use it. You can use the
    Scheduling Wizard to associate clients with this schedule. You must
    also install and start the client scheduler service on each client node.
    The schedule runs a weekly incremental backup every Friday at the
    same time you initially configured Tivoli Storage Manager. The
    schedule has a window of 1 hour, and a priority of 2.
For more information, refer to Chapter 16, “Scheduling operations for client
nodes,” on page 545 and Chapter 17, “Managing schedules for client nodes,” on
page 553.
Verifying the initial configuration
You can verify the initial configuration by backing up client data to the IBM Tivoli
Storage Manager server.
Performing pre-backup tasks for remote clients
Before you can back up a remote client, you need to complete certain tasks.
The following tasks can be performed in any order:
v Register the client node with the Tivoli Storage Manager server (Refer to “Client
node configuration wizard” on page 50).
v Install and configure the Tivoli Storage Manager client on each remote machine.
Installing the Tivoli Storage Manager client:
You can install the Tivoli Storage Manager client using any of the
following methods:
– Install directly from the CD-ROM.
– Create client images to install.
– Use a network-shared drive to distribute the Tivoli Storage Manager
client code. (Refer to “Installing clients using shared resources” on
page 68).
Configuring the Tivoli Storage Manager client:
Configure the communications options in the client options file to
connect with the server.
Note: Each Tivoli Storage Manager client instance requires a client
options file (dsm.opt). For the location and details about configuring the
client options file, see “Creating or updating a client options file” on
page 69. You may also need to set up IBM Tivoli Storage Manager
schedules for your remote clients. See “Working with schedules on
network clients” on page 70 for more information.
Backing up a client
Back up a client to help verify your initial configuration.
For more information, see the appropriate Using the Backup-Archive Clients User’s
Guide.
Note: It is recommended that you back up a small file or directory.
Do the following to back up a remote or local client:
1. Start the client, enter a node name and password, and click Login. The
backup-archive client window opens.
2. Click Backup from the client window. The Backup window opens.
3. Expand the directory tree.
4. Select the folder icons to display the files in the directory.
5. Click on the selection boxes next to the files or directories you want to back up.
6. From the drop-down list, choose the backup type:
v Incremental (date only)
v Incremental (complete)
v Always backup: for a selective backup
Note: The first backup of a file is always a full backup, regardless of what
you specify.
7. Click Backup. The Backup Report window displays the backup processing
status.
Excluding files from the backup
You might not want to back up certain files. For example, core files, local caches of
network file systems, operating system or application files that could easily be
recovered by installing the program again, or any other files that you could easily
rebuild might not need to be backed up.
To exclude certain files from both incremental and selective backup processing,
create an include-exclude list in the client options file. IBM Tivoli Storage Manager
backs up any file that is not explicitly excluded from backup. You can also include
specific files that are in a directory that you have excluded. For more information,
see the appropriate Using the Backup-Archive Clients User’s Guide.
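For illustration, an include-exclude list in a Windows client options file might
contain entries like the following (the paths are hypothetical):
exclude C:\Windows\Temp\...\*
exclude.dir C:\temp
include C:\data\...\*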
Restoring client files or directories
You can perform a simple restore of client files.
For details and advanced procedures, see the appropriate Backup-Archive Clients
Installation and User’s Guide publication.
To restore backup versions of files or directories:
1. Click Restore from the client window. The Restore window opens.
2. Expand the directory tree.
3. Expand the File Level.
4. Click on the selection boxes next to the files or directories you want to restore.
5. Click Restore. The Restore Destination window opens.
6. Select the destination in the Restore Destination window.
7. Click Restore. The Restore Report window displays the restore processing
status.
IBM Tivoli Storage Manager can keep multiple versions of files, and you can
choose which version to restore. Tivoli Storage Manager marks the most recent
version as active and all other versions as inactive. When you back up a file, Tivoli
Storage Manager marks the new backup version active, and marks the previous
active version as inactive. When the maximum number of inactive versions is
reached, Tivoli Storage Manager deletes the oldest inactive version.
If you try to restore both an active and inactive version of a file at the same time,
only the active version is restored.
v To restore an active backup version, click Display active files only from the
View drop-down list.
v To restore an inactive backup version, click Display active/inactive files from
the View drop-down list.
For more information, see the appropriate Using the Backup-Archive Clients User’s
Guide.
Archiving and retrieving files
Archive a small file or directory. You can select files to be archived by name or
from a directory tree.
For more information, see the appropriate Using the Backup-Archive Clients manual.
Archiving files by name
You can select files to be archived by name.
To archive files by name, complete the following procedure.
1. Click the Archive button in the client main window. The Archive window
opens.
2. Expand the directory tree until you find the drive or directory that you want.
3. Highlight the drive or directory that you want.
4. Search for file names by doing the following:
a. Click the Find icon on the tool bar.
b. Enter the search criteria in the Find Files window. You can use a mask to
find files with similar names. Assign a unique description for each archive
package.
c. Click Search. The Matching Files window opens.
5. Click the selection boxes next to the files you want to archive.
6. In the Description box on the tool bar, enter a description, accept the default
description, or select an existing description for your archive package.
7. Click Archive to archive the files. The Archive Status window displays the
status progress of the archive.
Archiving files using a directory tree
You can archive specific files or entire directories from a directory tree.
To archive your files from the directory tree:
1. Click the Archive button in the client main window. The Archive window
opens.
2. Expand the directory tree until you find the directories or drive that you want.
3. Click the selection boxes next to the files or directories that you want to
archive.
4. In the Description box on the tool bar, enter a description, accept the default
description, or select an existing description for your archive package.
5. Click Archive. The Archive Status window opens. The Archive Report
window displays the status progress of the archive.
Retrieving archive copies
You retrieve files when you want to return archived copies of files or directories to
your workstation.
To retrieve archived copies:
1. Click the Retrieve button on the client main window. The Retrieve window
opens.
2. You can find the files or directories in either of the following ways:
v From the directory tree: Expand the directory tree until you find the object
you want. The objects are grouped by archive package description.
v By name:
a. Click the Find icon on the tool bar. The Find Files window opens.
b. Enter your search information in the Find Files window.
c. Click Search. The Matching Files window opens.
3. Click on the selection boxes next to the objects that you want to retrieve.
4. Click Retrieve. The Retrieve Destination window opens.
5. Enter the information in the Retrieve Destination window.
6. Click Retrieve. The Retrieve Report window displays the processing results.
Getting started with administrative tasks
There are some basic IBM Tivoli Storage Manager administrative tasks that are
good starting points.
Refer to Chapter 1, “Tivoli Storage Manager overview,” on page 3 for a
comprehensive discussion of Tivoli Storage Manager features and detailed
instructions on monitoring, customizing, and administering the Tivoli Storage
Manager environment. This topic describes the following administrative tasks:
Managing the Tivoli Storage Manager server
v “Managing Tivoli Storage Manager servers” on page 65
v “Starting the Tivoli Storage Manager server” on page 66
v “Stopping the Tivoli Storage Manager server” on page 66
v “Backing up the database and database recovery log” on page 67
v “Removing the Tivoli Storage Manager server” on page 67
Installing and configuring Tivoli Storage Manager clients
v “Installing and configuring backup-archive clients” on page 68
v “Creating or updating a client options file” on page 69
Managing Tivoli Storage Manager client schedules
v “Working with schedules on network clients” on page 70
Managing Tivoli Storage Manager client/server communications
v “Installing and configuring backup-archive clients” on page 68
Managing Tivoli Storage Manager administrators
v “Registering additional administrators” on page 72
v “Changing administrator passwords” on page 72
You can also use the Administration Center to manage servers and clients. See
“Managing servers with the Administration Center” on page 33.
Managing Tivoli Storage Manager servers
IBM Tivoli Storage Manager services must be run under an Administrator’s Group,
Windows Power Users Group, or a Local System Account.
Administrator’s Group
If you are logged in under an account in the Administrator’s Group, you
can start or stop the server, set server properties, and perform non-service
related tasks using either the Services Management Console (services.msc)
or the Tivoli Storage Manager snapin (tsmw2k.msc). You can also control
services including server, storage agent, web client, client acceptor daemon,
scheduler, journal-based backup, and others.
Windows Power Users Group
If you are logged in under an account in the Windows Power Users group,
you can start or stop the server and control services and non-service
related tasks using the Services Management Console, but not the Tivoli
Storage Manager snapin. You can start or stop the Tivoli Storage Manager
service with the "net start" or "net stop" commands from the Windows
command line. You cannot set server properties from this group.
Local System Account
If you are logged in under an account in the local users group, you cannot
start or stop the server and you cannot set server properties. You can use
the Services Management Console to control other services, but only if the
Tivoli Storage Manager service is not using the Local System account. You
can also perform non-service related tasks using the management console,
however the following conditions apply:
v The user account must be able to read and write the registry under the
key: HKEY_LOCAL_MACHINE\SOFTWARE\IBM\ADSM\CurrentVersion
v The user account must be able to read and write files in the Tivoli
Storage Manager program folders and, in particular, log files in the
Tivoli Storage Manager management console directory
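For example, the service can be controlled from a Windows command prompt.
The service name depends on how the server instance was installed; "TSM
Server1" is typical but illustrative:
net start "TSM Server1"
net stop "TSM Server1"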
Starting the Tivoli Storage Manager server
You can start the Tivoli Storage Manager server in several ways.
However, we recommend that you start it as a service. In this way, the server
remains active when you log off the workstation. To start the server as a service,
do the following from the Tivoli Storage Manager Console:
1. Expand the tree for the Tivoli Storage Manager server you are starting and
expand Reports.
2. Click Service Information. The Service Information view appears in the right
pane.
3. If the server status displays Stopped, right-click the service line and select
Start.
Stopping the Tivoli Storage Manager server
You can stop the server without warning if required. To avoid losing
administrative and client node connections, stop the server only after current
sessions have been completed or canceled.
For most tasks, your server must be running. This procedure is explained here in
case an unusual situation requires that you stop the server. To stop the server, do
one of the following:
v Stop a server that is running as a Service:
1. Expand the tree for the Tivoli Storage Manager server you are stopping and
expand Reports.
2. Click Service Information. The Service Information view appears in the right
pane.
3. Right-click the server service line and select Stop.
Note: This shuts down the server immediately. The shutdown also cancels all
Tivoli Storage Manager sessions.
v Stop a server from the administrative Web interface:
1. From the tree view in the browser, expand Object View.
2. Expand Server.
3. Click Server Status.
4. From the drop-down menu, select Halt Server and click Finish.
Note: This procedure shuts down the server immediately. The shutdown also
cancels all client sessions.
v Stop a server from the administrative command line:
1. Expand the tree for the Tivoli Storage Manager server you are stopping and
expand Reports
2. Click Command Line.
The Command Line view appears in the right pane.
3. Click Command Line Prompt in the right pane.
The Command Prompt dialog appears.
4. Enter halt in the Command field, and click the Submit button.
Note: This shuts down the server immediately. The shutdown also cancels all
client sessions.
Backing up the database and database recovery log
If the Tivoli Storage Manager server database or the recovery log is unusable, the
entire server is unavailable. If a database is lost and cannot be recovered, all of the
data managed by that server is lost. If a storage pool volume is lost and cannot be
recovered, the data on the volume is also lost.
To back up the database and storage pools regularly, define administrative
schedules. If you lose your database or storage pool volumes, you can use offline
utilities provided by IBM Tivoli Storage Manager to restore your server and data.
See “Automating a basic administrative command schedule” on page 590 for
details.
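For example, a nightly full database backup might be scheduled with an
administrative command schedule like the following (the schedule name, device
class, and start time are illustrative):
define schedule backup_db type=administrative cmd="backup db type=full devclass=tapeclass" active=yes starttime=21:00 period=1 perunits=days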
Removing the Tivoli Storage Manager server
Before you remove the current version of the Tivoli Storage Manager server, there
are certain tasks you must perform.
After you remove the Tivoli Storage Manager device driver, restart your system.
To return to an earlier version of Tivoli Storage Manager after you perform a
migrate install, you must have a full database backup from your original version
and the server install code for your original version.
Note: You cannot restore a backed up database from a prior version on to a newer
version of the Tivoli Storage Manager server.
If you return to an earlier version of Tivoli Storage Manager, be aware of these
results:
v References to client files that were backed up, archived, or migrated to the
current Tivoli Storage Manager server will be lost.
v Some volumes might be overwritten or deleted during Tivoli Storage Manager
server operation. If so, client files that were on those volumes and that were
migrated, reclaimed, moved (MOVE DATA command), or deleted (DELETE
VOLUME command) might not be accessible to the earlier version of ADSM or
Tivoli Storage Manager.
v Definitions, updates, and deletions of Tivoli Storage Manager objects that were
performed on the current Tivoli Storage Manager server will be lost.
To remove the Tivoli Storage Manager server:
1. Perform a full database backup. For example, if you have a tape device class
named tapeclass, enter this command to perform a full backup:
backup db type=full devclass=tapeclass
2. Save a copy of the volume history and device configuration files that you
defined on the VOLHISTORY and DEVCONFIG options in the server options
file. For example, to save the volume history in a file named volhist, and the
device configuration in a file named devices, enter:
backup volumehistory filenames=volhist
backup devconfig filenames=devices
3. Store the output volumes in a safe location.
Installing and configuring backup-archive clients
One way to install Tivoli Storage Manager clients is to run the setup routine
manually on each network-attached client system. Similarly, you can configure
Tivoli Storage Manager clients by manually editing the client options file on each
system.
To simplify the installation and configuration of multiple Tivoli Storage Manager
clients, consider copying the client setup files from the product CD and using the
Network Client Options File Wizard to create a configuration package. The setup
files and configuration package can then be placed on a file server that can be
accessed by Windows clients using a network-shared drive.
Installing clients using shared resources
You can place the IBM Tivoli Storage Manager client program on a file server and
use the package created by the Network Client Options File Wizard.
In the example shown in Figure 13, IBM Tivoli Storage Manager is installed on a
server named EARTH, which shares its D drive with all the Windows client
machines.
Figure 13. Windows Networked Environment
Each client machine is configured so that when it boots up, it maps the EARTH D
drive as its Z drive. For example, at start-up each client issues this command:
NET USE Z: \\EARTH\D$
The administrator used the Network Client Options File Wizard to create a client
configuration package named earthtcp that was stored on EARTH in the d:\tsmshar
directory. The administrator then registered each client node (“Client node
configuration wizard” on page 50).
The following scenario describes how to install the remote client and configure it
from a shared directory:
1. On EARTH, copy the contents of the Win32 client directory from the IBM Tivoli
Storage Manager client CD to the d:\tsmshar directory. Ensure that you include
any client subdirectories. You can use Windows Explorer or the xcopy
command with the /s option to perform the copy.
2. Provide the users of the Windows clients with the following instructions for
installing the client from the shared directory:
a. Open a command prompt and change directories to the shared directory
on EARTH. For example:
chdir /d x:\tsmshar
b. Start the client installation and follow the instructions in the setup routine.
setup
c. Run the configuration package batch file to configure the client to
communicate with the server (that is, create the client options file) by
issuing:
earthtcp.bat
Note: Using Windows Explorer, you can run the batch file if the drive is
shared and if you start the file from the shared directory. However, you
cannot run the batch file if you go to the directory using Explorer’s network
neighborhood. For example, if you go to Explorer and click on
z:\tsmshar\earthtcp.bat, the file will run. If you go to network neighborhood
and click on \\earth\tsmshar\earthtcp.bat, the batch file will not run. Similarly,
to issue the command from a command prompt, you must change to the
shared directory. A warning is displayed if you enter a command such as
x:\tsmshar\setup.
After they complete the procedure, the users can start their clients, contact the
server, and perform a backup.
Creating or updating a client options file
Each client requires a client options file, which contains options that identify the
server, communication method, backup and archive options, space management
options, and scheduling options.
You can edit or create client options files in several ways, depending on the client
platform and configuration of your system:
v Any Client
Edit the dsm.opt client options file with a text editor at a client workstation. This
is the most direct method, but it may not be best if you have many clients.
v Windows Clients
Generate the dsm.opt client options file from the server with the Network Client
Options File Wizard. This is easy and direct, and the wizard detects the network
address of the Tivoli Storage Manager server. To run the wizard, do the
following:
1. From the Tivoli Storage Manager Console, expand the tree for the Tivoli
Storage Manager server on which you want to create the file and click
Wizards.
The Wizards list is displayed in the right pane.
2. Double-click Client Options File from the Wizards list to start the wizard.
3. Follow the instructions in the wizard.
v Networked Windows Clients with a Shared Directory on a File Server
Use the Remote Client Configuration Wizard to create a package that allows
remote users to create client options files. The administrator uses the wizard to
generate a client configuration file and stores the file in a shared directory.
Clients access the shared directory and run the configuration file to create the
client options file. This method is suitable for sites with many clients.
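For reference, a minimal Windows dsm.opt produced by any of these methods
resembles the following (the server address and node name are placeholders):
commmethod       tcpip
tcpserveraddress earth.example.com
tcpport          1500
nodename         client1
passwordaccess   generate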
Working with schedules on network clients
You can start Tivoli Storage Manager schedules that you have defined and verify
that they are running correctly.
Starting the Tivoli Storage Manager scheduler
The Tivoli Storage Manager Client Scheduler is the client component of the Tivoli
Storage Manager scheduling model. The client scheduler runs as a Windows
service and must be installed and running on the Tivoli Storage Manager client
machine to execute any client schedules you define to the Tivoli Storage Manager
server.
The client scheduler can be installed using a wizard provided by the Tivoli Storage
Manager client graphical interface. After installation, either start the scheduler
service manually on each client node, or set the managedservices option in the
client options file so that the client acceptor service starts and manages the
scheduler automatically. Refer to Backup-Archive Clients Installation and User's
Guide for more information.
Schedule verification
You can verify that the automation is working, beginning the day after you define
the schedule and associate it with clients.
If the schedule runs successfully, the status indicates Completed.
Note: The include-exclude list (a file, on UNIX clients) on each client also affects
which files are backed up or archived. For example, if a file is excluded from
backup with an EXCLUDE statement, the file is not backed up when the
schedule runs.
Setting client and server communications options
You can set up IBM Tivoli Storage Manager client/server communications.
To view and specify server communications options, use the Server Options utility
available from the Tivoli Storage Manager Console. This utility is available from
the Service Information view in the server tree. By default, the server uses the
TCP/IP, Named Pipes, and HTTP communication methods. If you start the server
console and see warning messages that a protocol could not be used by the server,
either the protocol is not installed or the settings do not match the Windows
protocol settings.
For a client to use a protocol that is enabled on the server, the client options file
must contain corresponding values for communication options. From the Server
Options utility, you can view the values for each protocol.
Tip: This section describes setting server options before you start the server. When
you start the server, the new options go into effect. If you modify any server
options after starting the server, you must stop and restart the server to activate
the updated options.
For more information about server options, see the Administrator’s Reference or the
Tivoli Storage Manager Console online help.
TCP/IP options
The Tivoli® Storage Manager server provides a range of TCP/IP options to
configure your system.
TCP/IP-related options include TCPPORT, TCPWINDOWSIZE, and
TCPNODELAY. Here is an example of a TCP/IP setting:
commmethod    tcpip
tcpport       1500
tcpwindowsize 8
tcpnodelay    no
Named pipes options
The named pipes communication method is ideal when running the server and
client on the same Windows system because named pipes is provided with the
Windows base system.
Named pipes require no special configuration. Here is an example of a named
pipes setting:
commmethod    namedpipe
namedpipename \\.\pipe\adsmpipe
SNMPDPI subagent options
Tivoli Storage Manager implements a Simple Network Management Protocol
(SNMP) subagent. You can configure the SNMP subagent to send traps to an
SNMP manager, such as IBM NetView®, and to provide support for a Management
Information Base (MIB).
For details about configuring SNMP for use with Tivoli Storage Manager, see the
Administrator’s Guide.
The subagent communicates with the snmpd daemon, which in turn communicates
with a management application. The snmpd daemon must support the DPI®
protocol. Agents are available on AIX. The subagent process is separate from the
Tivoli Storage Manager server process, but the subagent gets its information from a
server options file. When the SNMP management application is enabled, it can get
information and messages from servers.
Here is an example of an SNMP setting. You must specify the COMMMETHOD
option. For details about the other options, see the Administrator's Reference.
commmethod            snmp
snmpheartbeatinterval 5
snmpmessagecategory   severity
Registering additional administrators
If you are adding administrators, register them and grant an authority level to
each.
Note: The name SERVER_CONSOLE is reserved for Tivoli Storage Manager
console operations and cannot be used as the name of an administrator.
From the administrative Web interface, do the following to register an
administrative client and grant an authority level:
1. From the tree view, expand Administrators.
2. From the Operations drop-down menu, select and click on Register an
Administrator.
3. Enter the required information and click Finish.
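You can also register an administrator from the administrative command line. For example, the following commands register an administrator and grant system authority (the administrator name and password are examples only):
register admin jsmith new_password
grant authority jsmith classes=system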
Changing administrator passwords
You can change the administrator password.
From the administrative Web interface, complete the following steps:
1. From the tree view, expand Administrators.
2. Select an administrator name.
3. From the Operations drop-down menu, select and click on Update an
Administrator.
4. Enter the password and click Finish.
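The same change can be made from the administrative command line. For example (the administrator name and password are placeholders):
update admin jsmith n3wpassw0rd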
Part 2. Configuring and managing server storage
Initially, you must attach devices to the server and then create objects that
represent those devices. You also create objects representing storage resources, such
as storage pools and storage-pool volumes. A wide variety of Tivoli Storage
Manager functions, such as tape reclamation and simultaneous write, are available
to manage client data and to control and optimize server storage.
Chapter 4. Storage device concepts
To work with storage devices, you must be familiar with Tivoli Storage Manager
storage objects and other basic concepts.
“Tivoli Storage Manager storage devices” on page 76
“Tivoli Storage Manager storage objects” on page 76
“Tivoli Storage Manager volumes” on page 85
“Planning for server storage” on page 100
“Device configurations” on page 86
“Removable media mounts and dismounts” on page 94
“How Tivoli Storage Manager uses and reuses removable media” on page 95
“Required definitions for storage devices” on page 98
The examples in these topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see
Administrator’s Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
You can also perform Tivoli Storage Manager tasks from the Administration
Center. For more information about using the Administration Center, see
“Managing servers with the Administration Center” on page 33.
Road map for key device-related task information
Key tasks include configuring and managing disk devices, physically attaching
storage devices to your system, and so on. In this document, information about
tasks is organized into linked topics.
Use the following table to identify key tasks and the topics that describe how to
perform those tasks.
Task: Configure and manage magnetic disk devices, which Tivoli Storage Manager
uses to store client data, the database, database backups, recovery log, and
export data.
Topic: Chapter 5, “Magnetic disk devices,” on page 103

Task: Physically attach storage devices to your system. Install and configure the
required device drivers.
Topic: Chapter 6, “Using devices with the server system,” on page 111

Task: Configure devices to use with Tivoli Storage Manager, using detailed
scenarios of representative device configurations.
Topic: Chapter 7, “Configuring storage devices,” on page 121

Task: Plan, configure, and manage an environment for NDMP operations.
Topic: Chapter 9, “Using NDMP for operations with NAS file servers,” on page 219

Task: Perform routine operations such as labeling volumes, checking volumes into
automated libraries, and maintaining storage volumes and devices.
Topic: Chapter 8, “Managing removable media operations,” on page 173

Task: Define and manage device classes.
Topic: Chapter 10, “Defining device classes,” on page 251
Tivoli Storage Manager storage devices
With Tivoli Storage Manager, you can use a range of manual and automated
devices for server storage. Both direct and network-attached storage provide
options for storing data. Tivoli Storage Manager devices can be physical, such as
disk drives and tape drives, or logical, such as files on disk or storage on another
server.
Tivoli Storage Manager supports the following types of devices:
v Tape devices
v Removable file devices
v Disk devices
v Optical disk devices
v Storage area network (SAN) devices
Devices in a SAN environment must be supported by the Tivoli Storage
Manager device driver.
For a summary of supported devices, see Table 11 on page 98. For details and
updates, see the Tivoli Storage Manager device support Web site:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
Tivoli Storage Manager storage objects
Devices and media are represented by objects that you define. Information about
these objects is stored in the Tivoli Storage Manager database.
You can query, update, and delete the following objects:
v Library
v Drive
v Device class
v Disk devices
v Storage pool
v Storage pool volume
v Data mover
v Path
v Server
Libraries
A physical library is a collection of one or more drives that share similar
media-mounting requirements. That is, the media can be mounted in the drives by
an operator or by an automated mounting mechanism.
A library object definition specifies the library type, for example, SCSI or 349X, and
other characteristics associated with the library type, for example, the category
numbers used by an IBM TotalStorage® 3494 Tape Library for private, scratch
volumes, and scratch, write-once, read-many (WORM) volumes.
Tivoli Storage Manager supports a variety of library types.
Shared libraries
Shared libraries are logical libraries that are represented physically by SCSI, 349X,
or ACSLS libraries. The physical library is controlled by the Tivoli Storage Manager
server configured as a library manager. Tivoli Storage Manager servers using the
SHARED library type are library clients to the library manager server. Shared
libraries reference a library manager.
Optical devices are not supported for library sharing.
Automated cartridge system library software libraries
An automated cartridge system library software (ACSLS) library is a type of
external library that is controlled by the Sun StorageTek ACSLS
media-management software. The server can act as a client application to the
ACSLS software to use the drives.
The Sun StorageTek software performs the following functions:
v Volume mounts (specific and scratch)
v Volume dismounts
v Freeing of library volumes (return to scratch)
The ACSLS software selects the appropriate drive for media-access operations. You
do not define the drives, check in media, or label the volumes in an external
library.
Restriction: To utilize ACSLS functions, the Sun StorageTek Library Attach
software must be installed. See “ACSLS-managed libraries” on page 150 for more
information.
For additional information regarding ACSLS libraries, refer to the Sun StorageTek
documentation.
Manual libraries
In manual libraries, operators mount the volumes in response to mount-request
messages issued by the server.
The server sends these messages to the server console and to administrative clients
that were started by using the special MOUNTMODE or CONSOLEMODE
parameter.
You can also use manual libraries as logical entities for sharing sequential-access
disk (FILE) volumes with other servers.
You cannot combine drives of different types or formats, such as Digital Linear
Tape (DLT) and 8 mm, in a single manual library. Instead, you must create a
separate manual library for each device type.
For information about configuring a manual library, see:
Chapter 7, “Configuring storage devices,” on page 121
. For information about monitoring mount messages for a manual library, see:
“Tivoli Storage Manager server requests” on page 190
SCSI libraries
A SCSI library is controlled through a SCSI interface, attached either directly to the
server’s host using SCSI cabling or by a storage area network. A robot or other
mechanism automatically handles volume mounts and dismounts.
The drives in a SCSI library can be of different types. A SCSI library can contain
drives of mixed technologies, for example LTO Ultrium and DLT drives. Some
examples of this library type are:
v The Sun StorageTek L700 library
v The IBM 3590 tape device, with its Automatic Cartridge Facility (ACF)
Remember: Although it has a SCSI interface, the IBM 3494 Tape Library
Dataserver is defined as a 349X library type.
For information about configuring a SCSI library, see:
Chapter 7, “Configuring storage devices,” on page 121
349X libraries
A 349X library is a collection of drives in an IBM 3494. Volume mounts and
demounts are handled automatically by the library. A 349X library has one or more
library management control points (LMCP) that the server uses to mount and
dismount volumes in a drive. Each LMCP provides an independent interface to the
robot mechanism in the library.
For information about configuring a 349X library, see:
Chapter 7, “Configuring storage devices,” on page 121
External libraries
An external library is a collection of drives managed by an external
media-management system that is not part of Tivoli Storage Manager. The server
provides an interface that allows external media management systems to operate
with the server.
The external media-management system performs the following functions:
v Volume mounts (specific and scratch)
v Volume dismounts
v Freeing of library volumes (return to scratch)
The external media manager selects the appropriate drive for media-access
operations. You do not define the drives, check in media, or label the volumes in
an external library.
An external library allows flexibility in grouping drives into libraries and storage
pools. The library can have one drive, a collection of drives, or even a part of an
automated library.
An ACSLS or LibraryStation-controlled Sun StorageTek library used in conjunction
with an external library manager (ELM), like Gresham’s EDT-DistribuTAPE, is a
type of external library.
For a definition of the interface that Tivoli Storage Manager provides to the
external media management system, see Appendix C, “External media
management interface description,” on page 913.
Drives
A drive object represents a drive mechanism within a library that uses removable
media. For devices with multiple drives, including automated libraries, you must
define each drive separately and associate it with a library.
Drive definitions can include such information as the element address (for drives
in SCSI libraries), how often the drive is cleaned (for tape drives), and whether or
not the drive is online.
Tivoli Storage Manager drives include tape and optical drives that can stand alone
or that can be part of an automated library. Supported removable media drives
also include removable file devices such as re-writable CDs.
Device class
Each device that is defined to Tivoli Storage Manager is associated with one device
class, which specifies the device type and media management information, such as
recording format, estimated capacity, and labeling prefixes.
A device type identifies a device as a member of a group of devices that share
similar media characteristics. For example, the 8MM device type applies to 8-mm
tape drives.
Device types include a variety of removable media types as well as FILE,
CENTERA, and SERVER.
A device class for a tape or optical drive must also specify a library.
Disk devices
Using Tivoli Storage Manager, you can define random-access disk (DISK device
type) volumes using a single command. You can also use space triggers to
automatically create preassigned private volumes when predetermined
space-utilization thresholds are exceeded.
For important disk-related information, see “Requirements for disk subsystems” on
page 103.
Removable media
Tivoli Storage Manager provides a set of specified removable-media device types,
such as 8MM for 8 mm tape devices, or REMOVABLEFILE for Jaz or DVD-RAM
drives.
The GENERICTAPE device type is provided to support certain devices that do not
use the Tivoli Storage Manager device driver.
For more information about supported removable media device types, see
Chapter 10, “Defining device classes,” on page 251 and the Administrator’s
Reference.
Files on disk as sequential volumes (FILE)
The FILE device type lets you create sequential volumes by creating files on disk
storage. To the server, these files have the characteristics of a tape volume. FILE
volumes can also be useful when transferring data for purposes such as electronic
vaulting or for taking advantage of relatively inexpensive disk storage devices.
FILE volumes are a convenient way to use sequential-access disk storage for the
following reasons:
v You do not need to explicitly define scratch volumes. The server can
automatically acquire and define scratch FILE volumes as needed.
v You can create and format FILE volumes using a single command. The
advantage of private FILE volumes is that they can reduce disk fragmentation
and maintenance overhead.
v Using a single device class definition that specifies two or more directories, you
can create large, FILE-type storage pools. Volumes are created in the directories
you specify in the device class definition. For optimal performance, volumes
should be associated with file systems.
v When predetermined space-utilization thresholds have been exceeded, space
trigger functionality can automatically allocate space for private volumes in
FILE-type storage pools.
v The Tivoli Storage Manager server allows concurrent read-access and
write-access to a volume in a storage pool associated with the FILE device type.
Concurrent access improves restore performance by allowing two or more clients
to access the same volume at the same time. Multiple client sessions (archive,
retrieve, backup, and restore) or server processes (for example, storage pool
backup) can read the volume concurrently. In addition, one client session can
write to the volume while it is being read.
The following server processes are allowed shared read access to FILE volumes:
– BACKUP DB
– BACKUP STGPOOL
– COPY ACTIVEDATA
– EXPORT/IMPORT NODE
– EXPORT/IMPORT SERVER
– GENERATE BACKUPSET
– RESTORE STGPOOL
– RESTORE VOLUME
The following server processes are not allowed shared read access to FILE
volumes:
– AUDIT VOLUME
– DELETE VOLUME
– MIGRATION
– MOVE DATA
– MOVE NODEDATA
– RECLAMATION
Unless sharing with storage agents is specified, the FILE device type does not
require you to define library or drive objects. The only required object is a device
class.
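For example, a FILE device class that creates volumes of up to 4 GB in two directories might be defined as follows (the class name and directory paths are examples only):
define devclass fileclass devtype=file maxcapacity=4g mountlimit=2 directory="d:\tsmdata\dir1,d:\tsmdata\dir2"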
For important disk-related information, see “Requirements for disk subsystems” on
page 103.
Files on sequential volumes (CENTERA)
The CENTERA device type defines the EMC Centera storage device. It can be used
like any standard storage device from which files can be backed up and archived
as needed.
The Centera storage device can also be configured with the Tivoli Storage Manager
server to form a specialized storage system that protects you from inadvertent
deletion of mission-critical data such as e-mails, trade settlements, legal documents,
and so on.
The CENTERA device class creates logical sequential volumes for use with Centera
storage pools. To the Tivoli Storage Manager server, these volumes have the
characteristics of a tape volume. With the CENTERA device type, you are not
required to define library or drive objects. CENTERA volumes are created as
needed and end in the suffix “CNT.”
For more information about the Centera device class, see “Defining device classes
for CENTERA devices” on page 269. For details about Centera-related commands,
refer to the Administrator’s Reference.
Sequential volumes on another Tivoli Storage Manager server
(SERVER)
The SERVER device type lets you create volumes for one Tivoli Storage Manager
server that exist as archived files in the storage hierarchy of another server. These
virtual volumes have the characteristics of sequential-access volumes such as tape.
No library or drive definition is required.
You can use virtual volumes for the following:
v Device-sharing between servers. One server is attached to a large tape library
device. Other servers can use that library device indirectly through a SERVER
device class.
v Data-sharing between servers. By using a SERVER device class to export and
import data, physical media remains at the original location instead of having to
be transported.
v Immediate offsite storage. Storage pools and databases can be backed up
without physically moving media to other locations.
v Offsite storage of the disaster recovery manager (DRM) recovery plan file.
v Electronic vaulting.
See “Using virtual volumes to store data on another server” on page 730.
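For example, to store virtual volumes on a remote server, you might first define the target server and then a SERVER device class that references it. This is a minimal sketch; the server name, password, addresses, and capacity are placeholders:
define server server2 serverpassword=secret hladdress=server2.example.com lladdress=1500
define devclass remoteclass devtype=server servername=server2 maxcapacity=500m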
Library, drive, and device-class objects
Library objects, drive objects, and device-class objects taken together represent
physical storage entities.
These three objects are shown in Figure 14.
Figure 14. Removable media devices are represented by a library, drive, and device class
v For more information about the drive object, see:
“Managing drives” on page 203
“Defining drives” on page 167
v For more information about the library object, see:
“Managing libraries” on page 201
“Defining libraries” on page 166
v For more information about the device class object, see Chapter 10, “Defining
device classes,” on page 251.
Storage pools and storage-pool volumes
A storage pool is a collection of volumes that are associated with one device class
and one media type. For example, a storage pool that is associated with a device
class for 8-mm tape volumes contains only 8 mm tape volumes.
You can control the characteristics of storage pools, such as whether scratch
volumes are used.
Tivoli Storage Manager supplies default disk storage pools.
Figure 15 on page 83 shows storage pool volumes grouped into a storage pool.
Each storage pool represents only one type of media. For example, a storage pool
for 8-mm devices represents collections of only 8-mm tapes.
Figure 15. Relationships of storage pool volumes, storage pools, and media
For DISK device classes, you must define volumes. For other device classes, such
as tape and FILE, you can allow the server to dynamically acquire scratch volumes
and define those volumes as needed. For details, see:
“Preparing volumes for random-access storage pools” on page 290
“Preparing volumes for sequential-access storage pools” on page 291
One or more device classes are associated with one library, which can contain
multiple drives. When you define a storage pool, you associate the pool with a
device class. Volumes are associated with pools. Figure 16 shows these
relationships.
Figure 16. Relationships between storage and device objects
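For example, the following commands define a device class for the drives in an automated 8-mm library and a storage pool that acquires up to 25 scratch volumes through that device class (the object names and the MAXSCRATCH value are examples only):
define devclass 8mm_class devtype=8mm library=auto_8mm
define stgpool backtape1 8mm_class maxscratch=25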
For information about defining storage pool and volume objects, see Chapter 11,
“Managing storage pools and volumes,” on page 275.
For information about configuring volumes for random access, see “Configuring
random access volumes on disk devices” on page 108.
Data movers
Data movers are devices that accept requests from Tivoli Storage Manager to
transfer data on behalf of the server. Data movers transfer data between storage
devices without using significant server, client, or network resources.
Tivoli Storage Manager supports two types of data movers:
v For NDMP operations, data movers are NAS file servers. The definition for a
NAS data mover contains the network address, authorization, and data formats
required for NDMP operations. A data mover enables communication and
ensures authority for NDMP operations between the Tivoli Storage Manager
server and the NAS file server.
v For server-free data movement, data movers are devices such as the IBM SAN
Data Gateway, that move data between disk devices and tape devices on the
SAN.
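For example, a NAS data mover might be defined as follows. The node name, addresses, and credentials are placeholders, and the DATAFORMAT value depends on the NAS file server:
define datamover nas1 type=nas hladdress=netapp1.example.com lladdress=10000 userid=root password=secret dataformat=netappdump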
Paths
Paths allow access to drives, disks, and libraries. A path definition specifies a
source and a destination. The source accesses the destination, but data can flow in
either direction between the source and destination.
Here are a few examples of paths:
v Between a server and a drive or a library
v Between a storage agent and a drive
v Between a data mover and a drive, a disk, or a library
For more information about the path object, see:
“Defining paths” on page 169
“Managing paths” on page 215
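For example, the following command defines a path from a server named SERVER1 to a drive named DRIVE1 in library LIB8MM (the object names and the device special file are examples only):
define path server1 drive1 srctype=server desttype=drive library=lib8mm device=mt0.0.0.1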
Server objects
Server objects are defined to use a library that is on a SAN and that is managed by
another Tivoli Storage Manager server, to use LAN-free data movement, or to store
data in virtual volumes on a remote server.
Among other characteristics, you must specify the server TCP/IP address.
For more information, see:
v “Setting up the library client servers” on page 160
v “Using virtual volumes to store data on another server” on page 730
v Storage Agent User’s Guide
Tivoli Storage Manager volumes
A volume is the basic unit of storage for Tivoli Storage Manager storage pools.
Tivoli Storage Manager volumes are classified according to status: private, scratch,
and scratch write-once, read-many (WORM). Scratch WORM status applies to 349X
libraries only when the volumes are IBM 3592 WORM volumes.
The following definitions apply:
v A private volume is a labeled volume that is in use or owned by an application,
and may contain valid data. You must define each private volume. Alternatively,
for storage pools associated with sequential access disk (FILE) device classes,
you can use space triggers to create private, preassigned volumes when
predetermined space-utilization thresholds have been exceeded. Private FILE
volumes are allocated as a whole. The result is less risk of severe fragmentation
than with space dynamically acquired for scratch FILE volumes.
A request to mount a private volume must include the name of that volume.
Defined private volumes do not return to scratch when they become empty. For
information about defining private volumes, see “Defining storage pool
volumes” on page 292. For information about changing the status of a volume
(for example, from private to scratch) in an automated library, see the following:
– “Changing the status of automated library volumes” on page 183
v A scratch volume is a labeled volume that is empty or contains no valid data
and that can be used to satisfy any request to mount a scratch volume. When
data is written to a scratch volume, its status is changed to private, and it is
defined as part of the storage pool for which the mount request was made.
When valid data is moved from the volume and the volume is reclaimed, the
volume returns to scratch status and can be reused by any storage pool
associated with the library.
v A WORM scratch volume is similar to a conventional scratch volume. However,
WORM volumes cannot be reclaimed by Tivoli Storage Manager reclamation
processing. WORM volumes can be returned to scratch status only if they have
empty space in which data can be written. Empty space is space that does not
contain valid, expired or deleted data. (Deleted and expired data on WORM
volumes cannot be overwritten.) If a WORM volume does not have any empty
space in which data can be written (for example, if the volume is entirely full of
deleted or expired data), the volume remains private.
For each storage pool, you must decide whether to use scratch volumes. If you do
not use scratch volumes, you must define private volumes, or you can use
space-triggers if the volume is assigned to a storage pool with a FILE device type.
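For example, you can define a private FILE volume explicitly, or define a space trigger that allocates private volumes when pool utilization reaches a threshold. This is a sketch; the pool name, file name, and sizes are placeholders:
define volume filepool d:\tsmdata\file01.dsm formatsize=2048
define spacetrigger stg fullpct=80 spaceexpansion=20 stgpool=filepool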
Tivoli Storage Manager keeps an inventory of volumes in each automated library it
manages and tracks whether the volumes are in scratch or private status. When a
volume mount is requested, Tivoli Storage Manager selects a scratch volume only
if scratch volumes are allowed in the storage pool. The server can choose any
scratch volume that has been checked into the library.
You do not need to allocate volumes to different storage pools associated with the
same automated library. Each storage pool associated with the library can
dynamically acquire volumes from the library’s inventory of scratch volumes. Even
if only one storage pool is associated with a library, you do not need to explicitly
define all the volumes for the storage pool. The server automatically adds volumes
to and deletes volumes from the storage pool.
Tip: A disadvantage of using scratch volumes is that volume usage information,
which you can use to determine when the media has reached its end of life, is
deleted when a private volume is returned to the scratch volume pool.
Volume inventory for an automated library
A library’s volume inventory includes only those volumes that have been checked
into that library.
This inventory is not necessarily identical to the list of volumes in the storage
pools associated with the library. For example:
v A volume can be checked into the library but not be in a storage pool (a scratch
volume, a database backup volume, or a backup set volume).
v A volume can be defined to a storage pool associated with the library (a private
volume), but not checked into the library.
For more information on how to check in volumes, see the following:
v “Checking media into automated library devices” on page 177
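For example, the following command searches an automated library for volumes with readable bar-code labels and checks them in with scratch status (the library name is an example only):
checkin libvolume auto_8mm search=yes status=scratch checklabel=barcode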
Device configurations
You can configure devices on a local area network, on a storage area network, for
LAN-free data movement, and as network-attached storage. Tivoli Storage
Manager provides methods for configuring storage devices.
For information about supported devices and Fibre Channel hardware and
configurations, see http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html
Devices on local area networks
In the conventional local area network (LAN) configuration, one or more tape or
optical libraries are associated with a single Tivoli Storage Manager server.
In a LAN configuration, client data, electronic mail, terminal connection,
application program, and device control information must all be handled by the
same network. Device control information and client backup and restore data flow
across the LAN.
For information on the categories of libraries supported by Tivoli Storage Manager,
see “Libraries” on page 77.
Devices on storage area networks
A SAN is a dedicated storage network that can improve system performance. On a
SAN you can consolidate storage and relieve the distance, scalability, and
bandwidth limitations of LANs and wide area networks (WANs).
Using Tivoli Storage Manager in a SAN allows the following functions:
v Sharing storage devices among multiple Tivoli Storage Manager servers. For
more information on sharing storage devices, see
– “Configuring Tivoli Storage Manager servers to share SAN-connected
devices” on page 157
v Allowing Tivoli Storage Manager clients, through a storage agent on the client
machine, to move data directly to storage devices (LAN-free data movement).
In a SAN you can share storage devices that are supported by the Tivoli Storage
Manager device driver, including most SCSI devices.
This does not include devices that use the GENERICTAPE device type.
Figure 17 shows a SAN configuration in which two Tivoli Storage Manager servers
share a library.
Figure 17. Library sharing in a storage area network (SAN) configuration. The servers
communicate over the LAN. The library manager controls the library over the SAN. The
library client stores data to the library devices over the SAN.
When Tivoli Storage Manager servers share a library, one server, the library
manager, controls device operations. These operations include mount, dismount,
volume ownership, and library inventory. Other Tivoli Storage Manager servers,
library clients, use server-to-server communications to contact the library manager
and request device service. Data moves over the SAN between each server and the
storage device.
Tivoli Storage Manager servers use the following features when sharing an
automated library:
Partitioning of the Volume Inventory
The inventory of media volumes in the shared library is partitioned among
servers. Either one server owns a particular volume, or the volume is in
the global scratch pool. No server owns the scratch pool at any given time.
Serialized Drive Access
Only one server accesses each tape drive at a time. Drive access is
serialized and controlled so that servers do not dismount other servers’
volumes or write to drives where other servers mount their volumes.
Serialized Mount Access
The library autochanger performs a single mount or dismount operation at
a time. A single server (library manager) performs all mount operations to
provide this serialization.
LAN-free data movement
Tivoli Storage Manager allows a client, through a storage agent, to directly back up
and restore data to a tape library on a SAN.
Figure 18 shows a SAN configuration in which a client directly accesses a tape or
FILE library to read or write data.
Figure 18. LAN-Free data movement. Client and server communicate over the LAN. The
server controls the device on the SAN. Client data moves over the SAN to the device.
LAN-free data movement requires the installation of a storage agent on the client
machine. The server maintains the database and recovery log, and acts as the
library manager to control device operations. The storage agent on the client
handles the data transfer to the device on the SAN. This implementation frees up
bandwidth on the LAN that would otherwise be used for client data movement.
The following outlines a typical backup scenario for a client that uses LAN-free
data movement:
1. The client begins a backup operation. The client and the server exchange policy
information over the LAN to determine the destination of the backed up data.
For a client using LAN-free data movement, the destination is a storage pool
that uses a device on the SAN.
2. Because the destination is on the SAN, the client contacts the storage agent,
which will handle the data transfer. The storage agent sends a request for a
volume mount to the server.
3. The server contacts the storage device and, in the case of a tape library, mounts
the appropriate media.
4. The server notifies the client of the location of the mounted media.
5. The client, through the storage agent, writes the backup data directly to the
device over the SAN.
6. The storage agent sends file attribute information to the server, and the server
stores the information in its database.
If a failure occurs on the SAN path, failover occurs. The client uses its LAN
connection to the Tivoli Storage Manager server and moves the client data over the
LAN.
Remember:
v Centera storage devices and optical devices cannot be targets for LAN-free
operations.
v For the latest information about clients that support the feature, see the IBM
Tivoli Storage Manager support page at http://www.ibm.com/software/
sysmgmt/products/support/IBMTivoliStorageManager.html.
Network-attached storage
Network-attached storage (NAS) file servers are dedicated storage machines whose
operating systems are optimized for file-serving functions. NAS file servers
typically do not run third-party software. Instead, they interact with programs like
Tivoli Storage Manager through industry-standard network protocols, such as
network data management protocol (NDMP).
Tivoli Storage Manager provides two basic types of configurations that use NDMP
for backing up and managing NAS file servers. In one type of configuration, Tivoli
Storage Manager uses NDMP to back up a NAS file server to a library device
directly attached to the NAS file server. (See Figure 19 on page 90.) The NAS file
server, which can be distant from the Tivoli Storage Manager server, transfers
backup data directly to a drive in a SCSI-attached tape library. Data is stored in
special, NDMP-formatted storage pools, which can be backed up to storage media
that can be moved offsite for protection in case of an on-site disaster.
Figure 19. Library device directly attached to a NAS file server
In the other type of NDMP-based configuration, Tivoli Storage Manager uses
NDMP to back up a NAS file server to a Tivoli Storage Manager storage-pool
hierarchy. (See Figure 20 on page 91.) With this type of configuration you can store
NAS data directly to disk (either random access or sequential access) and then
migrate the data to tape. Data can also be backed up to storage media that can
then be moved offsite. The advantage of this type of configuration is that it gives
you all the backend-data management features associated with a conventional
Tivoli Storage Manager storage-pool hierarchy, including migration and
reclamation.
Figure 20. NAS file server to Tivoli Storage Manager storage-pool hierarchy
In both types of configurations, Tivoli Storage Manager tracks file-system image
backups and has the capability to perform NDMP file-level restores. For more
information regarding NDMP file-level restores, see “NDMP file-level restoration”
on page 92.
Note:
v A Centera storage device cannot be a target for NDMP operations.
v Support for filer-to-server data transfer is only available for NAS devices that
support NDMP version 4.
v For a comparison of NAS backup methods, including using a backup-archive
client to back up a NAS file server, see “Determining the location of NAS
backup” on page 227.
NDMP backup operations
In backup images produced by network data management protocol (NDMP)
operations for a NAS file server, Tivoli Storage Manager creates NAS
file-system-level or directory-level image backups.
The image backups are different from traditional Tivoli Storage Manager backups
because the NAS file server transfers the data to the drives in the library or
directly to the Tivoli Storage Manager server. NAS file system image backups can
be either full or differential image backups. The first backup of a file system on a
NAS file server is always a full image backup. By default, subsequent backups are
differential image backups containing only data that has changed in the file system
since the last full image backup. If a full image backup does not already exist, a
full image backup is performed.
If you restore a differential image, Tivoli Storage Manager automatically restores
the full backup image first, followed by the differential image.
NDMP file-level restoration
Tivoli Storage Manager provides a way to restore data from backup images
produced by NDMP operations. To assist users in restoring selected files, you can
create a table of contents (TOC) of file-level information for each backup image.
Using the Web backup-archive client, users can then browse the TOC and select
the files that they want to restore. If you do not create a TOC, users must be able
to specify the name of the backup image that contains the file to be restored and
the fully qualified name of the file.
You can create a TOC using one of the following commands:
v BACKUP NODE server command. For details, see the Administrator’s Reference.
v BACKUP NAS client command, with include.fs.nas specified in the client
options file or specified in the client options set. For details, see the
Backup-Archive Clients Installation and User’s Guide.
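For example, the following server command backs up a file system on a NAS node and creates a TOC for it. The node and file-system names are placeholders, and TOC creation also requires a TOC destination in the backup copy group:
backup node nas1 /vol/vol0 mode=full toc=yes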
Directory-level backup and restore
If you have a large NAS file system, initiating a backup on a directory level
reduces backup and restore times, and provides more flexibility in configuring
your NAS backups.
By defining virtual file spaces, a file system backup can be partitioned among
several NDMP backup operations and multiple tape drives. You can also use
different backup schedules to back up sub-trees of a file system.
The virtual file space name cannot be identical to any file system on the NAS
node. If a file system is created on the NAS device with the same name as a virtual
file system, a name conflict will occur on the Tivoli Storage Manager server when
the new file space is backed up. See the Administrator’s Reference for more
information about virtual file space mapping commands.
Remember: Virtual file space mappings are only supported for NAS nodes.
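For example, the following command defines a virtual file space named /mikesdir on node NAS1 that represents the directory path /vol/vol1/mikes (all names are placeholders):
define virtualfsmapping nas1 /mikesdir /vol/vol1 /mikes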
Mixed device types in libraries
Tivoli Storage Manager supports mixing different device types within a single
automated library, as long as the library itself can distinguish among the different
media for the different device types.
Libraries with this capability are those models supplied from the manufacturer
already containing mixed drives, or capable of supporting the addition of mixed
drives. Check with the manufacturer, and also check the Tivoli Storage Manager
Web site for specific libraries that have been tested on Tivoli Storage Manager with
mixed device types.
For example, you can have Quantum SuperDLT drives, LTO Ultrium drives, and
StorageTek 9940 drives in a single library defined to the Tivoli Storage Manager
server. For examples of how to set this up, see:
“Defining Tivoli Storage Manager storage objects with commands” on page 134
“Configuring a 3494 library with multiple drive device types” on page 140
Different media generations in a library
While the Tivoli Storage Manager server now allows mixed device types in an
automated library, the mixing of different generations of the same type of drive is
still not supported. New drives cannot write the older media formats, and old
drives cannot read new formats.
If the new drive technology cannot write to media formatted by older generation
drives, the older media must be marked read-only to avoid problems for server
operations. Also, the older drives must be removed from the library. Some
examples of combinations that the Tivoli Storage Manager server does not support
in a single library are:
v SDLT 220 drives with SDLT 320 drives
v DLT 7000 drives with DLT 8000 drives
v StorageTek 9940A drives with 9940B drives
v UDO1 drives with UDO2 drives
There are two exceptions to the rule against mixing generations of LTO Ultrium
drives and media. The Tivoli Storage Manager server does support mixtures of the
following types:
v LTO Ultrium Generation 1 (LTO1) and LTO Ultrium Generation 2 (LTO2)
v LTO Ultrium Generation 2 (LTO2) with LTO Ultrium Generation 3 (LTO3)
v LTO Ultrium Generation 2 (LTO2) with LTO Ultrium Generation 3 (LTO3) and
LTO Ultrium Generation 4 (LTO4)
The server supports these mixtures because the different drives can read and write
to the different media. If you plan to upgrade all drives to Generation 2 (or
Generation 3 or Generation 4), first delete all existing Ultrium drive definitions and
the paths associated with them. Then you can define the new Generation 2 (or
Generation 3 or Generation 4) drives and paths.
Note:
1. LTO Ultrium Generation 3 drives can only read Generation 1 media. If you are
mixing Ultrium Generation 1 and Ultrium Generation 3 drives and media in a
single library, you must mark the Generation 1 media as read-only, and all
Generation 1 scratch volumes must be checked out.
2. LTO Ultrium Generation 4 drives can only read Generation 2 media. If you are
mixing Ultrium Generation 2 and Ultrium Generation 4 drives and media in a
single library, you must mark the Generation 2 media as read-only, and all
Generation 2 scratch volumes must be checked out.
To learn more about additional considerations when mixing LTO Ultrium
generations, see “Defining LTO device classes” on page 264.
When using Tivoli Storage Manager you cannot mix 3592 generation 1, generation
2, and generation 3 drives. Use one of three special configurations. For details, see
“Defining 3592 device classes” on page 257.
If you plan to encrypt volumes in a library, do not mix media generations in the
library.
Mixed media and storage pools
You cannot mix media formats in a storage pool. Each unique media format must
be mapped to a separate storage pool through its own device class.
This includes LTO1, LTO2, LTO3, and LTO4 formats. Multiple storage pools and
their device classes of different types can point to the same library which can
support them as explained in “Different media generations in a library” on page
93.
You can migrate to a new generation of a media type within the same storage pool
by following these steps:
1. Replace all older drives with the newer-generation drives within the library
(the generations cannot be mixed).
2. Mark the existing volumes with the older formats read-only (R/O) if the new
drives cannot append to those tapes in the old format. If the new drives can
write to the existing media in their old format, this step is not necessary, but
step 1 is still required. If it is necessary to keep both LTO1 and LTO2 drives
within the same library, use separate storage pools for each.
Removable media mounts and dismounts
When data is to be stored in or retrieved from a storage pool, the server selects the
storage-pool volume and determines the name of the library that contains the
drives to be used for the operation. When it has finished accessing the volume and
the mount retention period has elapsed, the server dismounts the volume.
When data is to be stored in or retrieved from a storage pool, the server does the
following:
1. The server selects a volume from the storage pool. The selection is based on the
type of operation:
Retrieval
The name of the volume that contains the data to be retrieved is stored
in the database.
Store
If a defined volume in the storage pool can be used, the server selects
that volume.
If no defined volumes in the storage pool can be used, and if the
storage pool allows it, the server selects a scratch volume.
2. The server checks the device class associated with the storage pool to
determine the name of the library that contains the drives to be used for the
operation.
v The server searches the library for an available drive, until a drive is found
or until all drives have been checked. A drive status can be:
– Offline.
– Busy and not available for the mount.
– In an error state and not available for the mount.
– Online and available for the mount.
3. The server mounts the volume:
v For a manual library, the server displays a mount message for a private or a
scratch volume to be mounted in the selected drive.
v For an automated library, the server directs the library to move the volume
from a storage slot into the selected drive. No manual intervention is
required.
If a scratch mount is requested, the server checks the library’s volume
inventory for a scratch volume. If one is found, its status is changed to
private, it is mounted in the drive, and it is automatically defined as part of
the original storage pool. However, if the library’s volume inventory does
not contain any scratch volumes, the mount request fails.
4. The server dismounts the volume when it has finished accessing the volume
and the mount retention period has elapsed.
v For a manual library, the server ejects the volume from the drive so that an
operator can place it in its storage location.
v For an automated library, the server directs the library to move the volume
from the drive back to its original storage slot in the library.
How Tivoli Storage Manager uses and reuses removable media
Using Tivoli Storage Manager, you can control how removable media are used and
reused. After Tivoli Storage Manager selects an available medium, that medium is
used and eventually reclaimed according to its associated policy.
Tivoli Storage Manager manages the data on the media, but you manage the media
itself, or you can use a removable media manager. Regardless of the method used,
managing media involves creating a policy to expire data after a certain period
of time or under certain conditions, moving valid data onto new media, and
reusing the empty media.
In addition to information about storage pool volumes, the volume history
contains information about tapes used for database backups and exports (for
disaster recovery purposes). The process for reusing these tapes is slightly different
from the process for reusing tapes containing client data backups.
Figure 21 on page 96 shows a typical life cycle for removable media. The numbers
(such as (1)) refer to the numbered callouts in the figure.
Figure 21. Simplified view of the life cycle of a tape
1. You label (1) and check in (2) the media. Checking media into a manual library
simply means storing them (for example, on shelves). Checking media into an
automated library involves adding them to the library volume inventory.
See “Labeling media with automated tape libraries” on page 176 or “Labeling
media for manual libraries” on page 188.
2. If you plan to define volumes to a storage pool associated with a device, you
should check in the volume with its status specified as private. Use of scratch
volumes is more convenient in most cases.
3. A client sends data to the server for backup, archive, or space management.
The server stores the client data on the volume. Which volume the server
selects (3) depends on:
v The policy domain to which the client is assigned.
v The management class for the data (either the default management class for
the policy set, or the class specified by the client in the client’s
include/exclude list or file).
v The storage pool specified as the destination in either the management class
(for space-managed data) or copy group (for backup or archive data). The
storage pool is associated with a device class, which determines which
device and which type of media is used.
v Whether the maximum number of scratch volumes that a server can request
from the storage pool has been reached when the scratch volumes are
selected.
v Whether collocation is enabled for that storage pool. When collocation is
enabled, the server attempts to place data for different client nodes, groups
of client nodes, or client file spaces on separate volumes. For more
information, see “Keeping client files together using collocation” on page
340.
Figure 22 shows more detail about the policies and storage pool specifications
which govern the volume selection described in step 3.
Figure 22. How Tivoli Storage Manager affects media use
4. The data on a volume changes over time as a result of:
v Expiration of files (4) (affected by management class and copy group
attributes, and the frequency of expiration processing). See “Basic policy
planning” on page 455.
v Movement and deletion of file spaces by an administrator.
v Automatic reclamation of media (5).
The amount of data on the volume and the reclamation threshold set for the
storage pool affects when the volume is reclaimed. When the volume is
reclaimed, any valid, unexpired data is moved to other volumes or possibly
to another storage pool (for storage pools with single-drive libraries).
v Collocation, by which Tivoli Storage Manager attempts to keep data
belonging to a single client node, group of client nodes, or client file space
on a minimal number of removable media in a storage pool.
If the volume becomes empty because all valid data either expires or is moved
to another volume, the volume is available for reuse (unless a time delay has
been specified for the storage pool). The empty volume becomes a scratch
volume if it was initially a scratch volume. The volume starts again at step 3 on
page 96.
5. You determine when the media has reached its end of life.
For volumes that you defined (private volumes), check the statistics on the
volumes by querying the database. The statistics include the number of write
passes on a volume (compare with the number of write passes recommended
by the manufacturer) and the number of errors on the volume.
You must move any valid data off a volume that has reached end of life. Then,
if the volume is in an automated library, check out the volume from the library.
If the volume is not a scratch volume, delete the volume from the database.
Required definitions for storage devices
Before the Tivoli Storage Manager server can use a device, the device must be
configured to the operating system as well as to the server.
The Device Configuration Wizard, available in the Administration Center,
automatically detects storage devices attached to the Tivoli Storage Manager server.
You can use this wizard to select the devices you want to use with Tivoli Storage
Manager, and to configure device sharing if required.
Table 11 summarizes the definitions that are required for different device types.
Table 11. Required definitions for storage devices

                                                 Required Definitions
Device           Device Types                Library  Drive  Path  Device Class
Magnetic disk    DISK                        —        —      —     Yes (see note)
                 FILE (see note)             —        —      —     Yes
                 CENTERA                     —        —      —     Yes
Tape             3590, 3592, 4MM, 8MM,       Yes      Yes    Yes   Yes
                 DLT, LTO, NAS, QIC,
                 VOLSAFE, 3570, DTF,
                 GENERICTAPE,
                 CARTRIDGE (see note),
                 ECARTRIDGE (see note)
Optical          OPTICAL, WORM,              Yes      Yes    Yes   Yes
                 WORM12 (see note),
                 WORM14 (see note)
Removable media  REMOVABLEFILE               Yes      Yes    Yes   Yes
(file system)
Virtual volumes  SERVER                      —        —      —     Yes

Notes:
v The DISK device class exists at installation and cannot be changed.
v FILE libraries, drives, and paths are required for sharing with storage agents.
v Support for the CARTRIDGE device type includes the IBM 3480, 3490, and
3490E tape drives.
v The ECARTRIDGE device type is for StorageTek’s cartridge tape drives such as
the SD-3, 9480, 9890, and 9940 drives.
v The WORM12 and WORM14 device types are available on AIX and Microsoft
Windows only.
Example: Mapping devices to device classes
You have internal disk drives, an automated tape library with 8-mm drives, and a
manual DLT tape drive. You create a device class for each type of storage.
To map storage devices to device classes, use the information shown in Table 12.
Table 12. Mapping storage devices to device classes

Device Class  Description
DISK          Storage volumes that reside on the internal disk drive.
              Tivoli Storage Manager provides one DISK device class that
              is already defined. You do not need to, and cannot, define
              another device class for disk storage.
8MM_CLASS     Storage volumes that are 8-mm tapes, used with the drives
              in the automated library.
DLT_CLASS     Storage volumes that are DLT tapes, used on the DLT drive.
You must define any device classes that you need for your removable media
devices such as tape drives. See Chapter 10, “Defining device classes,” on page 251
for information on defining device classes to support your physical storage
environment.
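For example, the device classes in Table 12 might be defined as follows. This is a sketch; the library names are examples only and are created when you configure the devices:
define devclass 8mm_class devtype=8mm library=auto_8mm
define devclass dlt_class devtype=dlt library=manual_lib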
Example: Mapping storage pools to device classes and
devices
After you categorize your storage devices, you can identify availability, space, and
performance requirements for client data that is stored in server storage. These
requirements help you determine where to store data for different groups of clients
and different types of data. You can then create storage pools that are storage
destinations for backed-up, archived, or space-managed files to match
requirements.
For example, you determine that users in the business department have three
requirements:
v Immediate access to certain backed-up files, such as accounts receivable and
payroll accounts.
These files should be stored on disk. However, you need to ensure that data is moved off the disk to prevent the disk from becoming full. You can set up a storage hierarchy so that files migrate automatically from disk to the automated tape library.
v Periodic access to some archived files, such as monthly sales and inventory
reports.
These files can be stored on 8-mm tapes, using the automated library.
v Occasional access to backed-up or archived files that are rarely modified, such as
yearly revenue reports.
These files can be stored using the DLT drive.
To match user requirements to storage devices, you define storage pools, device
classes, and, for device types that require them, libraries and drives. For example,
to set up the storage hierarchy so that data migrates from the BACKUPPOOL to 8
mm tapes, you specify BACKTAPE1 as the next storage pool for BACKUPPOOL.
See Table 13.
Table 13. Mapping storage pools to device classes, libraries, and drives

BACKUPPOOL
  Device Class: DISK
  Library (Hardware): —
  Drives: —
  Volume Type: Storage volumes on the internal disk drive
  Storage Destination: For a backup copy group for files requiring immediate access

BACKTAPE1
  Device Class: 8MM_CLASS
  Library (Hardware): AUTO_8MM (Exabyte EXB-210)
  Drives: DRIVE01, DRIVE02
  Volume Type: 8-mm tapes
  Storage Destination: For overflow from the BACKUPPOOL and for archived data that is periodically accessed

BACKTAPE2
  Device Class: DLT_CLASS
  Library (Hardware): MANUAL_LIB (Manually mounted)
  Drives: DRIVE03
  Volume Type: DLT tapes
  Storage Destination: For backup copy groups for files that are occasionally accessed
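For example, the hierarchy in Table 13 might be set up with commands like the following. This is a sketch using the pool and device class names from the table; the MAXSCRATCH values are only illustrations.
/* pool names and MAXSCRATCH values are illustrative */
define stgpool backtape1 8mm_class maxscratch=100
define stgpool backtape2 dlt_class maxscratch=50
update stgpool backuppool nextstgpool=backtape1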
Note: Tivoli Storage Manager has the following default disk storage pools:
v BACKUPPOOL
v ARCHIVEPOOL
v SPACEMGPOOL
v DISKPOOL
For more information, see “Configuring random access volumes on disk devices” on page 108.
Planning for server storage
To determine the device classes and storage pools that you need for your server
storage, you must evaluate the devices in your storage environment.
Most devices can be configured using the Device Configuration Wizard in the
Tivoli Storage Manager Console. The Device Configuration Wizard is
recommended for configuring devices. See Chapter 7, “Configuring storage
devices,” on page 121. The wizard can guide you through many of the following
steps:
1. Determine which drives and libraries are supported by the server. For more
information on device support, see “Tivoli Storage Manager storage devices”
on page 76.
2. Determine which storage devices may be selected for use by the server. For
example, determine how many tape drives you have that you will allow the
server to use. For more information about selecting a device configuration, see
“Device configurations” on page 86.
The servers can share devices in libraries that are attached through a SAN. If the devices are not on a SAN, the server expects to have exclusive use of the drives defined to it. If another application (including another Tivoli Storage Manager server) tries to use a drive while the server to which the drive is defined is running, some server functions may fail. For more information about specific drives and libraries, see http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html.
3. Determine the device driver that supports the devices. For more information on device driver support, see “Selecting a device driver” on page 114.
4. Determine how to attach the devices to the server. For more information about attaching devices, see “Attaching an automated library device” on page 112.
5. Determine whether to back up client data directly to tape or to a storage hierarchy.
6. Determine which client data is backed up to which device, if you have multiple device types.
7. Determine the device type and device class for each of the available devices.
Group together similar devices and identify their device classes. For example,
create separate categories for 4 mm and 8 mm devices.
Tip: For sequential access devices, you can categorize the type of removable
media based on their capacity. For example, standard length cartridge tapes
and longer length cartridge tapes require different device classes.
8. Determine how the mounting of volumes is accomplished for the devices:
v Devices that require operators to load volumes must be part of a defined
MANUAL library.
v Devices that are automatically loaded must be part of a defined SCSI or 349X library. Each automated library device is a separate library.
v Devices that are controlled by Sun StorageTek Automated Cartridge System
Library Software (ACSLS) must be part of a defined ACSLS library.
v Devices that are managed by an external media management system must
be part of a defined EXTERNAL library.
9. If you are considering storing data for one Tivoli Storage Manager server
using the storage of another Tivoli Storage Manager server, consider network
bandwidth and network traffic. If your network resources constrain your
environment, you may have problems using the SERVER device type
efficiently.
Also consider the storage resources available on the target server. Ensure that
the target server has enough storage space and drives to handle the load from
the source server.
10. Determine the storage pools to set up, based on the devices you have and on
user requirements. Gather users’ requirements for data availability. Determine
which data needs quick access and which does not.
11. Be prepared to label removable media. You may want to create a new labeling
convention for media so that you can distinguish them from media used for
other purposes.
Server options that affect storage operations
Tivoli Storage Manager provides a number of options that you can specify in the
server options file (dsmserv.opt) to configure certain server storage operations.
Table 14 provides brief descriptions of these options. See the Administrator’s
Reference for details.
Table 14. Server storage options

3494SHARED
  Enables sharing of an IBM TotalStorage 3494 Tape Library between a Tivoli Storage Manager server and server applications other than a Tivoli Storage Manager server. This configuration is not recommended because it can cause drive contention.
ACSACCESSID
  Specifies the ID for the Automatic Cartridge System (ACS) access control.
ACSLOCKDRIVE
  Allows the drives within ACSLS libraries to be locked.
ACSQUICKINIT
  Allows a quick or full initialization of the ACSLS library.
ACSTIMEOUTX
  Specifies the multiple for the built-in timeout value for the ACSLS API.
ASSISTVCRRECOVERY
  Specifies whether the server assists an IBM 3570 or 3590 drive in recovering from a lost or corrupted Vital Cartridge Records (VCR) condition.
DRIVEACQUIRERETRY
  Specifies how many times the server retries the acquisition of a drive in a library when there are no drives available after acquiring a mount point.
NOPREEMPT
  Specifies whether the server allows certain operations to preempt other operations for access to volumes and devices. See “Preemption of client or server operations” on page 584 for details.
RESOURCETIMEOUT
  Specifies how long the server waits for a resource before canceling the pending acquisition of a resource. Note: For proper management of shared library resources, consider setting the RESOURCETIMEOUT option to the same time limit for all servers in a shared configuration. In the case of error recovery, Tivoli Storage Manager always defers to the longest time limit.
SEARCHMPQUEUE
  Specifies the order in which the server satisfies requests in the mount queue.
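For example, to give every server in a shared library configuration the same resource timeout, a line like the following might be added to each server’s dsmserv.opt file. The value of 60 minutes is only an illustration; see the Administrator’s Reference for the exact syntax and defaults.
* Resource timeout, in minutes; use the same value on all sharing servers
RESOURCETIMEOUT 60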
Chapter 5. Magnetic disk devices
Using magnetic disk devices, Tivoli Storage Manager can store essential data for
the server and client environments.
Tivoli Storage Manager stores data on magnetic disks in random access volumes,
as data is normally stored on disk, and in files on the disk that are treated as
sequential access volumes.
Magnetic disk devices allow you to:
v Store the database and the recovery log.
v Store client data that has been backed up, archived, or migrated from client
nodes. The client data is stored in storage pools. Procedures for configuring disk
storage of client data are described in this chapter.
v Store backups of the database and export and import data.
See the following sections:
Tasks:
“Configuring random access volumes on disk devices” on page 108
“Configuring FILE sequential volumes on disk devices” on page 108
“Varying disk volumes online or offline” on page 109
“Cache copies for files stored on disk” on page 109
“Freeing space on disk” on page 109
“Scratch FILE volumes” on page 110
“Volume history file and volume reuse” on page 110
Note: Some of the tasks described in this chapter require an understanding of
storage objects. For an introduction to these storage objects, see “Tivoli Storage
Manager storage objects” on page 76.
Requirements for disk subsystems
Tivoli Storage Manager requires certain behaviors of disk storage subsystems for
the database, the active and archive logs, and storage pool volumes of the DISK
device class and of FILE device types.
I/O operation results must be reported synchronously and accurately. For the
database and the active and archive logs, unreported or asynchronously reported
write errors that result in data not being permanently committed to the storage
subsystem can cause failures that range from internal processing errors to the
inability to restart the server. Depending upon the error, the result could be the
loss of some or all stored data.
For the database, the active and archive logs, and DISK device class storage pool
volumes, write operations must be nonvolatile. Data must be permanently
committed to the storage known to Tivoli Storage Manager. Tivoli Storage Manager
has many of the attributes of a database system, and data relationships that are
maintained require that data written as a group be permanently resident as a
group or not resident as a group. Intermediate states produce data integrity issues.
Data must be permanently resident following each operating-system write API
invocation.
For FILE device type storage pool volumes, data must be permanently resident
following an operating system flush API invocation. This API is used at key
processing points in the Tivoli Storage Manager application. The API is used when
data is to be permanently committed to storage and synchronized with database
and log records that have already been permanently committed to disk storage.
For subsystems that use caches of various types, the data must be permanently
committed by the write APIs (for the database, the active and archive logs, and
DISK device class storage pool volumes) and by the flush API (for FILE device
class storage pool volumes). Tivoli Storage Manager uses write-through flags
internally when using storage for the database, the active and archive logs, and
DISK device class storage pool volumes. If nonvolatile cache is used to safeguard
I/O writes to a device, if the nonvolatile cache is battery protected, and if the
power is not restored before the battery is exhausted, data for the I/O operation
can be lost. This would be the same as having uncommitted storage resulting in
data integrity issues.
To write properly to the Tivoli Storage Manager database, to active and archive
logs, and to DISK device class storage pool volumes, the operating system API
write invocation must synchronously and accurately report the operation results.
Similarly, the operating system API flush invocation for FILE device type storage
pool volumes must also synchronously and accurately report the operation results.
A successful result from the API for either write or flush must guarantee that the
data is permanently committed to the storage subsystem.
Contact the vendor for the disk subsystem if you have questions or concerns about
whether the stated requirements for Tivoli Storage Manager are supported. The
vendor should be able to provide the configuration settings to meet these
requirements.
Tivoli Storage Manager supports the use of remote file systems or drives for
reading and writing storage pool data, database backups, and other data
operations. Remote file systems in particular may report successful writes, even
after being configured for synchronous operations. This mode of operation causes
data integrity issues if the file system can fail after reporting a successful write.
Check with the vendor of your file system to ensure that flushes are performed to
nonvolatile storage in a synchronous manner.
Random access and sequential access disk devices
Before configuring your disk device, you should consider the differences between
the two methods of storing data on disks and the advantages and disadvantages of
each. The particular advantages provided by either device type will depend on the
operating system on which your Tivoli Storage Manager server is running.
Table 15 on page 105 provides some general information about the characteristics
of DISK devices (random access) and FILE devices (sequential access) and the
benefits of each.
Table 15. Comparing random access and sequential access disk devices

Storage space allocation and tracking
  Random Access (DISK): Disk blocks.
  Sequential Access (FILE): Volumes.
  Comment: Space allocation and tracking by blocks incurs higher overhead (more database storage space, and more processing power) than space allocation and tracking by volume.

Concurrent volume access
  Random Access (DISK): A volume can be accessed concurrently by different operations.
  Sequential Access (FILE): A volume can be accessed concurrently by different operations.
  Comment: Concurrent volume access means that two or more different operations can access the same volume at the same time.

Client restore operations
  Random Access (DISK): One session per restore.
  Sequential Access (FILE): Multiple concurrent sessions accessing different volumes simultaneously on both the server and the storage agent. Active versions of client backup data collocated in active-data pools.
  Comment: Multi-session restore enables backup-archive clients to perform multiple restore sessions for no-query restore operations, increasing the speed of restores. Active-data pools defined using sequential-access disk (FILE) enable fast client restore because the server does not have to physically mount tapes and does not have to position past inactive files. For more information, see “Concepts for client restore operations” on page 537 and “Backing up storage pools” on page 774.

Available for use in LAN-free backup
  Random Access (DISK): Not available.
  Sequential Access (FILE): Available for LAN-free backup using Tivoli SANergy®, a separate product, licensed to users through the Tivoli Storage Manager product. Tivoli SANergy is included with some versions of Tivoli Storage Manager.
  Comment: Using LAN-free backup, data moves over a dedicated storage area network (SAN) to the sequential-access storage device, freeing up bandwidth on the LAN. For more information, see “LAN-free data movement” on page 88.

Volume configuration
  Random Access (DISK): Operators need to define volumes and specify their sizes, or define space triggers to automatically allocate space when a threshold is reached.
  Sequential Access (FILE): The Tivoli Storage Manager server acquires and defines scratch volumes as needed if storage administrators set the MAXSCRATCH parameter to a value greater than zero. Operators can also define space triggers to automatically allocate space when a threshold is reached.
  Comment: For more information about volumes on random-access media, see “Configuring random access volumes on disk devices” on page 108. For more information about volumes on FILE devices, see “Configuring FILE sequential volumes on disk devices” on page 108.

Tivoli Storage Manager server caching (after files have been migrated to the next storage pool in the storage pool hierarchy)
  Random Access (DISK): Server caching is available, but overhead is incurred in freeing the cached space. For example, as part of a backup operation, the server must erase cached files to make room for storing new files.
  Sequential Access (FILE): Server caching is not necessary because access times are comparable to random access (DISK) access times.
  Comment: Caching can improve how quickly the Tivoli Storage Manager server retrieves files during client restore or retrieve operations. For more information, see “Caching in disk storage pools” on page 317.

Recovery of disk space
  Random Access (DISK): When caching is enabled, the space occupied by cached files is reclaimed on demand by the server. When caching is disabled, the server recovers disk space immediately after all physical files are migrated or deleted from within an aggregate.
  Sequential Access (FILE): The server recovers disk space in a process called reclamation, which involves copying physical files to another volume, making the reclaimed volume available for reuse. This minimizes the amount of overhead because there is no mount time required.
  Comment: For more information about reclamation, see “Reclaiming space in sequential-access storage pools” on page 350.

Aggregate reconstruction
  Random Access (DISK): Not available; the result is wasted space.
  Sequential Access (FILE): Aggregate reconstruction occurs as part of the reclamation process. It is also available using the RECONSTRUCT parameter on the MOVE DATA and MOVE NODEDATA commands.
  Comment: An aggregate is two or more files grouped together for storage purposes. Most data from backup-archive clients is stored in aggregates. Aggregates accumulate empty space as files are deleted, expire, or as they are deactivated in active-data pools. For more information, see “How Tivoli Storage Manager reclamation works” on page 350.

Available for use as copy storage pools or active-data pools
  Random Access (DISK): Not available.
  Sequential Access (FILE): Available.
  Comment: Copy storage pools and active-data pools provide additional levels of protection for client data. For more information, see “Backing up storage pools” on page 774.

File location
  Random Access (DISK): Volume location is limited by the trigger prefix or by manual specification.
  Sequential Access (FILE): FILE volumes use directories. A list of directories may be specified. If directories correspond with file systems, performance is optimized.

Restoring the database to an earlier level
  Random Access (DISK): See comment.
  Sequential Access (FILE): Use the REUSEDELAY parameter to retain volumes in a pending state; volumes are not rewritten until the specified number of days have elapsed.
  Comment: During database restoration, if the data is physically present, it can be accessed after DSMSERV RESTORE DB. Use the AUDIT VOLUME command to identify inconsistencies between information about a volume in the database and the actual content of the volume. You can specify whether the Tivoli Storage Manager server resolves the database inconsistencies it finds. For more information about auditing volumes, see “Auditing storage pool volumes” on page 797. For more information about reuse delay, see “Delaying reuse of volumes for recovery purposes” on page 780. For command syntax, refer to the Administrator’s Reference.

Migration
  Random Access (DISK): Performed by node. Migration from random-access pools can use multiple processes.
  Sequential Access (FILE): Performed by volume. Files are not migrated from a volume until all files on the volume have met the threshold for migration delay as specified for the storage pool. Migration from sequential-access pools can use multiple processes.
  Comment: For more information, see “Migrating disk storage pools” on page 308.

Storage pool backup
  Random Access (DISK): Performed by node and filespace. Every storage pool backup operation must check every file in the primary pool to determine whether the file must be backed up.
  Sequential Access (FILE): Performed by volume. For a primary pool, there is no need to scan every object in the primary pool every time the pool is backed up to a copy storage pool.
  Comment: For more information, see “Storage pools” on page 276.

Copying active data
  Random Access (DISK): Performed by node and filespace. Every storage pool copy operation must check every file in the primary pool to determine whether the file must be copied.
  Sequential Access (FILE): Performed by volume. For a primary pool, there is no need to scan every object in the primary pool every time the active data in the pool is copied to an active-data pool.
  Comment: For more information, see “Storage pools” on page 276.

Transferring data from non-collocated to collocated storage
  Random Access (DISK): Major benefits by moving data from non-collocated storage to DISK storage, and then allowing data to migrate to collocated storage. See “Restoring files to a storage pool with collocation enabled” on page 793 for more information.
  Sequential Access (FILE): Some benefit by moving data from non-collocated storage to FILE storage, and then moving data to collocated storage.
  Comment: For more information, see “Keeping client files together using collocation” on page 340.

Shredding data
  Random Access (DISK): If shredding is enabled, sensitive data is shredded (destroyed) after it is deleted from a storage pool. Write caching on a random access device should be disabled if shredding is enforced.
  Sequential Access (FILE): Shredding is not supported on sequential access disk devices.
  Comment: For more information, see “Securing sensitive client data” on page 519.

Data deduplication
  Random Access (DISK): Not available.
  Sequential Access (FILE): Duplicate data in primary, copy, and active-data pools can be identified and removed, reducing the overall amount of time that is required to retrieve data from disk.
  Comment: For more information, see “Data deduplication overview” on page 319.
Configuring random access volumes on disk devices
Tivoli Storage Manager provides a predefined DISK device class that is used with
all disk devices.
To set up a random access volume on disk to store client backup, archive, or
space-managed data, do the following:
1. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
2. Click Wizards, then double-click Disk Volume in the right pane.
3. Follow the instructions in the wizard.
Note: Define storage pool volumes on disk drives that reside on the server
machine, not on remotely mounted file systems. Network attached drives can
compromise the integrity of the data that you are writing.
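As an alternative to the wizard, a random access volume can be created with the DEFINE VOLUME command. In this sketch, BACKUPPOOL is one of the default disk storage pools, and the volume path and size are hypothetical:
/* volume path and size are illustrative */
define volume backuppool c:\tsmdata\bkup01.dsm formatsize=2048
The FORMATSIZE parameter causes the server to allocate and format the volume (here, 2048 MB) in a single step.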
Configuring FILE sequential volumes on disk devices
Magnetic disk storage uses files as volumes that store data sequentially (as on tape
volumes). The space for FILE volumes is managed by the operating system rather
than by Tivoli Storage Manager.
To configure files as volumes that store data sequentially, do the following:
1. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
2. Click Wizards, then double-click Device Configuration in the right pane.
3. Navigate to the Tivoli Storage Manager Device Selection page and click New.
The Properties dialog appears.
4. Select File Device from the drop down list.
5. Enter or browse for the directory you want to allocate as a FILE volume.
6. Click OK. Tivoli Storage Manager configures the FILE volume.
7. Click Next to complete the wizard.
108
IBM Tivoli Storage Manager for Windows: Administrator’s Guide
The Device Configuration Wizard automatically creates a storage pool when the
FILE volume is configured. Administrators must then do one of the following:
v Use Tivoli Storage Manager policy to specify the new storage pool as the
destination for client data. See Chapter 14, “Implementing policies for client
data,” on page 455.
v Place the new storage pool in the storage pool migration hierarchy by updating
an already defined storage pool. See “Example: Updating storage pools” on page
286.
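If you configure FILE storage with commands instead of the wizard, the device class and storage pool might look like this sketch. The class and pool names, directory, and size values are hypothetical:
/* names, directory, and sizes are illustrative */
define devclass fileclass devtype=file maxcapacity=4g mountlimit=2 directory=c:\tsmdata\filevols
define stgpool filepool fileclass maxscratch=100
Setting MAXSCRATCH to a value greater than zero lets the server create scratch FILE volumes in the directory as needed.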
Varying disk volumes online or offline
To perform maintenance on a disk volume or to upgrade disk hardware, you can
vary a disk volume offline. If Tivoli Storage Manager encounters a problem with a
disk volume, the server automatically varies the volume offline.
Task: Vary a disk volume online or offline
Required Privilege Class: System or operator
For example, to vary the disk volume named STGVOL.POOL001 offline, enter:
vary offline stgvol.pool001
You can make the disk volume available to the server again by varying the volume
online. For example, to make the disk volume named STGVOL.POOL001 available
to the server, enter:
vary online stgvol.pool001
Cache copies for files stored on disk
When you define a storage pool that uses disk random access volumes, you can
choose to enable or disable cache. When you use cache, a copy of the file remains
on disk storage even after the file has been migrated to the next pool in the storage
hierarchy (for example, to tape). The file remains in cache until the space it
occupies is needed to store new files.
Using cache can improve how fast a frequently accessed file is retrieved. Faster
retrieval can be important for clients storing space-managed files. If the file needs
to be accessed, the copy in cache can be used rather than the copy on tape.
However, using cache can degrade the performance of client backup operations
and increase the space needed for the database. For more information, see
“Caching in disk storage pools” on page 317.
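For example, caching could be turned on for a random access pool with a command like the following. Using BACKUPPOOL, one of the default disk pools, is only an illustration:
update stgpool backuppool cache=yes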
Freeing space on disk
As client files expire, the space they occupy is not freed for other uses until you
run expiration processing on the server.
Expiration processing deletes from the database information about any client files
that are no longer valid according to the policies you have set. For example,
suppose four backup versions of a file exist in server storage, and only three
versions are allowed in the backup policy (the management class) for the file.
Expiration processing deletes information about the oldest of the four versions of
the file. The space that the file occupied in the storage pool becomes available for
reuse.
You can run expiration processing by using one or both of the following methods:
v Use the EXPIRE INVENTORY command. See “Running expiration processing to
delete expired files” on page 490.
v Set the server option for the expiration interval, so that expiration processing
runs periodically. See the Administrator’s Reference for information on how to set
the options.
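For example, expiration processing might be started manually, run in the foreground, and limited to one hour with a command like this (the DURATION value is only an illustration):
expire inventory duration=60 wait=yes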
Shredding occurs only after a data deletion commits, but it is not necessarily
completed immediately after the deletion. The space occupied by the data to be
shredded remains occupied while the shredding takes place, and is not available as
free space for new data until the shredding is complete. When sensitive data is
written to server storage and the write operation fails, the data that was already
written is shredded. For more information, see “Securing sensitive client data” on
page 519.
Scratch FILE volumes
You can specify a maximum number of scratch volumes for a storage pool that has a FILE device type. When the server needs a new volume, it automatically creates a file that is a scratch volume, up to the maximum number you specify.
When scratch volumes used in storage pools become empty, the files are deleted.
Scratch volumes can be located in multiple directories on multiple file systems.
Volume history file and volume reuse
When you back up the database or export server information, Tivoli Storage
Manager records information about the volumes used for these operations in the
volume history. Tivoli Storage Manager will not allow you to reuse these volumes
until you delete the volume information from the volume history.
To reuse volumes that have previously been used for database backup or export,
use the DELETE VOLHISTORY command. For information about the volume
history and volume history files, see “Saving the volume history file” on page 782.
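For example, to allow reuse of database backup volumes written more than seven days ago, the corresponding history entries might be deleted like this (the retention period is only an illustration; choose values that match your recovery plan):
delete volhistory type=dbbackup todate=today-7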
Note: If your server is licensed for the disaster recovery manager (DRM) function,
the volume information is automatically deleted during MOVE DRMEDIA
command processing. For additional information about DRM, see Chapter 25,
“Using disaster recovery manager,” on page 815.
Chapter 6. Using devices with the server system
For IBM Tivoli Storage Manager to use a device, you must attach the device to
your server system and install the appropriate device driver.
Attached devices should be on their own Host Bus Adapter (HBA) and should not
share with other devices types (disk, CDROM, and so on). IBM tape drives have
some special requirements for HBAs and associated drivers.
Tasks:
“Attaching a manual drive” on page 111
“Attaching an automated library device” on page 112
“Device alias names” on page 113
“Selecting a device driver” on page 114
“Installing the Centera SDK for Centera shared libraries” on page 119
Attaching a manual drive
Attaching manual drives to your system allows Tivoli Storage Manager to use them for storage operations.
Perform the following steps to attach a manual drive:
1. Install the SCSI adapter card in your system, if not already installed.
2. Determine the SCSI IDs available on the SCSI adapter card to which you are
attaching the device. Find one unused SCSI ID for each drive.
3. Follow the manufacturer’s instructions to set the SCSI ID for the drive to one of the unused SCSI IDs that you found. Usually this means setting switches on the back of the device or using the device operator’s panel.
Note: Each device that is connected in a chain to a single SCSI bus must be set
to a unique SCSI ID. If each device does not have a unique SCSI ID, you may
have serious system problems.
4. Follow the manufacturer’s instructions to attach the device to your server
system hardware.
Attention:
a. Power off your system before attaching a device to prevent damage to the
hardware.
b. Attach a terminator to the last device in the chain of devices connected on
one SCSI adapter card.
5. Install the appropriate device drivers. See “Selecting a device driver” on page
114.
6. Determine the name for the device and record the name. This information can
help you when you need to perform operations such as adding volumes. Keep
the records for future reference.
Attaching an automated library device
Perform the following steps to attach an automated library device:
1. Install the SCSI adapter card in your system, if not already installed.
2. Determine the SCSI IDs available on the SCSI adapter card to which you are
attaching the device. Find one unused SCSI ID for each drive, and one unused
SCSI ID for the library or autochanger controller.
Note: In some automated libraries, the drives and the autochanger share a
single SCSI ID, but have different LUNs. For these libraries, only a single SCSI
ID is required. Check the documentation for your device.
3. Follow the manufacturer’s instructions to set the SCSI ID for the drives and
library controller to the unused SCSI IDs that you found. Usually this means
setting switches on the back of the device.
Note: Each device that is connected in a chain to a single SCSI bus must be set
to a unique SCSI ID. If each device does not have a unique SCSI ID, you may
have serious system problems.
4. Follow the manufacturer’s instructions to attach the device to your server
system hardware.
Attention:
a. Power off your system before attaching a device to prevent damage to the
hardware.
b. Attach a terminator to the last device in the chain of devices connected on
one SCSI adapter card. Detailed instructions should be in the
documentation that came with your hardware.
5. Install the appropriate device drivers. See “Selecting a device driver” on page
114.
6. Determine the name for each drive and for the library, and record the names.
This information can help you when you need to perform operations such as
adding volumes to an autochanger. Keep the records for future reference.
7. For the IBM Tivoli Storage Manager server to access a SCSI library, set the
device for the appropriate mode. This is usually called random mode; however,
terminology may vary from one device to another. Refer to the documentation
for your device to determine how to set it to the appropriate mode.
Note:
a. Some libraries have front panel menus and displays that can be used for
explicit operator requests. However, if you set the device to respond to such
requests, it typically will not respond to IBM Tivoli Storage Manager
requests.
b. Some libraries can be placed in sequential mode, in which volumes are
automatically mounted in drives by using a sequential approach. This mode
conflicts with how IBM Tivoli Storage Manager accesses the device.
Device alias names
The server uses alias names to identify tape and optical disk devices to the IBM
Tivoli Storage Manager device driver.
Device names for the IBM Tivoli Storage Manager device driver differ from device
names for the Windows device driver. For example, an automated library device
might be known as lb0.0.0.1 to the IBM Tivoli Storage Manager device driver and
as changerx (where x is a number 0–9), to the Windows device driver.
If you use the Device Configuration Wizard to initially configure devices, the
wizard automatically provides the appropriate device name. However, if you
configure devices by using IBM Tivoli Storage Manager commands, you must
provide the device names as parameters to the DEFINE PATH command. If you
modify device driver control, you may need to provide alias name information in
the Device Exclude List. The names can be either:
v Drive letters, for devices attached as local, removable file systems
v Alias names, for devices controlled by either the IBM Tivoli Storage Manager
device driver or the Windows device drivers
“Obtaining device alias names” on page 114 describes the procedure for using the
IBM Tivoli Storage Manager Console to obtain device names.
Alias names replace the real device names in IBM Tivoli Storage Manager
commands and screens. The IBM Tivoli Storage Manager device driver
communicates with devices by using the alias names. See “Obtaining device alias
names” on page 114.
Alias names appear in the form mtx.y.z.n, lbx.y.z.n, or opx.y.z.n, where:

mt  Indicates that the device is a tape device. For example:
      mt3 (tape drive at SCSI ID 3, LUN 0, bus 0, port 0)
      mt5.0.0.1 (tape drive at SCSI ID 5, LUN 0, bus 0, port 1)
lb  Indicates that the device is the controller for an automated library device. For example:
      lb4.1 (library at SCSI ID 4, LUN 1, bus 0, port 0)
op  Indicates that the device is an optical device. For example:
      op4 (optical drive at SCSI ID 4, LUN 0, bus 0, port 0)
x   Indicates the SCSI ID for the targeted device.
y   Indicates the logical unit number (LUN) for the targeted device.
z   Indicates the bus number supported by the adapter device driver.
n   Indicates the port number for the SCSI adapter device driver.

Note: Alias names can be abbreviated when the trailing numbers are zeros.
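For example, if a library controller is reported at alias lb4.0.0.1 and a drive at alias mt5.0.0.1, paths might be defined with commands like these. The server name SERVER1 and the object names AUTOLIB and DRIVE01 are hypothetical:
/* SERVER1, AUTOLIB, and DRIVE01 are hypothetical names */
define path server1 autolib srctype=server desttype=library device=lb4.0.0.1
define path server1 drive01 srctype=server desttype=drive library=autolib device=mt5.0.0.1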
Obtaining device alias names
You can obtain device alias names if you use IBM Tivoli Storage Manager
commands to configure devices.
If you use the IBM Tivoli Storage Manager Device Configuration Wizard to initially
configure devices, this step is unnecessary because the wizard gathers information
about the SCSI Target IDs, logical unit numbers, bus numbers, and SCSI port
numbers required for the alias names. However, if you add devices using IBM
Tivoli Storage Manager commands, you must provide the information in the
DEFINE PATH command. To determine the SCSI properties for a device:
1. From the Tivoli Storage Manager Console, expand the tree to Tivoli Storage
Manager Device Driver for the machine that you are configuring.
2. Expand Tivoli Storage Manager Device Driver and Reports.
3. Click Device Information. The Device Information view appears. The view
lists all devices connected to the server and lists their SCSI attributes in the
form of the alias names.
4. You can also obtain device alias names from the TSM Name column.
See “Device alias names” on page 113 for an overview of IBM Tivoli Storage
Manager device names.
Selecting a device driver
To use a tape or optical device, you must install the appropriate device driver.
IBM device drivers are available for most IBM labeled devices. The Tivoli Storage
Manager device driver, which is provided with IBM Tivoli Storage Manager, is
available for non-IBM devices. Windows device drivers are also supported in some
cases.
Drivers for IBM devices
Tivoli Storage Manager supports drivers for IBM devices.
IBM device drivers are available on the ftp site: ftp://ftp.software.ibm.com/
storage/devdrvr/. It is recommended that you install the most current driver
available.
The IBM device driver should be installed for the following devices:
IBM 3494 library
IBM Ultrium 3580, TS2230, TS2340 tape drives
IBM 3581, 3582, 3583, 3584 tape libraries
IBM 3590, 3590E, and 3590H tape drives
IBM 3592 and TS1120 tape drives
IBM TS3100, TS3200, TS3310, TS3400, and TS3500 tape libraries
v For the most up-to-date list of devices and operating-system levels supported by
IBM device drivers, see the Tivoli Storage Manager Supported Devices Web site
at http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html.
v For installation information, see the IBM Tape Device Drivers Installation and
User’s Guide. You can download the guide from the Doc folder on
ftp://ftp.software.ibm.com/storage/devdrvr/.
Tivoli Storage Manager supports all devices that are supported by IBM device
drivers. However, Tivoli Storage Manager does not support all the
operating-system levels that are supported by IBM device drivers.
Tivoli Storage Manager Support for Multipath I/O with IBM Tape
Devices
Multipath I/O is the use of different paths to get to the same physical device (for
example, through multiple host bus adapters, switches, and so on). Multipathing
helps ensure that there is no single point of failure.
The IBM tape device driver provides multipathing support so that if a path fails,
the Tivoli Storage Manager server can use a different path to access data on a
storage device. The failure and transition to a different path are undetected by the
server. The IBM tape device driver also uses multipath I/O to provide dynamic
load balancing for enhanced I/O performance.
A computer has a unique SCSI address and Tivoli Storage Manager device name
for each path to a changer or tape device, even though some paths may be
redundant. For each set of redundant paths, you must define only one path to
Tivoli Storage Manager using one of the corresponding Tivoli Storage Manager
device names.
You can determine which Tivoli Storage Manager device names are redundant by
using a tool such as tsmdlst to review the device serial numbers. If multiple Tivoli
Storage Manager changer or tape device names have the same serial number, then
they are redundant and you must define only one to Tivoli Storage Manager.
For an overview of path failover and load balancing, as well as information about
how to enable, disable, or query the status of path failover for each device, see the
IBM Tape Device Drivers Installation and User’s Guide.
Preventing conflicts between the IBM device driver and RSM
The IBM device driver allows devices to be managed by both the Windows
Removable Storage component (RSM) and Tivoli Storage Manager. If you are not
using RSM to manage your SCSI tape library devices, disable it so that it does not
conflict with Tivoli Storage Manager’s use of these devices.
To disable RSM services, complete the following steps:
1. From your desktop, right click My Computer.
2. Select Manage.
3. Select Services/Applications.
4. Select Services.
5. Right click Removable Storage, select All Tasks, and then click Stop.
6. Right click Removable Storage again and select Properties.
7. Under the General tab, choose Disabled for the Startup Type.
8. Click OK.
You can also allow RSM to run, but selectively disable each SCSI device that it tries
to manage:
1. From your desktop, right click My Computer.
2. Select Manage.
3. Select Storage.
4. Select Removable Storage.
5. Select Physical Locations.
6. Under Physical Locations, you will see a list of tape libraries, under which the library’s drives are listed.
7. Right click each library and drive to be disabled from RSM and select its properties.
8. Uncheck the Enable Library or Enable Drive box.
9. Click OK.
10. Close the Computer Management Console.
When the operating system is started, the Windows device driver tries to acquire
the devices it supports before the IBM Tivoli Storage Manager device driver can
acquire devices. Read the following sections to determine how to select the device
driver you want.
Drivers for non-IBM devices
If you manage a mixture of devices, you can control some with the Tivoli Storage
Manager device driver and others with the Windows device driver. The way you
set up competing device drivers determines which one acquires devices when the
server is started.
The Tivoli Storage Manager device drivers are available at http://www.ibm.com/
software/sysmgmt/products/support/IBMTivoliStorageManager.html. For devices
not currently supported by the Tivoli Storage Manager device driver, the Windows
driver may be suitable. See “Creating a file to list devices and their attributes” on
page 118 for more information.
v For the following tape devices, you can choose whether to install the Tivoli
Storage Manager device driver or the Windows device driver:
4MM
8MM
DLT
DTF
QIC
StorageTek SD3, 9490, 9840, and 9940
v For optical, WORM, and non-IBM LTO devices, you must install the Tivoli
Storage Manager device driver.
v Removable media devices (attached as local file systems) require the Windows
device driver.
v All SCSI-attached libraries that contain optical and tape drives from the list
above must use the Tivoli Storage Manager changer driver.
v Third party vendor device drivers are supported if they are supplied by the
hardware vendor and are associated with the GENERICTAPE device class. Using
a device class other than GENERICTAPE with a third party vendor device driver
is not recommended. Generic device drivers are not supported in WORM device
classes. For more information, see the DEFINE DEVCLASS - GENERICTAPE
command in the Administrator’s Reference.
Installing device drivers for IBM 3494 libraries
You can install device drivers to use IBM 3494 tape libraries. The IBM tape library
driver consists of the ibmatl (a service) and other components.
To install the device driver for an IBM 3494 Tape Library Dataserver, refer to the
IBM TotalStorage Tape Device Drivers Installation and User’s Guide.
To define a path for the library, you can determine the symbolic name of the
library by verifying the value entered in the C:\winnt\ibmatl.conf file. For example,
if the symbolic name for the library in the C:\winnt\ibmatl.conf file is 3494a, then
this is the name of your device. Drives in the library are set up separately.
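For example, with the symbolic name 3494a from the ibmatl.conf file, the library and its path might be defined like this sketch (the library name LIB3494 and server name SERVER1 are hypothetical):
/* LIB3494 and SERVER1 are hypothetical names */
define library lib3494 libtype=349x
define path server1 lib3494 srctype=server desttype=library device=3494a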
Installing the Tivoli Storage Manager device driver
The Tivoli Storage Manager device driver is installed into the driver store through
the Device Driver Installation Wizard. The wizard is displayed during the Tivoli
Storage Manager device driver package installation.
Before installing a new version of the Tivoli Storage Manager device driver,
uninstall the previous version. Then complete the following steps during
installation of the device driver package.
1. When the Device Driver Installation Wizard welcome panel displays, select
Next and proceed through the panels to install the device drivers.
Note:
v Windows 2003: During installation, the system may display a warning dialog
box detailing that the software has not passed Windows Logo testing to
verify compatibility with your version of Windows. You may see this
warning several times. Always select Continue Anyway.
v Windows 2008: During installation, the system may display a Windows Security dialog box asking if you would like to install the device software. Place a check mark on Always trust software from “IBM Corporation” and select Install.
2. Once your device drivers have been installed you will come to the final panel
in the wizard. Select Finish to complete the installation.
After a successful installation, use the Device Manager to configure devices with
the Tivoli Storage Manager device driver.
Uninstalling the Tivoli Storage Manager device driver
The Tivoli Storage Manager device driver should be uninstalled any time you are
planning on upgrading to a more current version.
Complete the following steps to uninstall the Tivoli Storage Manager device driver.
1. From your Windows Control Panel, navigate to Add or Remove Programs on
Windows 2003 or Programs and Features on Windows 2008.
2. Remove or uninstall the IBM Tivoli Storage Manager Device Driver entry.
3. Do not manually remove the Windows Driver Package entries for tsmscsi. These packages are automatically removed when the IBM Tivoli Storage Manager Device Driver program is removed in step 2. These entries, however, may still appear in the Add or Remove Programs or Programs and Features windows until the window is refreshed.
Windows device drivers
Windows device drivers provide basic connectivity for devices using Removable
Storage Manager (RSM) or native Windows backup tools. Occasionally, you can
use devices that are not yet supported by the IBM Tivoli Storage Manager device
driver by using the Windows device drivers.
The server cannot use all of the devices that Windows device drivers support
because some devices do not have all the functions that the server requires. To
determine if you can use the Windows device driver with a specific device, see
“Creating a file to list devices and their attributes.” You can find the setup
procedure for these devices at “Configuring devices not supported by the Tivoli
Storage Manager device driver” on page 125.
v Tivoli Storage Manager does not recognize the device type.
If you add devices and intend to use the Windows device drivers, you should
understand that the server does not know device types and recording formats.
For example, if you use a Windows device driver for a 4MM drive using the
DDS2 recording format, IBM Tivoli Storage Manager knows only that the device
is a tape drive and will use the default recording format.
The server cannot prevent errors when it does not know the device type. For
example, if one GENERICTAPE device class points to a manual library device
containing a 4MM drive and an 8MM drive, the server may make an impossible
request: mount a 4MM cartridge into an 8MM drive.
v Device problems may be more difficult to solve.
The server cannot report I/O errors with as much detail. Without the IBM Tivoli
Storage Manager device driver, the server can obtain only minimal information
for display in the server console log.
Creating a file to list devices and their attributes
Devices may be used with Windows device drivers or with the manufacturer’s
device drivers, but the devices must have specified capabilities.
The device should be able to perform the following tasks:
v Write in variable mode
v Write filemarks
v Forward and reverse space blocks
v Forward and reverse space filemarks
A file listing devices and their attributes can be created by completing the
following procedure.
1. Click Start→Programs→Command Prompt on the Windows Start button. The
Command Prompt dialog appears.
2. Change to the directory in which the IBM Tivoli Storage Manager Console has been installed. For default installations, the path resembles the following:
c:\program files\tivoli\tsm\console
3. To create the file, type in the following command:
tsmdlst devlist.txt
4. To view the file, type in the following command:
notepad devlist.txt
Controlling devices with the Tivoli Storage Manager device
driver
On Windows systems, devices are automatically controlled by the default Windows
device driver, even if you install the Tivoli Storage Manager driver (tsmscsi).
Tape drives may be automatically controlled by the Tivoli Storage Manager device
driver if the Windows device drivers are not available. If the devices are not
automatically configured and controlled by the Tivoli Storage Manager device
driver, you must manually update the controlling driver for each device that you
want controlled by the tsmscsi device driver.
Perform the following procedures from the Device Manager Console:
1. Right click on the device and select Properties. Select the Driver tab and Driver
File Details. This will allow you to see the driver that is currently controlling
your device.
2. You will need to configure the device to be used by tsmscsi.sys by right
clicking on the device and selecting Update Driver or by selecting Action and
then Update Driver. The Hardware Update Wizard will appear.
3. On Windows Server 2003, select Install from a list or specific location
(Advanced). Click Next. On Windows Server 2008, select Browse my computer
for driver software.
4. On Windows Server 2003, select Don’t search. I will choose the driver to
install. Click Next. On Windows Server 2008, select Let me pick from a list of
device drivers on my computer.
5. Select the IBM Tivoli Storage Manager device driver to control the device.
6. Click Next.
7. On Windows Server 2003, from the Hardware Installation panel, click
Continue Anyway. Click Finish.
8. Verify that the device has been configured correctly for tsmscsi:
a. Right click on the device and select Properties.
b. Select the driver tab and driver details.
Installing the Centera SDK for Centera shared libraries
Beginning with Tivoli Storage Manager Version 5.5, Centera shared libraries are not
installed with the server. In order to use Centera with Tivoli Storage Manager, the
Centera SDK must be installed.
Perform the following steps when setting up the Tivoli Storage Manager server to
access Centera.
1. Install the Tivoli Storage Manager server.
2. If you are upgrading from a previous level of Tivoli Storage Manager, delete
the following Centera SDK library files from the directory where the server was
installed:
FPLibrary.dll
FPParser.dll
fpos.dll
PAImodule.dll
3. Contact your EMC representative to obtain the installation packages and
instructions to install the Centera SDK Version 3.2 or later.
4. Install the Centera SDK. During the installation, take note of the directory
where the Centera SDK is installed.
a. Unzip and untar the package in a working directory.
b. Copy the files in the lib32 directory to the directory with the server
executable (dsmserv.exe).
5. Start the Tivoli Storage Manager server and set up the policy, device class, and
storage pools for Centera.
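For example, once the SDK is in place, a Centera device class and storage pool might be defined like this sketch. The IP address, PEA file path, and object names are hypothetical; see the Administrator’s Reference for the full DEVTYPE=CENTERA syntax:
/* address, PEA file, and names are illustrative */
define devclass centeraclass devtype=centera hladdress=192.168.1.10?c:\centera\server1.pea
define stgpool centerapool centeraclass maxscratch=50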
Chapter 7. Configuring storage devices
You must understand the concepts and procedures for configuring tape devices,
optical disk devices, and removable file devices with Tivoli Storage Manager.
For the most up-to-date list of supported devices and operating-system levels, see
the Tivoli Storage Manager Supported Devices Web site at http://www.ibm.com/
software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html.
Concepts:
“Device configuration overview” on page 122
“Mixed device types in libraries” on page 92
“Server options that affect storage operations” on page 102
“Impact of device changes on the SAN” on page 165
“Defining devices and paths” on page 166
Use the following table to locate instructions for specific tasks:
Tasks:
“Configuring manual devices” on page 123
“Configuring automated library devices” on page 124
“Configuring optical devices” on page 124
“Configuring devices not supported by the Tivoli Storage Manager device driver” on page
125
“Configuring removable media devices” on page 126
“Configuring devices using Tivoli Storage Manager commands” on page 133
“Configuring Tivoli Storage Manager servers to share SAN-connected devices” on page 157
“Configuring Tivoli Storage Manager for LAN-free data movement” on page 161
“Validating your LAN-free configuration” on page 162
“Configuring Tivoli Storage Manager for NDMP operations” on page 162
“Configuring IBM 3494 libraries” on page 137
“ACSLS-managed libraries” on page 150
“Troubleshooting device configuration” on page 163
Configuration tasks are performed using the Tivoli Storage Manager Console and
the command line interface. For information about the Tivoli Storage Manager
Console, see the Tivoli Storage Manager Console online help. For information
about Tivoli Storage Manager commands, see the Administrator’s Reference or issue
the HELP command from the command line of a Tivoli Storage Manager
administrative client.
All of the commands can be performed from the administrative Web interface. For
more information about using the administrative interface, see the Installation
Guide.
Some of the tasks documented require an understanding of Tivoli Storage Manager
storage objects. For an introduction to these storage objects, see “Tivoli Storage
Manager storage objects” on page 76.
Device configuration overview
You can configure devices using the Administration Center wizard or configure
them manually.
The following steps give an overview of the device configuration process.
1. Plan for the device.
2. Attach the device to the server. See the device manufacturer’s documentation
for information about attaching the device.
3. Start the appropriate device driver. Both the Tivoli Storage Manager device
driver and the native Windows device driver can be used. You may need to
specify which device driver acquires which devices.
4. Configure the device. The device configuration wizard automatically detects
drives, and allows you to drag and drop them to configure.
Important: In most cases, the server expects to have exclusive use of devices
defined to the server. Attempting to use a Tivoli Storage Manager device with
another application might cause the server or the other application to fail. This
restriction does not apply to 3494 library devices, or when using a storage area
network (SAN) to share library devices.
5. Determine the media type and device type for client data.
You can link clients to devices by directing client data to a type of media. For
example, accounting department data might be directed to LTO Ultrium tapes,
and as a result the server would select LTO Ultrium devices.
You can direct data to a specific media type through Tivoli Storage Manager
policy. When you register client nodes, you specify the associated policy.
For configuring devices by using Tivoli Storage Manager commands, you must
also define or update the Tivoli Storage Manager policy objects that will link
clients to the pool of storage volumes and to the device.
6. Register clients to the policy domain defined or updated in the preceding step.
This step links clients and their data with storage volumes and devices.
7. Prepare media for the device.
Label tapes and optical disks before they can be used. For automated library
devices, you must also add the media to the device’s volume inventory by
checking media into the library device.
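For an automated SCSI library, step 7 might be performed with a single command that labels bar-coded volumes and checks them in as scratch volumes. This is a sketch; the library name AUTOLIB is hypothetical:
/* AUTOLIB is a hypothetical library name */
label libvolume autolib search=yes labelsource=barcode checkin=scratch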
Windows device configuration wizard
You can configure some devices with the Device Configuration Wizard. While it is
recommended that you use the wizard whenever possible, some devices, such as
the IBM 3494 Tape Library Dataserver, StorageTek Volsafe, and Sony AIT WORM
must be added using Tivoli Storage Manager commands.
Task: Adding devices
Required Privilege Class: System
Configuring manual devices
You can configure manually-operated, stand-alone tape and optical devices that are
supported by the Tivoli Storage Manager device driver.
For devices not yet supported by the Tivoli Storage Manager device driver, you
can use the Windows device driver. Perform the following steps to configure
manually-operated, stand-alone tape and optical devices:
1. Attach the device to the system.
Follow the manufacturer’s instructions to attach the device to the system.
2. Set up the appropriate device driver for the device.
3. Configure the device.
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance that you are configuring.
b. Click Wizards, then double-click Device Configuration in the right pane.
The Device Configuration Wizard appears.
c. Follow the instructions in the wizard.
4. Determine your backup strategy.
Determine the device to which the server backs up client data, and whether
client data is backed up to disk and then migrated to tape, or backed up
directly to tape.
5. Update the Tivoli Storage Manager policy.
Define the Tivoli Storage Manager policy that links client data with media for
the device.
6. Label volumes.
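For example, assuming a manual library named MANUAL8MM and a volume named
DSM001, a command like the following labels the volume; the server then
prompts you to insert the volume into a drive:
label libvolume manual8mm dsm001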
See the following topics for more information:
“Configuring devices not supported by the Tivoli Storage Manager device
driver” on page 125
“Defining and updating a policy domain” on page 476
“Labeling media for manual libraries” on page 188
“Planning for server storage” on page 100
“Selecting a device driver” on page 114
Configuring automated library devices
You can add automated library devices with the Device Configuration Wizard.
Perform the following steps to add automated library devices:
1. Attach the library device to the system.
Follow the manufacturer’s instructions to attach the device to the system.
2. Set up the appropriate device driver for the library device.
3. Configure the library device.
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Click Wizards, then double-click Device Configuration in the right pane.
The Device Configuration Wizard appears.
c. Follow the instructions in the wizard.
4. Determine your backup strategy.
Determine the device to which the server backs up client data, and whether
client data is backed up to disk and then migrated to tape, or backed up
directly to tape.
5. Update the Tivoli Storage Manager policy.
Define the Tivoli Storage Manager policy that links client data with media for
the device.
6. Label volumes.
7. Add volumes to the library.
Add volumes to an automated device by checking the volumes into the library.
Scratch volumes are checked in differently from private volumes.
Adding volumes depends on the presence of scratch volumes or private
volumes in the library device:
v Scratch volumes are recommended. As volumes are used, you may need to
increase the number of scratch volumes allowed in the storage pool defined
for this library.
v Private volumes are not recommended because you must define volumes to
the storage pool. The defined volumes must have been labeled already.
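For example, if the automated library has a barcode reader, you might label
and check in all new volumes as scratch in a single step; the library name
AUTODLTLIB is illustrative:
label libvolume autodltlib search=yes labelsource=barcode checkin=scratch
As the scratch volumes are consumed, you can raise the limit with a command
such as UPDATE STGPOOL AUTODLT_POOL MAXSCRATCH=40.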
See the following topics for more information:
“Defining and updating a policy domain” on page 476
“Defining storage pool volumes” on page 292
“Labeling media for manual libraries” on page 188
“Planning for server storage” on page 100
“Selecting a device driver” on page 114
Configuring optical devices
You can configure optical disk devices that are supported by the Tivoli Storage
Manager device driver.
Perform the following steps to configure the optical disks:
1. Attach the device to the system.
Follow the manufacturer’s instructions to attach the device to the system.
2. Set up the device driver for the device.
3. Configure the device.
a. From the Tivoli Storage Manager Console, expand the tree to Tivoli
Storage Manager Device Driver for the machine that you are configuring.
b. Expand Tivoli Storage Manager Device Driver and Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the
left panel. The Service Information window appears in the right panel.
d. Right click Tivoli Storage Manager Device Driver. A pop-up menu
appears.
e. Click Properties in the pop-up menu. The Device Driver Options dialog
appears.
f. Select the Enable Windows and Optical Device Support check box.
The startup type is set to Boot by default.
g. Click OK. A warning message appears because the action changes some
entries in the registry. Click OK.
h. The Tivoli Storage Manager device driver now starts before Windows
device drivers.
4. Determine your backup strategy.
Determine the device to which the server backs up client data, and whether
client data is backed up to disk and then migrated to tape.
5. Update the Tivoli Storage Manager policy.
Define the Tivoli Storage Manager policy that links client data with media for
the device.
See the following topics for more information:
“Defining and updating a policy domain” on page 476
“Planning for server storage” on page 100
“Selecting a device driver” on page 114
Configuring devices not supported by the Tivoli Storage
Manager device driver
You can configure devices that run with their own or Windows device drivers as
long as the devices meet Tivoli Storage Manager requirements.
Devices not supported by the Tivoli Storage Manager device driver can be added
by using Tivoli Storage Manager commands.
1. Attach the device to the system.
Follow the manufacturer’s instructions to attach the device to the system.
2. Set up the appropriate Windows device driver for the device.
3. Configure the device. The following guidelines must be followed:
v The device class must have a device type of GENERICTAPE.
v Define a different device class and a different manual library device for every
unique device type that will be controlled by the Windows device driver. For
example, to use a 4 mm drive and an 8 mm drive, define two manual
libraries, and two device classes (both with device type GENERICTAPE).
4. Determine your backup strategy.
Determine the device to which the server backs up client data, and whether
client data is backed up to disk and then migrated to tape, or backed up
directly to tape.
5. Update the Tivoli Storage Manager policy.
Define the Tivoli Storage Manager policy that links client data with media for
the device.
6. Label volumes.
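As a sketch of step 3, the following commands define a manual library, a
GENERICTAPE device class, and one drive that is controlled by the Windows
device driver. The names, and especially the Windows device name \\.\tape0,
are illustrative and must be replaced with the values on your system:
define library gentapelib libtype=manual
define devclass gentape_class devtype=generictape library=gentapelib
define drive gentapelib gtdrive1
define path server01 gtdrive1 srctype=server desttype=drive
library=gentapelib device=\\.\tape0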
See the following topics for more information:
“Configuring devices using Tivoli Storage Manager commands” on page 133
“Creating a file to list devices and their attributes” on page 118
“Defining Tivoli Storage Manager storage objects with commands” on page 134
“Defining and updating a policy domain” on page 476
“Planning for server storage” on page 100
“Selecting a device driver” on page 114
“Labeling media with automated tape libraries” on page 176
“Labeling media for manual libraries” on page 188
Configuring removable media devices
You can add removable media devices by issuing Tivoli Storage Manager
commands.
The following requirements must be met. If a removable media device can be
formatted with a file system, Tivoli Storage Manager may be able to use the
device. The server recognizes the device as a device with type
REMOVABLEFILE. To use device type REMOVABLEFILE for a device, the device:
v Must not be supported by a device type that is available for a Tivoli Storage
Manager device class.
v Must be a device with removable media, for example, Iomega Zip or Jaz drives,
CD drive, or DVD drive.
v Must be viewed by the operating system as a removable media drive, and not as
a fixed, hard disk drive. The server cannot use the device if the storage adapter
card makes the removable media drive appear as a fixed disk drive to the
operating system.
The operating system treats some optical drives as fixed drives after data is
written to them and until the system reboots. The server cannot use these drives
as removable file devices.
Tip: If a data cartridge that is associated with a REMOVABLEFILE device class has
two sides, the server treats each side as a separate Tivoli Storage Manager volume.
Tivoli Storage Manager REMOVABLEFILE device class supports only single-sided
media.
You can use the CD or DVD media as input media on a target Tivoli Storage
Manager server by using the REMOVABLEFILE device class for input. Using the
REMOVABLEFILE device class allows the server to distinguish media volumes by
a “volume label,” to prompt for additional media, and to dismount media.
With CD support for Windows, you can also use CD media as an output device
class. Using CD media as output requires additional software that implements a
file system on top of the CD media. This software allows data to be written to
a CD by using a drive letter and file names. The media can be either CD-R
(recordable) or CD-RW (rewritable).
With DVD support for Windows, you can also use DVD media as an output device
class. Using DVD media as output requires additional software that implements a
file system on top of the DVD media. DVDFORM is a common tool that comes with
some DVD-RAM device drivers. The DVDFORM software, for example, allows you
to label DVD-RAM media by using uppercase letters and numbers. After the media
is formatted, you can use the LABEL system command to change the label.
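For example, if the formatted DVD-RAM media is mounted as drive i:, a command
like the following changes its label; the drive letter and label are
illustrative:
label i: DVD01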
To set up a device, perform the following steps.
1. Attach the device to the system.
Follow the manufacturer’s instructions to attach the device to the system.
2. Set up the appropriate device driver for the device.
3. Configure the device.
The following parameters must be specified:
v The device class must have device type of REMOVABLEFILE.
v The library type can be either MANUAL or SCSI.
v The device name used in defining drives is the drive letter by which the
system knows the drive.
4. Determine your backup strategy.
Determine the device to which the server backs up client data, and whether
client data is backed up to disk and then migrated to tape, or backed up
directly to tape.
5. Label removable file media.
Tivoli Storage Manager does not provide utilities to format or label CDs or
DVDs. You must label CDs or DVDs with the device manufacturer’s utilities or
Windows utilities. The operating system utilities include the Disk
Administrator program (a graphical user interface) and the label command.
For additional information, see the following topics:
“Configuring devices using Tivoli Storage Manager commands” on page 133
“Defining Tivoli Storage Manager storage objects with commands” on page 134
“Defining and updating a policy domain” on page 476
“Labeling media” on page 175
“Obtaining device alias names” on page 114
“Planning for server storage” on page 100
“Selecting a device driver” on page 114
Example of removable file support (CD):
The following steps provide an example of Tivoli Storage Manager
REMOVABLEFILE support. The example exports node data to a CD on one server
and imports it on another.
v Server A:
– Define a device class named expfile with a device type of FILE.
define devclass expfile devtype=file directory=c:\data\move maxcap=650M
– Export the node. This command creates a file named CDR03 in the
c:\data\move directory. CDR03 contains the export data for node USER1.
export node user1 filedata=all devclass=expfile vol=CDR03
You can use software for writing CDs to create a CD with volume label
CDR03 that contains the file named CDR03.
v Server B:
– Insert the CD in a drive on the Windows system, for example, E:
– Issue the following Tivoli Storage Manager commands to import the node
data on the CD volume CDR03:
define library manual libtype=manual
define devclass cdrom devtype=removablefile library=manual
define drive manual cddrive
define path server01 cddrive srctype=server desttype=drive
library=manual directory=e:\ device=e:
import node user1 filedata=all devclass=cdrom vol=CDR03
Example of removable file support (DVD-RAM):
The steps, which are similar to those for CD support, move data from one
server to another.
The following example shows how DVD-RAM drives work inside a SCSI library:
v Server A:
– Configure the device.
– For the library, follow the normal tape library configuration method.
– To configure the DVD-RAM drives, use the following procedure:
1. From your desktop, right click My Computer.
2. Select Device Manager.
3. Select the correct SCSI CD-ROM Device and right click for Properties.
4. Select Drivers.
5. Select Update Driver and choose the dvdram.sys file for the driver.
v Issue the following Tivoli Storage Manager commands to manage the library
functions on the DVD-RAM volume DVD01 (use the library element map in the
IBM Tivoli Storage Manager device support pages for your library to determine
the correct element of each drive):
define library dvdlib libtype=scsi
define drive dvdlib drv1 element=6001
define path server1 dvdlib srctype=server desttype=library device=lb6.0.0.3
define path server1 drv1 srctype=server desttype=drive
library=dvdlib directory=i:\ device=i:
checkin libv dvdlib search=yes status=scratch
checkout libv dvdlib DVD01 rem=no
define devclass a_class devtype=removablefile library=dvdlib
Manually configuring devices
When the Tivoli Storage Manager device driver is installed, some tape drives may
be automatically configured by the Tivoli Storage Manager driver if the Windows
device drivers for the devices are not available. If the tape drives are not
automatically configured with the Tivoli Storage Manager driver, you will need to
manually configure them.
To see if a device has already been automatically configured with the Tivoli
Storage Manager device driver, go to Device Manager. Right click on the device
and select Properties. Select the Driver tab and Driver File Details. This will allow
you to see the device driver that is currently controlling your device.
You can also run tsmdlst.exe in the Tivoli Storage Manager console directory to
see whether tape drives have been configured with the Tivoli Storage Manager
device driver. If the tape drives have not been configured with the Tivoli
Storage Manager device driver, the TSM Type will show GENERICTAPE.
Manually configuring devices on Windows Server 2003
You can configure tape drives, medium changers, and optical devices manually
when you are running your system on Windows Server 2003. Devices are
controlled with the tsmscsi device driver.
To manually configure devices for the IBM Tivoli Storage Manager device driver,
tsmscsi.sys, complete the following steps.
1. Locate the device in the Device Manager console (devmgmt.msc) and select it.
Tape drives are listed under Tape drives, medium changers are under Medium
Changers, and optical drives are under Disk drives.
Figure 23. Device Manager
2. Configure the device for use by tsmscsi.sys.
a. Select Update Driver... either from Action -> Update Driver... or by
right-clicking on the device and selecting Update Driver...
b. The Hardware Update Wizard appears. Select Install from a list or specific
location (Advanced). If Install the software automatically is selected,
Windows will choose the best driver for the device, instead of TSMSCSI.
c. Click Next.
Figure 24. Hardware Update Wizard
3. Select Don’t search. I will choose the driver to install.
Figure 25. Search and Installation Options
4. Click Next.
5. Select one of the following options, depending on what kind of device you are
configuring:
v For a tape drive, select IBM Tivoli Storage Manager for Tape Drives.
v For a medium changer, select IBM Tivoli Storage Manager for Medium
Changers.
v For an optical drive, select IBM Tivoli Storage Manager for Optical Drives.
6. Click Next.
Figure 26. Select Device Driver
7. From the Hardware Installation panel, click Continue Anyway.
Figure 27. Hardware Installation
8. Click Finish.
9. Verify that the device has been configured correctly for tsmscsi.
a. Right-click on the device and select Properties.
b. Select the driver tab and driver details.
c. The following panel shows the device driver that is controlling the device.
Driver files show the Tivoli Storage Manager device driver. This should be
tsmscsi.sys for 32-bit Windows Server 2003, or tsmscsi64.sys for 64-bit
Windows Server 2003.
Manually configuring devices on Windows Server 2008
You can configure tape drives, medium changers, and optical devices manually
when you are running your system on Windows Server 2008. Devices are
controlled with the tsmscsi device driver.
To manually configure devices for the Tivoli Storage Manager device driver,
tsmscsi.sys, complete the following steps.
1. Locate the device in the Device Manager console (devmgmt.msc) and select it.
Tape drives are listed under Tape drives, medium changers are under Medium
Changers, and optical drives are under Disk drives.
2. Configure the device for use by tsmscsi.sys.
a. Select Update Driver... either from Action -> Update Driver... or by
right-clicking on the device and selecting Update Driver Software...
b. Select Browse my computer for driver software.
3. Select Let me pick from a list of device drivers on my computer.
4. Click Next.
5. Select one of the following options, depending on what kind of device you are
configuring:
v For a tape drive, select IBM Tivoli Storage Manager for Tape Drives.
v For a medium changer, select IBM Tivoli Storage Manager for Medium
Changers.
v For an optical drive, select IBM Tivoli Storage Manager for Optical Drives.
6. Click Next.
7. Click Close.
8. Verify that the device has been configured correctly for tsmscsi.
a. Right-click on the device and select Properties.
b. Select the Driver tab and Driver Details.
c. The Driver Details panel shows the device driver that is controlling the
device. This should be tsmscsi.sys for 32-bit Windows Server 2008, or
tsmscsi64.sys for 64-bit Windows Server 2008.
For Windows Server 2008 Server Core, devices cannot be configured through
Device Manager. If the devices are not automatically configured, you will need to
use the Tivoli Storage Manager CHANGE DEVDRIVER command to configure the
devices. See Technote 1320150 for more information.
Configuring devices using Tivoli Storage Manager commands
You can add devices by issuing Tivoli Storage Manager commands.
The scenario documented adds a manual tape device, automated library devices,
and a removable file system device such as an Iomega Jaz drive.
Automated library devices can have more than one type of device. The scenario
shows the case of a library with one type of device (a DLT 8000 drive) and a
library with two types of devices (a DLT 8000 drive and an LTO Ultrium drive).
Perform the following steps to add a device:
1. Attach the device to the system. Follow the manufacturer’s instructions to
attach the device to the system.
2. Set up the appropriate device driver for the device.
3. Configure the device.
4. Determine the device to which the server backs up client data, and whether
client data is backed up to disk and then migrated to tape, or backed up
directly to tape.
5. Label the media.
6. Add new volumes to the library.
Some of the tasks described in this section require an understanding of Tivoli
Storage Manager storage objects. For more information about Tivoli Storage
Manager commands, see the Administrator’s Reference.
For additional information, see:
“Checking media into automated library devices” on page 177
“Defining and updating a policy domain” on page 476
“Labeling media with automated tape libraries” on page 176
“Labeling media for manual libraries” on page 188
“Planning for server storage” on page 100
“Selecting a device driver” on page 114
“Tivoli Storage Manager storage objects” on page 76
Defining Tivoli Storage Manager storage objects with
commands
You can use commands to define storage objects. These objects are used to
represent each library device and its drives, as well as their respective paths and
the policy used to manage the media associated with each library device.
For additional information, see:
“Defining libraries”
“Defining drives in the library”
Defining libraries
All devices must be defined as libraries. Manual devices require a manual type
library, and most automated devices require the SCSI type library. Automated
libraries also require a path defined to them using the DEFINE PATH command.
You define libraries with the DEFINE LIBRARY command. See the following
examples of the different ways to define a library:
Manual device
define library manual8mm libtype=manual
Automated library device with one device type
define library autodltlib libtype=scsi
Note: If you have a SCSI library with a barcode reader and you would like
to automatically label tapes before they are checked in, you can specify the
following:
define library autodltlib libtype=scsi autolabel=yes
define path server01 autodltlib srctype=server desttype=library
device=lb3.0.0.0
Automated library device with two device types
define library automixlib libtype=scsi
define path server01 automixlib srctype=server desttype=library
device=lb3.0.0.0
Removable file system device (Iomega Jaz drive)
define library manualjaz libtype=manual
For more information about defining Tivoli Storage Manager libraries, see
“Defining devices and paths” on page 166.
Defining drives in the library
All drives that you wish to use must be defined to the library. You can define
drives by issuing the DEFINE DRIVE command. You must also issue the DEFINE
PATH command to define the path for each of the drives.
See the following examples for defining drives in the library:
Manual device
define drive manual8mm drive01
define drive manual8mm drive02
define path server01 drive01 srctype=server desttype=drive
library=manual8mm device=mt1.0.0.0
define path server01 drive02 srctype=server desttype=drive
library=manual8mm device=mt2.0.0.0
Automated library device with one device type
define drive autodltlib dlt_mt4
define drive autodltlib dlt_mt5
define path server01 dlt_mt4 srctype=server desttype=drive
library=autodltlib device=mt4.0.0.0
define path server01 dlt_mt5 srctype=server desttype=drive
library=autodltlib device=mt5.0.0.0
For drives in SCSI libraries with more than one drive, the server requires
the element address for each drive. The element address indicates the
physical location of a drive within an automated library. The server
attempts to obtain the element address directly from the drive. If the drive
is not capable of supplying the information, you must supply the element
address in the drive definition.
Automated library device with two device types
define drive automixlib dlt_mt4
define drive automixlib lto_mt5
define path server01 dlt_mt4 srctype=server desttype=drive
library=automixlib device=mt4.0.0.0
define path server01 lto_mt5 srctype=server desttype=drive
library=automixlib device=mt5.0.0.0
For drives in SCSI libraries with more than one drive, the server requires
the element address for each drive. The element address indicates the
physical location of a drive within an automated library. The server
attempts to obtain the element address directly from the drive. If the drive
is not capable of supplying the information, you must supply the element
address in the drive definition.
Removable file system device (Iomega Jaz drive)
define drive manualjaz drive01
define path server01 drive01 srctype=server desttype=drive
library=manualJAZ directory=e:\ device=e:
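If a drive in a SCSI library with more than one drive cannot report its
element address, you include the ELEMENT parameter when defining the drive.
For example, the definition of DLT_MT4 might instead be the following, where
the element number 500 is illustrative and would come from the element map in
the IBM Tivoli Storage Manager device support pages:
define drive autodltlib dlt_mt4 element=500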
For additional information, see:
“Defining devices and paths” on page 166
“Defining drives” on page 167
Define the device classes that group together similar devices
Each Tivoli Storage Manager device must be a member of a Tivoli Storage
Manager device class. Device classes are collections of similar devices, for example
all 8 mm devices that use the same media format. You can define device classes by
issuing the DEFINE DEVCLASS command.
See the following examples of defining device classes that group together similar
devices:
Manual device
define devclass tape8mm_class devtype=8mm format=8500 library=manual8mm
Automated library device with one device type
define devclass autodlt_class devtype=dlt format=drive library=autodltlib
Automated library device with two device types
define devclass autodlt_class devtype=dlt format=dlt40 library=automixlib
define devclass autolto_class devtype=lto format=ultriumc library=automixlib
Important: Do not use the DRIVE format, which is the default. Because the
drives are different types, Tivoli Storage Manager uses the format
specification to select a drive. The results of using the DRIVE format in a
mixed media library are unpredictable.
Removable file system device (Iomega Jaz drive)
define devclass jazdisk_class devtype=removablefile library=manualjaz
For detailed information about defining Tivoli Storage Manager device classes, see
Chapter 10, “Defining device classes,” on page 251.
Creating a storage pool for the device added
Each Tivoli Storage Manager device must be associated with a Tivoli Storage
Manager storage pool so that the device can be used to store client data. A
storage pool is a collection of media that is associated with a device class
and organized by media type, for example, a storage pool named TAPE8MM_POOL
for the device class TAPE8MM_CLASS, and AUTODLT_POOL for the device class
AUTODLT_CLASS. See the following examples of how to create a storage pool for
the added device:
Manual device
define stgpool tape8mm_pool tape8mm_class maxscratch=20
Automated library device with one device type
define stgpool autodlt_pool autodlt_class maxscratch=20
Automated library device with two device types
define stgpool autodlt_pool autodlt_class maxscratch=20
define stgpool autolto_pool autolto_class maxscratch=20
Removable file system device (Iomega Jaz drive)
define stgpool manualjaz_pool jazdisk_class
For detailed information about defining storage pools, see Chapter 11, “Managing
storage pools and volumes,” on page 275.
Determining backup strategies
Administrators are responsible for creating a backup strategy and implementing it
through Tivoli Storage Manager policy. Typically, a backup strategy determines the
device and media to which client data is written. It also determines if data is
backed up directly to tape or if data is backed up to disk and then later migrated
to tape.
For disk-to-tape backups:
1. Set up a storage pool hierarchy.
2. Use the default STANDARD Tivoli Storage Manager policy.
For backups directly to tape, you must create a new policy by copying the
default policy and modifying it for the desired results.
See “Configuring policy for direct-to-tape backups” on page 500.
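For example, a minimal disk-to-tape hierarchy can point the default disk
storage pool, BACKUPPOOL, at a tape pool such as the TAPE8MM_POOL pool defined
earlier in this chapter. The migration thresholds and the volume path shown
here are illustrative values:
update stgpool backuppool nextstgpool=tape8mm_pool highmig=80 lowmig=20
define volume backuppool c:\tsmdata\bkvol01.dsm formatsize=1024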
Determining the media and device type for client backups
Determine the type of media and the type of device to which the server backs up
client data by changing Tivoli Storage Manager policy.
See the following examples for how to determine the media and device type for
client backups:
Manual device
To assign client node astro to the direct-to-tape policy domain named
dir2tape, with the password cadet, enter:
register node astro cadet domain=dir2tape
Automated library devices
To assign client node astro to a direct-to-tape policy domain named
dsk2tape, with the password cadet, enter:
register node astro cadet domain=dsk2tape
Removable file system device (Iomega Jaz drive)
To assign client node astro to a removable media device policy domain
named rmdev, with the password cadet, enter:
register node astro cadet domain=rmdev
Configuring IBM 3494 libraries
An IBM 3494 library can be added only by using Tivoli Storage Manager
commands. One or more Tivoli Storage Manager servers can use a single IBM 3494
library.
See the following sections:
v “Configuring an IBM 3494 library for use by one server” on page 138
v “Sharing an IBM 3494 library among servers” on page 143
v “Migrating a shared IBM 3494 library to a library manager” on page 145
v “Sharing an IBM 3494 library by static partitioning of drives” on page 146
See also “Categories in an IBM 3494 library.”
Note: 3494 libraries are supported on Windows Server 2003 and Windows Server
2008.
Categories in an IBM 3494 library
The library manager built into the IBM 3494 library tracks the category number of
each volume in the library. A single category number identifies all volumes used
for the same purpose or application. Category numbers are useful when multiple
systems share the resources of a single library.
Attention: If other systems or other Tivoli Storage Manager servers connect to the
same 3494 library, each must use a unique set of category numbers. Otherwise, two
or more systems may try to use the same volume, and cause corruption or loss of
data.
Typically, a software application that uses a 3494 library uses volumes in one or
more categories that are reserved for that application. To avoid loss of data, each
application sharing the library must have unique categories. When you define a
3494 library to the server, you can use the PRIVATECATEGORY and
SCRATCHCATEGORY parameters to specify the category numbers for private and
scratch Tivoli Storage Manager volumes in that library. If the volumes are IBM
3592 WORM (write once, read many) volumes, you can use the
WORMSCRATCHCATEGORY parameter to specify category numbers for scratch
WORM volumes in the library. See “Tivoli Storage Manager volumes” on page 85
for more information on private, scratch, and scratch WORM volumes.
When a volume is first inserted into the library, either manually or automatically at
the convenience I/O station, the volume is assigned to the insert category
(X’FF00’). A software application such as Tivoli Storage Manager can contact the
library manager to change a volume’s category number. For Tivoli Storage
Manager, you use the CHECKIN LIBVOLUME command (see “Checking media
into automated library devices” on page 177).
The Tivoli Storage Manager server only supports 3590 and 3592 tape drives in an
IBM 3494 library. The server reserves two different categories for each 3494 library
object. The categories are private and scratch.
When you define a 3494 library, you can specify the category numbers for volumes
that the server owns in that library by using the PRIVATECATEGORY,
SCRATCHCATEGORY, and if the volumes are IBM 3592 WORM volumes, the
WORMSCRATCHCATEGORY parameters. For example:
define library my3494 libtype=349x privatecategory=400 scratchcategory=401
wormscratchcategory=402
For this example, the server uses the following categories in the new my3494
library:
v 400 (X’190’) Private volumes
v 401 (X’191’) Scratch volumes
v 402 (X’192’) WORM scratch volumes
Note: The default values for the categories may be acceptable in most cases.
However, if you connect other systems or Tivoli Storage Manager servers to a
single 3494 library, ensure that each uses unique category numbers. Otherwise, two
or more systems may try to use the same volume, and cause corruption or loss
of data.
For a discussion regarding the interaction between library clients and the library
manager in processing Tivoli Storage Manager operations, see “Shared libraries” on
page 186.
Configuring an IBM 3494 library for use by one server
In the following example, an IBM 3494 library containing two drives is configured
for use by one Tivoli Storage Manager server.
You must first set up the IBM 3494 library on the server system. This involves the
following tasks:
1. Set the symbolic name for the library in the configuration file for the library
device driver (c:\winnt\ibmatl.conf). This procedure is described in IBM Tape
Device Drivers Installation and User’s Guide.
2. Physically attach the devices to the server hardware or the SAN.
3. Install and configure the appropriate device drivers for the devices on the
server that will use the library and drives.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
For details, see the following topic:
v “Selecting a device driver” on page 114.
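An entry in the ibmatl.conf file typically contains the symbolic name of the
library, the address of the 3494 Library Manager, and an identifier for your
host. The following line is only an illustration of what such an entry might
look like; see the IBM Tape Device Drivers Installation and User’s Guide for
the exact format:
library1 176.123.13.1 myhost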
There are two possible configurations:
v In the first configuration, both drives in the library are the same device type. See
“Configuring a 3494 library with a single drive device type.”
v In the second configuration, the drives are different device types.
Drives with different device types (or different generations of drives) are
supported in a single physical library if you define one library to Tivoli Storage
Manager for each type of drive (or generation of drive). For example, if you
have two device types, such as 3590E and 3590H (or two generations of drives
of the same device type), define two libraries. Then define drives and device
classes for each library. In each device class definition, you can use the FORMAT
parameter with a value of DRIVE, if you choose. See “Configuring a 3494 library
with multiple drive device types” on page 140.
Configuring a 3494 library with a single drive device type
In this example, the 3494 library contains two IBM 3590 tape drives.
1. Define a 3494 library named 3494LIB:
define library 3494lib libtype=349x
2. Define a path from the server to the library:
define path server1 3494lib srctype=server desttype=library
device=library1
See “Defining libraries” on page 166 and “SCSI libraries” on page 78.
For more information about paths, see “Defining paths” on page 169.
3. Define the drives in the library:
define drive 3494lib drive01
define drive 3494lib drive02
Both drives belong to the 3494LIB library.
See “Defining drives” on page 167.
4. Define a path from the server to each drive:
define path server1 drive01 srctype=server desttype=drive
library=3494lib device=mt1.0.0.0
define path server1 drive02 srctype=server desttype=drive
library=3494lib device=mt2.0.0.0
The DEVICE parameter gives the device alias name for the drive. For more
about device names, see “Device alias names” on page 113.
For more information about paths, see “Defining paths” on page 169.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, for the two 3590 drives in the 3494LIB library, use the
following command to define a device class named 3494_CLASS:
define devclass 3494_class library=3494lib devtype=3590 format=drive
This example uses FORMAT=DRIVE as the recording format because both
drives associated with the device class use the same recording format; both are
3590 drives. If instead one drive is a 3590 and one is a 3590E, you need to use
specific recording formats when defining the device classes. See “Configuring a
3494 library with multiple drive device types” on page 140.
See also “Defining tape and optical device classes” on page 253.
6. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
For details, see the following topics:
“Requesting information about drives” on page 203
“Obtaining information about device classes” on page 270
“Obtaining information about paths” on page 215
“Obtaining information about libraries” on page 201
7. Define a storage pool named 3494_POOL associated with the device class
named 3494_CLASS.
define stgpool 3494_pool 3494_class maxscratch=20
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 340 and “How collocation affects reclamation” on page
360.
For more information, see “Defining storage pools” on page 281.
Configuring a 3494 library with multiple drive device types
In this example, the 3494 library contains two IBM 3590E tape drives and two IBM
3590H tape drives.
1. Define two libraries, one for each type of drive. For example, to define
3590ELIB and 3590HLIB enter the following commands:
define library 3590elib libtype=349x scratchcategory=301 privatecategory=300
define library 3590hlib libtype=349x scratchcategory=401 privatecategory=400
See “Defining libraries” on page 166.
Note: Specify scratch and private categories explicitly. If you accept the
category defaults for both library definitions, different types of media will be
assigned to the same categories.
2. Define a path from the server to each library:
define path server1 3590elib srctype=server desttype=library device=library1
define path server1 3590hlib srctype=server desttype=library device=library1
The DEVICE parameter specifies the symbolic name for the library, as defined
in the configuration file for the library device driver (c:\winnt\ibmatl.conf).
For more information about paths, see “Defining paths” on page 169.
3. Define the drives, ensuring that they are associated with the appropriate
libraries.
v Define the 3590E drives to 3590ELIB.
define drive 3590elib 3590e_drive1
define drive 3590elib 3590e_drive2
v Define the 3590H drives to 3590HLIB.
define drive 3590hlib 3590h_drive3
define drive 3590hlib 3590h_drive4
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
See “Defining drives” on page 167.
4. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 3590E drives:
define path server1 3590e_drive1 srctype=server desttype=drive
library=3590elib device=mt1.0.0.0
define path server1 3590e_drive2 srctype=server desttype=drive
library=3590elib device=mt2.0.0.0
v For the 3590H drives:
define path server1 3590h_drive3 srctype=server desttype=drive
library=3590hlib device=mt3.0.0.0
define path server1 3590h_drive4 srctype=server desttype=drive
library=3590hlib device=mt4.0.0.0
The DEVICE parameter gives the device alias name for the drive. For more
about device names, see “Device alias names” on page 113.
For more information about paths, see “Defining paths” on page 169.
5. Classify the drives according to type by defining Tivoli Storage Manager device
classes, which specify the recording formats of the drives. Because there are
separate libraries, you can enter a specific recording format, for example 3590H,
or you can enter DRIVE.
define devclass 3590e_class library=3590elib devtype=3590 format=3590e
define devclass 3590h_class library=3590hlib devtype=3590 format=3590h
See “Defining tape and optical device classes” on page 253.
6. To check what you have defined, enter the following commands:
query library
query drive
query path
query devclass
See the following topics:
v “Obtaining information about device classes” on page 270
v “Obtaining information about paths” on page 215
v “Requesting information about drives” on page 203
7. Create the storage pools to use the devices in the device classes you just
defined. For example, define a storage pool named 3590EPOOL associated with
the device class 3590E_CLASS, and 3590HPOOL associated with the device
class 3590H_CLASS:
define stgpool 3590epool 3590e_class maxscratch=20
define stgpool 3590hpool 3590h_class maxscratch=20
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 340 and “How collocation affects reclamation” on page
360.
For more information, see “Defining storage pools” on page 281.
Checking in and labeling 3494 library volumes
Ensure that enough volumes in the library are available to the server. Keep enough
labeled volumes on hand so that you do not run out during an operation such as
client backup. Label and set aside extra scratch volumes for any potential recovery
operations you might have later.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
The procedures for volume check-in and labeling are the same whether the library
contains drives of a single device type, or drives of multiple device types.
Note: If your library has drives of multiple device types, you defined two libraries
to the Tivoli Storage Manager server in the procedure in “Configuring a 3494
library with multiple drive device types” on page 140. The two Tivoli Storage
Manager libraries represent the one physical library. The check-in process finds all
available volumes that are not already checked in. You must check in media
separately to each defined library. Ensure that you check in volumes to the correct
Tivoli Storage Manager library.
Do the following:
1. Check in the library inventory. The following shows two examples.
v Check in volumes that are already labeled:
checkin libvolume 3494lib search=yes status=scratch checklabel=no
v Label and check in volumes:
label libvolume 3494lib search=yes checkin=scratch
2. Depending on whether you use scratch volumes or private volumes, do one of
the following:
v If you use only scratch volumes, ensure that enough scratch volumes are
available. For example, you may need to label more volumes. As volumes are
used, you may also need to increase the number of scratch volumes allowed
in the storage pool that you defined for this library.
v If you want to use private volumes in addition to or instead of scratch
volumes in the library, define volumes to the storage pool you defined. The
volumes you define must have been already labeled and checked in. See
“Defining storage pool volumes” on page 292.
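For example, to add a labeled and checked-in private volume to the storage
pool defined earlier, you might enter the following command; the volume name
DSM001 is illustrative:
define volume 3494_pool dsm001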
For more information about checking in volumes, see “Checking media into
automated library devices” on page 177.
Sharing an IBM 3494 library among servers
Sharing an IBM 3494 library requires one of the following environments:
v The library is on a SAN.
v The drives and the library are connected, through the dual ports on 3590
drives in the library, to two systems on which Tivoli Storage Manager
servers run.
The following tasks are required for Tivoli Storage Manager servers to share library
devices over a SAN:
1. Set up server-to-server communications.
2. Set up the device on the server systems.
3. Set up the library on the library manager server. In the following example, the
library manager server is named MANAGER.
4. Set up the library on the library client server. In the following example, the
library client server is named CLIENT.
See “Categories in an IBM 3494 library” on page 137 for additional information
about configuring 3494 libraries.
Setting up a 3494 library on the server system and SAN
You must first set up the device on the server system, which involves certain tasks.
1. Set the symbolic name for the library in the configuration file for the library
device driver. This procedure is described in the IBM Tape Device Drivers
Installation and User’s Guide.
2. Physically attach the devices to the SAN or to the server hardware.
3. On each server system that will access the library and drives, install and
configure the appropriate device drivers for the devices.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
For details, see “Selecting a device driver” on page 114.
Note: You can also configure a 3494 library so that it contains drives of multiple
device types or different generations of drives of the same device type. The
procedure for working with multiple drive device types is similar to the one
described for a LAN in “Configuring a 3494 library with multiple drive device
types” on page 140.
For details about mixing generations of drives, see “Defining 3592 device classes”
on page 257 and “Defining LTO device classes” on page 264.
Setting up the 3494 library manager server
Use the following procedure as an example of how to set up a Tivoli Storage
Manager server as a library manager named MANAGER.
1. Define a 3494 library named 3494SAN:
define library 3494san libtype=349x shared=yes
2. Define a path from the server to the library:
define path manager 3494san srctype=server desttype=library
device=library1
The DEVICE parameter specifies the symbolic name for the library, as defined
in the configuration file for the library device driver (c:\winnt\ibmatl.conf).
For more information about paths, see “Defining paths” on page 169.
3. Define the drives in the library:
define drive 3494san drivea
define drive 3494san driveb
4. Define a path from the server to each drive:
define path manager drivea srctype=server desttype=drive library=3494san
device=mt4.0.0.0
define path manager driveb srctype=server desttype=drive library=3494san
device=mt5.0.0.0
For more information about paths, see “Defining paths” on page 169.
5. Define all the device classes that are associated with the shared library.
define devclass 3494_class library=3494san devtype=3590
6. Check in the library inventory. The following shows two examples. In both
cases, the server uses the name on the barcode label as the volume name.
To check in volumes that are already labeled, use the following command:
checkin libvolume 3494san search=yes status=scratch checklabel=no
To label and check in the volumes, use the following command:
label libvolume 3494san checkin=scratch search=yes
7. Set up any required storage pools for the shared library with a maximum of
50 scratch volumes:
define stgpool 3494_sanpool 3494_class maxscratch=50
Setting up the 3494 library client servers
Use the following sample procedure for each Tivoli Storage Manager server that
will be a library client server.
1. Define the server that is the library manager:
define server manager serverpassword=secret hladdress=9.115.3.45 lladdress=1580
crossdefine=yes
2. Define a shared library named 3494SAN, and identify the library manager:
Note: Ensure that the library name agrees with the library name on the library
manager.
define library 3494san libtype=shared primarylibmanager=manager
3. Perform this step from the library manager. Define a path from the library client
server to each drive that the library client server will be allowed to access. The
device name should reflect the way the library client system sees the device.
There must be a path defined from the library manager to each drive in order
for the library client to use the drive. The following is an example of how to
define a path:
define path client drivea srctype=server desttype=drive
library=3494san device=mt3.0.0.0
define path client driveb srctype=server desttype=drive
library=3494san device=mt4.0.0.0
For more information about paths, see “Defining paths” on page 169.
4. Return to the library client for the remaining steps. Define all the device classes
that are associated with the shared library.
define devclass 3494_class library=3494san devtype=3590
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
The device class parameters specified on the library manager server override
those specified for the library client. This is true whether or not the device class
names are the same on both servers. If the device class names are different, the
library manager uses the parameters specified in a device class that matches the
device type specified for the library client.
Note: If a library client requires a setting that is different from what is
specified in the library manager’s device class (for example, a different mount
limit), do the following:
a. Create an additional device class on the library manager server. Specify the
parameter settings you want the library client to use.
b. Create a device class on the library client with the same name and device
type as the new device class you created on the library server.
5. Define the storage pool, BACKTAPE, that will use the shared library.
define stgpool backtape 3494_class maxscratch=50
6. Repeat this procedure to define additional servers as library clients. For a
discussion regarding the interaction between library clients and the library
manager in processing Tivoli Storage Manager operations, see “Shared
libraries” on page 186.
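For example, to give a library client a mount limit different from the library
manager’s default device class, as described in the note in step 4, you might
define a matching pair of device classes with the same name on both servers;
the device class name and mount limit are illustrative:
On the library manager:
define devclass 3494_onedrive library=3494san devtype=3590 mountlimit=1
On the library client:
define devclass 3494_onedrive library=3494san devtype=3590 mountlimit=1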
Migrating a shared IBM 3494 library to a library manager
If you have been sharing an IBM 3494 library among Tivoli Storage Manager
servers by using the 3494SHARED option in the dsmserv.opt file, you can migrate
to sharing the library by using a library manager and library clients.
To help ensure a smoother migration and to ensure that all tape volumes that are
being used by the servers get associated with the correct servers, perform the
following migration procedure.
1. Do the following on each server that is sharing the 3494 library:
a. Update the storage pools using the UPDATE STGPOOL command. Set the
value for the HIGHMIG and LOWMIG parameters to 100%.
b. Stop the server by issuing the HALT command.
c. Edit the dsmserv.opt file and make the following changes:
1) Comment out the 3494SHARED YES option line
2) Activate the DISABLESCHEDS YES option line if it is not active
3) Activate the EXPINTERVAL X option line if it is not active and change
its value to 0, as follows:
EXPINTERVAL 0
d. Start the server.
e. Enter the following Tivoli Storage Manager command:
disable sessions
2. Set up the library manager on the Tivoli Storage Manager server of your choice
(see “Setting up server communications” on page 157 and “Setting up the
library manager server” on page 158).
3. Do the following on the remaining servers (the library clients):
a. Save the volume history file.
b. Check out all the volumes in the library inventory. Use the CHECKOUT
LIBVOLUME command with REMOVE=NO.
c. Follow the library client setup procedure (“Setting up the 3494 library client
servers” on page 144).
4. Do the following on the library manager server:
a. Check in each library client’s volumes. Use the CHECKIN LIBVOLUME
command with the following parameter settings:
v STATUS=PRIVATE
v OWNER=<library client name>
Note: You can use the saved volume history files from the library clients
as a guide.
b. Check in any remaining volumes as scratch volumes. Use the CHECKIN
LIBVOLUME command with STATUS=SCRATCH.
5. Halt all the servers.
6. Edit the dsmserv.opt file and comment out the following lines in the file:
DISABLESCHEDS YES
EXPINTERVAL 0
7. Start the servers.
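As an illustration of the dsmserv.opt edits in steps 1c and 6, during the
migration the options file on each library client might contain the following
lines, where an asterisk comments out an option:
* 3494SHARED YES
DISABLESCHEDS YES
EXPINTERVAL 0
After the migration completes, the DISABLESCHEDS and EXPINTERVAL lines are
commented out again, as described in step 6.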
Sharing an IBM 3494 library by static partitioning of drives
If your IBM 3494 library is not on a SAN, you can use partitioning to share that
library among Tivoli Storage Manager servers.
Tivoli Storage Manager uses the capability of the 3494 library manager, which
allows you to partition a library between multiple Tivoli Storage Manager servers.
Library partitioning differs from library sharing on a SAN in that with
partitioning, there are no Tivoli Storage Manager library managers or library
clients.
When you partition a library on a LAN, each server has its own access to the same
library. For each server, you define a library with tape volume categories unique to
that server. Each drive that resides in the library is defined to only one server. Each
server can then access only those drives it has been assigned. As a result, library
partitioning does not allow dynamic sharing of drives or tape volumes because
they are pre-assigned to different servers using different names and category
codes.
In the following example, an IBM 3494 library containing four drives is attached to
a Tivoli Storage Manager server named ASTRO and to another Tivoli Storage
Manager server named JUDY.
Note: Tivoli Storage Manager can also share the drives in a 3494 library with other
servers by enabling the 3494SHARED server option. When this option is enabled,
you can define all of the drives in a 3494 library to multiple servers, if there are
SCSI connections from all drives to the systems on which the servers are running.
This type of configuration is not recommended, however, because when this type
of sharing takes place there is a risk of contention between servers for drive usage,
and operations can fail.
Setting up the 3494 library on the server system
You must first set up the 3494 library on the server system.
This involves the following tasks:
1. Set the symbolic name for the library in the configuration file for the library
device driver. This procedure is described in IBM Tape Device Drivers Installation
and User’s Guide.
2. Physically attach the devices to the server hardware.
3. On each server system that will access the library and drives, install and
configure the appropriate device drivers for the devices.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
For details, see “Selecting a device driver” on page 114.
Defining 3494 library devices to the Tivoli Storage Manager
server ASTRO
Complete the following steps to define the 3494 library.
1. Define the 3494 library named 3494LIB:
define library 3494lib libtype=349x privatecategory=400 scratchcategory=600
The PRIVATECATEGORY and SCRATCHCATEGORY parameters are set differently from
the default settings. See “Categories in an IBM 3494 library” on page 137.
2. Define the path from the server, ASTRO, to the library:
define path astro 3494lib srctype=server desttype=library
device=library1
The DEVICE parameter specifies the symbolic name for the library, as defined
in the configuration file for the library device driver (c:\winnt\ibmatl.conf).
See “Defining libraries” on page 166 and “SCSI libraries” on page 78.
For more information about paths, see “Defining paths” on page 169.
3. Define the drives that are partitioned to server ASTRO:
define drive 3494lib drive1
define drive 3494lib drive2
4. Define the path from the server, ASTRO, to each of the drives:
define path astro drive1 srctype=server desttype=drive library=3494lib
device=mt1.0.0.0
define path astro drive2 srctype=server desttype=drive library=3494lib
device=mt2.0.0.0
The DEVICE parameter gives the device alias name for the drive. For more
about device names, see “Device alias names” on page 113.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, to classify the two drives in the 3494LIB library, use the
following command to define a device class named 3494_CLASS:
define devclass 3494_class library=3494lib devtype=3590 format=drive
This example uses FORMAT=DRIVE as the recording format because both
drives associated with the device class use the same recording format; both are
3590 drives. If instead one drive is a 3590 and one is a 3590E, you need to use
specific recording formats when defining the device classes. See “Configuring a
3494 library with multiple drive device types” on page 140.
See “Defining tape and optical device classes” on page 253.
6. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
See the following topics:
v “Obtaining information about device classes” on page 270
v “Obtaining information about paths” on page 215
v “Requesting information about drives” on page 203
7. Define a storage pool named 3494_POOL associated with the device class
named 3494_CLASS:
define stgpool 3494_pool 3494_class maxscratch=20
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of explicitly defining each volume to be used in the storage pool (see the example after this list).
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 340 and “How collocation affects reclamation” on page
360.
For more information, see “Defining storage pools” on page 281.
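If you decide not to use scratch volumes, a minimal sketch of defining a private volume follows; the volume name VOL001 is hypothetical, and the volume must already be labeled and checked into the library:
define volume 3494_pool vol001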
Defining 3494 library devices to the Tivoli Storage Manager
server JUDY
Complete the following steps to define the 3494 library to the server JUDY.
1. Define the 3494 library named 3494LIB:
define library 3494lib libtype=349x privatecategory=112 scratchcategory=300
The PRIVATECATEGORY and SCRATCHCATEGORY parameters are set differently from the first server’s definition. See “Categories in an IBM 3494 library” on page 137.
2. Define the path from the server, JUDY, to the library:
define path judy 3494lib srctype=server desttype=library
device=library1
The DEVICE parameter specifies the symbolic name for the library, as defined
in the configuration file for the library device driver (c:\winnt\ibmatl.conf).
See “Defining libraries” on page 166 and “SCSI libraries” on page 78.
For more information about paths, see “Defining paths” on page 169.
3. Define the drives that are partitioned to server JUDY:
define drive 3494lib drive3
define drive 3494lib drive4
4. Define the path from the server, JUDY, to each of the drives:
define path judy drive3 srctype=server desttype=drive library=3494lib
device=mt3.0.0.0
define path judy drive4 srctype=server desttype=drive library=3494lib
device=mt4.0.0.0
For more information about paths, see “Defining paths” on page 169.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, to classify the two drives in the 3494LIB library, use the
following command to define a device class named 3494_CLASS:
define devclass 3494_class library=3494lib devtype=3590 format=drive
This example uses FORMAT=DRIVE as the recording format because both
drives associated with the device class use the same recording format; both are
3590 drives. If instead one drive is a 3590 and one is a 3590E, you need to use
specific recording formats when defining the device classes. See “Configuring a
3494 library with multiple drive device types” on page 140.
See “Defining tape and optical device classes” on page 253.
6. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
See the following topics:
v “Obtaining information about device classes” on page 270
v “Requesting information about drives” on page 203
7. Define a storage pool named 3494_POOL associated with the device class
named 3494_CLASS.
define stgpool 3494_pool 3494_class maxscratch=20
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 340 and “How collocation affects reclamation” on page
360.
For more information, see “Defining storage pools” on page 281.
ACSLS-managed libraries
Tivoli Storage Manager supports tape libraries controlled by StorageTek Automated
Cartridge System Library Software (ACSLS). The ACSLS library server manages
the physical aspects of tape cartridge storage and retrieval.
The ACSLS client application communicates with the ACSLS library server to
access tape cartridges in an automated library. Tivoli Storage Manager is one of the
applications that gains access to tape cartridges by interacting with ACSLS through
its client, which is known as the control path. The Tivoli Storage Manager server
reads and writes data on tape cartridges by interacting directly with tape drives
through the data path. The control path and the data path are two different paths.
To use ACSLS functions, StorageTek Library Attach software must be installed.
ACSLS libraries are supported on 32-bit and 64-bit versions of Windows Server
2003. The ACSLS client daemon must be initialized before starting the server using
StorageTek Library Attach. For detailed installation, configuration, and system
administration of ACSLS, refer to the appropriate StorageTek documentation.
Configuring an ACSLS-managed library
The library ACSLS is attached to the ACSLS server, and the drives are attached to
the Tivoli Storage Manager server. The ACSLS server and the Tivoli Storage
Manager server must be on different systems. Refer to the ACSLS installation
documentation for details about how to set up the library.
There are two configurations described in this section:
v In the first configuration, both drives in the ACSLS library are the same device
type. See “Configuring an ACSLS library with a single drive device type.”
v In the second configuration, the drives are different device types.
Drives with different device types (or different generations of drives) are
supported in a single physical library if you define one library to Tivoli Storage
Manager for each type of drive (or generation of drive). If you have two device
types, such as 9840 and 9940 (or two generations of drives of the same device
type), define two libraries. Then define drives and device classes for each library.
In each device class definition, you can use the FORMAT parameter with a value
of DRIVE, if you choose. See “Configuring an ACSLS library with multiple drive
device type” on page 152.
Configuring an ACSLS library with a single drive device type
The ACSID parameter specifies the number that the Automated Cartridge System System Administrator (ACSSA) assigned to the library. Issue the QUERY ACS command on your ACSLS system to determine the number for your library ID.
1. Define an ACSLS library named ACSLIB:
define library acslib libtype=acsls acsid=1
2. Define the drives in the library:
define drive acslib drive01 acsdrvid=1,2,3,4
define drive acslib drive02 acsdrvid=1,2,3,5
The ACSDRVID parameter specifies the ID of the drive that is being accessed.
The drive ID is a set of numbers that indicate the physical location of a drive
within an ACSLS library. This drive ID must be specified as a, l, p, d, where a is
the ACSID, l is the LSM (library storage module), p is the panel number, and d
is the drive ID. The server needs the drive ID to connect the physical location
of the drive to the drive’s SCSI address. See the StorageTek documentation for
details.
See “Defining drives” on page 167.
3. Define a path from the server to each drive:
define path server1 drive01 srctype=server desttype=drive
library=acslib device=mt1.0.0.0
define path server1 drive02 srctype=server desttype=drive
library=acslib device=mt2.0.0.0
The DEVICE parameter gives the device alias name for the drive. For more
about device names, see “Device alias names” on page 113.
For more information about paths, see “Defining paths” on page 169.
4. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, to classify the two drives in the ACSLIB library, issue the
following command to define a device class named ACS_CLASS:
define devclass acs_class library=acslib devtype=ecartridge format=drive
This example uses FORMAT=DRIVE as the recording format because both
drives associated with the device class use the same recording format; for
example, both are 9940 drives. If instead one drive is a 9840 and one is a 9940,
you must use specific recording formats when defining the device classes. See
“Configuring an ACSLS library with multiple drive device type” on page 152.
See “Defining tape and optical device classes” on page 253.
5. To check what you have defined, issue the following commands:
query library
query drive
query path
query devclass
See the following topics:
v “Obtaining information about device classes” on page 270
v “Obtaining information about paths” on page 215
v “Requesting information about drives” on page 203
6. Create the storage pool to use the devices in the device class you just defined.
For example, define a storage pool named ACS_POOL associated with the
device class ACS_CLASS:
define stgpool acs_pool acs_class maxscratch=20
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 340 and “How collocation affects reclamation” on page
360.
For more information, see “Defining storage pools” on page 281.
Configuring an ACSLS library with multiple drive device type
The following example shows how to set up an ACSLS library with a mix of two 9840 drives and two 9940 drives.
1. Define two ACSLS libraries that use the same ACSID. For example, to define 9840LIB and 9940LIB, enter the following commands:
define library 9840lib libtype=acsls acsid=1
define library 9940lib libtype=acsls acsid=1
The ACSID parameter specifies the number that the Automated Cartridge System System Administrator (ACSSA) assigned to the libraries. Issue the QUERY ACS command on your ACSLS system to determine the number for your library ID.
2. Define the drives, ensuring that they are associated with the appropriate
libraries.
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
v Define the 9840 drives to 9840LIB.
define drive 9840lib 9840_drive1 acsdrvid=1,2,3,1
define drive 9840lib 9840_drive2 acsdrvid=1,2,3,2
v Define the 9940 drives to 9940LIB.
define drive 9940lib 9940_drive3 acsdrvid=1,2,3,3
define drive 9940lib 9940_drive4 acsdrvid=1,2,3,4
The ACSDRVID parameter specifies the ID of the drive that is being accessed.
The drive ID is a set of numbers that indicate the physical location of a drive
within an ACSLS library. This drive ID must be specified as a, l, p, d, where a is
the ACSID, l is the LSM (library storage module), p is the panel number, and d
is the drive ID. The server needs the drive ID to connect the physical location
of the drive to the drive’s SCSI address. See the StorageTek documentation for
details.
See “Defining drives” on page 167.
3. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 9840 drives:
define path server1 9840_drive1 srctype=server desttype=drive
library=9840lib device=mt1.0.0.0
define path server1 9840_drive2 srctype=server desttype=drive
library=9840lib device=mt2.0.0.0
v For the 9940 drives:
define path server1 9940_drive3 srctype=server desttype=drive
library=9940lib device=mt3.0.0.0
define path server1 9940_drive4 srctype=server desttype=drive
library=9940lib device=mt4.0.0.0
The DEVICE parameter gives the device alias name for the drive. For more
about device names, see “Device alias names” on page 113.
For more information about paths, see “Defining paths” on page 169.
4. Classify the drives according to type by defining Tivoli Storage Manager device
classes, which specify the recording formats of the drives. Because there are
separate libraries, you can enter a specific recording format, for example 9840,
or you can enter DRIVE. For example, to classify the drives in the two libraries,
use the following commands to define one device class for each type of drive:
define devclass 9840_class library=9840lib devtype=ecartridge format=9840
define devclass 9940_class library=9940lib devtype=ecartridge format=9940
See “Defining tape and optical device classes” on page 253.
5. To check what you have defined, enter the following commands:
query library
query drive
query path
query devclass
See the following topics:
v “Obtaining information about device classes” on page 270
v “Obtaining information about paths” on page 215
v “Requesting information about drives” on page 203
6. Create the storage pools to use the devices in the device classes that you just
defined. For example, define storage pools named 9840_POOL associated with
the device class 9840_CLASS and 9940_POOL associated with the device class
9940_CLASS:
define stgpool 9840_pool 9840_class maxscratch=20
define stgpool 9940_pool 9940_class maxscratch=20
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 340 and “How collocation affects reclamation” on page
360.
For more information, see “Defining storage pools” on page 281.
Setting up an ACSLS library manager server
Use the following procedure as an example of how to set up a Tivoli Storage
Manager server as a library manager named GLENCOE:
When upgrading multiple servers participating in library sharing, upgrade all the
servers at once, or do the library manager servers and then the library client
servers. Library manager servers at Version 5.4 or higher are compatible with
downlevel library clients. However, library clients are not compatible with
downlevel library manager servers.
Note: An exception to this rule is when a fix or product enhancement requires
concurrent code changes to the server, storage agent, and library client.
1. Verify that the server that is the library manager is running. Start it if it is not.
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager server that is the library
manager is running. If it is stopped, right click on the server name. A
pop-up menu appears.
e. Click Start in the pop-up menu.
2. Verify that the device driver is running, and start it if it is not:
a. From the Tivoli Storage Manager Console, expand the tree for the
machine you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager device driver is running. If it is
stopped, right click Tivoli Storage Manager Device Driver. A pop-up
menu appears.
e. Click Start in the pop-up menu.
3. Obtain the library and drive information for the shared library device:
a. From the Tivoli Storage Manager Console, expand the tree for the
machine you are configuring.
b. Expand Tivoli Storage Manager Device Driver and Reports.
c. Click Device Information. The Device Information window appears in the
right pane.
4. Define a library whose library type is ACSLS. For example:
define library macgregor libtype=acsls shared=yes
5. Define the path from the server to the library:
define path glencoe macgregor srctype=server desttype=library
device=lb0.0.0.2
6. Define the drives in the library.
define drive macgregor drivea acsdrvid=1,0,1,0
define drive macgregor driveb acsdrvid=1,0,1,1
This example uses the acsdrvid value, which specifies the ID of the drive that
is being accessed in an ACSLS library. The drive ID is a set of numbers that
indicates the physical location of a drive within an ACSLS library. This drive
ID must be specified as a,l,p,d, where a is the ACSID, l is the LSM (library
storage module), p is the panel number, and d is the drive ID. The server
needs the drive ID to connect the physical location of the drive to the drive’s
SCSI address. See the StorageTek documentation for details.
7. Define the path from the server to each of the drives.
define path glencoe drivea srctype=server desttype=drive library=macgregor
device=mt0.1.0.2
define path glencoe driveb srctype=server desttype=drive library=macgregor
device=mt0.2.0.2
8. Define at least one device class.
define devclass tape devtype=dlt library=macgregor
9. Check in the library inventory. The following example checks all volumes into
the library inventory as scratch volumes. The server uses the name on the bar
code label as the volume name.
checkin libvolume macgregor search=yes status=scratch
checklabel=barcode
10. Set up a storage pool for the shared library with a maximum of 50 scratch
volumes.
define stgpool backtape tape
description='storage pool for shared macgregor' maxscratch=50
Setting up an ACSLS library client server
Use the following procedure as an example of how to set up a Tivoli Storage
Manager server named WALLACE as a library client.
You must define the library manager server before setting up the library client
server.
1. Verify that the server that is the library client is running, and start it if it is not:
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager server that is the library client is
running. If it is stopped, right click on the server name. A pop-up menu
appears.
e. Click Start in the pop-up menu.
2. Verify that the device driver is running, and start it if it is not:
a. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager device driver is running. If it is
stopped, right click Tivoli Storage Manager Device Driver. A pop-up menu
appears.
e. Click Start in the pop-up menu.
3. Obtain the library and drive information for the shared library device:
a. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
b. Expand Tivoli Storage Manager Device Driver and Reports.
c. Click Device Information. The Device Information window appears in the
right pane.
4. Define the shared library, MACGREGOR, and identify the library manager.
Ensure that the library name is the same as the library name on the library
manager.
define library macgregor libtype=shared primarylibmanager=glencoe
5. On the Tivoli Storage Manager Console of the server you designated as the library
manager: Define the paths from the library client server to each of the drives.
define path wallace drivea srctype=server desttype=drive library=macgregor
device=mt0.1.0.3
define path wallace driveb srctype=server desttype=drive library=macgregor
device=mt0.2.0.3
6. Return to the library client for the remaining steps. Define at least one device class.
define devclass tape devtype=dlt mountretention=1 mountwait=10
library=macgregor
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
The device class parameters specified on the library manager server override
those specified for the library client. This is true whether or not the device class
names are the same on both servers. If the device class names are different, the
library manager uses the parameters specified in a device class that matches the
device type specified for the library client.
Note: If a library client requires a setting that is different from what is
specified in the library manager’s device class (for example, a different mount
limit), do the following:
a. Create an additional device class on the library manager server. Specify the
parameter settings you want the library client to use.
b. Create a device class on the library client with the same name and device
type as the new device class you created on the library server.
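For example, a sketch with a hypothetical device class name and mount limit. On the library manager, define the additional device class:
define devclass dlt_limit1 devtype=dlt mountlimit=1 library=macgregor
Then issue the same command on the library client, so that the device class name and device type match on both servers.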
7. Define the storage pool, LOCHNESS, that will use the shared library.
define stgpool lochness tape
description='storage pool for shared macgregor' maxscratch=50
8. Update the copy group to set the destination to the storage pool, LOCHNESS (see the sketch after this procedure).
9. Repeat this procedure to define additional servers as library clients.
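A sketch of step 8, assuming the default STANDARD policy domain, policy set, and management class; substitute the names used in your policy configuration:
update copygroup standard standard standard type=backup destination=lochness
activate policyset standard standard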
Checking in and labeling ACSLS library volumes
Ensure that enough volumes are available to the server in the library. You must
label volumes that do not already have a standard label. Keep enough labeled
volumes on hand so that you do not run out during an operation such as client
backup.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
Attention: If your library has drives of multiple device types, you defined two
libraries to the Tivoli Storage Manager server in the procedure in “Configuring an
ACSLS library with multiple drive device type” on page 152. The two Tivoli
Storage Manager libraries represent the one physical library. The check-in process
finds all available volumes that are not already checked in. You must check in
media separately to each defined library. Ensure that you check in volumes to the
correct Tivoli Storage Manager library.
1. Check in the library inventory. The following shows examples for libraries with
a single drive device type and with multiple drive device types.
v Check in volumes that are already labeled:
checkin libvolume acslib search=yes status=scratch checklabel=no
v Label and check in volumes:
label libvolume acslib search=yes overwrite=no checkin=scratch
2. Depending on whether you use scratch volumes or private volumes, do one of
the following:
v If you use only scratch volumes, ensure that enough scratch volumes are
available. For example, you may need to label more volumes. As volumes are
used, you may also need to increase the number of scratch volumes allowed
in the storage pool that you defined for this library.
v If you want to use private volumes in addition to or instead of scratch
volumes in the library, define volumes to the storage pool you defined. The
volumes you define must have been already labeled and checked in. See
“Defining storage pool volumes” on page 292.
For more information about checking in volumes, see:
v “Checking media into automated library devices” on page 177
Configuring Tivoli Storage Manager servers to share SAN-connected
devices
The steps to configure servers to share SAN-connected devices include setting up server communications, the library manager server, and the library client servers.
The following tasks are required to share tape library devices over a SAN:
Task                                                   Required Privilege Class
“Setting up server communications”                     System or unrestricted storage
“Setting up the library manager server” on page 158    System or unrestricted storage
“Setting up the library client servers” on page 160    System or unrestricted storage
Setting up server communications
Before Tivoli Storage Manager servers can share a storage device over a SAN, you
must set up server communications. This requires configuring each server as you
would for Enterprise Administration, which means you define the servers to each
other using the cross-define function.
Set up each server with a unique name.
For details, see “Setting up communications among servers” on page 694.
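As an illustration, the following sketch cross-defines two servers; the server names, passwords, and addresses are hypothetical. On each server, set that server’s own communication attributes and enable cross-definition:
set servername astro
set serverpassword astropass
set serverhladdress 9.115.20.80
set serverlladdress 1500
set crossdefine on
Then, on one of the servers, define the other with CROSSDEFINE=YES so that the matching definition is created automatically on the partner:
define server judy serverpassword=judypass hladdress=9.115.20.82 lladdress=1500 crossdefine=yes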
Setting up the library manager server
You must set up the library manager server in order to configure the Tivoli Storage
Manager servers to share SAN-connected devices.
Use the following procedure as an example of how to set up a Tivoli Storage
Manager server as a library manager named ASTRO:
1. Verify that the server that is the library manager is running. Start it if it is not.
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager server that is the library
manager is running. If it is stopped, right click on the server name. A
pop-up menu appears.
e. Click Start in the pop-up menu.
2. Verify that the device driver is running, and start it if it is not:
a. From the Tivoli Storage Manager Console, expand the tree for the
machine you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager device driver is running. If it is
stopped, right click Tivoli Storage Manager Device Driver. A menu
appears.
e. Click Start.
3. Obtain the library and drive information for the shared library device:
a. From the Tivoli Storage Manager Console, expand the tree for the
machine you are configuring.
b. Expand Tivoli Storage Manager Device Driver and Reports.
c. Click Device Information. The Device Information window appears in the
right pane.
4. Define a library whose library type is SCSI. For example:
define library sangroup libtype=scsi shared=yes
This example uses the default for the library’s serial number, which is to have
the server obtain the serial number from the library itself at the time that the
path is defined. Depending on the capabilities of the library, the server may
not be able to automatically detect the serial number. In this case, the server
will not record a serial number for the device, and will not be able to confirm
the identity of the device when you define the path or when the server uses
the device.
5. Define the path from the server to the library.
define path astro sangroup srctype=server desttype=library
device=lb0.0.0.2
If you did not include the serial number when you defined the library, the
server now queries the library to obtain this information. If you did include
the serial number when you defined the library, the server verifies what you
defined and issues a message if there is a mismatch.
6. Define the drives in the library.
define drive sangroup drivea
define drive sangroup driveb
This example uses the default for the drive’s serial number, which is to have
the server obtain the serial number from the drive itself at the time that the
path is defined. Depending on the capabilities of the drive, the server may not
be able to automatically detect the serial number. In this case, the server will
not record a serial number for the device, and will not be able to confirm the
identity of the device when you define the path or when the server uses the
device.
This example also uses the default for the drive’s element address, which is to
have the server obtain the element number from the drive itself at the time
that the path is defined.
The element address is a number that indicates the physical location of a drive
within an automated library. The server needs the element address to connect
the physical location of the drive to the drive’s SCSI address. You can have
the server obtain the element number from the drive itself at the time that the
path is defined, or you can specify the element number when you define the
drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive. If you need the element numbers,
check the device worksheet filled out in step 6 on page 111. Element numbers
for many libraries are available at http://www.ibm.com/software/sysmgmt/
products/support/IBMTivoliStorageManager.html.
7. Define the path from the server to each of the drives.
define path astro drivea srctype=server desttype=drive library=sangroup
device=mt0.1.0.2
define path astro driveb srctype=server desttype=drive library=sangroup
device=mt0.2.0.2
If you did not include the serial number or element address when you
defined the drive, the server now queries the drive or the library to obtain
this information.
8. Define at least one device class.
define devclass tape devtype=dlt library=sangroup
9. Check in the library inventory. The following example checks all volumes into
the library inventory as scratch volumes. The server uses the name on the bar
code label as the volume name.
checkin libvolume sangroup search=yes status=scratch
checklabel=barcode
10. Set up a storage pool for the shared library with a maximum of 50 scratch
volumes.
define stgpool backtape tape
description='storage pool for shared sangroup' maxscratch=50
Setting up the library client servers
You must set up the library client server in order to configure the Tivoli Storage
Manager servers to share SAN-connected devices.
First you must define the library manager server. Use the following procedure as
an example of how to set up a Tivoli Storage Manager server named JUDY as a
library client.
1. Verify that the server that is the library client is running. Start the server if it is
not running:
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager server that is the library client is
running. If it is stopped, right click on the server name. A pop-up menu
appears.
e. Click Start in the pop-up menu.
2. Verify that the device driver is running, and start it if it is not:
a. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
b. Expand Reports.
c. Click Service Information in the Tivoli Storage Manager Console tree in the left panel. The Service Information window appears in the right panel.
d. Check to see if the Tivoli Storage Manager device driver is running. If it is
stopped, right click Tivoli Storage Manager Device Driver. A pop-up menu
appears.
e. Click Start in the pop-up menu.
3. Obtain the library and drive information for the shared library device:
a. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
b. Expand Tivoli Storage Manager Device Driver and Reports.
c. Click Device Information. The Device Information window appears in the
right pane.
4. Define the shared library, SANGROUP, and identify the library manager.
Ensure that the library name is the same as the library name on the library
manager.
define library sangroup libtype=shared primarylibmanager=astro
5. On the Tivoli Storage Manager Console of the server you designated as the library
manager: Define the paths from the library client server to each of the drives.
define path judy drivea srctype=server desttype=drive library=sangroup
device=mt0.1.0.3
define path judy driveb srctype=server desttype=drive library=sangroup
device=mt0.2.0.3
6. Return to the library client for the remaining steps. Define at least one device class.
define devclass tape devtype=dlt mountretention=1 mountwait=10
library=sangroup
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
The device class parameters specified on the library manager server override
those specified for the library client. This is true whether or not the device class
names are the same on both servers. If the device class names are different, the
library manager uses the parameters specified in a device class that matches the
device type specified for the library client.
If a library client requires a setting that is different from what is specified in the
library manager’s device class (for example, a different mount limit), perform
the following steps:
a. Create an additional device class on the library manager server. Specify the
parameter settings you want the library client to use.
b. Create a device class on the library client with the same name and device
type as the new device class you created on the library server.
7. Define the storage pool, BACKTAPE, that will use the shared library:
define stgpool backtape tape
description='storage pool for shared sangroup' maxscratch=50
8. Repeat this procedure to define additional servers as library clients.
Configuring Tivoli Storage Manager for LAN-free data movement
You can configure the Tivoli Storage Manager client and server so that the client,
through a storage agent, can move its data directly to storage on a storage area network (SAN). This function, called LAN-free data movement, is provided by Tivoli Storage Manager for Storage Area Networks.
As part of the configuration, a storage agent is installed on the client system. Tivoli
Storage Manager supports both tape libraries and FILE libraries. This feature
supports SCSI, 349X, and ACSLS tape libraries.
The configuration procedure you follow will depend on the type of environment you implement; however, in all cases you must perform the following steps:
1. Install and configure the client.
2. Install and configure the storage agent.
3. Configure the libraries for LAN-free data movement.
4. Define the libraries and associated paths.
5. Define associated devices and their paths.
6. Configure Tivoli Storage Manager policy for LAN-free data movement for the
client. If you are using shared FILE storage, install and configure IBM
TotalStorage SAN File System, Tivoli SANergy, or IBM General Parallel File
System™.
For more information on configuring Tivoli Storage Manager for LAN-free data
movement see the Storage Agent User’s Guide.
To help you tune the use of your LAN and SAN resources, you can control the
path that data transfers take for clients with the capability of LAN-free data
movement. For each client you can select whether data read and write operations
use:
v The LAN path only
v The LAN-free path only
v Any path
See the REGISTER NODE and UPDATE NODE commands in the Administrator’s
Reference.
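For example, a sketch using the hypothetical node FRED that restricts write operations to the LAN-free path while allowing read operations over any path:
update node fred datawritepath=lanfree datareadpath=any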
Validating your LAN-free configuration
After configuring your Tivoli Storage Manager client for LAN-free data movement,
you can verify your configuration and server definitions by issuing the VALIDATE
LANFREE command. This command allows you to determine which destinations
for a given node, using a specific storage agent, are capable of LAN-free data
movement.
The VALIDATE LANFREE command can also be used to determine if there is a
problem with an existing LAN-free configuration. You can evaluate the policy,
storage pool, and path definitions for a given node using a given storage agent to
ensure that an operation is working properly.
To determine if there is a problem with the client node FRED using the storage
agent FRED_STA, issue the following:
validate lanfree fred fred_sta
The output will allow you to see which management class destinations for a given
operation type are not LAN-free capable. It will also report the total number of
LAN-free destinations.
See the VALIDATE LANFREE command in the Administrator’s Reference for more
information.
Configuring Tivoli Storage Manager for NDMP operations
Tivoli Storage Manager can use Network Data Management Protocol (NDMP) to
communicate with NAS (network attached storage) file servers and provide
backup and restore services. This feature supports SCSI, ACSLS, and 349X library
types.
To configure Tivoli Storage Manager for NDMP operations, perform the following
steps:
1. Define the libraries and their associated paths.
Important: An NDMP device class can only use a Tivoli Storage Manager
library in which all of the drives can read and write all of the media in the
library.
2. Define a device class for NDMP operations.
3. Define the storage pool for backups performed by using NDMP operations.
4. Optional: Select or define a storage pool for storing tables of contents for the
backups.
5. Configure Tivoli Storage Manager policy for NDMP operations.
6. Register the NAS nodes with the server.
7. Define a data mover for the NAS file server.
8. Define the drives and their associated paths.
For more information on configuring Tivoli Storage Manager for NDMP
operations, see Chapter 9, “Using NDMP for operations with NAS file servers,” on
page 219.
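As a rough sketch of steps 2 through 7, with all names hypothetical and details that depend on your library and NAS file server (see Chapter 9 for the full procedure):
define devclass nasclass devtype=nas library=naslib mountretention=0 estcapacity=200G
define stgpool ndmppool nasclass maxscratch=20 dataformat=netappdump
register node nas1 nas1pass domain=nasdomain type=nas
define datamover nas1 type=nas hladdress=nas1.example.com lladdress=10000 userid=root password=admin dataformat=netappdump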
Troubleshooting device configuration
Procedures are available for displaying device information and the event log.
Common sources of device problems are identified, and the impact of device and cabling changes in a SAN environment is described.
Displaying device information
You can display information about devices connected to the server by using the
Device Information utility.
Perform the following steps to display device information:
1. From the Tivoli Storage Manager Console, expand the tree to the machine you
are configuring.
2. Expand Tivoli Storage Manager Device Driver and Reports.
3. Click Device Information. This utility provides a convenient way to find
information about devices available and defined to the server.
The information provided by this utility is from the Windows registry. Some of the
information is put into the registry by the Tivoli Storage Manager device driver. To
receive accurate information, ensure that the device driver is running. If the device
driver is not running, the information may be incorrect if device attachments have
changed since the last time the device driver was running.
Displaying the event log to find device errors
You can display the Windows Event Log, which will help you understand the
problem behind some device errors.
See the following steps for how to display the event log:
1. In the Tivoli Storage Manager Console, select Launch. From the
Administrative Tools window, open Event Viewer. Information on system
events is displayed.
2. Under the column labeled Source, look for events that are labeled with “tsmscsi”.
You can also use filtering to see just the events related to Tivoli Storage
Manager. From the View menu, select Filter Events..., and set the filter to show
events with event ID 11.
3. Double-click on the event to get information about the date and time of the
error, and detailed information. To interpret the data bytes that are shown in
the Data field, use the appendix in the IBM Tivoli Storage Manager Messages that
discusses device I/O errors.
Troubleshooting problems with devices
Some common sources of device problems when configuring or using Tivoli
Storage Manager are provided to you.
Symptom: Conflicts with other applications.
Problem: Tivoli Storage Manager requires a storage area network or a Removable Storage Manager (RSM) library to share devices.
Solution: Set up a storage area network, or set up an RSM library.
Attention: Data loss can occur if multiple Tivoli Storage Manager servers use the same device. Define or use a device with only one Tivoli Storage Manager server.

Symptom: Labeling fails.
Problem: A device for labeling volumes cannot be used at the same time that the server uses the device for other processes. Another possible cause is incorrect or incomplete license registration.
Solution: Register the license for the device support that was purchased, if this has not been done. For more information on licensing, see “Licensing IBM Tivoli Storage Manager” on page 571.

Symptom: Tivoli Storage Manager issues messages about I/O errors when trying to define or use a sequential access device.
Problem: Conflicts among device drivers. Windows device drivers and drivers provided by other applications can interfere with the Tivoli Storage Manager device driver if the Tivoli Storage Manager driver is not started first. Device driver conflicts often result in I/O errors when trying to define or use a tape or optical disk device.
Solution: To check the order in which device drivers are started by the system, perform the following steps:
1. Click on Control Panel.
2. Click on Devices. Device drivers and their startup types are listed.
For a procedure to ensure that the Tivoli Storage Manager device driver starts before the Windows device driver, see “Controlling devices with the Tivoli Storage Manager device driver” on page 119.
Impact of device changes on the SAN
The SAN environment can shift dramatically because of device or cabling changes.
Device IDs assigned by the SAN may be altered due to bus resets or other
environmental changes. This dynamically changing nature of the SAN can cause
the static definitions defined and known to the server (or storage agent) to fail or
become unpredictable.
The server may know a device as id=1 based on the original path specification to the server and original configuration of the SAN. However, some event in the SAN (for example, a new device added or a cabling change) causes the device to be assigned id=2. When
the server tries to access the device with id=1, it will either get a failure or the
wrong target device. The server assists in recovering from changes to devices on
the SAN by using serial numbers to confirm the identity of devices it contacts.
When you define a device (drive or library) you have the option of specifying the
serial number for that device. If you do not specify the serial number when you
define the device, the server obtains the serial number when you define the path
for the device. In either case, the server then has the serial number in its database.
From then on, the server uses the serial number to confirm the identity of a device
for operations.
When the server uses drives and libraries on a SAN, the server attempts to verify
that the device it is using is the correct device. The server contacts the device by
using the device name in the path that you defined for it. The server then requests
the serial number from the device, and compares that serial number with the serial
number stored in the server database for that device.
If the serial numbers do not match, the server begins the process of discovery on
the SAN to attempt to find the device with the matching serial number. If the
server finds the device with the matching serial number, it corrects the definition
of the path in the server’s database by updating the device name in that path. The
server issues a message with information about the change made to the device.
Then the server proceeds to use the device.
You can monitor the activity log for messages if you want to know when device
changes on the SAN have affected Tivoli Storage Manager. The following are the
number ranges for messages related to serial numbers:
v ANR8952 through ANR8958
v ANR8961 through ANR8967
Restriction: Some devices do not have the capability of reporting their serial
numbers to applications such as the Tivoli Storage Manager server. If the server
cannot obtain the serial number from a device, it cannot assist you with changes to
that device’s location on the SAN.
Defining devices and paths
The following topics describe how to define libraries and drives, as well as their
paths, to Tivoli Storage Manager.
See “Managing libraries” on page 201 and “Managing drives” on page 203 for
information about displaying library and drive information, and updating and
deleting libraries and drives.
Defining libraries
Task                              Required Privilege Class
Define or update libraries        System or unrestricted storage

Before you can use a drive, you must first define the library to which the drive belongs. This is true for both manually mounted drives and drives in automated libraries. For example, if you have several stand-alone tape drives, you can define a library named MANUALMOUNT for these drives by using the following command:
define library manualmount libtype=manual
For all libraries other than manual libraries, you define the library and then define
a path from the server to the library. For example, if you have an IBM 3583 device,
you can define a library named ROBOTMOUNT using the following command:
define library robotmount libtype=scsi
Next, you use the DEFINE PATH command. In the path, you must specify the DEVICE parameter, which gives the device alias name by which the library’s robotic mechanism is known.
define path server1 robotmount srctype=server desttype=library
device=lb3.0.0.0
For more information about paths, see “Defining paths” on page 169.
Defining SCSI libraries on a SAN
For a library type of SCSI on a SAN, the server can track the library’s serial
number. With the serial number, the server can confirm the identity of the device
when you define the path or when the server uses the device.
If you choose, you can specify the serial number when you define the library to
the server. For convenience, the default is to allow the server to obtain the serial
number from the library itself at the time that the path is defined.
If you specify the serial number, the server confirms that the serial number is
correct when you define the path to the library. When you define the path, you can
set AUTODETECT=YES to allow the server to correct the serial number if the
number that it detects does not match what you entered when you defined the
library.
Depending on the capabilities of the library, the server may not be able to
automatically detect the serial number. Not all devices are able to return a serial
number when asked for it by an application such as the server. In this case, the
server will not record a serial number for the device, and will not be able to
confirm the identity of the device when you define the path or when the server
uses the device. See “Impact of device changes on the SAN” on page 165.
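For example, a sketch that supplies the serial number when the library is defined and allows the server to correct it when the path is defined; the library name, serial number, and device name are hypothetical:
define library scsilib libtype=scsi serial=1234567890
define path server1 scsilib srctype=server desttype=library device=lb3.0.0.0 autodetect=yes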
Defining drives
To inform the server about a drive that can be used to access storage volumes,
issue the DEFINE DRIVE command, followed by the DEFINE PATH command.
When issuing the DEFINE DRIVE command, you must provide some or all of the
following information:
Library name
The name of the library in which the drive resides.
Drive name
The name assigned to the drive.
Serial number
The serial number of the drive. The serial number parameter applies only
to drives in SCSI libraries. With the serial number, the server can confirm
the identity of the device when you define the path or when the server
uses the device.
You can specify the serial number if you choose. The default is to allow the
server to obtain the serial number from the drive itself at the time that the
path is defined. If you specify the serial number, the server confirms that
the serial number is correct when you define the path to the drive. When
you define the path, you can set AUTODETECT=YES to allow the server to
correct the serial number if the number that it detects does not match what
you entered when you defined the drive.
Depending on the capabilities of the drive, the server may not be able to
automatically detect the serial number. In this case, the server will not
record a serial number for the device, and will not be able to confirm the
identity of the device when you define the path or when the server uses
the device.
Element address
The element address of the drive. The ELEMENT parameter applies only
to drives in SCSI libraries. The element address is a number that indicates
the physical location of a drive within an automated library. The server
needs the element address to connect the physical location of the drive to
the drive’s SCSI address. You can allow the server to obtain the element
number from the drive itself at the time that the path is defined, or you
can specify the element number when you define the drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive, if the library has more than
one drive. If you need the element numbers, check the device worksheet
filled out in step 6 on page 111. Element numbers for many libraries are
available at http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html.
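For example, a sketch of defining a drive in an automated SCSI library with an explicit element address and an automatically detected serial number; the library name, drive name, and element number are hypothetical:
define drive autolib drive01 element=82 serial=autodetect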
For example, to define a drive that belongs to the manual library named MANLIB,
enter this command:
define drive manlib mandrive
Next, you define the path from the server to the drive, using the device name used
to access the drive:
define path server1 mandrive srctype=server desttype=drive library=manlib
device=mt3.0.0.0
For more information about paths, see:
“Defining paths” on page 169
“Impact of device changes on the SAN” on page 165
Defining data movers
Data movers are SAN-attached devices that, through a request from Tivoli Storage Manager, transfer client data for backup, archive, or restore purposes. Data movers are defined as unique objects to Tivoli Storage Manager.
When issuing the DEFINE DATAMOVER command, you must provide some or all
of the following information:
Data mover name
The name of the defined data mover.
Type
The type of data mover (SCSI or NAS).
World wide name
The Fibre Channel world wide name for the data mover device.
Serial number
Specifies the serial number of the data mover.
High level address
The high level address is either the numerical IP address or the domain
name of a NAS file server.
Low level address
The low level address specifies the TCP port number used to access a NAS
file server.
User ID
The user ID specifies the ID for a user when initiating a Network Data
Management Protocol (NDMP) session with a NAS file server.
Password
The password specifies the password associated with a user ID when
initiating an NDMP session with a NAS file server. Check with your NAS
file server vendor for user ID and password conventions.
Copy threads
The number of concurrent copy operations that the SCSI data mover can
support.
Online
The online parameter specifies whether the data mover is online.
Data format
The data format parameter specifies the data format used according to the
type of data mover device used.
For example, to define a NAS data mover named NAS1, enter the following:
define datamover nas1 type=nas hladdress=netapp2.tucson.ibm.com lladdress=10000
userid=root password=admin dataformat=netappdump
Defining paths
Before a device can be used, a path must be defined between the device and the
server or the device and the data mover responsible for outboard data movement.
This command must be used to define the following path relationships:
v Between a server and a drive or a library.
v Between a storage agent and a drive.
v Between a data mover and a drive, a disk, or a library.
When issuing the DEFINE PATH command, you must provide some or all of the
following information:
Source name
The name of the server, storage agent, or data mover that is the source for
the path.
Destination name
The assigned name of the device that is the destination for the path.
Source type
The type of source for the path. (A storage agent is considered a type of
server for this purpose.)
Destination type
The type of device that is the destination for the path.
Library name
The name of the library that a drive is defined to if the drive is the
destination of the path.
Device
The alias name of the device (or for an IBM 3494 library, the symbolic
name). This parameter is used when defining a path between a server or a
storage agent and a library, drive, or disk. This parameter should not be
used when defining a data mover as the source type, except when the data
mover is a NAS data mover. NAS data movers always require a device
parameter. For shared FILE drives, this value is always “FILE.”
Directory
The directory location or locations of the files used in the FILE device
class. The default is the current working directory of the server at the time
the command is issued. Windows registry information is used to determine
the default directory.
Automatic detection of serial number and element address
For devices on a SAN, you can specify whether the server should correct
the serial number or element address of a drive or library, if it was
incorrectly specified on the definition of the drive or library. The server
uses the device name to locate the device and compares the serial number
(and the element address for a drive) that it detects with that specified in
the definition of the device. The default is to not allow the correction.
LUN
Logical Unit Number. An identifier used on a SCSI bus to distinguish
between devices with the same target ID. On a Fibre Channel bus it is used
to distinguish between devices with the same world wide name. If the
LUN of the device, as identified by the source of the path, differs from the
LUN in the base definition of the device, you must use the LUN as
identified by the source of the path. This parameter should not be used
when defining a server as the source type.
Initiator ID
The SCSI initiator ID that the source will use when accessing the
destination. The parameter should not be used when defining a server as
the source type.
For example, if you had a SCSI type library named AUTODLTLIB that had a
device name of lb3.0.0.0, and you wanted to define it to a server named ASTRO1,
you would issue the following command:
define path astro1 autodltlib srctype=server desttype=library
device=lb3.0.0.0
If you had a drive, DRIVE01, that resided in library AUTODLTLIB, and had a
device name of mt3.0.0.0, and you wanted to define it to server ASTRO1, you
would issue the following command:
define path astro1 drive01 srctype=server desttype=drive library=autodltlib
device=mt3.0.0.0
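Similarly, for NDMP operations you define a path between a NAS data mover and a drive. The following is a sketch only; the names NAS1, NASDRIVE1, and NASLIB, and the device name rst0l, are illustrative and depend on your NAS file server:
define path nas1 nasdrive1 srctype=datamover desttype=drive library=naslib
device=rst0l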
Increased block size for writing to tape
Tivoli Storage Manager provides the DSMMAXSG utility that can improve the rate
at which the server processes data for backups and restores, and for archives and
retrieves.
Actual results will depend upon your system environment. The utility does not
affect the generation of backup sets.
The utility increases the maximum transfer length for some Host Bus Adapters
(HBAs) and, consequently, the block size used by the Tivoli Storage Manager
server for writing data to and getting data from the following types of tape drives:
v 3570
v 3590
v 3592
v DLT
v DTF
v ECARTRIDGE
v LTO
The maximum supported block size with this utility is 256 KB. When you run
DSMMAXSG, it modifies one registry key for every HBA driver on your system.
The name of the key is MaximumSGList.
Normally, the utility is executed automatically as part of the Tivoli Storage
Manager server or storage agent installation. However, if you install a new HBA
on your system after server or storage agent installation or if you install a new
version of an existing HBA device driver that resets the value of the maximum
transfer size, you must run the utility manually in order to take advantage of the
larger block size.
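For example, to run the utility manually after installing a new HBA driver, you might enter the following from a Windows command prompt (the installation path shown is an assumption; substitute the directory where your server is installed):
cd /d "c:\program files\tivoli\tsm\server"
dsmmaxsg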
Important: If you back up or archive to tape using the 256 KB block size, you
cannot append to or read from the tape using an HBA that does not support the
256 KB block size. For example, if you back up client data to the Tivoli Storage
Manager server using a Windows system whose HBA supports the 256 KB transfer
length, you cannot restore that data using a Windows system whose HBA supports
only a smaller transfer length. To append to or read from a tape written with a
256 KB transfer length, you must use an HBA that supports 256 KB transfers.
For more information on the DSMMAXSG utility, see the Administrator’s Reference.
Chapter 8. Managing removable media operations
Routine removable media operations include preparing media for use, controlling
how and when media are reused, and ensuring that sufficient media are available.
You also need to respond to operator requests and manage libraries, drives, disks,
paths, and data movers.
“Preparing media for automated libraries” on page 175
“Managing media in automated libraries” on page 182
“Labeling media for manual libraries” on page 188
“Media management in manual libraries” on page 189
“Tivoli Storage Manager server requests” on page 190
“Tape rotation” on page 193
“Using removable media managers” on page 195
“Managing libraries” on page 201
“Managing drives” on page 203
“Managing paths” on page 215
“Managing data movers” on page 216
“Managing disks” on page 216
The examples in topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see
Administrator’s Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Defining volumes
For each storage pool, decide whether to use scratch volumes or private volumes.
Private volumes require more human intervention than scratch volumes.
When you add devices with the Device Configuration Wizard, the wizard
automatically creates a storage pool for each device it configures and allows a
maximum of 500 scratch volumes for the storage pool. When you use commands
to add devices, you specify the maximum number of scratch volumes with the
MAXSCRATCH parameter of the DEFINE STGPOOL or UPDATE STGPOOL
command. If the MAXSCRATCH parameter is 0, all the volumes in the storage
pool are private volumes that you must define.
For example, to create a storage pool named STORE1 that can use up to 500
scratch volumes, issue the following command:
define stgpool store1 maxscratch=500
Scratch volumes are recommended for the following reasons:
v You need not explicitly define each storage pool volume.
v Scratch volumes are convenient to manage and they fully exploit the automation
of robotic devices.
v Different storage pools sharing the same automated library can dynamically
acquire volumes from the library’s collection of scratch volumes. The volumes
need not be preallocated to the different storage pools.
Use private volumes to regulate the volumes used by individual storage pools, and
to manually control the volumes. Define each private volume with the DEFINE
VOLUME command. For database backups, dumps, or loads, or for server import
or export operations, you must list the private volumes.
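For example, to define a private volume named VOL001 in a hypothetical storage pool named TAPEPOOL, you might issue:
define volume tapepool vol001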
Managing volumes
When Tivoli Storage Manager needs a new volume, it chooses a volume from the
storage pool available for client backups. If you set up private volumes, it selects a
specific volume. If you set up scratch volumes, it selects any scratch volume in the
library.
IBM 3494 Tape Library Dataservers use category numbers to identify volumes that
are used for the same purpose or application. For details, see “Category numbers
for IBM 3494 libraries” on page 187. For special considerations regarding
write-once, read-many (WORM) volumes, see “Write-once, read-many (WORM)
tape media” on page 180.
Remember: Each volume used by a server for any purpose must have a unique
name. This requirement applies to all volumes, whether the volumes are used for
storage pools, or used for operations such as database backup or export. The
requirement also applies to volumes that reside in different libraries but that are
used by the same server.
Partially-written volumes
Partially-written volumes are always private volumes, even if their status was
scratch before Tivoli Storage Manager selected them to be mounted. Tivoli Storage
Manager tracks the original status of scratch volumes, so it can return them to
scratch status when they become empty.
Except for volumes in automated libraries, Tivoli Storage Manager is unaware of a
scratch volume until after the volume is mounted. Then, the volume status changes
to private, and the volume is automatically defined as part of the storage pool for
which the mount request was made.
For information about changing the status of a volume in an automated library, see
“Changing the status of automated library volumes” on page 183.
Volume inventory for automated libraries
Tivoli Storage Manager maintains a volume inventory for each automated library.
The volume inventory allows the device to provide maximum automation.
The volume inventory is created when you check media volumes into the library.
Tivoli Storage Manager tracks the status of volumes in the inventory as either
scratch or private.
A list of volumes in the library volume inventory will not necessarily be identical
to a list of volumes in the storage pool inventory for the device. For example,
scratch volumes may be checked in to the library but not defined to a storage pool
because they have not yet been selected for backup; private volumes may be
defined to a storage pool, but not checked into the device’s volume inventory.
Changing the status of database-backup and database-export
volumes
When Tivoli Storage Manager backs up the database or exports server information,
it records information about the volumes used for these operations in the volume
history file.
To change the status of database-backup and database-export volumes, use the
DELETE VOLHISTORY command or the UPDATE LIBVOLUME command.
For details about the volume history file, see Chapter 24, “Protecting and
recovering your server,” on page 769.
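For example, to remove database-backup volume entries older than 30 days from the volume history file so that those volumes can be reused, you might issue a command similar to the following (the 30-day cutoff is only illustrative):
delete volhistory type=dbbackup todate=today-30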
Preparing media for automated libraries
You prepare tape and optical disk volumes by labeling them and checking them
into the library volume inventory:
Task                                                          Required Privilege Class
“Labeling media”                                              System
“Checking media into automated library devices” on page 177   System
“Element addresses for library storage slots” on page 179     Any Administrator or Operator
Labeling media
All media require labels. Labeling media with an automated library requires you to
check media into the library. Checkin processing can be done at the same time that
the volume is labeled.
To label volumes with the LABEL LIBVOLUME command, specify the CHECKIN
parameter.
To automatically label tape volumes in SCSI-type libraries, use the AUTOLABEL
parameter on the DEFINE LIBRARY and UPDATE LIBRARY commands. Using
this parameter eliminates the need to pre-label a set of tapes. It is also more
efficient than using the LABEL LIBVOLUME command, which requires you to
mount volumes separately. If you use the AUTOLABEL parameter, you must check
in tapes by specifying CHECKLABEL=BARCODE on the CHECKIN LIBVOLUME
command.
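For example, assuming a SCSI library named AUTOLIB1, you might enable automatic labeling and then check in tapes by bar code:
update library autolib1 autolabel=yes
checkin libvolume autolib1 search=yes status=scratch checklabel=barcode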
A label cannot include embedded blanks or periods and must be valid when used
as a file name on the media.
Labeling media with automated tape libraries
If you label volumes with the Labeling Wizard, you can select check-in processing
in the wizard.
Insert the media into storage slots or entry/exit ports and invoke the Labeling
Wizard.
Tip: The Labeling Wizard does not support labeling of optical media. To label
optical media, you must issue the LABEL LIBVOLUME command.
1. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
2. Click Wizards, then double click Media Labeling in the right pane. The Media
Labeling Wizard appears.
3. Click Library Media Labeling in the right pane of the Tivoli Storage Manager
Server Utilities.
4. Click the Start button. The Tivoli Storage Manager Autochanger Media
Labeling Wizard appears.
5. Follow the instructions in the wizard. In the last wizard dialog, check the box
named Checkin Tapes.
Remember: The labels on VolSafe volumes can be overwritten only once.
Therefore, you should issue the LABEL LIBVOLUME command only once for
VolSafe volumes. You can guard against overwriting the label by using the
OVERWRITE=NO option on the LABEL LIBVOLUME command.
By default, the label command does not overwrite an existing label on a volume.
However, if you want to overwrite existing volume labels, you can specify
OVERWRITE=YES when you issue the LABEL LIBVOLUME command. See
“Labeling volumes using commands” on page 195.
Attention: Use caution when overwriting volume labels to avoid destroying
important data. By overwriting a volume label, you destroy all of the data that
resides on the volume.
Labeling media for use with bar code readers
Libraries equipped with bar code readers can obtain volume names using the
reader instead of prompting the administrator.
If you are labeling media with the labeling wizard, check the bar code check box in
the wizard. If you are labeling media with commands, issue the LABEL
LIBVOLUME command, specifying SEARCH=YES and
LABELSOURCE=BARCODE. Tivoli Storage Manager reads the bar code and the
media are moved from the entry/exit port to a drive where the information on the
bar code label is written as the internal label on the media. After the tape is
labeled, it is moved back to the entry/exit port or to a storage slot if the
CHECKIN option is specified.
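For example, to label and check in scratch volumes using bar codes in a hypothetical library named TAPELIB, you might issue:
label libvolume tapelib search=yes labelsource=barcode checkin=scratch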
Because bar code scanning can take a long time for unlabeled volumes, do not mix
volumes with bar code labels and volumes without bar code labels in a library.
Bar code support is available for libraries controlled by Tivoli Storage Manager
using the Tivoli Storage Manager device driver or the RMSS LTO Ultrium device
driver. Bar code support is unavailable for devices using the native Windows
device driver or devices whose media are managed by Removable Storage
Manager (RSM). See “Using removable media managers” on page 195.
Checking media into automated library devices
After volumes have been labeled, make the volumes available to Tivoli Storage
Manager devices by checking the volumes into the library volume inventory using
the CHECKIN LIBVOLUME command.
The CHECKIN LIBVOLUME command involves device access, and may take a
long time to complete. For this reason, the command always executes as a
background process. Wait for the CHECKIN LIBVOLUME process to complete
before defining volumes, or the defining process will fail. You can save time by
checking in volumes as part of the labeling operation. For details, see “Labeling
media” on page 175.
You can specify that Tivoli Storage Manager read media labels for the volumes you
are checking in. When label-checking is enabled, Tivoli Storage Manager mounts
each volume and reads the internal label before checking in the volume. Tivoli
Storage Manager checks in only volumes that are properly labeled. Checking labels
can prevent errors later, when Tivoli Storage Manager selects and mounts volumes,
but it also increases checkin processing time.
Tip: When exporting data from a Tivoli Storage Manager server other than an
OS/400® PASE server (for example, from a Tivoli Storage Manager Windows
server) to a Tivoli Storage Manager OS/400 PASE server, use a server-to-server
export rather than an export to sequential media. The CHECKIN LIBVOLUME
command fails on a Tivoli Storage Manager OS/400 PASE server when the server
attempts to check in a sequential-media volume containing export data from
servers other than OS/400 PASE servers. If you must use an LTO device to create
an export tape, follow these steps:
1. Define a Tivoli Storage Manager library of type MANUAL.
2. Define an LTO tape drive in the MANUAL library.
3. Label a tape with six characters or less.
4. Perform the export.
Checking a single volume into an automated library
You can check in single volumes using the CHECKIN LIBVOLUME command with
the SEARCH=NO parameter.
Tivoli Storage Manager issues a mount request identifying a storage slot with an
element address. The media can be loaded directly into a single storage slot or into
one of the device’s entry/exit ports, if it is equipped with them. For example,
check a scratch volume named VOL001 into a library named TAPELIB by entering
the following command:
checkin libvolume tapelib vol001 search=no status=scratch
Tivoli Storage Manager finds that the first empty slot is at element address 5, and
issues the following message:
ANR8306I 001: Insert 8MM volume VOL001 R/W in slot with element
address 5 of library TAPELIB within 60 minutes; issue 'REPLY' along
with the request ID when ready.
If the library is equipped with entry/exit ports, the administrator can load the
volume into a port without knowing the element addresses of the device’s storage
slots. After inserting the volume into an entry/exit port or storage slot, the
administrator responds to the preceding message at a Tivoli Storage Manager
command line by issuing the REPLY command with the request number (the
number at the beginning of the mount request):
reply 1
Tip: A REPLY command is not required if you specify a wait time of zero using
the optional WAITTIME parameter on the CHECKIN LIBVOLUME command. The
default wait time is 60 minutes.
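For example, to check in a volume named VOL002 without waiting for a reply, you might issue:
checkin libvolume tapelib vol002 search=no status=scratch waittime=0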
Checking in volumes using library bar code readers
You can save time checking volumes into libraries equipped with bar code readers
by using the characters on the bar code labels as names for the volumes being
checked in.
Tivoli Storage Manager reads the bar code labels and uses the information on the
labels to write the internal media labels. For volumes missing bar code labels,
Tivoli Storage Manager mounts the volumes in a drive and attempts to read the
internal, recorded label.
For example, to use a bar code reader to search a library named TAPELIB and
check in a scratch tape, enter:
checkin libvolume tapelib search=yes status=scratch
checklabel=barcode
Checking in volumes from library entry/exit ports
To search all slots of bulk entry/exit ports for labeled volumes that Tivoli Storage
Manager can check in automatically, issue the CHECKIN LIBVOLUME command,
specifying SEARCH=BULK. The server searches through all slots even if it
encounters an unavailable slot.
Issuing a REPLY command in response to a server request is not required if you
specify a wait time of zero using the optional WAITTIME parameter. Without the
requirement for a reply, the CHECKIN LIBVOLUME command is much easier to
script and requires less intervention. The default value for the WAITTIME
parameter is 60 minutes.
To have Tivoli Storage Manager load a cartridge in a drive and read the label, you
must specify the CHECKLABEL=YES option. The CHECKLABEL=NO option is
invalid with the SEARCH=BULK option. After reading the label, Tivoli Storage
Manager moves the tape from the drive to a storage slot. When bar code reading is
enabled with the CHECKLABEL=BARCODE parameter, Tivoli Storage Manager
reads the label and moves the tape from the entry/exit port to a storage slot.
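For example, to check in all labeled volumes found in the bulk entry/exit ports as scratch volumes, reading the bar codes and without requiring a reply, you might issue:
checkin libvolume tapelib search=bulk checklabel=barcode status=scratch
waittime=0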
Checking in volumes from library storage slots
You can search storage slots for new volumes that have not yet been added to the
volume inventory and check those volumes into the library using the CHECKIN
LIBVOLUME command, specifying SEARCH=YES.
Issuing the SEARCH=YES parameter eliminates issuing an explicit CHECKIN
LIBVOLUME command for each volume. For example, for a SCSI device you can
simply open the library access door, place all of the new volumes in unused slots,
close the door, and issue the CHECKIN LIBVOLUME command with
SEARCH=YES.
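For example, to search the storage slots of the TAPELIB library and check in any new volumes as scratch volumes, verifying each label, you might issue:
checkin libvolume tapelib search=yes status=scratch checklabel=yes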
See “Element addresses for library storage slots” on page 179.
Checkin of private volumes
Private volumes are volumes that are either predefined to a storage pool or
volumes that are partially-written. You can check in private volumes, but you must
assign a private status to them before checking them in.
Private volumes cannot be accidentally overwritten when a scratch mount is
requested. The server does not allow the administrator to check in a volume with
scratch status when that volume already belongs to a storage pool.
Partially-written volumes are always private volumes. Volumes begin with a status
of either scratch or private, but once Tivoli Storage Manager stores data on them,
their status becomes private. See “Returning partially-written volumes to
automated libraries” on page 184.
Checkin of volumes into full libraries
You can check volumes into devices that are fully populated and have no empty
storage slots by enabling tape swapping. Swapping allows Tivoli Storage Manager
to select and eject volumes to store in a different physical location.
Tivoli Storage Manager selects the volume to eject by checking first for any
available scratch volumes, then for the least frequently mounted volumes. Without
tape swapping, the checkin fails. See “Setting up volume overflow locations for
automated libraries” on page 185.
Checkin of volumes into IBM 3494 libraries
Volumes inserted into an IBM 3494 library are assigned to the insert category
(X’FF00’).
When a volume is first inserted into an IBM 3494 library, either manually or
automatically at the convenience I/O station, the volume is assigned to the insert
category (X’FF00’). You can then change the category number when issuing the
CHECKIN LIBVOLUME command.
Element addresses for library storage slots
If a library has entry/exit ports, you can add and remove media by loading the
media into the ports. If there are no entry/exit ports, you must load tapes into
storage slots.
If you load tapes into storage slots, you must reply to mount requests that identify
storage slots with element addresses, unless you specify a wait time of zero when
issuing the CHECKIN LIBVOLUME or LABEL LIBVOLUME commands. (If the
wait time is zero, no reply is required.) An element address is a number that
indicates the physical location of a storage slot or drive within an automated
library.
You need device names and element addresses when:
v Defining or updating drives in an automated library.
v Checking volumes into an automated library that has no entry/exit ports.
v Using a specific drive in an automated library to label volumes.
Element addresses for IBM-supported devices are available through the Device
Configuration wizard. Element addresses are also available in the device
manufacturer’s documentation or at the following Web site:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
Write-once, read-many (WORM) tape media
Write-once, read-many (WORM) media helps prevent accidental or deliberate
deletion of critical data. However, Tivoli Storage Manager imposes certain
restrictions and guidelines to follow when using WORM media.
Tivoli Storage Manager supports the following types of WORM media:
v StorageTek VolSafe
v Sony AIT50 and AIT100
v IBM 3592
v IBM LTO-3 and LTO-4; HP LTO-3 and LTO-4; and Quantum LTO-3
v Quantum SDLT 600, Quantum DLT V4, and Quantum DLT S4
Tips:
v External and manual libraries use separate logical libraries to segregate their
media. Ensuring that the correct media are loaded is the responsibility of the
operator and the library manager software.
v A storage pool can consist of either WORM or RW media, but not both.
v Do not use WORM tapes for database backup or export operations. Doing so
wastes tape following a restore or import operation.
For information about defining device classes for WORM tape media, see
“Defining device classes for StorageTek VolSafe devices” on page 268 and
“Defining tape and optical device classes” on page 253.
For information about selecting device drivers for IBM and devices from other
vendors, see:
“Selecting a device driver” on page 114.
WORM-capable drives
To use WORM media in a library, all the drives in the library must be
WORM-capable. A mount will fail if a WORM cartridge is mounted in a read-write
(RW) drive.
However, a WORM-capable drive can be used as an RW drive if the WORM
parameter in the device class is set to NO. Any type of library can have both
WORM and RW media if all of the drives are WORM enabled. The only exception
to this rule is NAS-attached libraries in which WORM tape media cannot be used.
Checkin of WORM media
The type of WORM media determines whether the media label needs to be read
during checkin.
Library changers cannot identify the difference between standard read-write (RW)
tape media and the following types of WORM tape media:
v VolSafe
v Sony AIT
v LTO
v SDLT
v DLT
To determine the type of WORM media that is being used, a volume must be
loaded into a drive. Therefore, when checking in one of these types of WORM
volumes, you must use the CHECKLABEL=YES option on the CHECKIN
LIBVOLUME command.
If they provide support for WORM media, IBM 3592 library changers can detect
whether a volume is WORM media without loading the volume into a drive.
Specifying CHECKLABEL=YES is not required. Verify with your hardware vendors
that your 3592 drives and libraries provide the required support.
LTO restrictions on WORM media
Pre-labeled WORM media are not supported with the LTO device class. You cannot
use WORM media in IBM or HP LTO-4 drives with Tivoli Storage Manager
specified as the drive-encryption key manager.
Mount failures with WORM media
If WORM tape media are loaded into a drive for a read-write (RW) device-class
mount, it will cause a mount failure. Similarly, if RW tape media are loaded into a
drive for a WORM device-class mount, the mount will fail.
Relabeling WORM media
You cannot relabel a WORM cartridge if it contains data. This applies to Sony AIT
WORM, LTO WORM, SDLT WORM, DLT WORM, and IBM 3592 cartridges. The
label on a VolSafe volume should be overwritten only once and only if the volume
does not contain usable, deleted, or expired data.
Issue the LABEL LIBVOLUME command only once for VolSafe volumes. You can
guard against overwriting the label by using the OVERWRITE=NO option on the
LABEL LIBVOLUME command.
Removing private WORM volumes from a library
If you perform some action on a WORM volume (for example, if you delete file
spaces) and the server does not mark the volume as full, the volume is returned to
scratch status. If a WORM volume is not marked as full and you delete it from a
storage pool, the volume will remain private. To remove a private WORM volume
from a library, you must issue the CHECKOUT LIBVOLUME command.
Creation of DLT WORM volumes
DLT WORM volumes can be converted from read-write (RW) volumes.
If you have SDLT-600, DLT-V4, or DLT-S4 drives and you want to enable them for
WORM media, upgrade the drives using V30 or later firmware available from
Quantum. You can also use DLTIce software to convert unformatted read-write
(RW) volumes or blank volumes to WORM volumes.
In SCSI or automated-cartridge system-library software (ACSLS) libraries, the
Tivoli Storage Manager server creates scratch DLT WORM volumes automatically
when the server cannot locate any scratch WORM volumes in a library’s inventory.
The server converts available unformatted or blank RW scratch volumes or empty
RW private volumes to scratch WORM volumes. The server also rewrites labels on
newly created WORM volumes using the label information on the existing RW
volumes.
In manual libraries, you can use the server to format empty volumes to WORM.
Support for short and normal 3592 WORM tapes
Tivoli Storage Manager supports both short and normal 3592 WORM tapes. For
best results, define them in separate storage pools.
Querying a device class for the WORM-parameter setting
You can determine the setting of the WORM parameter for a device class by using
the QUERY DEVCLASS command. The output contains a field, labeled WORM,
and a value (YES or NO).
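For example, to display the WORM setting for a hypothetical device class named LTOWORM, you might issue:
query devclass ltoworm format=detailed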
Managing media in automated libraries
Typically, automated libraries require little intervention after you set up a media
rotation. However, you might occasionally need to add, remove, or manually
manage media in automated libraries.
Tivoli Storage Manager tracks the media in the library volume inventory, which it
maintains for each automated library. The library volume inventory is separate
from the storage pool inventory for the device. To add volumes to a device’s
volume inventory, check volumes into the device. For details on the checkin
procedure, see “Checking media into automated library devices” on page 177. To
add volumes to a library’s storage pool, see “Adding scratch volumes to
automated library devices” on page 185.
You can extend the media management function of Tivoli Storage Manager by
using Windows Removable Storage Manager (RSM) to manage media. The
capabilities of such media managers go beyond the media management function
offered by Tivoli Storage Manager, and they allow different applications to share
the same device. See “Using removable media managers” on page 195.
You can manage media in automated libraries by performing the following tasks:
Task                                                                        Required Privilege Class
“Changing the status of automated library volumes” on page 183             System or Unrestricted Storage
“Removing volumes from automated libraries” on page 183                    System or Unrestricted Storage
“Returning partially-written volumes to automated libraries” on page 184   System or Unrestricted Storage
“Auditing volume inventories in libraries” on page 184                     System or Unrestricted Storage
“Adding scratch volumes to automated library devices” on page 185          System or Unrestricted Storage
“Category numbers for IBM 3494 libraries” on page 187                      System or Unrestricted Storage
“Media reuse in automated libraries” on page 188
Changing the status of automated library volumes
You can change the status of a volume from private to scratch or from scratch to
private.
To change the status of volumes, issue the UPDATE LIBVOLUME command.
Private volumes must be administrator-defined volumes with either no data or
invalid data. They cannot be partially-written volumes containing active data.
Volume statistics are lost when volume statuses are modified.
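For example, to change the status of a volume named VOL001 in a library named TAPELIB to private, you might issue:
update libvolume tapelib vol001 status=private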
Removing volumes from automated libraries
You can remove volumes from automated libraries by issuing the CHECKOUT
LIBVOLUME command.
Tivoli Storage Manager mounts each volume and verifies its internal label before
checking it out of the volume inventory. After a volume has been checked out,
Tivoli Storage Manager moves the media to the entry/exit port of the device if it
has one, or Tivoli Storage Manager requests that the operator remove the volume
from a drive within the device.
For automated libraries with multiple entry/exit ports, you can issue the
CHECKOUT LIBVOLUME command with the SEARCH=BULK parameter. Tivoli
Storage Manager ejects the volume to the next available entry/exit port.
Partially-written volumes that are removed from the device will need to be
checked in again if Tivoli Storage Manager attempts to access them. See
“Partially-written volumes” on page 174.
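For example, to check out a volume named VOL002, verify its label, and eject it to the entry/exit port, you might issue:
checkout libvolume tapelib vol002 checklabel=yes remove=yes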
Messages: When a volume is dismounted, TapeAlert information is reported in
one of four possible messages. TapeAlert has three severity levels: Critical,
Warning, and Informational. Some Critical messages result in ANR8481S, while
others use ANRxxxxE, depending on the text. Examples of each message type are:
ANRxxxxS Device /dev/rmt1, volume VOL123 has issued the following
Critical TapeAlert: Your Data is at risk:
1. copy any data you require from this tape;
2. Do not use the tape again;
3. Restart the operation with a different tape.
ANRxxxxE Device /dev/lb0, volume NONE has issued the following
Critical TapeAlert: The library has a problem with the host interface:
1. Check the cables and cable connections;
2. Restart the operation.
ANRxxxxW Device /dev/lb0, volume NONE has issued the following
Warning TapeAlert: A hardware failure of the library is predicted.
Call the library supplier helpline.
ANRxxxxI Device /dev/mto, volume MYVOL1 has issued the following
Informational TapeAlert: You have tried to load a cartridge of
a type which is not supported by this drive
These messages indicate a hardware error, and not a Tivoli Storage Manager
application error.
Returning partially-written volumes to automated libraries
Partially-written volumes that are checked out of a library continue to be defined
to a storage pool and have a status of private.
To return partially-written volumes:
1. Check in the volume by issuing the CHECKIN LIBVOLUME command with
STATUS=PRIVATE parameter.
2. Change the volume access from unavailable to read/write or read-only by
issuing the UPDATE VOLUME command with the ACCESS parameter.
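For example, to return a partially-written volume named VOL004 to a library named TAPELIB, you might issue commands similar to the following:
checkin libvolume tapelib vol004 search=no status=private
update volume vol004 access=readwrite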
Returning reclaimed volumes to a library (Windows)
Tivoli Storage Manager can reuse volumes after valid data is reclaimed.
Scratch volumes are automatically returned to the library as scratch volumes. To
reuse private volumes, check them into the library.
Auditing volume inventories in libraries
Auditing the volume inventory ensures that the information maintained by the
Tivoli Storage Manager server is consistent with the physical media in the library.
Audits are useful when the inventory was manually manipulated.
To audit the volume inventories of automated libraries, issue the AUDIT LIBRARY
command. Tivoli Storage Manager deletes missing volumes and updates the
locations of volumes that have moved since the last audit. Tivoli Storage Manager
cannot add new volumes during an audit.
Unless devices are equipped with bar code readers, the server mounts each volume
during the audit process to verify the label. After the label has been verified, the
volume remains in a wait state until the mount retention interval times out. You
can save time by issuing the DISMOUNT VOLUME command to force idle
volumes to be dismounted.
Auditing volume inventories using bar code readers
You can save time when auditing volume inventories for devices equipped with
bar code readers by using the bar code reader to verify the identity of volumes.
If a volume has a bar code label with six characters or less, Tivoli Storage Manager
reads the volume name from the bar code label during the audit. The volume is
not mounted to verify that the external bar code name matches the internal,
recorded volume name.
If a volume has no bar code label or the bar code label does not meet Tivoli
Storage Manager label requirements, Tivoli Storage Manager mounts the volume in
a drive and attempts to read the internal label. See “Labeling media” on page 175.
For example, to audit the TAPELIB library using its bar code reader, issue the
following command:
audit library tapelib checklabel=barcode
Adding scratch volumes to automated library devices
As the demand for media grows, you can add scratch volumes to libraries.
To increase the maximum number of scratch volumes:
1. Check volumes into the library. Label them if necessary. You might need to
temporarily store volumes in an overflow location in order to make room for
additional scratch volumes. See “Setting up volume overflow locations for
automated libraries.”
2. Increase the maximum number of scratch volumes. You can issue the UPDATE
STGPOOL command to increase the number of scratch volumes that can be
added to a storage pool.
The initial maximum number of scratch volumes for a library is determined when
the library storage pool is created. See “Defining volumes” on page 173.
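For example, to raise the scratch-volume limit for the storage pool associated with the library (ARCHIVEPOOL is used here only as an illustration), you might issue:
update stgpool archivepool maxscratch=1000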
Setting up volume overflow locations for automated libraries
As the demand for media grows, the number of volumes needed for a storage pool
may exceed the physical capacity of an automated library. To make room for new
volumes while keeping track of existing volumes, you can define a physical
location as an overflow area.
Tivoli Storage Manager tracks the volumes moved to the overflow area, thus
allowing you to make storage slots available for new volumes. To set up and
manage an overflow location:
1. Create a volume overflow location. Define or update the storage pool
associated with the automated library by issuing the DEFINE STGPOOL or
UPDATE STGPOOL command with the OVERFLOW parameter. For example,
to create an overflow location named ROOM2948 for a storage pool named
ARCHIVEPOOL, issue the following:
update stgpool archivepool ovflocation=Room2948
2. Move media to the overflow location as required. Issue the MOVE MEDIA
command to move media from the library to the overflow location. For
example, to move all full volumes in the specified storage pool out of the
library:
move media * stgpool=archivepool
All full volumes are checked out of the library, and Tivoli Storage Manager
records the location of the volumes as Room2948.
Use the DAYS parameter to specify the number of days that must elapse before
the volume is eligible for processing by the MOVE MEDIA command.
3. Check in new scratch volumes (if required). See “Checking media into
automated library devices” on page 177. If a volume has an entry in volume
history, you cannot check it in as a scratch volume.
4. Identify the empty scratch tapes in the overflow location. For example, enter
these commands:
query media * stg=* whereovflocation=Room2948 wherestatus=empty
move media * stg=* wherestate=mountablenotinlib wherestatus=empty
cmd="checkin libvol autolib &vol status=scratch"
cmdfilename=\storage\move\media\checkin.vols
5. Check in volumes from the overflow area when Tivoli Storage Manager
requests them. Operators must check volumes in from the overflow area when
Tivoli Storage Manager needs them. Tivoli Storage Manager issues mount
requests that include the location of the volumes.
Operators can locate volumes in an overflow location by issuing the QUERY
MEDIA command. This command can also be used to generate commands. For
example, you can issue a QUERY MEDIA command to list the volumes in the
overflow location, and at the same time generate the commands to check those
volumes into the library. For example, enter this command:
query media format=cmd stgpool=archivepool whereovflocation=Room2948
cmd="checkin libvol autolib &vol status=private"
cmdfilename="\storage\move\media\checkin.vols"
Use the DAYS parameter to specify the number of days that must elapse before
the volumes are eligible for processing by the QUERY MEDIA command.
The file that contains the generated commands can be run using the Tivoli
Storage Manager MACRO command. For this example, the file may look like
this:
checkin libvol autolib TAPE13 status=private
checkin libvol autolib TAPE19 status=private
Modifying volume access modes
Occasionally, you might need to manipulate the access mode for volumes, for
example, when removing partially-written volumes from or returning them to
libraries.
To change the access mode of a volume, issue the UPDATE VOLUME command,
specifying ACCESS=UNAVAILABLE.
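For example, to make a hypothetical volume named VOL005 unavailable, you might issue:
update volume vol005 access=unavailable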
If you want to make volumes unavailable in order to send the data they contain
offsite for safekeeping, consider using copy storage pools or active-data pools
instead. You can back up primary storage pools to a copy storage pool and then
send the copy storage pool volumes offsite. You can also copy active versions of
client backup data to active-data pools, and then send the volumes offsite. You can
track copy storage pool volumes and active-data pool volumes by changing their
access mode to offsite, and updating the volume history to identify their location.
For more information, see “Backing up storage pools” on page 774.
Shared libraries
Shared libraries are logical libraries that are represented physically by SCSI, 349X,
or ACSLS libraries. The Tivoli Storage Manager server is configured as a library
manager and controls the physical library. Tivoli Storage Manager servers using
the SHARED library type are library clients to the library manager server.
The library client contacts the library manager when the library manager starts
and the storage device initializes, or after a library manager is defined to a library
client. The library client confirms that the contacted server is the library manager
for the named library device. The library client also compares drive definitions
with the library manager for consistency. The library client contacts the library
manager for each of the following operations:
Volume Mount
A library client sends a request to the library manager for access to a
particular volume in the shared library device. For a scratch volume, the
library client does not specify a volume name. If the library manager
cannot access the requested volume, or if scratch volumes are not available,
the library manager denies the mount request. If the mount is successful,
the library manager returns the name of the drive where the volume is
mounted.
Volume Release (free to scratch)
When a library client no longer needs to access a volume, it notifies the
library manager that the volume should be returned to scratch. The library
manager’s database is updated with the volume’s new location. The
volume is deleted from the volume inventory of the library client.
Table 16 shows the interaction between library clients and the library manager in
processing Tivoli Storage Manager operations.
Table 16. How SAN-enabled servers process Tivoli Storage Manager operations
Query library volumes (QUERY LIBVOLUME)
   Library manager: Displays the volumes that are checked into the library.
   For private volumes, the owner server is also displayed.
   Library client: Not applicable.
Check in and check out library volumes (CHECKIN LIBVOLUME, CHECKOUT LIBVOLUME)
   Library manager: Performs the commands to the library device.
   Library client: Not applicable. When a checkin operation must be performed
   because of a client restore, a request is sent to the library manager server.
Audit library inventory (AUDIT LIBRARY)
   Library manager: Performs the inventory synchronization with the library
   device.
   Library client: Performs the inventory synchronization with the library
   manager server.
Label a library volume (LABEL LIBVOLUME)
   Library manager: Performs the labeling and checkin of media.
   Library client: Not applicable.
Dismount a volume (DISMOUNT VOLUME)
   Library manager: Sends the request to the library device.
   Library client: Requests that the library manager server perform the
   operation.
Query a volume (QUERY VOLUME)
   Library manager: Checks whether the volume is owned by the requesting
   library client server and checks whether the volume is in the library device.
   Library client: Requests that the library manager server perform the
   operation.
Category numbers for IBM 3494 libraries
Category numbers for IBM 3494 Tape Library Dataservers identify volumes that
are used for the same purpose or application. To avoid data loss, ensure that each
application sharing the library uses unique category numbers.
A 3494 library has an intelligent control unit that tracks the category number of
each volume in the volume inventory. The category numbers are useful when
multiple systems share the resources of a single library. Typically, a software
application that uses a 3494 uses only volumes in categories that are reserved for
that application.
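For example, when defining a 3494 library you can reserve category numbers for the exclusive use of Tivoli Storage Manager. The library name and category values below are illustrations only:
define library my3494 libtype=349x privatecategory=400 scratchcategory=600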
Media reuse in automated libraries
Reusing media in automated libraries is essentially the same as reusing media in
manual libraries except that less intervention is required for automated devices
than for manual devices.
You can set up expiration processing and reclamation processing and tune the
media rotation to achieve the desired results.
v Setting up expiration processing
Expiration processing is the same, regardless of the type of device and media on
which backups are stored. See “Running expiration processing to delete expired
files” on page 490.
v Setting up reclamation processing
For a storage pool associated with a library that has more than one drive, the
reclaimed data is moved to other volumes in the same storage pool. See
“Reclaiming space in sequential-access storage pools” on page 350.
v Returning reclaimed media to the storage pool
Most media can be returned to a storage pool after they have been reclaimed,
but media containing database backups and database export data require an
additional step. For these volumes, you must issue the DELETE VOLHISTORY
command or the UPDATE LIBVOLUME command to change the status of the
volume.
When Tivoli Storage Manager backs up the database or exports server
information, Tivoli Storage Manager records information about the volumes used
for these operations in the volume history file. Volumes that are tracked in the
volume history file require the administrator to delete the volume information
from the volume history file. The volume history file is a key component of
server recovery and is discussed in detail in Chapter 24, “Protecting and
recovering your server,” on page 769.
Tip: If your server uses the disaster recovery manager function, the volume
information is automatically deleted during MOVE DRMEDIA command
processing. For additional information about DRM, see Chapter 25, “Using
disaster recovery manager,” on page 815.
v Ensuring media are available
See “Tape rotation” on page 193.
Labeling media for manual libraries
Media must be inserted into a drive and labeled before they can be used. You can
label tapes and optical disks for use in a manual library by inserting the media
into the drive and invoking the Labeling Wizard.
Labels must meet the following criteria:
v Six characters or less
v No embedded blanks or periods
v Valid when used as a file name on the media
Note: You must label CD-ROM, Zip, or Jaz volumes with the device
manufacturer’s or Windows utilities because Tivoli Storage Manager does not
provide utilities to format or label these media. The operating system utilities
include the Disk Administrator program (a graphical user interface) and the label
command. See “Labeling media” on page 175.
To label tapes and optical disks for use in a manual library:
1. Insert the media into the drive.
2. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
3. Click Wizards, then double click Media Labeling in the right pane. The Media
Labeling Wizard appears.
4. Click Manual Media Labeling in the right pane of the Tivoli Storage Manager
Server Utilities.
5. Click the Start button. The Tivoli Storage Manager Manual Device Media
Labeling Wizard appears.
6. Follow the instructions in the wizard.
7. After labeling a tape for a manual library, place the tape on the shelf. See
“Labeling volumes using commands” on page 195.
Media management in manual libraries
Media for manually operated devices are stored outside of the device (for example,
in a file cabinet). Operators must therefore mount and dismount media manually.
You can manage media with Windows Removable Storage Manager (RSM).
However, unless device sharing across storage management applications is
required, using a media manager for stand-alone devices could introduce
unjustifiable administrative overhead.
Task                                                  Required Privilege Class
Modifying the status of manual device volumes         System or unrestricted storage
Removing volumes from a manual library device         Not applicable
Returning volumes to a manual library device          Not applicable
Adding more volumes to a manual library device        Not applicable
Reusing media in manual libraries                     Not applicable
Modifying the status of manual device volumes
You can modify the status of volumes, regardless of the type of device that
uses them, by issuing the UPDATE LIBVOLUME command. The command
allows you to assign a private status to scratch volumes or to assign a
scratch status to private volumes. The private volumes cannot be
partially-written volumes containing active data.
Removing volumes from a manual library device
You can remove volumes from manual devices at any time because the
server maintains no volume inventory for manually-operated devices. No
checkout processing is required for manual devices.
Returning volumes to a manual library device
You can return volumes to manual devices at any time because the server
maintains no volume inventory for manual libraries. No check in
processing is required for manual libraries.
Adding more volumes to a manual library device
See “Tape rotation” on page 193.
Reusing media in manual libraries
Reusing media in manual libraries is essentially the same as reusing media
in automated libraries except that more human intervention is required for
manual devices than for automated devices. See “Media reuse in
automated libraries” on page 188.
Tivoli Storage Manager server requests
Tivoli Storage Manager displays requests and status messages to all administrative
clients that are started in console mode. These request messages often have a time
limit. If the request is not fulfilled within the time limit, the operation times out
and fails.
For manual libraries, Tivoli Storage Manager detects when there is a cartridge
loaded in a drive, so no operator reply is necessary. For automated libraries, the
CHECKIN LIBVOLUME and LABEL LIBVOLUME commands involve inserting
cartridges into slots and, depending on the value of the WAITTIME parameter,
issuing a reply message. (If the value of the parameter is zero, no reply is
required.) The CHECKOUT LIBVOLUME command involves inserting cartridges
into slots and, in all cases, issuing a reply message.
Task                                                                             Required Privilege Class
“Starting the administrative client as a server console monitor”                Any Administrator
“Displaying information about volumes that are currently mounted” on page 191   Any Administrator
“Displaying information about mount requests that are pending” on page 191      Operator
“Replying to mount requests” on page 191                                        Operator
“Canceling mount requests” on page 191                                          Operator
“Responding to requests for volume checkin” on page 192                         System or Unrestricted Storage
“Dismounting idle volumes” on page 192                                          Operator
“Dismounting volumes from stand-alone removable-file devices” on page 193       Operator
Starting the administrative client as a server console monitor
You can display mount requests and status messages by starting the administrative
client in console mode. However, if the server is started as a Windows service,
which is recommended, a server console is required to see messages.
To start the administrative client as a server console monitor:
1. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
2. Expand the server you want to work with and then expand Reports.
3. Click Monitor. A server console monitor opens in the right pane.
4. Click Start.
To start a server console monitor from an operating system command line, enter
this command:
> dsmadmc -consolemode
Displaying information about volumes that are currently
mounted
To display the volumes currently mounted by Tivoli Storage Manager, issue the
QUERY MOUNT command. The information lists mounted volumes, the drives on
which they are mounted, and whether the volumes are currently in use.
Displaying information about mount requests that are pending
You can display information about pending mount requests either by checking the
mount message queue on a server console monitor or by issuing the QUERY
REQUEST command.
Tivoli Storage Manager displays a message similar to the following:
ANR8352I Requests outstanding:
ANR8326I 001: Mount DLT volume VOL1 R/W in drive D1 (MT4) of library
MYMANLIB within 60 minutes.
Tivoli Storage Manager displays a three-digit request ID number as part of the
message. The request ID number can also be obtained by issuing a QUERY
REQUEST command. If the request requires the operator to provide a device to be
used for the mount, the second parameter for this command is a device name.
Replying to mount requests
Unless the specified wait time is zero, you must issue a REPLY command in
response to mount requests from automated libraries. Manual libraries do not
require a reply because Tivoli Storage Manager detects when there is a cartridge
loaded in the drive.
If a wait time greater than zero was specified, the server waits the specified
number of minutes before resuming processing.
The first parameter for the REPLY command is the three-digit request ID number
that indicates which of the pending mount requests has been completed. For
example, an operator can issue the following command to respond to request 001
in the previous code sample.
reply 001
Canceling mount requests
If a mount request for a manual library cannot be satisfied, you can issue the
CANCEL REQUEST command. Tivoli Storage Manager cancels the request and the
operation that required the volume fails.
The CANCEL REQUEST command must include the request identification number.
This number is included in the request message, or it can be obtained by issuing a
QUERY REQUEST command, as described in “Displaying information about
mount requests that are pending.”
Canceling mount requests for volumes that were removed from
library devices
You might occasionally remove media from a library with the intention of storing
or destroying the media. If, after the media have been removed, Tivoli Storage
Manager requests the volumes, then you can cancel the request with the CANCEL
REQUEST command.
To ensure that the server does not try to mount the requested volume again,
specify the PERMANENT parameter to mark the volume as unavailable.
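For example, to cancel request 001 and mark the requested volume as unavailable so that the server does not request it again, you might issue:
cancel request 001 permanent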
For most of the requests associated with automated libraries, the Tivoli Storage
Manager CANCEL REQUEST command is not accepted by the server. An operator
must perform a hardware or system action to cancel the requested mount.
Responding to requests for volume checkin
The procedures for responding to requests for volume checkin vary, depending on
whether the requested volume is available or unavailable.
Operators may occasionally need to check additional volumes into an automated
library, for example, when Tivoli Storage Manager cannot locate a volume it
requires from the volume inventory. If the requested volume is available, place the
volume in the device and check in the volume. See “Checking media into
automated library devices” on page 177.
If the volume requested is unavailable (lost or destroyed):
1. Update the access mode of the volume to UNAVAILABLE by using the
UPDATE VOLUME command.
2. Cancel the server’s request for checkin by using the CANCEL REQUEST
command. (Do not cancel the client process that caused the request.) To get the
ID of the request to cancel, issue the QUERY REQUEST command.
If operators do not respond to checkin requests within the mount-wait period,
Tivoli Storage Manager marks the volume as unavailable. The mount-wait period
is set in the device class of the storage pool.
Dismounting idle volumes
After a volume becomes idle, it remains mounted for a time specified by the
mount retention parameter for the device class.
To explicitly request that an idle volume be dismounted, use the DISMOUNT
VOLUME command.
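For example, to dismount an idle volume named VOL001 before its mount retention period expires, you might issue:
dismount volume vol001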
Using mount retention can reduce the access time if volumes are used repeatedly.
For information about setting mount retention times, see “Controlling the amount
of time that a volume remains mounted” on page 256.
Dismounting volumes from stand-alone removable-file devices
For manual libraries, operators must respond to messages that require media (for
example, JAZ, DVD, and CD media) to be manually ejected from removable file
devices.
Tivoli Storage Manager checks the drive every seven seconds to see if the medium
has been ejected. A volume dismount is not considered complete until Tivoli
Storage Manager detects that the medium has been ejected from the drive or that a
different medium has been inserted into the drive.
Obtaining tape alert messages
Tape alert messages are generated by tape and library devices to report hardware
errors. These messages help to determine problems that are not related to the IBM
Tivoli Storage Manager server.
A log page is created and can be retrieved at any given time or at a specific time
such as when a drive is dismounted.
There are three severity levels of tape alert messages:
v Informational (for example, you may have tried to load a cartridge type that is
not supported)
v Warning (for example, a hardware failure is predicted)
v Critical (for example, there is a problem with the tape and your data is at risk)
Tape alert messages are turned off by default. To set tape alert messages to ON,
issue the SET TAPEALERTMSG command. To query tape alert messages, issue the
QUERY TAPEALERTMSG command.
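For example, a minimal sequence to enable tape alert messages and then verify the setting:
set tapealertmsg on
query tapealertmsg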
Tape rotation
Over time, media age, and certain backup data might no longer be needed. You
can reclaim useful data from media and then reuse the media themselves.
Tivoli Storage Manager policy determines how many backup versions are retained
and how long they are retained. See “Basic policy planning” on page 455.
Deleting data - expiration processing
Expiration processing deletes data that is no longer valid either because it
exceeds the retention specifications in policy or because users or
administrators have deleted the active versions of the data. See “File
expiration and expiration processing” on page 458 and “Running
expiration processing to delete expired files” on page 490.
Reusing media - reclamation processing
Data on tapes may expire, move, or be deleted. Reclamation processing
consolidates any unexpired data by moving it from multiple volumes onto
fewer volumes. The media can then be returned to the storage pool and
reused.
You can set a reclamation threshold that allows Tivoli Storage Manager to
reclaim volumes whose valid data drops below a threshold. The threshold
is a percentage of unused space on the volume and is set for each storage
pool. The amount of data on the volume and the reclamation threshold for
the storage pool affects when the volume is reclaimed. See “Reclaiming
space in sequential-access storage pools” on page 350.
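For example, one way to set the threshold, assuming a hypothetical
sequential-access storage pool named TAPEPOOL, is to make volumes eligible for
reclamation when at least 60 percent of their space is reclaimable:
update stgpool tapepool reclaim=60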
Determining when media have reached end of life
You can use Tivoli Storage Manager to display statistics about volumes,
including the number of write operations performed on the media and the
number of write errors. Tivoli Storage Manager overwrites this statistical
data for media initially defined as scratch volumes each time the media are
reclaimed. For media initially defined as private volumes, Tivoli Storage
Manager maintains this statistical data, even as the volume is reclaimed.
You can compare the information with the number of write operations and
write errors recommended by the manufacturer.
Reclaim any valid data from volumes that have reached end of life. If the
volumes are in automated libraries, check them out of the volume
inventory. Delete private volumes from the database. See “Reclaiming space in
sequential-access storage pools” on page 350.
Ensuring media are available for the tape rotation
Over time, the demand for volumes may cause the storage pool to run out
of space. You can set the maximum number of scratch volumes high
enough to meet demand by doing one or both of the following:
v Increase the maximum number of scratch volumes by updating the
storage pool definition. Label and check in new volumes to be used as
scratch volumes if needed.
v Make volumes available for reuse by running expiration processing and
reclamation, to consolidate data onto fewer volumes. See “Media reuse
in automated libraries” on page 188 and “Media management in manual
libraries” on page 189.
For automated libraries, see “Setting up volume overflow locations for
automated libraries” on page 185.
Write-once-read-many (WORM) drives can waste media when Tivoli
Storage Manager cancels transactions because volumes are not available to
complete the backup. Once Tivoli Storage Manager writes to WORM
volumes, the space on the volumes cannot be reused, even if the
transactions are canceled (for example, if a backup is canceled because of a
shortage of media in the device).
Large files can cause even greater waste. For example, consider a client
backing up a 12 GB file onto WORM platters that hold 2.6 GB each. If the
backup requires five platters and only four platters are available, Tivoli
Storage Manager cancels the backup and the four volumes that were
written to cannot be reused.
To minimize wasted WORM media:
1. Ensure that the maximum number of scratch volumes for the device
storage pool is at least equal to the number of storage slots in the
library.
2. Check enough volumes into the device’s volume inventory for the
expected load.
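For example, the following sketch assumes a hypothetical WORM storage pool
named WORMPOOL and a library named WORMLIB with 30 storage slots:
update stgpool wormpool maxscratch=30
checkin libvolume wormlib search=yes status=scratch checklabel=barcode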
If most backups are small files, controlling the transaction size can affect
how WORM platters are used. Smaller transactions mean that less space is
wasted if a transaction such as a backup must be canceled. Transaction size
is controlled by a server option, TXNGROUPMAX, and a client option,
TXNBYTELIMIT.
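For example, you might set the following values; both are illustrative only, and
appropriate values depend on your workload:
In the server options file (dsmserv.opt): txngroupmax 256
In the client options file (dsm.opt): txnbytelimit 25600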
Labeling volumes using commands
All media require labels. You can label volumes with the LABEL LIBVOLUME
command.
The following example demonstrates using the LABEL LIBVOLUME command to
label tapes for a manual library and for an automated library. Assume the
automated device is attached to SCSI address 4, and the manual device is attached
to SCSI address 5. You want to insert media into the device’s entry/exit ports and
you want the device’s bar code reader to read bar code labels and overwrite
existing labels with the information on the bar code label.
Automated library device:
label libvolume storagelibname overwrite=yes labelsource=barcode
Manual library device:
label libvolume storagelibname volname
Tip: To automatically label tape volumes in SCSI-type libraries, you can use the
AUTOLABEL parameter on the DEFINE LIBRARY and UPDATE LIBRARY
commands. Using this parameter eliminates the need to pre-label a set of tapes. It
is also more efficient than using the LABEL LIBVOLUME command, which
requires you to mount volumes separately. If you use the AUTOLABEL parameter,
you must check in tapes by specifying CHECKLABEL=BARCODE on the
CHECKIN LIBVOLUME command.
Using removable media managers
You can use external removable media management software to help manage
Tivoli Storage Manager tape and optical media. Removable media managers
provide extended media control and automation to Tivoli Storage Manager, which
primarily specializes in managing data.
One of the supported removable media managers is Removable Storage Manager
(RSM). RSM includes a Microsoft Management Console snap-in that provides
a common interface for tracking removable storage media and managing storage
devices.
The principal value of using these media managers with Tivoli Storage Manager is
the improved capability to share multiple devices with other applications. RSM
requires some additional administrative overhead, which may be justified by the
savings from sharing expensive hardware like automated libraries.
Tivoli Storage Manager also provides a programming interface that allows you to
use a variety of external programs to control Tivoli Storage Manager media. See
Appendix C, “External media management interface description,” on page 913 for
a
complete description of this interface. See “Using external media managers to
control media” on page 199 for Tivoli Storage Manager setup information.
Tivoli Storage Manager media-manager support
While Tivoli Storage Manager tracks and manages client data, the removable
media manager labels, catalogs, and tracks physical volumes. The media manager
also controls libraries, drives, slots, and doors.
Tivoli Storage Manager works cooperatively with removable media managers to
control storage. Media managers help Tivoli Storage Manager make better use of
media resources. To use a media manager with Tivoli Storage Manager, you must
define a Tivoli Storage Manager library that represents the media manager.
Defining these libraries is similar to defining any other type of library to Tivoli
Storage Manager, except that in this case, the library does not represent a physical
device. Different library types are required for RSM control and External Media
Management Interface control.
RSM
RSM library definition is not device-based, but is instead based on media
type. When you define the library, a media type is specified. The media
manager will assume control of all volumes that match the specified media
type when the volumes are injected into a library controlled by the media
manager. See “RSM device control” on page 197.
Note: For specific information about installing and configuring RSM, see
the Windows online help.
External Media Management Interface
The External Media Management Interface uses the EXTERNAL library
type. The EXTERNAL library type does not map to a device or media type,
but instead specifies the installed path of the external media manager. See
“Using external media managers to control media” on page 199.
Setting up Tivoli Storage Manager to use RSM
Administrators set up media management when they define RSM libraries to Tivoli
Storage Manager. Libraries to be controlled by RSM must also be defined to the
Windows RSM service. Normally, this will occur at system boot time when RSM
will claim all supported removable media devices.
The following tasks are required to set up RSM media management:
Task                                                  Required Privilege Class
“RSM device control” on page 197                      System
“Defining RSM libraries using the device
configuration wizard” on page 197                     System
“Adding media to RSM libraries” on page 198           System
RSM device control
When the RSM service is started, it automatically takes control of all eligible
storage devices.
For a device to be eligible for RSM control:
v A Windows driver must be installed for the device, and
v The Tivoli Storage Manager device driver cannot have already claimed the
device
RSM relies on native device drivers for drive support. To use these media
managers with Tivoli Storage Manager, you must therefore explicitly control
which device driver acquires each device. You must either disable the Tivoli Storage
Manager device driver or add devices to be controlled by the media manager to
the Tivoli Storage Manager Device Exclude List before starting the RSM service.
See “Selecting a device driver” on page 114.
Defining RSM libraries using the device configuration wizard
As a best practice, use the Tivoli Storage Manager Device Configuration Wizard to
define RSM libraries.
To define an RSM library:
1. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
2. Click Wizards, then double click Device Configuration in the right pane. The
Device Configuration Wizard appears.
3. Follow the instructions in the wizard.
This procedure creates the following Tivoli Storage Manager storage objects:
v An RSM library
v An associated device class with a device type of GENERICTAPE
v An associated storage pool
Media pools:
An RSM media pool is analogous to a directory or folder. The names of the
volumes in the pool are listed in the folder. The volumes contain Tivoli Storage
Manager data. RSM retains information that maps physical media to devices.
When you create and configure an RSM library, typically with the Tivoli Storage
Manager Device Configuration Wizard, Tivoli Storage Manager directs RSM to
create:
v A top-level media pool called IBM Tivoli Storage Manager
v A second-level Tivoli Storage Manager server instance pool
Under the IBM Tivoli Storage Manager media pool, Tivoli Storage Manager creates
two media pools that are media-type specific: the first is associated with the
automated library, and the second is an import media pool.
Adding media to RSM libraries
To add media to an RSM-controlled library, you must activate the Microsoft
Management Console (MMC) snap-in for RSM, open Removable Storage, and then
request door access. Normally, the library door is locked by RSM.
To unlock the library door:
1. On RSM, click Start → Programs → Administrative Tools → Computer
Management.
2. In the console tree under Storage, double-click Removable Storage.
To request door access:
1. Double-click Physical Location.
2. Click on the applicable library, and then select Door Access.
3. When prompted, open the door.
You can use the library door to insert and remove media. After media are injected
and the library door is closed, RSM automatically inventories the device. If the
new media matches the media type for a defined RSM library, RSM labels the
media and adds it to one of the following media pools in that library:
Free Pool for RSM
This pool is used to track previously unlabeled media. Free pool media are
assumed to be empty or to contain invalid data. Media in free pools are
available for use by any application. You must provide an adequate supply
of media in the free or scratch pool to satisfy mount requests. When Tivoli
Storage Manager needs media, RSM obtains it from the scratch pool. RSM
manages the media from that point.
Import Pool
This pool is used to track previously labeled media that are recognized by a
particular application in the RSM-controlled storage management system.
Media in import pools can be allocated by any application, including the
application that originally labeled them. To safeguard data, it is recommended
that you move these volumes to the application-specific import pool.
Unrecognized Pool
This pool is used to track previously labeled media that are not recognized
by any application in the RSM-controlled storage management system.
Unrecognized pool volumes cannot be allocated by any application, and
require administrator intervention to correct labeling or program errors.
Normally, volumes in the Unrecognized Pool would be moved to the Free
Pool for later application use.
Note: You can use the Properties dialog to view the attributes of any volume in
the Free, Import, or Unrecognized pools.
Setting up RSM libraries using commands
To set up an RSM library, you need to define the library, define a device class for
the library, and define a storage pool for the device class.
The following example defines an RSM library for an 8-mm autochanger device
containing two drives:
1. Define a library for the RSM-managed device. For example:
define library astro libtype=rsm mediatype="8mm AME"
Tip:
v Specify the library type as libtype=rsm for RSM.
v Use the RSM documentation to determine the value to use for the media
type.
v Enclose the media type within quotation marks if it contains embedded
blanks.
2. Define a device class for the RSM library with a device type of GENERICTAPE.
For example:
define devclass 8MMCLASS1 devtype=generictape library=astro
format=drive mountretention=5 mountwait=10 mountlimit=2
The MOUNTLIMIT parameter specifies the number of drives in the library.
Tip: For storage environments in which devices are shared across applications,
MOUNTRETENTION and MOUNTWAIT settings must be carefully considered.
These parameters determine how long an idle volume remains in a drive and
the timeout value for mount requests. Because RSM will not dismount an
allocated drive to satisfy pending requests, you must tune these parameters to
satisfy competing mount requests while maintaining optimal system
performance.
3. Define a storage pool for the device class. For example:
define stgpool 8MMPOOL1 8MMCLASS1 maxscratch=500
Using external media managers to control media
The External Media Management API lets you use external media manager
software to control your media.
For details about the interface, see Appendix C, “External media management
interface description,” on page 913.
The following sample procedure describes how to set up an 8 mm automated tape
library to use the External Media Management Interface with a media manager.
You cannot use the Device Configuration Wizard to set up an external library.
1. Set up the media manager to interface with Tivoli Storage Manager. For more
information, see Appendix C, “External media management interface
description,” on page 913.
2. Define a library whose library type is EXTERNAL, and define a path from the
server to the media manager executable. For example:
define library medman libtype=external
define path server1 medman srctype=server desttype=library
externalmanager=c:\server\mediamanager.exe
3. Define a device class for the EXTERNAL library with a device type of 8MM.
For example:
define devclass class1 devtype=8mm library=medman mountretention=5 mountlimit=2
The MOUNTLIMIT parameter specifies the number of drives in the library. The
MOUNTRETENTION parameter determines how long an idle volume remains
in a drive. If the library is shared among applications, this setting is especially
important. Some media managers will not dismount an allocated drive to
satisfy pending requests. You should set the mount retention period to balance
competing mount requests and system performance.
4. Define a storage pool for the device class. For example:
define stgpool pool1 class1 maxscratch=500
5. Associate client nodes with the new storage pool by defining a new policy
domain or by updating an existing policy domain.
Requirements for managing media in external libraries
There are certain unique requirements for managing media in external libraries.
When managing media in external libraries, consider the following guidelines:
v You do not need to check in and label media in external libraries. Those media
are not tracked in the Tivoli Storage Manager volume inventory, and the media
manager handles labeling. However, you must ensure that an adequate supply
of blank media are available.
v If you are using disaster recovery manager, you can issue the MOVE DRMEDIA
command to issue an operator request to remove the media from the library. For
more information, see Chapter 25, “Using disaster recovery manager,” on page
815.
v You should not migrate media from a Tivoli Storage Manager SCSI library to an
external library. Instead, use external media management on a new Tivoli
Storage Manager configuration or when defining externally managed devices to
Tivoli Storage Manager.
v Deleting externally managed storage pools requires that you delete any volumes
associated with the Tivoli Storage Manager library. When the library is deleted,
the externally managed storage pool associated with that library is also deleted.
For more information, see “Deleting storage pool volumes that contain data” on
page 395.
Removing devices from media-manager control
Procedures for removing Tivoli Storage Manager devices from media-manager
control vary, depending on the media manager.
To remove RSM-managed devices from media manager control, modify the device
configuration to allow the ADSMSCSI device driver to claim the devices before
RSM. For more information, see “Selecting a device driver” on page 114. For
information about removing devices from other external media managers, refer to
the specific management product’s documentation set.
Troubleshooting database errors
Error conditions can cause the Tivoli Storage Manager volume database and the
media manager’s volume database to become unsynchronized.
The most likely symptom of this problem is that the volumes in the media
manager’s database are not known to Tivoli Storage Manager, and thus not
available for use. Verify the Tivoli Storage Manager volume list and any disaster
recovery media. If volumes not identified to Tivoli Storage Manager are found, use
the media manager interface to deallocate and delete the volumes.
Managing libraries
Using Tivoli Storage Manager commands, you can query and delete libraries. You
can also update automated libraries.
Obtaining information about libraries
Standard and detailed reports provide information about libraries.
Task                                   Required Privilege Class
Request information about libraries    Any administrator
To obtain information about libraries, use the QUERY LIBRARY command. The
default is a standard report. For example, to display information about all libraries
in a standard report, issue the following command:
query library
The following shows an example of output from this command:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
MANLIB    MANUAL
EXB       SCSI
3494LIB   349X      300        301        302
Updating automated libraries
You can update an existing automated library by issuing the UPDATE LIBRARY
command. To update the device names of a library, issue the UPDATE PATH
command. You cannot update a MANUAL library.
Task                Required Privilege Class
Update libraries    System or unrestricted storage
If your system or device is reconfigured, and the device name changes, you may
need to update the device name. The examples below show how you can issue the
UPDATE LIBRARY and UPDATE PATH commands for the following library types:
v SCSI
v 349X
v ACSLS
v External
Examples:
v SCSI Library
Update the path from SERVER1 to a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=lb4.0.0.0
Update the definition of a SCSI library named SCSILIB defined to a library client
so that a new library manager is specified:
update library scsilib primarylibmanager=server2
v 349X Library
Update the path from SERVER1 to an IBM 3494 library named 3494LIB with
new device names.
update path server1 3494lib srctype=server desttype=library
device=lb2.0.0.0,lb3.0.0.0,lb4.0.0.0
Update the definition of an IBM 3494 library named 3494LIB defined to a library
client so that a new library manager is specified:
update library 3494lib primarylibmanager=server2
v ACSLS Library
Update an automated cartridge system library software (ACSLS) library named
ACSLSLIB with a new ID number.
update library acslslib acsid=1
v External Library
Update an external library named EXTLIB with a new media manager path
name.
update path server1 extlib srctype=server desttype=library
externalmanager=c:\server\mediamanager.exe
Update an EXTERNAL library named EXTLIB in a LAN-free configuration so
that the server uses the value set for mount retention in the device class
associated with the library:
update library extlib obeymountretention=yes
Deleting libraries
Before you delete a library with the DELETE LIBRARY command, you must delete
all of the drives and drive paths that have been defined as part of the library and
delete the path to the library.
Task                Required Privilege Class
Delete libraries    System or unrestricted storage
For information about deleting drives, see “Deleting drives” on page 214.
For example, suppose that you want to delete a library named 8MMLIB1. After
deleting all of the drives defined as part of this library and the path to the library,
issue the following command to delete the library itself:
delete library 8mmlib1
Managing drives
You can query, update, and delete drives.
Requesting information about drives
You can request information about drives by using the QUERY DRIVE command.
Task                                Required Privilege Class
Request information about drives    Any administrator
The QUERY DRIVE command accepts wildcard characters for both a library name
and a drive name. See the Administrator’s Reference for information about using
wildcard characters.
For example, to query all drives associated with your server, issue the following
command:
query drive
The following shows an example of the results of this command.
Library    Drive     Device     On Line
Name       Name      Type
--------   -------   --------   -------
MANLIB     8MM.0     8MM        Yes
AUTOLIB    8MM.2     8MM        Yes
Updating drives
You can change the attributes of a drive by issuing the UPDATE DRIVE command.
Task             Required Privilege Class
Update drives    System or unrestricted storage
You can change the following attributes of a drive by issuing the UPDATE DRIVE
command.
v The element address, if the drive resides in a SCSI library
v The ID of a drive in an automated cartridge system library software (ACSLS)
library
v The cleaning frequency
v Whether the drive is online or offline
For example, to change the element address of a drive named DRIVE3 to 119, issue
the following command:
update drive auto drive3 element=119
Note: You cannot change the element number if a drive is in use. If a drive has a
volume mounted, but the volume is idle, it can be explicitly dismounted as
described in “Dismounting idle volumes” on page 192.
If you are reconfiguring your system, you can change the device name of a drive
by issuing the UPDATE PATH command. For example, to change the device name
of a drive named DRIVE3, issue the following command:
update path server1 drive3 srctype=server desttype=drive library=scsilib
device=mt3.0.0.0
You can change a drive to offline status while the drive is in use. Tivoli Storage
Manager will finish with the current tape in the drive, and then not use the drive
anymore. By changing a drive to offline, you can drain work off of a drive.
However, if the tape that had been in use was part of a series of tapes for a single
transaction, the drive will not be available to complete the series. If no other drives
are available, the transaction may fail. If all drives in a library are made offline,
any attempts by Tivoli Storage Manager to write to the storage pool associated
with the library will fail.
The ONLINE parameter specifies the value of the drive’s online state, even if the
drive is in use. ONLINE=YES indicates that the drive is available for use (online).
ONLINE=NO indicates that the drive is not available for use (offline). This
parameter is optional. Do not specify other optional parameters along with
ONLINE=YES or ONLINE=NO. If you do, the drive will not be updated, and the
command will fail when the drive is in use. This command can be issued when the
drive is involved in an active process or session, but this is not recommended.
The ONLINE parameter allows drives to be taken offline and used for another
activity, such as maintenance. If you make the drive offline while it is in use, the
drive will be marked offline. However, the mounted volume will complete its
current process. If this volume was part of a series of volumes for a given
transaction, the drive will no longer be available to complete mounting the series.
If no other drives are available, the active process may fail. The updated state will
be retained even when the server is halted and brought up again. If a drive is
marked offline when the server is brought up, a warning is issued noting that the
drive must be manually brought online. If all the drives in a library are updated to
be offline, processes requiring a library mount point will fail, rather than queue up
for one.
Using drive encryption
You can use drive encryption to protect tapes that contain critical or sensitive data
(for example, tapes that contain sensitive financial information). Drive encryption
is particularly beneficial for tapes that you move from the Tivoli Storage Manager
server environment to an offsite location.
Tivoli Storage Manager supports drive encryption for 3592 generation 2 and
generation 3 drives, IBM LTO-4 drives, and HP LTO-4 drives. Drives must be
able to recognize the correct format. The following encryption methods are
supported:
Table 17. Encryption methods supported

Drive               Application method   Library method                 System method
3592 generation 3   Yes                  Yes                            Yes
3592 generation 2   Yes                  Yes                            Yes
IBM LTO-4           Yes                  Yes, but only if your system   Yes
                                         hardware (for example, 3584)
                                         supports it
HP LTO-4            Yes                  No                             No
To enable drive encryption with IBM LTO-4, you must have the IBM RMSS
Ultrium device driver installed. SCSI drives do not support IBM LTO-4 encryption.
To enable encryption with HP LTO-4, you must have the Tivoli Storage Manager
device driver installed.
To enable drive encryption, specify the DRIVEENCRYPTION parameter on the
DEFINE DEVCLASS and UPDATE DEVCLASS commands for the 3592 and LTO
device types. For details about this parameter on 3592, see “Encrypting data with
3592 generation 2 and generation 3 drives” on page 259. For details about this
parameter on LTO, see “Encrypting data using LTO generation 4 drives” on page
266.
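For example, the following is a sketch of enabling the Application method, in
which the server manages encryption keys, for an LTO-4 device class; the class
and library names are assumptions:
define devclass lto4class devtype=lto format=ultrium4c library=lib1 driveencryption=on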
A library can contain a mixture of drives, some of which support encryption and
some which do not. (For example, a library might contain two LTO-2 drives, two
LTO-3 drives, and two LTO-4 drives.) You can also mix media in a library using,
for example, a mixture of encrypted and non-encrypted device classes having
different tape and drive technologies. However, all LTO-4 drives must support
encryption if Tivoli Storage Manager is to use drive encryption. In addition, all
drives within a logical library must use the same method of encryption. Tivoli
Storage Manager does not support an environment in which some drives use the
Application method and some drives use the Library or System methods of
encryption.
When using encryption-capable drives with a supported encryption method, a new
format will be used to write encrypted data to tapes. If data is written to volumes
using the new format and if the volumes are then returned to scratch, they will
contain labels that are only readable by encryption-enabled drives. To use these
scratch volumes in a drive that is not enabled for encryption, either because the
hardware is not capable of encryption or because the encryption method is set to
NONE, you must relabel the volumes.
For more information on setting up your hardware environment to use drive
encryption, refer to your hardware documentation.
Replacement of tape and optical drives
If you replace a drive in a tape or optical library that is defined to IBM Tivoli
Storage Manager, you must delete the drive and path definitions for the old drive
and define the new drive and path.
Replacing drive and path definitions is required even if you are exchanging one
drive for another of the same type, using the same logical address, physical
address, SCSI ID, and port number. Device alias names can change when you
change your drive connections.
If the new drive is an upgrade that supports a new media format, you might also
need to define a new logical library, device class, and storage pool. Procedures for
setting up policy for a new drive in a multiple-drive library will vary, depending
on the types of drives and media in the library.
Preventing errors caused by media incompatibility
Understanding media compatibility issues can prevent errors. Sometimes a new
drive has a limited ability to use media formats supported by a previous version of
the drive. Often, a new drive can read but not write to the old media.
By default, existing volumes with a status of FILLING will remain in that state
after a drive upgrade. In some cases, you might want to continue using an older
drive to fill these volumes. This will preserve read/write capability for the existing
volumes until they have been reclaimed. If you choose to upgrade all of the drives
in a library, pay attention to the media formats supported by the new hardware.
Unless you are planning to use only the latest media with your new drive, you
will need to be aware of any compatibility issues. For migration instructions, see
“Migrating to upgraded drives” on page 210.
To use a new drive with media it can read but not write to, issue the UPDATE
VOLUME command to set the access for those volumes to read-only. This will
prevent errors caused by read/write incompatibility. For example, a new drive
may eject media written in a density format it does not support as soon as the
media is loaded into the drive. Or a new drive may fail the first write command to
media partially written in a format it does not support.
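For example, one way to protect such volumes, assuming they all reside in a
hypothetical storage pool named OLDPOOL, is:
update volume * access=readonly wherestgpool=oldpool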
When data on the read-only media expires and the volume is reclaimed, replace it
with media that is fully compatible with the new drive. Errors can be generated if
a new drive is unable to correctly calibrate a volume written using an older
format. To avoid this problem, ensure that the original drive is in good working
order and at current microcode levels.
Removing drives
Removing a drive and replacing it with a new one requires deleting the old drive
and path definitions and defining the new drive and path.
To remove a drive:
1. Stop the IBM Tivoli Storage Manager server and shut down the operating
system.
2. Remove the old drive and follow the manufacturer’s instructions to install the
new drive.
3. Restart the operating system and the IBM Tivoli Storage Manager server.
4. Delete the path from the server to the drive. For example:
delete path server1 lib1 srctype=server desttype=drive
5. Delete the drive definition. For example, to delete a drive named DLT1 from a
library device named LIB1, enter:
delete drive lib1 dlt1
6. Define the new drive and path. This procedure will vary, depending on the
configuration of drives in your library. See “Defining new drives” on page 207.
Defining new drives
How you define a new drive depends on several factors, including whether the
new drive is a replacement or an upgrade, whether you plan to use different drive
types within the same library, and whether you plan to use different media types
within the same library.
Replacing drives with others of the same type:
To add a drive that supports the same media formats as the drive it replaces, you
need to define a new drive and path.
For example, to define a new drive and name it DRIVE1 and a path to it from
SERVER1, enter the following commands:
define drive lib1 drive1
define path server1 drive1 srctype=server desttype=drive library=lib1
device=mt3.0.0.1
You can use your existing library, device class, and storage pool definitions.
Upgrading all of the drives in a library that contained only one type of drive:
To upgrade all the drives in a library that contained only one type of drive, you
need to define a new drive and path. You also need to update device class and
storage pool definitions.
You must decide how to manage any new types of media supported by the new
drives. See “Preventing errors caused by media incompatibility” on page 206 for
more information.
The following scenario assumes you already have a library device defined as
follows:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
LIB1      349X      200        201
Define each new drive and path
For example, to define a new drive and name it DRIVE1, enter:
define drive lib1 drive1
define path server1 drive1 srctype=server desttype=drive library=lib1
device=mt3.0.0.1
Update device class and storage pool definitions
v If you plan to use only one type of media in the library, you can use
your existing device class and storage pool definitions.
v If you plan to use media with different capacities in the same library,
you can define separate device classes and storage pools for each type of
media. This will provide accurate capacity reporting for each type of
media.
For example, if you plan to use both 3590E and 3590H tapes, you can
define two device classes:
define devclass 3590E_class devtype=3590 format=drive library=lib1
estcapacity=20g
define devclass 3590H_class devtype=3590 format=drive library=lib1
estcapacity=40g
Note: You must specify FORMAT=DRIVE for the new device classes.
You can then define two storage pools to divide the tapes within the
library:
define stgpool 3590E_pool 3590E_class maxscratch=number_of_3590E_tapes
define stgpool 3590H_pool 3590H_class maxscratch=number_of_3590H_tapes
Finally, you can issue the DEFINE VOLUME command to associate
media with the appropriate storage pool.
Upgrading some of the drives in a library that contained only one type of
drive:
To upgrade some of the drives in a library that contained only one type of drive,
you need to define a separate logical library for each type of drive.
If an automated cartridge system library software (ACSLS), 349X, Manual, or
External library contains only one type of drive and you upgrade only a subset of
those drives, you must define an additional logical library. For SCSI libraries,
upgrading only a subset of drives is not supported if the new drives cannot read
and write in the format of the existing media. If the new drives can only read
some of the media, you must upgrade all of the drives.
The following scenario assumes you already have a library device defined as
follows:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
LIB1      349X      200        201
Define a new logical library and path for each new type of drive
For example, to add a logical library named LIB2 for the same physical
device already defined as LIB1, enter:
define library lib2 libtype=349X privatecategory=300 scratchcategory=301
wormscratchcategory=302
define path server1 lib2 srctype=server desttype=library
device=lb3.0.0.0
Define each new drive and path to the new library
To define a new drive named DRIVE2 to the logical library named LIB2
and a new path to the drive, enter:
define drive lib2 drive2
define path server1 drive1 srctype=server desttype=drive library=lib2
device=mt3.0.0.1
Update device class and storage pool definitions
To define a new device class, enter:
define devclass new_dev_class devtype=3592 worm=yes format=drive
library=lib2 estcapacity=40G
For accurate reporting of capacity information, you must specify the
ESTCAPACITY parameter.
To define a new storage pool, enter:
define stgpool new_stg_pool new_dev_class maxscratch=number_of_new_tapes
You can then issue the CHECKIN LIBVOLUME command to check the
new media into the logical library LIB2.
Upgrading all of the drives in a library that contained more than one type of
drive:
To upgrade all the drives in a library that contained more than one type of drive,
you need to update the drive and path definitions for each logical library.
The following scenario assumes you already have two logical libraries defined. For
example:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
LIB1      349X      200        201
LIB2      349X      300        301        302
Update drive and path definitions for each logical library
For each library, follow the guidelines in “Upgrading all of the drives in a
library that contained only one type of drive” on page 207. For accurate
reporting of capacity information, you cannot use a global scratch pool
with this configuration.
Upgrading some of the drives in a library that contained more than one type of
drive:
To upgrade some of the drives in a library that contained more than one type of
drive, you need to update the drive and path definitions for each logical library.
The following scenario assumes you already have two logical libraries defined, for
example:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
LIB1      349X      200        201
LIB2      349X      300        301        302
You must update the drive and path definitions for each logical library. Follow the
guidelines in “Upgrading some of the drives in a library that contained only one
type of drive” on page 208. For accurate reporting of capacity information, you
cannot use a global scratch pool with this configuration.
Migrating to upgraded drives
If you upgrade all of the drives in a library, you can preserve your existing policy
definitions to migrate and expire existing data, while using the new drives to store
new data.
Define a new DISK storage pool and set it up to migrate its data to a storage pool
created for the new drives. Then update your existing management-class
definitions to begin storing data in the new DISK storage pool.
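The following commands are a minimal sketch of this approach, assuming a storage
pool named NEWTAPEPOOL has already been defined for the new drives and that the
STANDARD policy domain, policy set, and management class are in use (all names
are hypothetical):
define stgpool newdiskpool disk nextstgpool=newtapepool highmig=80 lowmig=20
update copygroup standard standard standard type=backup destination=newdiskpool
activate policyset standard standard
You must also define disk volumes for the new pool before it can accept data.
New backup data then enters NEWDISKPOOL and migrates to the new drives, while
existing data expires under your existing policy definitions.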
Cleaning drives
You can use the server to manage tape drive cleaning. The server can control
cleaning tape drives in SCSI libraries and offers partial support for cleaning tape
drives in manual libraries.
Task            Required Privilege Class
Clean drives    System or unrestricted storage
For automated libraries, you can automate cleaning by specifying the frequency of
cleaning operations and checking a cleaner cartridge into the library’s volume
inventory. The server mounts the cleaner cartridge as specified. For manual
libraries, the server issues a mount request for the cleaner cartridge. There are
special considerations if you plan to use server-controlled drive cleaning with a
SCSI library that provides automatic drive cleaning support in its device hardware.
Drive cleaning methods
If your library includes its own functions for drive cleaning, you need to decide
which method to use: the device’s built-in drive cleaning or the Tivoli Storage
Manager server’s drive cleaning. To avoid problems, use one method or the other,
but not both.
Device manufacturers that include automatic cleaning recommend its use to
prevent premature wear on the read/write heads of the drives. For example, SCSI
libraries such as StorageTek 9710, IBM 3570, and IBM 3575 have their own
automatic cleaning built into the device.
Drives and libraries from different manufacturers
differ in how they manage cleaner cartridges and how they report the presence of
a cleaner cartridge in a drive. Consult the manufacturer’s information that
accompanies the library and the drives for an explanation of how the library and
drive manage and report the presence of cleaner cartridges. The device driver may
not be able to open a drive that contains a cleaner cartridge. Sense codes and error
codes that are issued by devices for drive cleaning vary. If a library has its own
automatic cleaning, the library usually tries to keep the process transparent to all
applications. However, this is not always the case. Because of this variability, the
server cannot reliably detect a cleaner cartridge in a drive for all hardware. The
server also may not be able to determine if the library has started a cleaning
process. Therefore, it is important to choose one method or the other, but not both.
Some devices require a small amount of idle time between mount requests to
initiate the drive cleaning. However, the server tries to minimize the idle time for a
drive. These two conditions may combine to prevent the device’s control of drive
cleaning from functioning effectively. If this happens, try using the server to control
drive cleaning. Set the frequency to match the cleaning recommendations from the
manufacturer.
If you decide to have the server control drive cleaning, disable the device’s own
drive cleaning function to prevent problems. For example, while the device’s own
drive cleaning function is enabled, some devices automatically move any cleaner
cartridge found in the library to slots in the library that are dedicated for cleaner
cartridges. An application such as Tivoli Storage Manager does not know that
these dedicated slots exist. You will not be able to check a cleaner cartridge into
the server’s library inventory until you disable the device’s own drive cleaning
function.
If you decide to have the device control drive cleaning and then you have
problems, consider using the drive cleaning control provided by the server.
Cleaning drives in an automated library
When you set up server-controlled drive cleaning in an automated library, you can
specify how often you want the drives cleaned.
To set up server-controlled drive cleaning in an automated library:
1. Define or update the drives in a library, using the CLEANFREQUENCY
parameter.
The CLEANFREQUENCY parameter sets how often you want the drive
cleaned. Refer to the DEFINE DRIVE and UPDATE DRIVE commands. Consult
the manuals that accompany the drives for recommendations on cleaning
frequency. For example, to have DRIVE1 cleaned after 100 GB are processed on
the drive, issue the following command:
update drive autolib1 drive1 cleanfrequency=100
Consult the drive manufacturer’s information for cleaning recommendations. If
the information gives recommendations for cleaning frequency in terms of
hours of use, convert to a gigabytes value by doing the following:
a. Use the bytes-per-second rating for the drive to determine a
gigabytes-per-hour value.
b. Multiply the gigabytes-per-hour value by the recommended hours of use
between cleanings.
c. Use the result as the cleaning frequency value.
Restrictions:
a. For IBM 3570, 3590, and 3592 drives, specify a value for the
CLEANFREQUENCY parameter rather than specify ASNEEDED. Using the
cleaning frequency recommended by the product documentation will not
overclean the drives.
b. The CLEANFREQUENCY=ASNEEDED parameter value does not work for
all tape drives. To determine whether a drive supports this function, see the
Web site: http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html. At this Web site,
click the drive to view detailed information. If ASNEEDED is not
supported, you can use the gigabytes value for automatic cleaning.
2. Check a cleaner cartridge into the library’s volume inventory with the
CHECKIN LIBVOLUME command. For example:
checkin libvolume autolib1 cleanv status=cleaner cleanings=10 checklabel=no
After the cleaner cartridge is checked in, the server will mount the cleaner
cartridge in a drive when the drive needs cleaning. The server will use that
cleaner cartridge for the number of cleanings specified. See “Checking in
cleaner volumes” and “Operations with cleaner cartridges in a library” on page
213 for more information.
For details about these commands, see the Administrator’s Reference.
Checking in cleaner volumes:
To allow the server to control drive cleaning without operator intervention, you must
check a cleaner cartridge into the automated library’s volume inventory.
It is recommended that you check in cleaner cartridges one at a time and that you
do not use the search function of the CHECKIN LIBVOLUME command for a cleaner
cartridge.
Attention: When checking in a cleaner cartridge to a library, ensure that it is
correctly identified to the server as a cleaner cartridge. Also use caution when a
cleaner cartridge is already checked in and you are checking in data cartridges.
Ensure that cleaner cartridges are in their correct home slots, or errors and delays
can result.
When checking in data cartridges with SEARCH=YES, ensure that a cleaner
cartridge is not in a slot that will be detected by the search process. Errors and
delays of 15 minutes or more can result from a cleaner cartridge being improperly
moved or placed. For best results, check in the data cartridges first when you use
the search function. Then check in the cleaner cartridge separately.
For example, if you need to check in both data cartridges and cleaner cartridges,
put the data cartridges in the library and check them in first. You can use the
search function of the CHECKIN LIBVOLUME command (or the LABEL
LIBVOLUME command if you are labeling and checking in volumes). Then check
in the cleaner cartridge to the library by using one of the following methods.
v Check in without using search:
checkin libvolume autolib1 cleanv status=cleaner cleanings=10
checklabel=no
The server then requests that the cartridge be placed in the entry/exit port, or
into a specific slot.
v Check in using search, but limit the search by using the VOLRANGE or
VOLLIST parameter:
checkin libvolume autolib1 status=cleaner cleanings=10
search=yes checklabel=barcode vollist=cleanv
The process scans the library by using the bar code reader, looking for the
CLEANV volume.
Manual drive cleaning in an automated library:
If your library has limited capacity and you do not want to use a slot in your
library for a cleaner cartridge, the server can issue messages telling you that a
drive needs to be cleaned.
Set the cleaning frequency for the drives in the library. When a drive needs
cleaning based on the frequency setting, the server issues the message, ANR8914I.
For example:
ANR8914I Drive DRIVE1 in library AUTOLIB1 needs to be cleaned.
You can use that message as a cue to manually insert a cleaner cartridge into the
drive. However, the server cannot track whether the drive has been cleaned.
Operations with cleaner cartridges in a library:
Guidelines include monitoring cleaning messages and verifying that cleaner
cartridges are in the correct storage slots.
When a drive needs to be cleaned, the server runs the cleaning operation after
dismounting a data volume if a cleaner cartridge is checked in to the library. If the
cleaning operation fails or is canceled, or if no cleaner cartridge is available, then
the indication that the drive needs cleaning is lost. Monitor cleaning messages for
these problems to ensure that drives are cleaned as needed. If necessary, issue the
CLEAN DRIVE command to have the server try the cleaning again, or manually
load a cleaner cartridge into the drive.
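For example, to have the server clean a drive named DRIVE1 in a library named
AUTOLIB1 (hypothetical names), you might issue:
clean drive autolib1 drive1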
The server uses a cleaner cartridge for the number of cleanings that you specify
when you check in the cleaner cartridge. If you check in two or more cleaner
cartridges, the server uses only one of the cartridges until the designated number
of cleanings for that cartridge has been reached. Then the server begins to use the
next cleaner cartridge. If you check in two or more cleaner cartridges and issue
two or more CLEAN DRIVE commands concurrently, the server uses multiple
cartridges at the same time and decrements the remaining cleanings on each
cartridge.
Visually verify that cleaner cartridges are in the correct storage slots before issuing
any of the following commands:
v AUDIT LIBRARY
v CHECKIN LIBVOLUME with SEARCH specified
v LABEL LIBVOLUME with SEARCH specified
To find the correct slot for a cleaner cartridge, issue the QUERY LIBVOLUME
command.
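For example, to display the location of a cleaner cartridge named CLEANV in a
library named AUTOLIB1 (hypothetical names), you might issue:
query libvolume autolib1 cleanv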
Drive cleaning in a manual library
The server can issue messages telling you that a drive in a manual library needs to
be cleaned.
Cleaning a drive in a manual library is the same as setting up drive cleaning
without checking in a cleaner cartridge for an automated library. The server issues
the ANR8914I message when a drive needs cleaning. For example:
ANR8914I Drive DRIVE1 in library MANLIB1 needs to be cleaned.
Monitor the activity log or the server console for these messages and load a cleaner
cartridge into the drive as needed. The server cannot track whether the drive has
been cleaned.
Error checking for drive cleaning
Occasionally you might move some cartridges around within a library and put a
data cartridge where Tivoli Storage Manager shows that there is a cleaner
cartridge. Tivoli Storage Manager can recover from the error.
When a drive needs cleaning, the server loads what its database shows as a cleaner
cartridge into the drive. The drive then moves to a READY state, and Tivoli
Storage Manager detects that the cartridge is a data cartridge. The server then
performs the following steps:
1. The server attempts to read the internal tape label of the data cartridge.
2. The server ejects the cartridge from the drive and moves it back to the home
slot of the “cleaner” cartridge within the library. If the eject fails, the server
marks the drive offline and issues a message that the cartridge is still in the
drive.
3. The server checks out the “cleaner” cartridge to avoid selecting it for another
drive cleaning request. The “cleaner” cartridge remains in the library but no
longer appears in the Tivoli Storage Manager library inventory.
4. If the server was able to read the internal tape label, the server checks the
volume name against the current library inventory, storage pool volumes, and
the volume history file.
v If there is not a match, you probably checked in a data cartridge as a cleaner
cartridge by mistake. Now that the volume is checked out, you do not need
to do anything else.
v If there is a match, the server issues messages that manual intervention and a
library audit are required. Library audits can take considerable time, so you
should issue the command when sufficient time permits. See “Auditing
volume inventories in libraries” on page 184.
Deleting drives
You can delete a drive if it is not currently in use. If a drive has a volume
mounted, but the volume is currently idle, it can be dismounted.
Task             Required Privilege Class
Delete drives    System or unrestricted storage
To delete a drive definition, issue the DELETE DRIVE command.
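For example, assuming a drive named DRIVE3 in a library named AUTO (hypothetical
names), you might first delete the path to the drive and then the drive itself:
delete path server1 drive3 srctype=server desttype=drive library=auto
delete drive auto drive3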
Note: A drive cannot be deleted until the defined path to the drive has been
deleted. Also, a library cannot be deleted until all of the drives defined within it
are deleted.
For details about dismounting, see “Dismounting idle volumes” on page 192.
Managing paths
You can use Tivoli Storage Manager commands to query, update, and delete paths.
Obtaining information about paths
You can use the QUERY PATH command to obtain information about paths.
You can request either a standard or a detailed report. For example, to display
information about all paths, issue the following command:
query path
The following shows an example of the output from this command.
Source Name   Source Type   Destination Name   Destination Type   Online
-----------   -----------   ----------------   ----------------   ------
NETAPP1       Data mover    DRIVE1             Drive              Yes
NETAPP1       Data mover    NASLIB             Library            Yes
datamover2    Data mover    drive4             Drive              Yes
Updating paths
You can use the UPDATE PATH command to update the attributes of an existing
path definition.
The examples below show how you can use the UPDATE PATH commands for the
following path types:
v Library Paths
Update the path from SERVER1 to a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=lb4.0.0.0
v Drive Paths
Update the path from a NAS data mover named NAS1 to a drive named DRIVE1 in
the library NASLIB:
update path nas1 drive1 srctype=datamover desttype=drive
library=naslib device=mt3.0.0.0
Deleting paths
You can use the DELETE PATH command to delete an existing path definition.
Task            Required Privilege Class
Delete paths    System or unrestricted storage
A path cannot be deleted if the destination is currently in use.
To delete a path from a NAS data mover NAS1 to the library NASLIB:
delete path nas1 naslib srctype=datamover desttype=library
Attention: If you delete the path to a device or make the path offline, you disable
access to that device.
Managing data movers
You can use Tivoli Storage Manager commands to query, update, and delete data
movers.
Obtaining information about data movers
You can use the QUERY DATAMOVER command to obtain information about SCSI
and NAS data movers.
You can request either a standard or a detailed report. For example, to display a
standard report about all data movers, issue the following command:
query datamover *
The following shows an example of the output from this command.
Data Mover Name   Type   Online
---------------   ----   ------
NASMOVER1         NAS    Yes
NASMOVER2         NAS    No
DATAMOVER1        SCSI   Yes
Updating data movers
You can use the UPDATE DATAMOVER command to update the attributes of a
data mover definition.
For example, to update the data mover for the node named NAS1 to change the IP
address, issue the following command:
update datamover nas1 hladdress=9.67.97.109
Deleting data movers
You can use the DELETE DATAMOVER command to delete an existing datamover.
Before you can delete a data mover, you must delete all paths defined for the data
mover.
To delete a data mover named NAS1, issue the following command:
delete datamover nas1
Managing disks
You can query, update, and delete client-owned disks that reside in a storage area
network.
Obtaining information about disks
You can use the QUERY DISK command to obtain information about client-owned
disks that reside in a SAN environment.
You can request either a standard or a detailed report. For example, to display a
standard report about all defined disks, issue the following command:
query disk *
The following shows an example of the output from this command.
Node           Disk        Online
Name           Name
------------   ---------   ------
NODE1          Harddisk1   Yes
NODE2          Harddisk2   Yes
Updating disks
You can use the UPDATE DISK command to update the attributes of an existing
disk definition.
The example below shows how you can use the UPDATE DISK command to
change the world wide name, serial number, and status of a disk.
Update a disk named Harddisk1 that is owned by NODE1: change the world wide
name to 20020060450d00e2, the serial number to 100047, and the ONLINE status to
YES.
update disk node1 Harddisk1 wwn=20020060450d00e2 serial=100047 online=yes
Deleting disks
You can use the DELETE DISK command to delete an existing disk definition.
All paths related to a disk must be deleted before the disk itself can be deleted.
Delete a disk named Harddisk1 that is owned by the node NODE1.
delete disk node1 Harddisk1
Chapter 9. Using NDMP for operations with NAS file servers
You can plan, configure, and manage a backup environment that protects your
network-attached storage (NAS) file server by using NDMP (network data
management protocol). Tivoli Storage Manager Extended Edition includes support
for the use of NDMP to back up and recover NAS file servers.
Tasks:
“Configuring Tivoli Storage Manager for NDMP operations” on page 225
“Determining the location of NAS backup” on page 227
“Setting up tape libraries for NDMP operations” on page 231
“Configuring Tivoli Storage Manager policy for NDMP operations” on page 226
“Registering NAS nodes with the Tivoli Storage Manager server” on page 237
“Defining a data mover for the NAS file server” on page 237
“Defining a path to a library” on page 239
“Defining tape drives and paths for NDMP operations” on page 238
“Labeling and checking tapes into the library” on page 240
“Scheduling NDMP operations” on page 240
“Defining virtual file spaces” on page 240
“Tape-to-tape copy to back up data” on page 240
“Tape-to-tape copy to move data” on page 241
“Backing up and restoring NAS file servers using NDMP” on page 241
“Performing NDMP filer to Tivoli Storage Manager server backups” on page 243
“Managing table of contents” on page 224
“NDMP operations management” on page 222
“Managing NAS file server nodes” on page 222
“Managing data movers used in NDMP operations” on page 223
“Storage pool management for NDMP operations” on page 224
NDMP requirements
You must meet certain requirements when using NDMP (network data
management protocol) for operations with network-attached storage (NAS) file
servers.
Tivoli Storage Manager Extended Edition
Licensed program product that includes support for the use of NDMP.
NAS File Server
A NAS file server. The operating system on the file server must be
supported by Tivoli Storage Manager. Visit http://www.ibm.com/
software/sysmgmt/products/support/IBMTivoliStorageManager.html for a
list of NAS file servers that are certified through the “Ready for IBM Tivoli
software.”
Note: Vendors on the “Ready for IBM Tivoli software” list follow
guidelines to implement NDMP as specified by Tivoli Storage Manager. If
a file server is on the list, it has undergone tests to ensure it is compatible
with Tivoli Storage Manager.
The combination of file server model and operating system must be
supported by the NAS file server. For more specifics, consult the product
information for the NAS file server.
Tape Libraries
This requirement is only necessary for a backup to a locally attached NAS
device. The Tivoli Storage Manager server supports two types of libraries
for operations using NDMP. The libraries supported are SCSI and ACSLS
(automated cartridge system library software). 349X tape libraries can also
be used with certain NAS file servers.
v SCSI library
A SCSI library that is supported by the Tivoli Storage Manager server.
Visit http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html. This type of library can be attached
directly either to the Tivoli Storage Manager server or to the NAS file
server. When the library is attached directly to the Tivoli Storage
Manager server, the Tivoli Storage Manager server controls the library
operations by passing the SCSI commands directly to the library. When
the library is attached directly to the NAS file server, the Tivoli Storage
Manager server controls the library by passing SCSI commands to the
library through the NAS file server.
v ACSLS library
An ACSLS library can only be directly connected to the Tivoli Storage
Manager server. The Tivoli Storage Manager server controls the library
by passing the library request through TCP/IP to the library control
server.
Note: The Tivoli Storage Manager server does not include External
Library support for the ACSLS library when the library is used for
NDMP operations.
v 349X library
A 349X library can only be directly connected to the Tivoli Storage
Manager server. The Tivoli Storage Manager server controls the library
by passing the library request through TCP/IP to the library manager.
Library Sharing: The Tivoli Storage Manager server that performs NDMP
operations can be a library manager for either an ACSLS, SCSI, or 349X
library, but cannot be a library client. However, the Tivoli Storage Manager
server can be a library client in a configuration where the NAS filer sends data
to a Tivoli Storage Manager server using TCP/IP rather than to a tape
library attached to the NAS filer. If the Tivoli Storage Manager server that
performs NDMP operations is a library manager, that server must control
the library directly and not by passing commands through the NAS file
server.
Tape Drives
One or more tape drives in the tape library. A tape drive is only necessary
for backup to a locally attached NAS device. The NAS file server must be
able to access the drives. A NAS device is not supported in a mixed device
library. The drives must be supported for tape backup operations by the
NAS file server and its operating system. For complete NDMP device
support, refer to the NAS file server product documentation.
Drive Sharing: The tape drives can be shared by the Tivoli Storage
Manager server and one or more NAS file servers. Also, when a SCSI or a
349X library is connected to the Tivoli Storage Manager server and not to
the NAS file server, the drives can be shared by one or more NAS file
servers and one or more Tivoli Storage Manager:
v library clients
v storage agents
Verify the compatibility of specific combinations of a NAS file server, tape devices,
and SAN-attached devices with the hardware manufacturers.
Attention: Tivoli Storage Manager supports NDMP Version 4 for all NDMP
operations. Tivoli Storage Manager will continue to support all NDMP backup and
restore operations with a NAS device running NDMP version 3. The Tivoli Storage
Manager server will negotiate the highest protocol level (either Version 3 or
Version 4) with the NDMP server when establishing an NDMP connection. If you
experience any issues with Version 4, you may want to try using Version 3.
Interfaces for NDMP operations
You can use several interfaces to perform NDMP (network data management
protocol) operations. You can schedule an NDMP operation by using the BACKUP
NODE and RESTORE NODE commands in an administrative schedule.
Client Interfaces:
v Backup-archive command-line client (on a Windows, 64-bit AIX, or 64-bit Sun
Solaris system)
v Web client
Server Interfaces:
v Server console
v Command line on the administrative client
Tip: All examples in this chapter use server commands.
v Web administrative interface
The Tivoli Storage Manager Web client interface, available with the backup-archive
client, displays the file systems of the network-attached storage (NAS) file server in
a graphical view. The client function is not required for NDMP operations, but it is
recommended for file-level restore operations. See “File-level backup and restore
for NDMP operations” on page 244 for more information about file-level restore.
Tivoli Storage Manager prompts you for an administrator ID and password when
you perform NDMP functions using either of the client interfaces. See the
Backup-Archive Clients Installation and User’s Guide for more information about
installing and activating client interfaces.
Attention: In order to use the Tivoli Storage Manager backup-archive client or
Web client to perform NAS operations, the file system names on the NAS device
must have a forward slash (“/”) as the first character. This restriction does not
affect NAS operations initiated from the Tivoli Storage Manager server command
line.
Data formats for NDMP backup operations
During backup operations that use NDMP (network data management protocol)
and do not store data in the Tivoli Storage Manager server storage hierarchy,
the network-attached storage (NAS) file server controls the format of the data
written to the tape library.
The NDMP format is not the same as the data format used for traditional Tivoli
Storage Manager backups. When you define a NAS file server as a data mover and
define a storage pool for NDMP operations, you specify the data format. For
example, you would specify NETAPPDUMP if the NAS file server is a NetApp or
an IBM System Storage N Series device. You would specify CELERRADUMP if the
NAS file server is an EMC Celerra device. For all other devices, you would specify
NDMPDUMP.
NDMP operations management
There are several administrator activities for NDMP operations.
These include:
v NAS nodes
v Data movers
v Tape libraries and drives
v Paths
v Device classes
v Storage pools
v Table of contents
Managing NAS file server nodes
You can update, query, rename, and remove NAS (network attached storage)
nodes.
For example, assume you have created a new policy domain named NASDOMAIN
for NAS nodes and you want to update a NAS node named NASNODE1 to
include it in the new domain.
1. Query the node.
query node nasnode1 type=nas
2. Change the domain of the node by issuing the following command:
update node nasnode1 domain=nasdomain
Renaming a NAS node
To rename a NAS (network attached storage) node, you must also rename the
corresponding NAS data mover; both must have the same name.
For example, to rename NASNODE1 to NAS1 you must perform the following
steps; a consolidated command sketch follows the list:
1. Delete all paths between data mover NASNODE1 and libraries and between
data mover NASNODE1 and drives.
2. Delete the data mover defined for the NAS node.
3. To rename NASNODE1 to NAS1, issue the following command:
rename node nasnode1 nas1
4. Define the data mover using the new node name. In this example, you must
define a new data mover named NAS1 with the same parameters used to
define NASNODE1.
Attention: When defining a new data mover for a node that you have
renamed, ensure that the data mover name matches the new node name and
that the new data mover parameters are duplicates of the original data mover
parameters. Any mismatch between a node name and a data mover name or
between new data mover parameters and original data mover parameters can
prevent you from establishing a session with the NAS file server.
5. For SCSI or 349X libraries, define a path between the NAS data mover and a
library only if the tape library is physically connected directly to the NAS file
server.
6. Define paths between the NAS data mover and any drives used for NDMP
(network data management protocol) operations.
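The steps above can be expressed as a command sequence. The following sketch
assumes that NASNODE1 originally had paths to a library named NASLIB and a
drive named NASDRIVE1, and that the data mover parameters shown match the
original definition; substitute the names and addresses used in your environment:
delete path nasnode1 nasdrive1 srctype=datamover desttype=drive library=naslib
delete path nasnode1 naslib srctype=datamover desttype=library
delete datamover nasnode1
rename node nasnode1 nas1
define datamover nas1 type=nas hladdress=netapp2 lladdress=10000 userid=root
password=admin dataformat=netappdump
define path nas1 naslib srctype=datamover desttype=library device=mc0
define path nas1 nasdrive1 srctype=datamover desttype=drive library=naslib device=rst0l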
Deleting a NAS node
To delete a NAS (network attached storage) node, first delete any file spaces for
the node. Then delete any paths from the data mover before deleting the data
mover.
1. Delete any virtual file space definitions for the node, as shown in the example
after these steps.
2. Enter the following command:
remove node nas1
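For step 1, you can use the DELETE VIRTUALFSMAPPING command. For
example, assuming a virtual file space named /mikesdir was defined for the node
(as in the example elsewhere in this chapter), issue the following command:
delete virtualfsmapping nas1 /mikesdir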
Managing data movers used in NDMP operations
You can update, query, and delete the data movers that you define for NAS
(network attached storage) file servers.
For example, if you shut down a NAS file server for maintenance, you might want
to take the data mover offline.
1. Query your data movers to identify the data mover for the NAS file server that
you want to maintain.
query datamover nasnode1
2. Issue the following command to make the data mover offline:
update datamover nasnode1 online=no
To delete the data mover, you must first delete any path definitions in which
the data mover has been used as the source.
3. Issue the following command to delete the data mover:
delete datamover nasnode1
Attention: If the data mover has a path to the library, and you delete the data
mover or make the data mover offline, you disable access to the library.
Dedicating a Tivoli Storage Manager drive to NDMP operations
If you are already using a drive for Tivoli Storage Manager operations, you can
dedicate that drive to NDMP (network data management protocol) operations.
Remove Tivoli Storage Manager server access by deleting the path definition with
the following command:
delete path server1 nasdrive1 srctype=server desttype=drive library=naslib
Storage pool management for NDMP operations
When NETAPPDUMP, CELERRADUMP, or NDMPDUMP is designated as the
data format of a storage pool, managing the storage pools produced by NDMP
(network data management protocol) operations is different from managing
storage pools containing media for traditional Tivoli Storage Manager backups.
You can query and update storage pools. You cannot update the DATAFORMAT
parameter.
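For example, to review the data format of a storage pool used for NDMP
operations (NASPOOL is a sample name used elsewhere in this chapter), you
might issue the following command:
query stgpool naspool format=detailed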
You cannot designate a Centera storage pool as a target pool of NDMP operations.
Maintaining separate storage pools for data from different NAS vendors is
suggested even though the data format for both is NDMPDUMP.
The following DEFINE STGPOOL and UPDATE STGPOOL parameters are ignored
because storage pool hierarchies, reclamation, and migration are not supported for
these storage pools:
MAXSIZE
NEXTSTGPOOL
LOWMIG
HIGHMIG
MIGDELAY
MIGCONTINUE
RECLAIMSTGPOOL
OVFLOLOCATION
Attention: Ensure that you do not accidentally use storage pools that have been
defined for NDMP operations in traditional Tivoli Storage Manager operations. Be
especially careful when assigning the storage pool name as the value for the
DESTINATION parameter of the DEFINE COPYGROUP command. Unless the
destination is a storage pool with the appropriate data format, the backup fails.
Managing table of contents
You can use several commands to manage different aspects of your table of
contents (TOC) data.
The SET TOCLOADRETENTION command can be used to specify the
approximate number of minutes that an unreferenced table of contents (TOC)
remains loaded in the Tivoli Storage Manager database. The Tivoli Storage
Manager server-wide table of contents retention value will determine how long a
loaded TOC is retained in the database after the latest access to information in the
TOC.
Because TOC information is loaded into temporary database tables, this
information is lost if the server is halted, even if the TOC retention period has not
elapsed. At installation, the retention time is set to 120 minutes. Use the QUERY
STATUS command to see the TOC retention time.
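For example, to change the retention time to 240 minutes (a sample value), issue
the following command:
set tocloadretention 240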
Issue the QUERY NASBACKUP command to display information about the file
system image objects that have been backed up for a specific NAS (network
attached storage) node and file space. By issuing the command, you can see a
display of all backup images generated by NDMP (network data management
protocol) and whether each image has a corresponding table of contents.
Note: The Tivoli Storage Manager server may store a full backup in excess of the
number of versions you specified, if that full backup has dependent differential
backups. Full NAS backups with dependent differential backups behave like other
base files with dependent subfiles. Due to the retention time specified in the
RETAIN EXTRA setting, the full NAS backup is not expired, and the version is
displayed in the output of a QUERY NASBACKUP command. See “File expiration
and expiration processing” on page 458 for details.
Use the QUERY TOC command to display files and directories in a backup image
generated by NDMP. By issuing the QUERY TOC server command, you can
display all directories and files within a single specified TOC. The specified TOC
will be accessed in a storage pool each time the QUERY TOC command is issued
because this command does not load TOC information into the Tivoli Storage
Manager database. Then, use the RESTORE NODE command with the FILELIST
parameter to restore individual files.
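For example, the following sketch (the node name NAS1, file space /vol/vol1,
destination /vol/vol2, and the file names are illustrative) lists the contents of a
backup image and then restores two files from it:
query toc nas1 /vol/vol1
restore node nas1 /vol/vol1 /vol/vol2 filelist=/vol/vol1/dir1/file1,/vol/vol1/dir2/file2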
Configuring Tivoli Storage Manager for NDMP operations
Before beginning the configuration of Tivoli Storage Manager for NDMP (network
data management protocol) operations, ensure that you register the required
license.
Perform the following steps to configure the Tivoli Storage Manager for NDMP
operations:
1. Set up the tape library and media. See “Setting up tape libraries for NDMP
operations” on page 231, where the following steps are described in more
detail.
a. Attach the SCSI library to the NAS file server or to the Tivoli Storage
Manager server, or attach the ACSLS library or 349X library to the Tivoli
Storage Manager server.
b. Define the library with a library type of SCSI, ACSLS, or 349X.
c. Define a device class for the tape drives.
d. Define a storage pool for NAS backup media.
e. Define a storage pool for storing a table of contents. This step is optional.
2. Configure Tivoli Storage Manager policy for managing NAS image backups.
See “Configuring Tivoli Storage Manager policy for NDMP operations” on
page 226.
3. Register a NAS file server node with the Tivoli Storage Manager server. See
“Registering NAS nodes with the Tivoli Storage Manager server” on page 237.
4. Define a data mover for the NAS file server. See “Defining a data mover for
the NAS file server” on page 237.
5. Define a path from either the Tivoli Storage Manager server or the NAS file
server to the library. See “Defining a path to a library” on page 239.
6. Define the tape drives to Tivoli Storage Manager, and define the paths to
those drives from the NAS file server and optionally from the Tivoli Storage
Manager server. See “Defining tape drives and paths for NDMP operations”
on page 238.
7. Check tapes into the library and label them. See “Labeling and checking tapes
into the library” on page 240.
8. Set up scheduled backups for NAS file servers. This step is optional. See
“Scheduling NDMP operations” on page 240.
9. Define a virtual file space name. This step is optional. See “Defining virtual
file spaces” on page 240.
10. Configure for tape-to-tape copy to back up data. This step is optional. See
“Tape-to-tape copy to back up data” on page 240.
11. Configure for tape-to-tape copy to move data to a different tape technology.
This step is optional. See “Tape-to-tape copy to move data” on page 241.
Configuring Tivoli Storage Manager policy for NDMP
operations
Policy lets you manage the number and retention time of NDMP (network data
management protocol) image backup versions.
See “Configuring policy for NDMP operations” on page 502 for more information.
Complete the following steps to configure Tivoli Storage Manager policy for
NDMP operations:
1. Create a policy domain for NAS (network attached storage) file servers. For
example, to define a policy domain that is named NASDOMAIN, enter the
following command:
define domain nasdomain description='Policy domain for NAS file servers'
2. Create a policy set in that domain. For example, to define a policy set named
STANDARD in the policy domain named NASDOMAIN, issue the following
command:
define policyset nasdomain standard
3. Define a management class, and then assign the management class as the
default for the policy set. For example, to define a management class named
MC1 in the STANDARD policy set, and assign it as the default, issue the
following commands:
define mgmtclass nasdomain standard mc1
assign defmgmtclass nasdomain standard mc1
4. Define a backup copy group in the default management class. The destination
must be the storage pool you created for backup images produced by NDMP
operations. In addition, you can specify the number of backup versions to
retain. For example, to define a backup copy group for the MC1 management
class where up to four versions of each file system are retained in the storage
pool named NASPOOL, issue the following command:
define copygroup nasdomain standard mc1 destination=naspool verexists=4
If you also chose the option to create a table of contents, TOCDESTINATION
must be the storage pool you created for the table of contents.
define copygroup nasdomain standard mc1 destination=naspool
tocdestination=tocpool verexists=4
Attention: When defining a copy group for a management class to which a
file system image produced by NDMP will be bound, be sure that the
DESTINATION parameter specifies the name of a storage pool that is defined
for NDMP operations. If the DESTINATION parameter specifies an invalid
storage pool, backups via NDMP will fail.
5. Activate the policy set. For example, to activate the STANDARD policy set in
the NASDOMAIN policy domain, issue the following command:
activate policyset nasdomain standard
The policy is ready to be used. Nodes are associated with Tivoli Storage
Manager policy when they are registered. For more information, see
“Registering NAS nodes with the Tivoli Storage Manager server” on page 237.
Policy for backups initiated with the client interface
When a client node initiates a backup, the policy is affected by the option file for
that client node.
You can control the management classes that are applied to backup images
produced by NDMP (network data management protocol) operations regardless of
which node initiates the backup. You can do this by creating a set of options to be
used by the client nodes. The option set can include an include.fs.nas statement
to specify the management class for NAS (network attached storage) file server
backups. See “Creating client option sets on the server” on page 436 for more
information.
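For example, the following sketch (the option set name, the file-system pattern,
and the management class name are illustrative) creates an option set that binds
NAS file system backups to the management class MC1; the option set can then be
specified when you register NAS nodes:
define cloptset nasopts description='Client options for NAS nodes'
define clientopt nasopts inclexcl 'include.fs.nas * mc1'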
Determining the location of NAS backup
When Tivoli Storage Manager uses NDMP (network data management protocol) to
protect NAS (network attached storage) file servers, the Tivoli Storage Manager
server controls operations while the NAS file server transfers the data, either to an
attached library or directly to the Tivoli Storage Manager server.
You can also use a backup-archive client to back up a NAS file server by mounting
the NAS file-server file system on the client machine (with either an NFS [network
file system] mount or a CIFS [common internet file system] map) and then backing
up as usual. Table 18 compares the three backup-and-restore methods.
Note: You can use a single method or a combination of methods in your
individual storage environment.
Table 18. Comparing methods for backing up NDMP data

Network data traffic
v NDMP: Filer to server: All backup data goes across the LAN from the NAS file
server to the server.
v NDMP: Filer to attached library: The server controls operations remotely, but
the NAS device moves the data locally.
v Backup-archive client to server: All backup data goes across the LAN from the
NAS device to the client and then to the server.

File server processing during backup
v NDMP: Filer to server: Less file server processing is required, compared to the
backup-archive client method, because the backup does not use file access
protocols such as NFS and CIFS.
v NDMP: Filer to attached library: Less file server processing is required,
compared to the backup-archive client method, because the backup does not use
file access protocols such as NFS and CIFS.
v Backup-archive client to server: More file server processing is required because
file backups require additional overhead for file access protocols such as NFS
and CIFS.

Distance between devices
v NDMP: Filer to server: The Tivoli Storage Manager server must be within SCSI
or Fibre Channel range of the tape library.
v NDMP: Filer to attached library: The Tivoli Storage Manager server can be
distant from the NAS file server and the tape library.
v Backup-archive client to server: The Tivoli Storage Manager server must be
within SCSI or Fibre Channel range of the tape library.

Firewall considerations
v NDMP: Filer to server: More stringent than filer-to-attached-library because
communications can be initiated by either the Tivoli Storage Manager server or
the NAS file server.
v NDMP: Filer to attached library: Less stringent than filer-to-server because
communications can be initiated only by the Tivoli Storage Manager server.
v Backup-archive client to server: Client passwords and data are encrypted.

Security considerations
v NDMP: Filer to server: Data is sent unencrypted from the NAS file server to the
Tivoli Storage Manager server.
v NDMP: Filer to attached library: Method must be used in a trusted environment
because port numbers are not secure.
v Backup-archive client to server: Port number configuration allows for secure
administrative sessions within a private network.

Load on the Tivoli Storage Manager server
v NDMP: Filer to server: Higher CPU workload is required to manage all back
end data processes (for example, migration).
v NDMP: Filer to attached library: Lower CPU workload is required because
migration and reclamation are not supported.
v Backup-archive client to server: Higher CPU workload is required to manage all
back end data processes.

Backup of primary storage pools to copy storage pools
v NDMP: Filer to server: Data can be backed up only to copy storage pools that
have the NATIVE data format.
v NDMP: Filer to attached library: Data can be backed up only to copy storage
pools that have the same NDMP data format (NETAPPDUMP, CELERRADUMP,
or NDMPDUMP).
v Backup-archive client to server: Data can be backed up only to copy storage
pools that have the NATIVE data format.

Restore of primary storage pools and volumes from copy storage pools
v NDMP: Filer to server: Data can be restored only to storage pools and volumes
that have the NATIVE data format.
v NDMP: Filer to attached library: Data can be restored only to storage pools and
volumes that have the same NDMP format.
v Backup-archive client to server: Data can be restored only to storage pools and
volumes that have the NATIVE data format.

Moving NDMP data from storage pool volumes
v NDMP: Filer to server: Data can be moved to another storage pool only if it has
a NATIVE data format.
v NDMP: Filer to attached library: Data can be moved to another storage pool
only if it has the same NDMP data format.
v Backup-archive client to server: Data can be moved to another storage pool only
if it has a NATIVE data format.

Migration from one primary storage pool to another
v NDMP: Filer to server: Supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Reclamation of a storage pool
v NDMP: Filer to server: Supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Simultaneous write during backups
v NDMP: Filer to server: Not supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Export and import operations
v NDMP: Filer to server: Not supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Backup set generation
v NDMP: Filer to server: Not supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Cyclic Redundancy Checking (CRC) when data is moved using Tivoli Storage
Manager processes
v NDMP: Filer to server: Supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Validation using Tivoli Storage Manager audit commands
v NDMP: Filer to server: Supported
v NDMP: Filer to attached library: Not supported
v Backup-archive client to server: Supported

Disaster recovery manager
v NDMP: Filer to server: Supported
v NDMP: Filer to attached library: Supported
v Backup-archive client to server: Supported
Tape libraries and drives for NDMP operations
Most of the planning required to implement backup and recovery operations that
use NDMP (network data management protocol) is related to device configuration.
You have choices about how to connect and use the libraries and drives.
Many of the configuration choices you have for libraries and drives are determined
by the hardware features of your libraries. You can set up NDMP operations with
any supported library and drives. However, the more features your library has, the
more flexibility you can exercise in your implementation.
You might start by answering the following questions:
v What type of library (SCSI, ACSLS, or 349X) will you use?
v If you are using a SCSI library, do you want to attach tape library robotics to the
Tivoli Storage Manager server or to the network-attached storage (NAS) file
server?
v Will you want to move your NDMP data to tape?
v How do you want to use the tape drives in the library?
– Dedicate all tape drives to NDMP operations.
– Dedicate some tape drives to NDMP operations and others to traditional
Tivoli Storage Manager operations.
– Share tape drives between NDMP operations and traditional Tivoli Storage
Manager operations.
v Will you back up data tape-to-tape for disaster recovery functions?
v Will you send backup data to a single Tivoli Storage Manager server instead of
attaching a tape library to each NAS device?
v Do you want to keep all hardware on the Tivoli Storage Manager server and
send NDMP data over the LAN?
Determining library drive usage when backing up to
NAS-attached libraries
Drives can be used for multiple purposes because of the flexible configurations
allowed by Tivoli Storage Manager. For NDMP (network data management
protocol) operations, the NAS (network attached storage) file server must have
access to the drive. The Tivoli Storage Manager server can also have access to the
same drive, depending on your hardware connections and limitations.
All drives are defined to the Tivoli Storage Manager server. However, the same
drive may be defined for both traditional Tivoli Storage Manager operations and
NDMP operations. Figure 28 on page 230 illustrates one possible configuration. The
Tivoli Storage Manager server has access to drives 2 and 3, and each NAS file
server has access to drives 1 and 2.
Figure 28. Tivoli Storage Manager drive usage example
To create the configuration shown in Figure 28, perform the following steps:
1. Define all three drives to Tivoli Storage Manager.
2. Define paths from the Tivoli Storage Manager server to drives 2 and 3. Because
drive 1 is not accessed by the server, no path is defined.
3. Define each NAS file server as a separate data mover.
4. Define paths from each data mover to drive 1 and to drive 2.
To use the Tivoli Storage Manager back end data movement operations, the Tivoli
Storage Manager server requires two available drive paths from a single NAS data
mover. The drives can be in different libraries and can have different device types
that are supported by NDMP. You can make copies between two different tape
devices. For example, the source tape drive can be a DLT drive in a library
the target drive can be an LTO drive in another library.
During Tivoli Storage Manager back end data movements, the Tivoli Storage
Manager server locates a NAS data mover that supports the same data format as
the data to be copied from and that has two available mount points and paths to
the drives. If the Tivoli Storage Manager server cannot locate such a data mover,
the requested data movement operation is not performed. The number of available
mount points and drives depends on the mount limits of the device classes for the
storage pools involved in the back end data movements.
If the back end data movement function supports multiprocessing, each concurrent
Tivoli Storage Manager back end data movement process requires two available
mount points and two available drives. To run two Tivoli Storage Manager
processes concurrently, at least four mount points and four drives must be
available.
See “Defining tape drives and paths for NDMP operations” on page 238 for more
information.
Setting up tape libraries for NDMP operations
You must complete several tasks to set up a tape library for NDMP (network data
management protocol) operations.
Perform the following steps to set up tape libraries for NDMP operations:
1. Connect the library and drives for NDMP operations.
a. Connect the SCSI library. Before setting up a SCSI tape library for NDMP
operations, you should have already determined whether you want to
attach your library robotics control to the Tivoli Storage Manager server or
to the NAS (network attached storage) file server. See “Tape libraries and
drives for NDMP operations” on page 229. Connect the SCSI tape library
robotics to the Tivoli Storage Manager server or to the NAS file server. See
the manufacturer’s documentation for instructions.
Library Connected to Tivoli Storage Manager: Make a SCSI or Fibre
Channel connection between the Tivoli Storage Manager server and the
library robotics control port. Then connect the NAS file server with the
drives you want to use for NDMP operations.
Library Connected to NAS File Server: Make a SCSI or Fibre Channel
connection between the NAS file server and the library robotics and
drives.
b. Connect the ACSLS Library. Connect the ACSLS tape library to the Tivoli
Storage Manager server.
c. Connect the 349X Library. Connect the 349X tape library to the Tivoli
Storage Manager server.
2. Define the library for NDMP operations. (The library must contain a single
device type; mixed device types are not supported.)
SCSI Library
define library tsmlib libtype=scsi
ACSLS Library
define library acslib libtype=acsls acsid=1
349X Library
define library tsmlib libtype=349x
3. Define a device class for NDMP operations. A device class defined with a
device type of NAS is not explicitly associated with a specific drive type (for
example, 3570 or 8 mm). However, it is recommended that you define separate
device classes for different drive types.
In the device class definition:
v Specify NAS as the value for the DEVTYPE parameter.
v Specify 0 as the value for the MOUNTRETENTION parameter.
MOUNTRETENTION=0 is required for NDMP operations.
v Specify a value for the ESTCAPACITY parameter.
For example, to define a device class named NASCLASS for a library named
NASLIB and media whose estimated capacity is 40 GB, issue the following
command:
define devclass nasclass devtype=nas library=naslib mountretention=0
estcapacity=40g
4. Define a storage pool for NDMP media. When NETAPPDUMP,
CELERRADUMP, or NDMPDUMP is designated as the type of storage pool,
managing the storage pools produced by NDMP operations is different from
managing storage pools containing media for traditional Tivoli Storage
Manager backups. Tivoli Storage Manager operations use storage pools defined
with a NATIVE or NONBLOCK data format. If you select NETAPPDUMP,
CELERRADUMP, or NDMPDUMP, NDMP operations require storage pools
with a data format that matches the NAS file server and the selected backup
method. Maintaining separate storage pools for data from different NAS
vendors is recommended, even though the data format for both is
NDMPDUMP. For example, to define a storage pool named NDMPPOOL for a
file server which is neither a NetApp nor a Celerra file server, issue the
following command:
define stgpool ndmppool nasclass maxscratch=10 dataformat=ndmpdump
To define a storage pool named NASPOOL for a NetApp file server, issue the
following command:
define stgpool naspool nasclass maxscratch=10 dataformat=netappdump
To define a storage pool named CELERRAPOOL for an EMC Celerra file server,
issue the following command:
define stgpool celerrapool nasclass maxscratch=10 dataformat=celerradump
Attention: Ensure that you do not accidentally use storage pools that have
been defined for NDMP operations in traditional Tivoli Storage Manager
operations. Be especially careful when assigning the storage pool name as the
value for the DESTINATION parameter of the DEFINE COPYGROUP
command. Unless the destination is a storage pool with the appropriate data
format, the backup will fail.
5. Define a storage pool for a table of contents. If you plan to create a table of
contents, you should also define a disk storage pool in which to store the table
of contents. You must set up policy so that the Tivoli Storage Manager server
stores the table of contents in a different storage pool from the one where the
backup image is stored. The table of contents is treated like any other object in
that storage pool. This step is optional.
For example, to define a storage pool named TOCPOOL for a DISK device
class, issue the following command:
define stgpool tocpool disk
Then, define volumes for the storage pool. For more information see:
“Configuring random access volumes on disk devices” on page 108.
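For example, to create and define a 100 MB volume for the TOCPOOL storage
pool (the file path and size are illustrative), you might issue the following
command:
define volume tocpool c:\tsmdata\toc01.dsm formatsize=100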
Attaching tape library robotics for NAS-attached libraries
If you have decided to back up your network-attached storage (NAS) data to a
library directly attached to the NAS device and are using a SCSI tape library, one
of the first steps in planning for NDMP (network data management protocol)
operations is to determine where to attach it.
You must determine whether to attach the library robotics to the Tivoli Storage
Manager server or to the NAS file server. Regardless of where you connect library
robotics, tape drives must always be connected to the NAS file server for NDMP
operations.
Distance and your available hardware connections are factors to consider for SCSI
libraries. If the library does not have separate ports for robotics control and drive
access, the library must be attached to the NAS file server because the NAS file
server must have access to the drives. If your SCSI library has separate ports for
robotics control and drive access, you can choose to attach the library robotics to
either the Tivoli Storage Manager server or the NAS file server. If the NAS file
server is at a different location from the Tivoli Storage Manager server, the distance
may mean that you must attach the library to the NAS file server.
Whether you are using a SCSI, ACSLS, or 349X library, you have the option of
dedicating the library to NDMP operations, or of using the library for NDMP
operations as well as most traditional Tivoli Storage Manager operations.
Table 19. Summary of configurations for NDMP operations

Configuration 1 (SCSI library connected to the Tivoli Storage Manager server)
v Distance between Tivoli Storage Manager server and library: Limited by SCSI
or FC connection
v Library sharing: Supported
v Drive sharing between Tivoli Storage Manager and NAS file server: Supported
v Drive sharing between NAS file servers: Supported
v Drive sharing between storage agent and NAS file server: Supported

Configuration 2 (SCSI library connected to the NAS file server)
v Distance between Tivoli Storage Manager server and library: No limitation
v Library sharing: Not supported
v Drive sharing between Tivoli Storage Manager and NAS file server: Supported
v Drive sharing between NAS file servers: Supported
v Drive sharing between storage agent and NAS file server: Not supported

Configuration 3 (349X library)
v Distance between Tivoli Storage Manager server and library: May be limited by
349X connection
v Library sharing: Supported
v Drive sharing between Tivoli Storage Manager and NAS file server: Supported
v Drive sharing between NAS file servers: Supported
v Drive sharing between storage agent and NAS file server: Supported

Configuration 4 (ACSLS library)
v Distance between Tivoli Storage Manager server and library: May be limited by
ACSLS connection
v Library sharing: Supported
v Drive sharing between Tivoli Storage Manager and NAS file server: Supported
v Drive sharing between NAS file servers: Supported
v Drive sharing between storage agent and NAS file server: Supported
Configuration 1: SCSI library connected to the Tivoli Storage
Manager server
In this configuration, the tape library must have separate ports for robotics control
and for drive access. In addition, the library must be within Fibre-Channel range
or SCSI bus range of both the Tivoli Storage Manager server and the
network-attached storage (NAS) file server.
In this configuration, the Tivoli Storage Manager server controls the SCSI library
through a direct, physical connection to the library robotics control port. For
NDMP (network data management protocol) operations, the drives in the library
are connected directly to the NAS file server, and a path must be defined from the
NAS data mover to each of the drives to be used. The NAS file server transfers
data to the tape drive at the request of the Tivoli Storage Manager server. To also
use the drives for Tivoli Storage Manager operations, connect the Tivoli Storage
Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives. This configuration also supports a Tivoli
Storage Manager storage agent having access to the drives for its LAN-free
operations, and the Tivoli Storage Manager server can be a library manager.
Figure 29. Configuration 1: SCSI library connected to Tivoli Storage Manager server
Configuration 2: SCSI library connected to the NAS file server
In this configuration, the library robotics and the drives must be physically
connected directly to the NAS (network attached storage) file server, and paths
must be defined from the NAS data mover to the library and drives. No physical
connection is required between the Tivoli Storage Manager server and the SCSI
library.
The Tivoli Storage Manager server controls library robotics by sending library
commands across the network to the NAS file server. The NAS file server passes
the commands to the tape library. Any responses generated by the library are sent
to the NAS file server, and passed back across the network to the Tivoli Storage
Manager server. This configuration supports a physically distant Tivoli Storage
Manager server and NAS file server. For example, the Tivoli Storage Manager
server could be in one city, while the NAS file server and tape library are in
another city.
Figure 30. Configuration 2: SCSI library connected to the NAS file server
Configuration 3: 349x library connected to the Tivoli Storage
Manager server
For this configuration, you connect the tape library to the system as for traditional
operations.
In this configuration, the 349X tape library is controlled by the Tivoli Storage
Manager server. The Tivoli Storage Manager server controls the library by passing
the request to the 349X library manager through TCP/IP.
In order to perform NAS (network attached storage) backup or restore operations,
the NAS file server must be able to access one or more tape drives in the 349X
library. Any tape drives used for NAS operations must be physically connected to
the NAS file server, and paths need to be defined from the NAS data mover to the
drives. The NAS file server transfers data to the tape drive at the request of the
Tivoli Storage Manager server. Follow the manufacturer’s instructions to attach the
device to the server system.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city, while the NAS file server and tape library are in another city.
Figure 31. Configuration 3: 349x library connected to the Tivoli Storage Manager server
Configuration 4: ACSLS library connected to the Tivoli Storage
Manager server
For this configuration, connect the tape library to the system as you do for
traditional Tivoli Storage Manager operations.
The ACSLS (automated cartridge system library software) tape library is controlled
by the Tivoli Storage Manager server. The Tivoli Storage Manager server controls
the library by passing the request to the ACSLS library server through TCP/IP. The
ACSLS library supports library sharing and LAN-free operations.
Restriction: In order to utilize ACSLS functions, StorageTek Library Attach
software must be installed. See “ACSLS-managed libraries” on page 150 for more
information.
In order to perform NAS (network attached storage) backup or restore operations,
the NAS file server must be able to access one or more tape drives in the ACSLS
library. Any tape drives used for NAS operations must be physically connected to
the NAS file server, and any paths need to be defined from the NAS data mover to
the drives. The NAS file server transfers data to the tape drive at the request of the
Tivoli Storage Manager server. Follow the manufacturer’s instructions to attach the
device to the server system.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city while the NAS file server and tape library are in another city.
To also use the drives for Tivoli Storage Manager operations, connect the Tivoli
Storage Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives.
Figure 32. Configuration 4: ACSLS library connected to the Tivoli Storage Manager server
Registering NAS nodes with the Tivoli Storage Manager server
Register the NAS (network attached storage) file server as a Tivoli Storage
Manager node, specifying TYPE=NAS. This node name is used to track the image
backups for the NAS file server.
To register a NAS file server as a node named NASNODE1, with a password of
NASPWD1, in a policy domain named NASDOMAIN, issue the following example
command:
register node nasnode1 naspwd1 domain=nasdomain type=nas
If you are using a client option set, specify the option set when you register the
node.
You can verify that this node is registered by issuing the following command:
query node type=nas
Important: You must specify TYPE=NAS so that only NAS nodes are displayed.
Defining a data mover for the NAS file server
Define a data mover for each NAS (network attached storage) file server, using
NDMP (network data management protocol) operations in your environment. The
data mover name must match the node name that you specified when you
registered the NAS node to the Tivoli Storage Manager server.
To define a data mover for a NAS node named NASNODE1, enter the following
example command:
define datamover nasnode1 type=nas hladdress=netapp2 lladdress=10000 userid=root
password=admin dataformat=netappdump
In this command:
v The high-level address is an IP address for the NAS file server, either a
numerical address or a host name.
v The low-level address is the IP port for NDMP sessions with the NAS file server.
The default is port number 10000.
v The user ID is the ID defined to the NAS file server that authorizes an NDMP
session with the NAS file server (for this example, the user ID is the
administrative ID for the NetApp file server).
v The password parameter is a valid password for authentication to an NDMP
session with the NAS file server.
v The data format is NETAPPDUMP. This is the data format that the NetApp file
server uses for tape backup. This data format must match the data format of the
target storage pool.
Defining tape drives and paths for NDMP operations
Define the tape drives that you want to use in NDMP (network data management
protocol) operations and the paths to those drives. Depending on your hardware
and network connections, you can use the drives for only NDMP operations, or for
both traditional Tivoli Storage Manager operations and NDMP operations.
Perform the following steps to define tape drives and paths for NDMP operations:
1. Define an example drive named NASDRIVE1 for the library named NASLIB by
issuing the following command:
define drive naslib nasdrive1 element=117
Important: When you define SCSI drives to the Tivoli Storage Manager server,
the ELEMENT parameter must contain a number if the library has more than
one drive. If the drive is shared between the NAS (network attached storage)
file server and the Tivoli Storage Manager server, the element address is
automatically detected. If the library is connected to a NAS file server only,
there is no automatic detection of the element address and you must supply it.
Element numbers are available from device manufacturers. Element numbers
for tape drives are also available in the device support information available on
the Tivoli Web site at http://www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
2. Define a path for the drive. For example, if the drive is to be used only for
NDMP operations, issue the following command:
define path nasnode1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=rst0l
Attention: For a drive connected only to the NAS file server, do not specify
ASNEEDED for the CLEANFREQUENCY parameter of the DEFINE DRIVE
command.
For example, if a drive is to be used for both Tivoli Storage Manager and
NDMP operations, issue the following commands:
define path server1 nasdrive1 srctype=server desttype=drive
library=naslib device=mt3.0.0.2
define path nasnode1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=rst0l
Defining a path to a library
Define a path to the SCSI library from either the Tivoli Storage Manager or the
NAS (network attached storage) file server.
1. For a SCSI Library connected to Tivoli Storage Manager, issue the following
example command to define a path from the server, named SERVER1, to the
SCSI library named TSMLIB:
define path server1 tsmlib srctype=server desttype=library
device=lb0.0.0.2
2. For a SCSI library connected to a NAS file server, issue the following example
command to define a path between a NetApp NAS data mover named
NASNODE1 and a library named NASLIB:
define path nasnode1 naslib srctype=datamover desttype=library device=mc0
3. For a 349X library, define a path to the library from the Tivoli Storage Manager
server. For example, issue the following command to define a path from the
server, named SERVER1, to the 349X library named TSMLIB:
define path server1 tsmlib srctype=server desttype=library
device=library1
Attention: DEFINE PATH is not needed for an automated cartridge system
library software (ACSLS) library.
Obtaining special file names for path definitions:
When you are creating paths, you must provide special file names for tape libraries
and drives.
For paths from a NAS data mover, the value of the DEVICE parameter in the
DEFINE PATH command is the name by which the NAS (network attached
storage) file server knows a library or drive. You can obtain these names, known as
special file names, by querying the NAS file server. For information about how to
obtain names for devices that are connected to a NAS file server, consult the
product information for the file server.
1. To obtain the special file names for tape libraries on a NetApp Release ONTAP
10.0 GX, or later, file server, connect to the file server using telnet and issue the
SYSTEM HARDWARE TAPE LIBRARY SHOW command. To obtain the special
file names for tape drives on a NetApp Release ONTAP 10.0 GX, or later, file
server, connect to the file server using telnet and issue the SYSTEM
HARDWARE TAPE DRIVE SHOW command. For details about these
commands, see the NetApp ONTAP GX file server product documentation.
2. For releases earlier than NetApp Release ONTAP 10.0 GX, continue to use the
SYSCONFIG command. For example, to display the device names for tape
libraries, connect to the file server using telnet and issue the following
command:
sysconfig -m
To display the device names for tape drives, issue the following command:
sysconfig -t
3. For the Celerra file server, connect to the Celerra control workstation using
telnet. To see the devices attached to a particular data mover, use the
“server_devconfig” command on the control station:
server_devconfig server_# -p -s -n
The SERVER_# is the data mover on which the command should be run.
Labeling and checking tapes into the library
You must label the tapes and check them into the tape library.
These tasks are the same as for other libraries. For more information, see:
“Labeling media” on page 175
Scheduling NDMP operations
You can schedule the backup or restore of images produced by NDMP (network
data management protocol) operations by using administrative schedules that
process the BACKUP NODE or RESTORE NODE administrative commands.
The BACKUP NODE and RESTORE NODE commands can be used only for nodes
of TYPE=NAS. See “Backing up and restoring NAS file servers using NDMP” on
page 241 for information about the commands.
For example, to create an administrative schedule called NASSCHED to back up
all file systems for a node named NASNODE1, enter the following:
define schedule nassched type=administrative cmd='backup node nasnode1' active=yes
starttime=20:00 period=1 perunits=days
The schedule is active, and is set to run at 8:00 p.m. every day. See Chapter 19,
“Automating server operations,” on page 589 for more information.
Defining virtual file spaces
Use a virtual file space definition to perform NAS (network attached storage)
directory level backups. In order to reduce backup and restore times for large file
systems, map a directory path from a NAS file server to a virtual file space name
on the Tivoli Storage Manager server.
To create a virtual file space name for the directory path on the NAS device, issue
the DEFINE VIRTUALFSMAPPING command:
define virtualfsmapping nas1 /mikesdir /vol/vol1 /mikes
This command defines a virtual file space name of /MIKESDIR on the server which
represents the directory path of /VOL/VOL1/MIKES on the NAS file server
represented by node NAS1. See “Directory-level backup and restore for NDMP
operations” on page 247 for more information.
Tape-to-tape copy to back up data
When using NDMP (network data management protocol) tape-to-tape function to
back up data, the library type can be SCSI, 349X, or ACSLS (automated cartridge
system library software). Drives can be shared between the NAS (network attached
storage) devices and the Tivoli Storage Manager server.
Note: When using the NDMP tape-to-tape copy function, your configuration setup
could affect the performance of the Tivoli Storage Manager back end data
movement.
For example, if you have one NAS device with paths to four drives in a library,
use the MOVE DATA command after you complete your configuration setup. The
following command moves data on the volume VOL1 to any available volumes in
the same storage pool as VOL1:
move data vol1
Tape-to-tape copy to move data
To move data from an old tape technology to a new tape technology by using the
NDMP (network data management protocol) tape-to-tape copy operation, perform
the following steps in addition to the regular steps in your configuration setup.
Note: When using the NDMP tape-to-tape copy function, your configuration setup
could affect the performance of the Tivoli Storage Manager back end data
movement.
1. Define one drive in the library, lib1, that has old tape technology:
define drive lib1 drv1 element=1035
2. Define one drive in the library, lib2, that has new tape technology:
define drive lib2 drv1 element=1036
3. Move data on volume vol1 in the primary storage pool to the volumes in
another primary storage pool, nasprimpool2:
move data vol1 stgpool=nasprimpool2
Backing up and restoring NAS file servers using NDMP
After you have completed the steps to configure Tivoli Storage Manager for NDMP
(network data management protocol) operations, you are ready to begin using
NDMP.
Use either a client interface or an administrative interface to perform a file system
image backup. For example, to use the Windows backup-archive client interface to
back up a file system named /vol/vol1 on a NAS (network attached storage) file
server named NAS1, issue the following command:
dsmc backup nas -nasnodename=nas1 {/vol/vol1}
For more information on the command, see Tivoli Storage Manager for Windows
Backup-Archive Clients Installation and User’s Guide or Tivoli Storage Manager for
UNIX Backup-Archive Clients Installation and User’s Guide.
Tip: Whenever you use the client interface, you are asked to authenticate yourself
as a Tivoli Storage Manager administrator before the operation can begin. The
administrator ID must have at least client owner authority for the NAS node.
You can perform the same backup operation with a server interface. For example,
from the administrative command-line client, back up the file system named
/vol/vol1 on a NAS file server named NAS1, by issuing the following command:
backup node nas1 /vol/vol1
Note: The BACKUP NAS and BACKUP NODE commands do not include
snapshots. To back up snapshots see “Backing up and restoring with snapshots” on
page 247.
You can restore the image using either interface. Backups are identical whether
they are backed up using a client interface or a server interface. For example,
suppose you want to restore the image backed up in the previous examples. For
this example the file system named /vol/vol1 is being restored to /vol/vol2.
Restore the file system with the following command, issued from a Windows
backup-archive client interface:
dsmc restore nas -nasnodename=nas1 {/vol/vol1} {/vol/vol2}
You can choose to restore the file system, using a server interface. For example, to
restore the file system name /vol/vol1 to file system /vol/vol2, for a NAS file
server named NAS1, enter the following command:
restore node nas1 /vol/vol1 /vol/vol2
You can restore data from one NAS vendor system to another NAS vendor system
when you use the NDMPDUMP data format, but you should either verify
compatibility between systems or maintain a separate storage pool for each NAS
vendor.
NAS file servers: backups to a single Tivoli Storage Manager server
If you have several NAS (network attached storage) file servers in different
locations, you might prefer to send the backup data to a single Tivoli Storage
Manager server rather than attaching a tape library to each NAS device.
When you store NAS backup data in the Tivoli Storage Manager server’s storage
hierarchy, you can apply Tivoli Storage Manager back end data management
functions. Migration, reclamation, and disaster recovery are among the features
that are supported when you use the NDMP filer-to-server option.
To back up a NAS device to a Tivoli Storage Manager native storage pool,
set the destination storage pool in the copy group to point to the desired native
storage pool. The destination storage pool provides the information about the
library and drives used for backup and restore. You should ensure that there is
sufficient space in your target storage pool to contain the NAS data, which can be
backed up to sequential, disk, or file type devices. Defining a separate device class
is not necessary.
If you are creating a table of contents, specify a management class with the
TOCDESTINATION parameter in the DEFINE COPYGROUP or UPDATE
COPYGROUP command. When backing up a NAS file server to Tivoli Storage
Manager native pools, the TOCDESTINATION can be the same as the destination
of the NDMP (network data management protocol) data.
Firewall considerations are more stringent than they are for filer-to-attached-library
configurations because communications can be initiated by either the Tivoli Storage
Manager server or the NAS file server. NDMP tape servers run as threads within
the Tivoli Storage Manager server, and the tape server accepts connections on port
10001. You can change this port number through the following option in the Tivoli
Storage Manager server options file: NDMPPORTRANGE port-number-low,
port-number-high.
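For example, to keep NDMP tape server sessions within a specific range of ports,
you might add a line like the following to the server options file, dsmserv.opt (the
upper bound shown here is illustrative, not a default):
ndmpportrange 10001,10050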
During NDMP filer-to-server backup operations, you can use the
NDMPPREFDATAINTERFACE option to specify which network interface the Tivoli
Storage Manager server uses to receive NDMP backup data. The value for this
option is a host name or IPv4 address that is associated with one of the active
network interfaces of the system on which the Tivoli Storage Manager server is
running. This interface must be IPv4-enabled.
Before using this option, verify that your NAS device supports NDMP operations
that use a different network interface for NDMP control and NDMP data
connections. NDMP control connections are used by Tivoli Storage Manager to
authenticate with an NDMP server and monitor an NDMP operation while NDMP
data connections are used to transmit and receive backup data during NDMP
operations. You must still configure your NAS device to route NDMP backup and
restore data to the appropriate network interface.
When enabled, the NDMPPREFDATAINTERFACE option affects all subsequent
NDMP filer-to-server operations. It does not affect NDMP control connections
because they use the system’s default network interface. You can update this server
option without stopping and restarting the server by using the SETOPT command
(Set a server option for dynamic update).
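Because the option can be updated dynamically, a command like the following (the
address is hypothetical) takes effect without a server restart:
setopt ndmpprefdatainterface 10.0.0.1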
NetApp file servers provide an NDMP option (ndmpd.preferred_interface) to
change the interface used for NDMP data connections. Refer to the documentation
that came with your NAS device for more information.
See “Performing NDMP filer to Tivoli Storage Manager server backups” for steps
on how to perform NDMP filer-to-server backups.
See the Administrator’s Reference for server option information.
Performing NDMP filer to Tivoli Storage Manager server
backups
You can back up data to a single Tivoli Storage Manager server rather than
attaching a tape library to each NAS device.
For a filer-to-server backup of a NAS file system, perform the following steps:
1. Set up a native storage pool for the NAS data by issuing the following
command:
define stgpool naspool disk
Or, select an existing native storage pool with enough available space to hold
your NAS backup data.
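Note that a random-access storage pool that uses the DISK device class has no
usable space until volumes are defined in it. A minimal sketch, assuming a
Windows path and a 100 GB volume (both illustrative):
define volume naspool g:\tsmdata\nasvol1.dsm formatsize=102400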
2. Set the copy destination to the storage pool defined previously and activate the
associated policy set.
update copygroup standard standard standard destination=naspool
tocdestination=naspool
activate policyset standard standard
The destination for NAS data is determined by the destination in the copy
group. The storage size estimate for NAS differential backups uses the
occupancy of the file space, the same value that is used for a full backup. You
can use this size estimate as one of the considerations in choosing a storage
pool. One of the attributes of a storage pool is the MAXSIZE value, which
causes data to be sent to the NEXT storage pool if the estimated size exceeds
the MAXSIZE value. Because NAS differential backups to Tivoli Storage
Manager native storage pools use the base file space occupancy size as a
storage size estimate, differential backups end up in the same storage pool as
the full backup. Depending on collocation settings, differential backups might
end up on the same media as the full backup.
3. Set up a node and data mover for the NAS device. The data format signifies
that the backup images created by this NAS device are a dump type of backup
image in a NetApp-specific format.
register node nas1 nas1 type=nas domain=standard
define datamover nas1 type=nas hla=nas1 user=root
password=***** dataformat=netappdump
The NAS device is now ready to be backed up to a Tivoli Storage Manager
server storage pool. Paths may be defined to local drives, but the destination
specified by the management class determines the target location for this
backup operation.
4. Back up the NAS device to the Tivoli Storage Manager storage pool by issuing
the following command:
backup node nas1 /vol/vol0
5. Restore a NAS device from the Tivoli Storage Manager storage pool by issuing
the following command:
restore node nas1 /vol/vol0
File-level backup and restore for NDMP operations
When you do a backup via NDMP (network data management protocol), you can
specify that the Tivoli Storage Manager server collect and store file-level
information in a table of contents (TOC).
If you specify this option at the time of backup, you can later display the table of
contents of the backup image. Through the backup-archive Web client, you can
select individual files or directories to restore directly from the backup images
generated.
Collecting file-level information requires additional processing time, network
resources, storage pool space, temporary database space, and possibly a mount
point during the backup. You should consider dedicating more space in the Tivoli
Storage Manager server database. You must set up policy so that the Tivoli Storage
Manager server stores the table of contents in a different storage pool from the one
where the backup image is stored. The table of contents is treated like any other
object in that storage pool.
You also have the option to do a backup via NDMP without collecting file-level
restore information.
To allow creation of a table of contents for a backup via NDMP, you must define
the TOCDESTINATION attribute in the backup copy group for the management
class to which this backup image is bound. You cannot specify a copy storage pool
or an active-data pool as the destination. The storage pool you specify for the TOC
destination must have a data format of either NATIVE or NONBLOCK, so it
cannot be the tape storage pool used for the backup image.
If you choose to collect file-level information, specify the TOC parameter in the
BACKUP NODE server command. Or, if you initiate your backup using the client,
you can specify the TOC option in the client options file, client option set, or client
command line. You can specify NO, PREFERRED, or YES. When you specify
PREFERRED or YES, the Tivoli Storage Manager server stores file information for a
single NDMP-controlled backup in a table of contents (TOC). The table of contents
is placed into a storage pool. After that, the Tivoli Storage Manager server can
access the table of contents so that file and directory information can be queried by
the server or client. Use of the TOC parameter allows a table of contents to be
generated for some images and not others, without requiring different
management classes for the images.
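For example, the following commands request a table of contents from the server
interface and the client interface, respectively (the node and file system names
reuse the earlier examples):
backup node nas1 /vol/vol1 toc=yes
dsmc backup nas -nasnodename=nas1 -toc=preferred {/vol/vol1}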
See the Administrator’s Reference for more information about the BACKUP NODE
command.
To avoid mount delays and ensure sufficient space, use random access storage
pools (DISK device class) as the destination for the table of contents. For sequential
access storage pools, no labeling or other preparation of volumes is necessary if
scratch volumes are allowed.
See “Managing table of contents” on page 224 for more information.
Interfaces for file-level restore
When you restore individual files and directories, you have the choice of using one
of two interfaces to initiate the restore: the backup-archive Web client or the server
interface.
Restore Using Backup-Archive Web Client
The backup-archive Web client requires that a table of contents exist in
order to restore files and directories. The Web client must be on a Windows
system. The Tivoli Storage Manager server accesses the table of contents
from the storage pool and loads TOC information into a temporary
database table. Then, you can use the backup-archive Web client to
examine directories and files contained in one or more file system images,
and select individual files or directories to restore directly from the backup
images generated.
Restore Using Server Interface
v If you have a table of contents, use the QUERY NASBACKUP command
to display information about backup images generated by NDMP
(network data management protocol), and to see which images have a
corresponding table of contents. Then, use the RESTORE NODE
command with the FILELIST parameter.
v If you did not create a table of contents, the contents of the backup
image cannot be displayed. You can restore individual files, directories,
or both if you know the name of the file or directory, and in which
image the backup is located. Use the RESTORE NODE command with
the FILELIST parameter.
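For example, a restore that names the files explicitly might look like the following
command (the file names are hypothetical):
restore node nas1 /vol/vol1 filelist="/vol/vol1/dir1/file1,/vol/vol1/dir1/file2"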
International characters for NetApp file servers
All systems that create or access data on a particular NAS (network attached
storage) file server volume must do so in a manner compatible with the volume
language setting.
You should install Data ONTAP 6.4.1 or later, if it is available, on your NetApp
NAS file server to obtain full support for international characters in the names of
files and directories.
If your level of Data ONTAP is earlier than 6.4.1, you must have one of the
following two configurations in order to collect and restore file-level information.
Results with configurations other than these two are unpredictable. The Tivoli
Storage Manager server will print a warning message (ANR4946W) during backup
operations. The message indicates that the character encoding of NDMP file history
messages is unknown, and UTF-8 will be assumed in order to build a table of
contents. It is safe to ignore this message only for the following two configurations.
v Your data has directory and file names that contain only English (7-bit ASCII)
characters.
v Your data has directory and file names that contain non-English characters and
the volume language is set to the UTF-8 version of the proper locale (for
example, de.UTF-8 for German).
If your level of Data ONTAP is 6.4.1 or later, you must have one of the following
three configurations in order to collect and restore file-level information. Results
with configurations other than these three are unpredictable.
v Your data has directory and file names that contain only English (7-bit ASCII)
characters and the volume language is either not set or is set to one of these:
– C (POSIX)
– en
– en_US
– en.UTF-8
– en_US.UTF-8
v Your data has directory and file names that contain non-English characters, and
the volume language is set to the proper locale (for example, de.UTF-8 or de for
German).
Tip: Using the UTF-8 version of the volume language setting is more efficient in
terms of Tivoli Storage Manager server processing and table of contents storage
space.
v You only use CIFS to create and access your data.
File-level restore from a directory-level backup image
File-level restore is supported for directory-level backup images.
As with a NAS (network attached storage) file system backup, a table of contents
(TOC) is created during a directory-level backup and you are able to browse the
files in the image, using the Web client. The default is that the files are restored to
the original location. During a file-level restore from a directory-level backup,
however, you can either select a different file system or another virtual file space
name as a destination.
For a TOC of a directory level backup image, the path names for all files are
relative to the directory specified in the virtual file space definition, not the root of
the file system.
Directory-level backup and restore
If you have a large NAS (network attached storage) file system, initiating a backup
at a directory level will reduce backup and restore times and provide more
flexibility in configuring your NAS backups. By defining virtual file spaces, a file
system backup can be partitioned among several NDMP backup operations and
multiple tape drives. You can also use different backup schedules to back up
sub-trees of a file system.
The virtual file space name cannot be identical to any file system on the NAS
node. If a file system is created on the NAS device with the same name as a virtual
file system, a name conflict will occur on the Tivoli Storage Manager server when
the new file space is backed up. See the Administrator’s Reference for more
information about virtual file space mapping commands.
Note: Virtual file space mappings are only supported for NAS nodes.
Directory-level backup and restore for NDMP operations
The DEFINE VIRTUALFSMAPPING command maps a directory path of a NAS
(network attached storage) file server to a virtual file space name on the Tivoli
Storage Manager server. After a mapping is defined, you can conduct NAS
operations such as BACKUP NODE and RESTORE NODE, using the virtual file
space names as if they were actual NAS file spaces.
To start a backup of the directory, issue the BACKUP NODE command specifying
the virtual file space name instead of a file space name. To restore the directory
subtree to the original location, run the RESTORE NODE command and specify the
virtual file space name.
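A minimal sketch of the sequence, with hypothetical directory and virtual file
space names:
define virtualfsmapping nas1 /userdirs /vol/vol1 /home
backup node nas1 /userdirs mode=differential toc=yes
restore node nas1 /userdirs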
Virtual file space definitions can also be specified as the destination in a RESTORE
NODE command. This allows you to restore backup images (either file system or
directory) to a directory on any file system of the NAS device.
You can use the Web client to select files for restore from a directory-level backup
image because the Tivoli Storage Manager client treats the virtual file space names
as NAS file spaces.
Backing up and restoring with snapshots
NDMP directory-level backup gives you the ability to back up user-created
snapshots of a NAS file system; these snapshots are stored on the file system as
subdirectories. The snapshots can be taken at any time, and the backup to tape can
be deferred to a more convenient time.
For example, to back up a snapshot created for a NetApp file system, perform the
following steps:
1. On the console for the NAS device, issue the command to create the snapshot.
SNAP CREATE is the command for a NetApp device.
snap create vol2 february17
This command creates a snapshot named FEBRUARY17 of the /vol/vol2 file
system. The physical location of the snapshot data is the directory
/vol/vol2/.snapshot/february17. The stored location for snapshot data depends
on the NAS vendor implementation. For NetApp, the SNAP LIST command
can be used to display all snapshots for a given file system.
2. Define a virtual file space mapping definition on the Tivoli Storage Manager
server for the snapshot data created in the previous step.
define virtualfsmapping
nas1 /feb17snapshot
/vol/vol2
/.snapshot/february17
This creates a virtual file space mapping definition named /feb17snapshot.
3. Back up the virtual file space mapping.
backup node nas1 /feb17snapshot mode=full toc=yes
4. After the backup is created, you can either restore the entire snapshot image or
restore an individual file. Before restoring the data you can create a virtual file
space mapping name for the target directory. You can select any file system
name as a target. The target location in this example is the directory
/feb17snaprestore on the file system /vol/vol1.
define virtualfsmapping
nas1 /feb17snaprestore
/vol/vol1
/feb17snaprestore
5. Issue the restore of the snapshot backup image.
restore node nas1 /feb17snapshot /feb17snaprestore
This restores a copy of the /vol/vol2 file system to the directory
/vol/vol1/feb17snaprestore in the same state as when the snapshot was
created in the first step.
Backup and restore using NetApp SnapMirror to Tape feature
You can back up very large NetApp file systems using the NetApp SnapMirror to
Tape feature. Using a block-level copy of data for backup, the SnapMirror to Tape
method is faster than a traditional Network Data Management Protocol (NDMP)
full backup and can be used when NDMP full backups are impractical.
Use the NDMP SnapMirror to Tape feature as a disaster recovery option for
copying very large NetApp file systems to secondary storage. For most NetApp
file systems, use the standard NDMP full or differential backup method.
Using a parameter option on the BACKUP NODE and RESTORE NODE
commands, you can back up and restore file systems using SnapMirror to Tape.
There are several limitations and restrictions on how SnapMirror images can be
used. Consider the following guidelines before you use it as a backup method:
v You cannot initiate a SnapMirror to Tape backup or restore operation from the
Tivoli Storage Manager Web client, command-line client or the Administration
Center.
v You cannot perform differential backups of SnapMirror images.
v You cannot perform a directory-level backup using SnapMirror to Tape;
therefore, Tivoli Storage Manager does not permit a SnapMirror to Tape backup
operation on a server virtual file space.
v You cannot perform an NDMP file-level restore operation from SnapMirror to
Tape images. Therefore, a table of contents is never created during SnapMirror
to Tape image backups.
v At the start of a SnapMirror to Tape copy operation, the file server generates a
snapshot of the file system. NetApp provides an NDMP environment variable to
control whether this snapshot should be removed at the end of the SnapMirror
to Tape operation. Tivoli Storage Manager always sets this variable to remove
the snapshot.
v After a SnapMirror to Tape image is retrieved and copied to a NetApp file
system, the target file system is left configured as a SnapMirror partner.
NetApp provides an NDMP environment variable to control whether this
SnapMirror relationship should be broken. Tivoli Storage Manager always
"breaks" the SnapMirror relationship during the retrieval. After the restore
operation is complete, the target file system is in the same state as that of the
original file system at the point in time of the backup.
See the BACKUP NODE and RESTORE NODE commands in the Administrator’s
Reference for more information on the SnapMirror to Tape feature.
NDMP backup operations using Celerra file server integrated
checkpoints
When the Tivoli Storage Manager server initiates an NDMP backup operation on a
Celerra data mover, the backup of a large file system might take several hours to
complete. Without Celerra integrated checkpoints enabled, any changes occurring
on the file system are written to the backup image.
As a result, the backup image includes changes made to the file system during the
entire backup operation and is not a true point-in-time image of the file system.
If you are performing NDMP backups of Celerra file servers, you should upgrade
the operating system of your data mover to Celerra file server version T5.5.25.1 or
later. This version of the operating system allows enablement of integrated
checkpoints for all NDMP backup operations from the Celerra Control
Workstation. Enabling this feature ensures that NDMP backups represent true
point-in-time images of the file system that is being backed up.
Refer to the Celerra file server documentation for instructions on enabling
integrated checkpoints during all NDMP backup operations.
If your version of the Celerra file server operating system is earlier than version
T5.5.25.1 and if you use NDMP to back up Celerra data movers, you should
manually generate a snapshot of the file system using Celerra’s command line
checkpoint feature and then initiate an NDMP backup of the checkpoint file system
rather than the original file system.
Refer to the Celerra file server documentation for instructions on creating and
scheduling checkpoints from the Celerra control workstation.
Chapter 10. Defining device classes
A device class represents a device type that Tivoli Storage Manager can use to
determine which types of devices and volumes are available to store client-node
data in primary storage pools, copy storage pools, and active-data pools. Device
classes are also important for storing database backups and for exporting and
importing data.
Sequential-access device types include tape, optical, and sequential-access disk. For
random access storage, Tivoli Storage Manager supports only the DISK device
class, which is defined by Tivoli Storage Manager.
To define a device class, use the DEFINE DEVCLASS command and specify the
DEVTYPE parameter. The DEVTYPE parameter assigns a device type to the device
class. You can define multiple device classes for each device type. For example,
you might need to specify different attributes for different storage pools that use
the same type of tape drive. Variations may be required that are not specific to the
device, but rather to how you want to use the device (for example, mount
retention or mount limit). For all device types other than FILE or SERVER, you
must define libraries and drives to Tivoli Storage Manager before you define the
device classes.
To update an existing device class definition, use the UPDATE DEVCLASS
command. You can also delete a device class and query a device class using the
DELETE DEVCLASS and QUERY DEVCLASS commands, respectively.
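For example, assuming a device class named TAPECLASS has already been defined
(the name is illustrative), the following commands update, query, and then delete
it:
update devclass tapeclass mountretention=5
query devclass tapeclass format=detailed
delete devclass tapeclass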
Task                                          Required Privilege Class
Define, update, or delete device classes      System or unrestricted storage
Request information about device classes      Any administrator
Remember:
v One device class can be associated with multiple storage pools, but each storage
pool is associated with only one device class.
v If you include the DEVCONFIG option in the dsmserv.opt file, the files that you
specify with that option are automatically updated with the results of the
DEFINE DEVCLASS, UPDATE DEVCLASS, and DELETE DEVCLASS
commands.
v Tivoli Storage Manager now allows SCSI libraries to include tape drives of more
than one device type. When you define the device class in this environment, you
must declare a value for the FORMAT parameter.
See the following topics:
Tasks
“Defining tape and optical device classes” on page 253
“Defining 3592 device classes” on page 257
“Device classes for devices supported by operating-system drivers” on page 260
“Defining device classes for removable media devices” on page 260
“Defining sequential-access disk (FILE) device classes” on page 260
Tasks
“Defining LTO device classes” on page 264
“Defining SERVER device classes” on page 267
“Defining device classes for StorageTek VolSafe devices” on page 268
“Defining device classes for CENTERA devices” on page 269
“Obtaining information about device classes” on page 270
“How Tivoli Storage Manager fills volumes” on page 271
For details about commands and command parameters, see the Administrator’s
Reference.
The examples in topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see
Administrator’s Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Sequential-access device types
Tivoli Storage Manager supports tape devices, magnetic disk devices, optical
devices, removable media devices, and virtual volumes.
The following tables list supported devices, media types, and Tivoli Storage
Manager device types.
For details and updates, see the following Web site: http://www.ibm.com/software/sysmgmt/products/support/IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
Table 20. Tape devices

Examples                                 Media type                            Device type
IBM 3570 drives                          IBM 3570 cartridges                   3570
IBM 3590, 3590E drives                   IBM 3590 cartridges                   3590
IBM 3592 drives                          IBM 3592 cartridges                   3592
IBM 7206-005                             4 mm cartridges                       4MM
IBM 7208-001 and 7208-011                8 mm cartridges                       8MM
Sony GY-2120, Sony DMS-8400 drives       Digital tape format (DTF) cartridges  DTF
Sun StorageTek SD-3, 9490, 9840,         Tape cartridges                       ECARTRIDGE
  9940, and T10000 drives
Tape drives supported by operating       Tape cartridges                       GENERICTAPE
  system device drivers
IBM 3580                                 LTO Ultrium cartridges                LTO
Tape drives supported by the NAS         Unknown                               NAS
  file server for backups
IBM 7207                                 Quarter-inch tape cartridges          QIC
Sun StorageTek 9840 drives               Write-once read-many (WORM)           VOLSAFE
                                           tape cartridges

Table 21. Magnetic disk devices

Examples                 Media type                      Device type
Sequential-access disk   File system or storage volumes  FILE
EMC Centera              File system or storage volumes  CENTERA

Table 22. Optical devices

Examples                  Media type                               Device type
5.25-inch optical drives  5.25-inch rewritable optical cartridges  OPTICAL
5.25-inch optical drives  5.25-inch write-once read-many (WORM)    WORM
                            optical cartridges

Table 23. Removable media (file system) devices

Examples                             Media type                 Device type
Removable media devices that are     Iomega Zip or Jaz, DVD,    REMOVABLEFILE
  attached as local, removable         or CD media
  file systems

Table 24. Virtual volumes

Examples                      Media type                         Device type
Tivoli Storage Manager        Storage volumes or files archived  SERVER
  target server                 in another Tivoli Storage
                                Manager server
Defining tape and optical device classes
Device class definitions for tapes include parameters that let you control storage
operations.
Specifying the estimated capacity of tape and optical volumes
Tivoli Storage Manager also uses estimated capacity to determine when to begin
reclamation of storage pool volumes.
For tape and optical device classes, the default values selected by the server
depend on the recording format used to write data to the volume. You can either
accept the default for a given device type or specify a value.
To specify estimated capacity for tape volumes, use the ESTCAPACITY parameter
when you define the device class or update its definition.
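For example, the following command sets an estimated capacity of 350 GB on a
hypothetical device class named TAPECLASS:
update devclass tapeclass estcapacity=350g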
For more information about how Tivoli Storage Manager uses the estimated
capacity value, see “How Tivoli Storage Manager fills volumes” on page 271.
Specifying recording formats for tape and optical media
You can specify the recording format used by Tivoli Storage Manager when
writing data to tape and optical media.
To specify a recording format, use the FORMAT parameter when you define the
device class or update its definition.
If all drives associated with that device class are identical, specify
FORMAT=DRIVE. The server selects the highest format that is supported by the
drive on which a volume is mounted.
If some drives associated with the device class support a higher density format
than others, specify a format that is compatible with all drives. If you specify
FORMAT=DRIVE, mount errors can occur. For example, suppose a device class
uses two incompatible devices such as an IBM 7208-2 and an IBM 7208-12. The
server might select the high-density recording format of 8500 for each of two new
volumes. Later, if the two volumes are to be mounted concurrently, one fails
because only one of the drives is capable of the high-density recording format.
If drives in a single SCSI library use different tape technologies (for example, DLT
and LTO Ultrium), specify a unique value for the FORMAT parameter in each
device class definition.
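For example, if a single SCSI library named MIXEDLIB (a hypothetical name)
contains both DLT and LTO Ultrium 2 drives, you might define one device class
per technology with explicit formats:
define devclass dltclass devtype=dlt format=dlt40 library=mixedlib
define devclass ltoclass devtype=lto format=ultrium2c library=mixedlib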
The recording format that Tivoli Storage Manager uses for a given volume is
selected when the first piece of data is written to the volume. Updating the
FORMAT parameter does not affect media that already contain data until those
media are rewritten from the beginning. This process might happen after a volume
is reclaimed or deleted, or after all of the data on the volume expires.
Associating library objects with device classes
A library contains the drives that can be used to mount the volume. Only one
library can be associated with a given device class. However, multiple device
classes can reference the same library.
To associate a device class with a library, use the LIBRARY parameter when you
define a device class or update its definition.
Controlling media-mount operations for tape and optical
devices
Using device class definitions, you can control the number of mounted volumes,
the amount of time a volume remains mounted, and the amount of time that the
Tivoli Storage Manager server waits for a drive to become available.
Controlling the number of simultaneously mounted volumes
When setting a mount limit for a device class, you need to consider the number of
storage devices connected to your system, whether you are using simultaneous
write, whether you are associating multiple device classes with a single library, and
the number of processes that you want to run at the same time.
When selecting a mount limit for a device class, consider the following issues:
v How many storage devices are connected to your system?
Do not specify a mount limit value that is greater than the number of associated
available drives in your installation. If the server tries to mount as many
volumes as specified by the mount limit and no drives are available for the
required volume, an error occurs and client sessions may be terminated. (This
does not apply when the DRIVES parameter is specified.)
v Are you using the simultaneous write function to primary storage pools, copy
storage pools, and active-data pools?
Specify a mount limit value that provides a sufficient number of mount points to
support a simultaneous write to the primary storage pool and all associated
copy storage pools and active-data pools.
v Are you associating multiple device classes with a single library?
A device class associated with a library can use any drive in the library that is
compatible with the device class’ device type. Because you can associate more
than one device class with a library, a single drive in the library can be used by
more than one device class. However, Tivoli Storage Manager does not manage
how a drive is shared among multiple device classes.
v How many Tivoli Storage Manager processes do you want to run at the same
time, using devices in this device class?
Tivoli Storage Manager automatically cancels some processes to run other,
higher priority processes. If the server is using all available drives in a device
class to complete higher priority processes, lower priority processes must wait
until a drive becomes available. For example, Tivoli Storage Manager cancels the
process for a client backing up directly to tape if the drive being used is needed
for a server migration or tape reclamation process. Tivoli Storage Manager
cancels a tape reclamation process if the drive being used is needed for a client
restore operation. For additional information, see “Preemption of client or server
operations” on page 584.
If processes are often canceled by other processes, consider whether you can
make more drives available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for drives.
This consideration also applies to the simultaneous write function. You must
have enough drives available to allow for a successful simultaneous write.
Best Practice: If the library associated with this device class is EXTERNAL type,
explicitly specify the mount limit instead of using MOUNTLIMIT=DRIVES.
To specify the maximum number of volumes that can be simultaneously mounted,
use the MOUNTLIMIT parameter when you define the device class or update its
definition.
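For example, to limit a hypothetical device class named TAPECLASS to four
simultaneously mounted volumes:
update devclass tapeclass mountlimit=4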
Controlling the amount of time that a volume remains mounted
You can control the amount of time that a mounted volume remains mounted after
its last I/O activity. If a volume is used frequently, you can improve performance
by setting a longer mount retention period to avoid unnecessary mount and
dismount operations.
If mount operations are being handled by manual, operator-assisted activities, you
might want to specify a long mount retention period. For example, if only one
operator supports your entire operation on a weekend, then define a long mount
retention period so that the operator is not being asked to mount volumes every
few minutes.
To control the amount of time a mounted volume remains mounted, use the
MOUNTRETENTION parameter when you define the device class or update its
definition. For example, if the mount retention value is 60, and a mounted volume
remains idle for 60 minutes, then the server dismounts the volume.
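For example, to keep idle volumes in a hypothetical device class named
TAPECLASS mounted for 60 minutes:
update devclass tapeclass mountretention=60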
While Tivoli Storage Manager has a volume mounted, the drive is allocated to
Tivoli Storage Manager and cannot be used for anything else. If you need to free
the drive for other uses, you can cancel Tivoli Storage Manager operations that are
using the drive and then dismount the volume. For example, you can cancel server
migration or backup operations. For information on how to cancel processes and
dismount volumes, see:
v “Canceling server processes” on page 584
v “Dismounting idle volumes” on page 192
Controlling the amount of time that the server waits for a drive
You can specify the maximum amount of time, in minutes, that the Tivoli Storage
Manager server waits for a drive to become available for the current mount
request.
To control wait time, use the MOUNTWAIT parameter when you define the device
class or update its definition.
This parameter is not valid for EXTERNAL or RSM library types.
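For example, to make the server wait up to 10 minutes for a drive in a
hypothetical device class named TAPECLASS:
update devclass tapeclass mountwait=10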
Write-once, read-many (WORM) devices
The WORM parameter specifies whether the drive being defined is a WORM
device. This parameter is not supported for all device classes. You cannot change
the value of the WORM parameter using the UPDATE DEVCLASS command.
For an example that shows how to configure a VolSafe device using the WORM
parameter, see “Defining device classes for StorageTek VolSafe devices” on page
268.
Defining 3592 device classes
Device class definitions for 3592 devices include parameters for faster
volume-access speeds and drive encryption. Particular methods are required to
prevent or minimize problems when mixing different generations of 3592 drives in
a library.
Mixing generations of 3592 media in a single library
For optimal performance, do not mix generations of 3592 media in a single library.
Media problems can result when different drive generations are mixed. For
example, Tivoli Storage Manager might not be able to read a volume’s label.
The following table shows read-and-write interoperability for the three generations.
Drives         Generation 1 format   Generation 2 format   Generation 3 format
Generation 1   Read and write        n/a                   n/a
Generation 2   Read and write        Read and write        n/a
Generation 3   Read only             Read and write        Read and write
If you must mix generations of drives, use one of the following methods in the
following table to prevent or minimize the potential for problems.
Mixing generations of drives
(349X, ACSLS, and SCSI libraries) Force all 3592 generation 3 drives to always write in the
generation 2 density. Do this by explicitly setting the FORMAT parameter on the device
class to either 3592-2 or 3592-2C.
Both generation 2 and generation 3 drives can read media written in the generation 2
format. All drives can verify labels and read all data written on the media. However, this
configuration does not allow the generation 3 drives to write or read in their optimal
format.
Generation 3 drives can read the generation 1 format but cannot write it, so mark all
media previously written in the generation 1 format as read-only. Generation 3 drives can
both read and write the generation 2 format.
(349X and ACSLS libraries only) Logically partition the generations without partitioning the
hardware. Define two or three new library objects for each drive generation that the
physical library contains. For example, if you have a physical library with 3592-2 drives
and 3592-3 drives, define two new library objects.
Specify a path with the same special file name for each new library object. In addition, for
349X libraries, specify disjoint scratch categories (including the WORMSCRATCH category,
if applicable) for each library object. Specify a new device class and a new storage pool
that points to each new library object.
(SCSI libraries only) Define a new storage pool and device class for the generation 3 drives.
Set the FORMAT parameter to 3592-3 or 3592-3C. (Do not specify DRIVE.) The original
device class will have a FORMAT parameter set to 3592, 3592C, 3592-2, or 3592-2C (not
DRIVE). Update the MAXSCRATCH parameter to 0 for the storage pool that will contain
all the media written in generation 1 or generation 2 formats. For example: UPDATE
STGPOOL genpool1 MAXSCRATCH=0.
This method allows both generations to use their optimal format and minimizes potential
media problems that can result from mixing generations. However, it does not resolve all
media issues. For example, competition for mount points and mount failures might result.
(To learn more about mount point competition in the context of LTO drives and media, see
“Defining LTO device classes” on page 264.) The following list describes media restrictions:
v CHECKIN LIBVOL: The issue arises when you use the CHECKLABEL=YES option. If
the label is currently written in a generation 3 format and you specify the
CHECKLABEL=YES option, the command fails for drives of previous generations. As a
best practice, use CHECKLABEL=BARCODE.
v LABEL LIBVOL: When the server tries to use drives of a previous generation to read the
label written in a generation 3 format, the LABEL LIBVOL command fails unless
OVERWRITE=YES is specified. Verify that the media being labeled with OVERWRITE=YES
does not have any active data.
v CHECKOUT LIBVOL: When Tivoli Storage Manager verifies the label
(CHECKLABEL=YES) and the label is written in a generation 3 format, the
command fails if the read is attempted on a drive of a previous generation. As a
best practice, use CHECKLABEL=NO.
Controlling data-access speeds for 3592 volumes
Tivoli Storage Manager lets you reduce media capacity to create volumes with
faster data-access speeds. The benefit is that you can partition data into storage
pools that have volumes with faster data-access speeds.
To reduce media capacity, use the SCALECAPACITY parameter when you define
the device class or update its definition.
Specify a percentage value of 20, 90, or 100. A value of 20 percent provides the
fastest access time, and 100 percent provides the largest storage capacity. For
example, if you specify a scale capacity of 20 for a 3592 device class without
compression, a 3592 volume in that device class would store 20 percent of its full
capacity of 300 GB, or about 60 GB.
Scale capacity only takes effect when data is first written to a volume. Updates to
the device class for scale capacity do not affect volumes that already have data
written to them until the volume is returned to scratch status.
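For example, to trade capacity for access speed on a hypothetical 3592 device class
named 3592CLASS:
update devclass 3592class scalecapacity=20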
For information about setting up storage pool hierarchies, see “Setting up a storage
pool hierarchy” on page 296.
Encrypting data with 3592 generation 2 and generation 3 drives
With Tivoli Storage Manager, you can use the following types of drive encryption
with 3592 generation 2 and generation 3 drives: Application, System, and Library.
These methods are defined through the hardware.
Enabling 3592 drive encryption:
The DRIVEENCRYPTION parameter specifies whether drive encryption is enabled
or can be enabled for 3592 generation 2 formats (3592-2 and 3592-2C) and 3592
generation 3 formats (3592-3 and 3592-3C). Use this parameter to ensure Tivoli
Storage Manager compatibility with hardware encryption settings for empty
volumes.
v To use the Application method, in which Tivoli Storage Manager generates and
manages encryption keys, set the DRIVEENCRYPTION parameter to ON. This
permits the encryption of data for empty volumes. If the parameter is set to ON
and if the hardware is configured for another encryption method, backup
operations will fail.
v To use the Library or System methods of encryption, set the parameter to
ALLOW. This specifies that Tivoli Storage Manager is not the key manager for
drive encryption, but will allow the hardware to encrypt the volume’s data
through one of the other methods. Specifying this parameter does not
automatically encrypt volumes. Data can only be encrypted by specifying the
ALLOW parameter and configuring the hardware to use one of these methods.
The following simplified example shows how to permit the encryption of data for
empty volumes in a storage pool, using Tivoli Storage Manager as the key
manager:
1. Define a library. For example:
define library 3584 libtype=SCSI
2. Define a device class, 3592_ENCRYPT, and specify the value ON for the
DRIVEENCRYPTION parameter. For example:
define devclass 3592_encrypt library=3584 devtype=3592 driveencryption=on
3. Define a storage pool. For example:
define stgpool 3592_encrypt_pool 3592_encrypt
The DRIVEENCRYPTION parameter is optional. The default value is to allow the
Library or System methods of encryption.
For more information about using drive encryption, refer to “Encrypting data on
tape” on page 516.
Disabling 3592 drive encryption:
To disable any method of encryption on new volumes, set the
DRIVEENCRYPTION parameter to OFF. If the hardware is configured to encrypt
data through either the Library or System method and DRIVEENCRYPTION is set
to OFF, backup operations will fail.
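For example, to turn off encryption for new volumes in the device class defined in
the earlier example:
update devclass 3592_encrypt driveencryption=off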
Device classes for devices supported by operating-system
drivers
To use a tape device that is supported by an operating-system device driver, you
must define a device class whose device type is GENERICTAPE.
For a manual library with multiple drives of device type GENERICTAPE, ensure
that the device types and recording formats of the drives are compatible. Because
the devices are controlled by the operating system device driver, the Tivoli Storage
Manager server is not aware of the following:
v The actual type of device: 4 mm, 8 mm, digital linear tape, and so forth. For
example, if you have a 4 mm device and an 8 mm device, you must define a
separate manual library for each device.
v The actual cartridge recording format. For example, if you have a manual library
defined with two device classes of GENERICTAPE, ensure the recording formats
are the same for both drives.
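A minimal sketch, assuming a manual library named MANUALLIB has already
been defined (the names are illustrative):
define devclass gtclass devtype=generictape library=manuallib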
Defining device classes for removable media devices
To access volumes that belong to this device class, the server requests that the
removable media be mounted in drives. The server then opens a file on the media
and reads or writes the file data.
Removable file devices include:
Iomega Zip drives, Iomega Jaz drives, CD drives, and DVD drives
To define a device class for removable media, use the
DEVTYPE=REMOVABLEFILE parameter in the device class definition.
The Tivoli Storage Manager REMOVABLEFILE device class supports only single-sided
media. Therefore, if a data cartridge that is associated with a REMOVABLEFILE
device class has two sides, the Tivoli Storage Manager server treats each side as a
separate Tivoli Storage Manager volume.
When using CD-ROM media for the REMOVABLEFILE device type, the library
type must be specified as MANUAL. Access this media through a drive letter, for
example, E:.
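For example, a device class for DVD media in a manual library might look like the
following sketch (the library name is hypothetical):
define devclass dvdclass devtype=removablefile library=dvdlib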
For more information, see:
“Configuring removable media devices” on page 126
Defining sequential-access disk (FILE) device classes
FILE device classes are used for storing data on disk in simulated storage volumes.
The storage volumes are actually files. Data is written sequentially into the file
system of the server machine. Because each volume in a FILE device class is
actually a file, a volume name must be a fully qualified file name.
To define a FILE device class, use the DEVTYPE=FILE parameter in the device
class definition.
Do not use raw partitions with a device class type of FILE.
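A minimal sketch of a FILE device class on a Windows server, with an illustrative
directory, volume size, and mount limit:
define devclass fileclass devtype=file directory="d:\tsmdata\volumes" maxcapacity=5g mountlimit=20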
Concurrent access to FILE volumes
Concurrent access improves restore performance by allowing two or more clients
to access the same volume at the same time.
The Tivoli Storage Manager server allows multiple client sessions (archive, retrieve,
backup, and restore) or server processes, for example, storage pool backup, to
concurrently read a volume in a storage pool associated with a FILE-type device
class. In addition, one client session can write to the volume while it is being read.
The following server processes are allowed shared read access to FILE volumes:
v BACKUP DB
v BACKUP STGPOOL
v COPY ACTIVEDATA
v EXPORT/IMPORT NODE
v EXPORT/IMPORT SERVER
v GENERATE BACKUPSET
v RESTORE STGPOOL
v RESTORE VOLUME
The following server processes are not allowed shared read access to FILE
volumes:
v AUDIT VOLUME
v DELETE VOLUME
v MIGRATION
v MOVE DATA
v MOVE NODEDATA
v RECLAMATION
Mitigating performance degradation when backing up or
archiving to FILE volumes
The minimum I/O to a volume associated with a FILE device class is 256 KB,
regardless how much data is being written to the volume. For example, if you are
backing up one 500-byte object, it takes 256 KB of I/O to store it on the volume.
The size of the I/O for a volume associated with a FILE device class has the
greatest impact when backing up or archiving a large number of small objects, for
example, small files or small directories.
To reduce the potential for performance degradation, increase the size of
aggregates created by the server. (An aggregate is an object that contains multiple
logical files that are backed up or archived from a client in a single transaction.) To
increase the size of aggregates, do one of the following:
v Increase the value of the TXNGROUPMAX option in the server options file
(dsmserv.opt).
v Increase the value of the TXNGROUPMAX parameter on the REGISTER NODE
or UPDATE NODE server commands.
In addition to increasing the TXNGROUPMAX value, you might also need to
increase the values for the following options:
v The client option TXNBYTELIMIT in the client options file (dsm.opt)
v The server options MOVEBATCHSIZE and MOVESIZETHRESH
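For example, the following settings raise the transaction limits; the values shown
are illustrative and should be tuned for your environment. In the server options
file (dsmserv.opt):
txngroupmax 2048
On the server, for a specific node (the node name is hypothetical):
update node nodea txngroupmax=2048
In the client options file (dsm.opt):
txnbytelimit 204800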
For details about the client option TXNBYTELIMIT, refer to the Backup-Archive
Clients Installation and User’s Guide. For details about server commands and
options, refer to the Administrator’s Reference.
Specifying directories in FILE device-class definitions
The directory name in a FILE device-class definition identifies the location where
the server places the files that represent storage volumes for the device class. When
processing the DEFINE DEVCLASS command, the server expands the specified
directory name into its fully qualified form, starting from the root directory.
You can specify one or more directories as the location of the files used in the FILE
device class. The default is the current working directory of the server at the time
the command is issued.
Attention: Do not specify multiple directories from the same file system. Doing
so can cause incorrect space calculations. For example, if the directories /usr/dir1
and /usr/dir2 are in the same file system, the space check, which does a
preliminary evaluation of available space during store operations, will count each
directory as a separate file system. If space calculations are incorrect, the server
could commit to a FILE storage pool, but not be able to obtain space, causing the
operation to fail. If the space check is accurate, the server can skip the FILE pool in
the storage hierarchy and use the next storage pool if one is available.
If the server needs to allocate a scratch volume, it creates a new file in the
specified directory or directories. (The server can choose any of the directories in
which to create new scratch volumes.) To optimize performance, ensure that
multiple directories correspond to separate physical volumes.
The following table lists the file name extension created by the server for scratch
volumes, depending on the type of data that is stored.

For scratch volumes used to store this data:   The file extension is:
Client data                                    .BFS
Export                                         .EXP
Database backup                                .DBB
Avoiding data-integrity problems when using disk subsystems
and file systems
Tivoli Storage Manager supports the use of remote file systems or drives for
reading and writing storage pool data, database backups, and other data
operations. Disk subsystems and file systems must not report a write operation as
successful to Tivoli Storage Manager if that write can subsequently fail.
A write failure after a successful notification constitutes a data-integrity problem
because the data that was reported as successfully written is unavailable for
retrieval. In this situation, all data subsequently written is also at risk due to
positioning mismatches within the target file. To avoid these problems, ensure that
disk subsystems and file systems, whatever implementation you use, are always
able to return data when the data is requested.
For important disk-related information, see “Requirements for disk subsystems” on
page 103.
Giving storage agents access to FILE volumes
You must ensure that storage agents can access newly created FILE volumes. To
access FILE volumes, storage agents replace names from the directory list in the
device class definition with the names in the directory list for the associated path
definition.
The following example illustrates the importance of matching device classes and
paths to ensure that storage agents can access newly created FILE volumes.
Suppose you want to use these three directories for a FILE library:
c:\server
d:\server
e:\server
1. Use the following command to set up a FILE library named CLASSA with one
drive named CLASSA1 on SERVER1:
define devclass classa devtype=file
directory="c:\server,d:\server,e:\server"
shared=yes mountlimit=1
2. You want the storage agent STA1 to be able to use the FILE library, so you
define the following path for storage agent STA1:
define path server1 sta1 srctype=server desttype=drive device=file
directory="\\192.168.1.10\c\server,\\192.168.1.10\d\server,
\\192.168.1.10\e\server" library=classa
In this scenario, the storage agent, STA1, will replace the directory name
c:\server with the directory name \\192.168.1.10\c\server to access FILE
volumes that are in the c:\server directory on the server.
File volume c:\server\file1.dsm is created by SERVER1. If you later change the
first directory for the device class with the following command:
update devclass classa directory="c:\otherdir,d:\server,e:\server"
SERVER1 will still be able to access file volume c:\server\file1.dsm, but the
storage agent STA1 will not be able to access it because a matching directory name
in the PATH directory list no longer exists. If a directory name is not available in
the directory list associated with the device class, the storage agent can lose access
to a FILE volume in that directory. Although the volume will still be accessible
from the Tivoli Storage Manager server for reading, failure of the storage agent to
access the FILE volume can cause operations to be retried on a LAN-only path or
to fail.
Controlling the size of FILE volumes
You can specify a maximum capacity value that controls the size of volumes (that
is, files) associated with a FILE device class.
To restrict the size of volumes, use the MAXCAPACITY parameter when you
define a device class or update its definition. When the server detects that a
volume has reached a size equal to the maximum capacity, it treats the volume as
full and stores any new data on a different volume.
Controlling the number of concurrently open FILE volumes
Tivoli Storage Manager lets you restrict the number of mount points (volumes or
files) that can be concurrently opened for access by server storage and retrieval
operations. Attempts to access more volumes than the number indicated causes the
requester to wait.
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, Tivoli Storage Manager cancels the
process for a client backup if the mount point being used is needed for a server
migration or reclamation process. Tivoli Storage Manager cancels a reclamation
process if the mount point being used is needed for a client restore operation. For
additional information, see “Preemption of client or server operations” on page
584.
If processes are often canceled by other processes, consider whether you can make
more mount points available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for resources.
To specify the number of concurrently opened mount points, use the
MOUNTLIMIT parameter when you define the device class or update its
definition.
Defining LTO device classes
Special consideration is required to prevent or minimize problems when mixing
different generations of LTO drives and media in a single library. LTO drive
encryption might also be a consideration.
Mixing LTO drives and media in a library
When mixing different generations of LTO drives and media, you need to consider
the read-write capabilities of each generation. As a best practice, configure a
different device class for each generation of media.
If you are considering mixing different generations of LTO media and drives, be
aware of the following restrictions:
Table 25. Read-write capabilities for different generations of LTO drives

Drives         Generation 1     Generation 2     Generation 3     Generation 4
               media            media            media            media
Generation 1   Read and write   n/a              n/a              n/a
Generation 2   Read and write   Read and write   n/a              n/a
Generation 3   Read only        Read and write   Read and write   n/a
Generation 4   n/a              Read only        Read and write   Read and write
If you are mixing different types of drives and media, configure different device
classes: one for each type of media. To specify the exact media type, use the
FORMAT parameter in each of the device class definitions. (Do not specify
FORMAT=DRIVE). For example, if you are mixing Ultrium Generation 1 and
Ultrium Generation 2 drives, specify FORMAT=ULTRIUMC (or ULTRIUM) for the
Ultrium Generation 1 device class, and FORMAT=ULTRIUM2C (or ULTRIUM2) for
the Ultrium Generation 2 device class.
Both device classes can point to the same library in which there can be Ultrium
Generation 1 and Ultrium Generation 2 drives. The drives will be shared between
the two storage pools. One storage pool will use the first device class and Ultrium
Generation 1 media exclusively. The other storage pool will use the second device
class and Ultrium Generation 2 media exclusively. Because the two storage pools
share a single library, Ultrium Generation 1 media can be mounted on Ultrium
Generation 2 drives as they become available during mount point processing.
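For example, the two device classes might be defined as follows; the library name LTOLIB and the device class names are illustrative:
define devclass lto1class library=ltolib devtype=lto format=ultriumc
define devclass lto2class library=ltolib devtype=lto format=ultrium2c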
Note:
1. If you are mixing Ultrium Generation 1 and Ultrium Generation 3 drives and
media in a single library, you must mark the Generation 1 media as read-only,
and all Generation 1 scratch volumes must be checked out.
2. If you are mixing Ultrium Generation 2 and Ultrium Generation 4 drives and
media in a single library, you must mark the Generation 2 media as read-only,
and all Generation 2 scratch volumes must be checked out.
Mount limits in LTO mixed-media environments
In a mixed-media library, in which multiple device classes point to the same
library, compatible drives are shared between storage pools. You must pay special
attention to setting an appropriate value for the MOUNTLIMIT parameter in each
of the device classes. In a mixed media library containing Ultrium Generation 1
and Ultrium Generation 2 drives and media, for example, Ultrium Generation 1
media can get mounted in Ultrium Generation 2 drives.
Consider the example of a mixed library that consists of the following drives and
media:
v Four LTO Ultrium Generation 1 drives and LTO Ultrium Generation 1 media
v Four LTO Ultrium Generation 2 drives and LTO Ultrium Generation 2 media
You created the following device classes:
v LTO Ultrium Generation 1 device class LTO1CLASS specifying
FORMAT=ULTRIUMC
v LTO Ultrium Generation 2 device class LTO2CLASS specifying
FORMAT=ULTRIUM2C
You also created the following storage pools:
v LTO Ultrium Generation 1 storage pool LTO1POOL based on device class
LTO1CLASS
v LTO Ultrium Generation 2 storage pool LTO2POOL based on device class
LTO2CLASS
The number of mount points available for use by each storage pool is specified in
the device class using the MOUNTLIMIT parameter. The MOUNTLIMIT parameter
in the LTO2CLASS device class should be set to 4 to match the number of available
drives that can mount only LTO2 media. The MOUNTLIMIT parameter in the
LTO1CLASS device class should be set to a value higher (5 or possibly 6) than the
number of available drives to adjust for the fact that Ultrium Generation 1 media
can be mounted in Ultrium Generation 2 drives. The optimum value for
MOUNTLIMIT will depend on workload and storage pool access patterns.
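For example, given the eight drives in this library, the mount limits might be set as follows; the values reflect the reasoning above and would need tuning for your workload:
update devclass lto1class mountlimit=6
update devclass lto2class mountlimit=4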
Monitor and adjust the MOUNTLIMIT setting to suit changing workloads. If the
MOUNTLIMIT for LTO1POOL is set too high, mount requests for the LTO2POOL
might be delayed or fail because the Ultrium Generation 2 drives have been used
to satisfy Ultrium Generation 1 mount requests. In the worst scenario, too much
competition for Ultrium Generation 2 drives might cause mounts for Generation 2
media to fail with the following message:
ANR8447E No drives are currently available in the library.
If the MOUNTLIMIT for LTO1POOL is not set high enough, mount requests that
could potentially be satisfied by LTO Ultrium Generation 2 drives will be delayed.
Some restrictions apply when mixing Ultrium Generation 1 with Ultrium
Generation 2 or Generation 3 drives because of the way in which mount points are
allocated. For example, processes that require multiple mount points that include
both Ultrium Generation 1 and Ultrium Generation 2 volumes might try to reserve
Ultrium Generation 2 drives only, even when one mount can be satisfied by an
available Ultrium Generation 1 drive. Processes that behave in this manner include
the MOVE DATA and BACKUP STGPOOL commands. These processes will wait
until the needed number of mount points can be satisfied with Ultrium Generation
2 drives.
Encrypting data using LTO generation 4 drives
Tivoli Storage Manager supports the three types of drive encryption available with
LTO generation 4 drives: Application, System, and Library. These methods are
defined through the hardware.
For more information about using drive encryption, refer to “Encrypting data on
tape” on page 516.
Enabling LTO drive encryption
The DRIVEENCRYPTION parameter specifies whether drive encryption is enabled
or can be enabled for IBM and HP LTO generation 4 drives (ULTRIUM4 and
ULTRIUM4C formats). This parameter ensures Tivoli Storage Manager
compatibility with hardware encryption settings for empty volumes.
Tivoli Storage Manager supports the Application method of encryption with IBM
and HP LTO-4 drives. Only IBM LTO-4 supports the System and Library methods.
The Library method of encryption is supported only if your system hardware (for
example, IBM 3584) supports it.
Remember: You cannot use drive encryption with write-once, read-many (WORM)
media.
The Application method is defined through the hardware. To use the Application
method, in which Tivoli Storage Manager generates and manages encryption keys,
set the DRIVEENCRYPTION parameter to ON. This permits the encryption of data
for empty volumes. If the parameter is set to ON and the hardware is configured
for another encryption method, backup operations will fail.
The following simplified example shows the steps you would take to permit the
encryption of data for empty volumes in a storage pool:
1. Define a library:
define library 3584 libtype=SCSI
2. Define a device class, LTO_ENCRYPT, and specify Tivoli Storage Manager as
the key manager:
define devclass lto_encrypt library=3584 devtype=lto driveencryption=on
3. Define a storage pool:
define stgpool lto_encrypt_pool lto_encrypt
Disabling LTO drive encryption
To disable encryption on new volumes, set the DRIVEENCRYPTION parameter to
OFF. The default value is ALLOW, which permits drive encryption of empty
volumes if another method of encryption is enabled.
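For example, to stop encrypting data on new volumes in the device class defined in the previous example:
update devclass lto_encrypt driveencryption=off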
Defining SERVER device classes
SERVER device classes let you create volumes for one Tivoli Storage Manager
server that exist as archived files in the storage hierarchy of another server, called a
target server. These virtual volumes have the characteristics of sequential-access
volumes such as tape.
To define a SERVER device class, use the DEFINE DEVCLASS command with the
DEVTYPE=SERVER parameter. For information about how to use a SERVER device
class, see “Using virtual volumes to store data on another server” on page 730.
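For example, a minimal definition might look like the following; the device class name SRVCLASS and the server name TARGETSRV are illustrative, and TARGETSRV must already be defined to the source server:
define devclass srvclass devtype=server servername=targetsrv
The MAXCAPACITY, MOUNTLIMIT, and MOUNTRETENTION parameters described in the following sections can be added to this definition or set later with the UPDATE DEVCLASS command.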
Controlling the size of files created on a target server
You can specify a maximum capacity value that controls the size of files that are
created on the target server to store data for the source server.
To specify a file size, use the MAXCAPACITY parameter when you define the
device class or update its definition.
The storage pool volumes of this device type are explicitly set to full when the
volume is closed and dismounted.
Controlling the number of simultaneous sessions between
source and target servers
You can control the number of simultaneous sessions between the source server
and the target server. Any attempts to access more sessions than indicated by the
mount limit cause the requester to wait.
To control the number of simultaneous sessions, use the MOUNTLIMIT parameter
when you define the device class or update its definition.
When specifying a mount limit, consider your network load balancing and how
many Tivoli Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available sessions in a device class to
complete higher priority processes, lower priority processes must wait until a
session becomes available. For example, Tivoli Storage Manager cancels the process
for a client backup if a session is needed for a server migration or reclamation
process. Tivoli Storage Manager cancels a reclamation process if the session being
used is needed for a client restore operation.
When specifying a mount limit, also consider the resources available on the target
server. Do not set a high mount limit value if the target server cannot move or
access enough data to satisfy all of the requests.
If processes are often canceled by other processes, consider whether you can make
more sessions available for Tivoli Storage Manager use. Otherwise, review your
scheduling of operations to reduce the contention for network resources.
Controlling the amount of time a SERVER volume remains
mounted
You can improve response time for SERVER media mounts by leaving previously
mounted volumes online.
To specify the amount of time, in minutes, to retain an idle sequential access
volume before dismounting it, use the MOUNTRETENTION parameter when you
define the device class or update its definition.
A value of 1 to 5 minutes is recommended.
Defining device classes for StorageTek VolSafe devices
StorageTek VolSafe brand Ultrium drives use media that cannot be overwritten. Do
not use this media for short-term backups of client files, the server database, or
export tapes.
There are two methods for using VolSafe media and drives:
v Define a device class using the DEFINE DEVCLASS command and specify
DEVTYPE=VOLSAFE. You can use this device class with EXTERNAL, SCSI, and
ACSLS libraries. All drives in a library must be enabled for VolSafe use.
v Define a device class using the DEFINE DEVCLASS command and specify
DEVTYPE=ECARTRIDGE and WORM=YES. For VolSafe devices, WORM=YES is
required and must be specified when the device class is defined. You cannot
update the WORM parameter using the UPDATE DEVCLASS command.
To enable VolSafe function, consult your StorageTek hardware documentation.
Attempting to write to VolSafe media without a VolSafe-enabled drive results in
errors.
To configure a VolSafe device in a SCSI library using the DEVTYPE=ECARTRIDGE
parameter, enter the following series of commands. (The values you select for the
library variable, the drive variable, and so on might be different for your
environment.)
1. Define a library:
define library volsafelib libtype=scsi
2. Define a drive:
define drive volsafelib drive01
3. Define a path:
define path server01 drive01 srctype=server desttype=drive device=mt4.0.0.1
library=volsafelib
4. Define a device class:
define devclass volsafeclass library=volsafelib devtype=ecartridge
format=drive worm=yes
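If you use the DEVTYPE=VOLSAFE method instead, only the device class definition in step 4 changes; a sketch, reusing the names from the example above:
define devclass volsafeclass library=volsafelib devtype=volsafe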
For more information about VolSafe media, see “Write-once, read-many (WORM)
tape media” on page 180.
Defining device classes for CENTERA devices
To use a Centera device, you must define a device class whose device type is
CENTERA.
Server operations not supported by Centera
Centera storage devices do not support some Tivoli Storage Manager server
operations.
The following server operations are not supported:
v Data-movement operations:
– Moving node data into or out of a Centera storage pool.
– Migrating data into or out of a Centera storage pool.
– Reclaiming a Centera storage pool.
v Backup operations:
– Backing up a Centera storage pool.
– Using a Centera device class to back up a database.
– Backing up a storage pool to a Centera storage pool.
– Copying active data to an active-data pool.
v Restore operations:
– Restoring data from a copy storage pool or an active-data pool to a Centera
storage pool.
– Restoring volumes in a Centera storage pool.
v Other:
– Exporting data to a Centera device class or importing data from a Centera
device class; however, files stored in Centera storage pools can be exported
and files being imported can be stored on Centera.
– Using a Centera device class for creating backup sets; however, files stored in
Centera storage pools can be sent to backup sets.
– Defining Centera volumes.
– Using a Centera device class as the target of volume history, device
configuration, trace logs, error logs, or query output files.
Controlling the number of concurrently open mount points for
Centera devices
You can control the number of mount points that can be opened concurrently for
access by server storage and retrieval operations. Any attempts to access more
mount points than indicated by the mount limit cause the requester to wait.
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, suppose that the Tivoli Storage
Manager server is performing a client backup to an output volume when another
client requests a restore of data from the same volume. The backup request is
preempted and the volume is released for use by the restore request.
For additional information, see “Preemption of client or server operations” on page
584.
To control the number of mount points concurrently open for Centera devices, use
the MOUNTLIMIT parameter when you define the device class or update its
definition.
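For example, assuming a Centera device class named CENTCLASS (a hypothetical name), the following command limits the device class to two concurrently open mount points:
update devclass centclass mountlimit=2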
Obtaining information about device classes
You can choose to view a standard or detailed report for a device class.
Task                                          Required Privilege Class
Request information about device classes      Any administrator
To display a standard report on device classes, enter:
query devclass
Figure 33 provides an example of command output.
Device     Device       Storage   Device   Format    Est/Max     Mount
Class      Access       Pool      Type               Capacity    Limit
Name       Strategy     Count                        (MB)
---------  ----------   -------   ------   -------   ---------   ------
DISK       Random       9
TAPE8MM    Sequential   1         8MM      8200                  2
FILE       Sequential   1         FILE     DRIVE     5,000.0     1
GEN1       Sequential   2         LTO      ULTRIUM               DRIVES

Figure 33. Example of a standard device class report
To display a detailed report on the GEN1 device class, enter:
query devclass gen1 format=detailed
Figure 34 on page 271 provides an example of command output.
             Device Class Name: GEN1
        Device Access Strategy: Sequential
            Storage Pool Count: 2
                   Device Type: LTO
                        Format: ULTRIUM
         Est/Max Capacity (MB):
                   Mount Limit: DRIVES
              Mount Wait (min): 60
         Mount Retention (min): 60
                  Label Prefix: ADSM
                  Drive Letter:
                       Library: GEN2LIB
                     Directory:
                   Server Name:
                  Retry Period:
                Retry Interval:
                      TwoSided:
                        Shared:
            High-level Address:
              Minimum Capacity:
                          WORM:
               Scaled Capacity:
Last Update by (administrator): ADMIN
         Last Update Date/Time: 01/23/03 12:25:31

Figure 34. Example of a detailed device class report
How Tivoli Storage Manager fills volumes
The DEFINE DEVCLASS command has an optional ESTCAPACITY parameter that
indicates the estimated capacity for sequential volumes associated with the device
class. Tivoli Storage Manager uses the estimated capacity of volumes to determine
the estimated capacity of a storage pool, and the estimated percent utilized.
If the ESTCAPACITY parameter is not specified, Tivoli Storage Manager uses a
default value based on the recording format specified for the device class
(FORMAT=).
If you specify an estimated capacity that exceeds the actual capacity of the volume
in the device class, Tivoli Storage Manager updates the estimated capacity of the
volume when the volume becomes full. When Tivoli Storage Manager reaches the
end of the volume, it updates the capacity for the amount that is written to the
volume.
You can either accept the default estimated capacity for a given device class, or
explicitly specify an estimated capacity. An accurate estimated capacity value is not
required, but is useful. You may want to change the estimated capacity if:
v The default estimated capacity is inaccurate because data compression is being
performed by the drives.
v You have volumes of nonstandard size.
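For example, if the drives in a device class compress data well enough that tapes routinely hold more than the default estimate, you might raise the estimate; the device class name and value here are illustrative:
update devclass tape8mm estcapacity=9g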
Data compression
Client files can be compressed to decrease the amount of data sent over networks
and the space occupied by the data in Tivoli Storage Manager storage. With Tivoli
Storage Manager, files can be compressed by the Tivoli Storage Manager client
before the data is sent to the Tivoli Storage Manager server, or by the device where
the file is finally stored.
Use either client compression or device compression, but not both. The following
table summarizes the advantages and disadvantages of each type of compression.
Tivoli Storage Manager client compression
    Advantages: Reduced load on the network.
    Disadvantages: Higher CPU usage by the client; longer elapsed time for
    client operations such as backup.

Drive compression
    Advantages: Amount of compression can be better than Tivoli Storage
    Manager client compression on some drives.
    Disadvantages: Using drive compression on files that have already been
    compressed by the Tivoli Storage Manager client can increase file size.
Either type of compression can affect tape drive performance, because compression
affects data rate. When the rate of data going to a tape drive is slower than the
drive can write, the drive starts and stops while data is written, meaning relatively
poorer performance. When the rate of data is fast enough, the tape drive can reach
streaming mode, meaning better performance. If tape drive performance is more
important than the space savings that compression can provide, you may want to
perform timed test backups using different approaches to determine what is best
for your system.
Drive compression is specified with the FORMAT parameter for the drive’s device
class, and the hardware device must be able to support the compression format.
For information about how to set up compression on the client, see “Node
compression considerations” on page 401 and “Registering nodes with the server”
on page 400.
Tape volume capacity and data compression
How Tivoli Storage Manager views the capacity of the volume where the data is
stored depends on whether files are compressed by the Tivoli Storage Manager
client or by the storage device.
It may wrongly appear that you are not getting the full use of the capacity of your
tapes, for the following reasons:
v A tape device manufacturer often reports the capacity of a tape based on an
assumption of compression by the device. If a client compresses a file before it is
sent, the device may not be able to compress it any further before storing it.
v Tivoli Storage Manager records the size of a file as it goes to a storage pool. If
the client compresses the file, Tivoli Storage Manager records this smaller size in
the database. If the drive compresses the file, Tivoli Storage Manager is not
aware of this compression.
Figure 35 on page 273 compares what Tivoli Storage Manager sees as the amount
of data stored on tape when compression is done by the device and by the client.
For this example, the tape has a physical capacity of 1.2 GB. However, the
manufacturer reports the capacity of the tape as 2.4 GB by assuming the device
compresses the data by a factor of two.
Suppose a client backs up a 2.4 GB file:
v When the client does not compress the file, the server records the file size as 2.4
GB, the file is compressed by the drive to 1.2 GB, and the file fills up one tape.
v When the client compresses the file, the server records the file size as 1.2 GB, the
file cannot be compressed any further by the drive, and the file still fills one
tape.
In both cases, Tivoli Storage Manager considers the volume to be full. However,
Tivoli Storage Manager considers the capacity of the volume in the two cases to be
different: 2.4 GB when the drive compresses the file, and 1.2 GB when the client
compresses the file. Use the QUERY VOLUME command to see the capacity of
volumes from Tivoli Storage Manager’s viewpoint. See “Monitoring the use of
storage pool volumes” on page 366.
Figure 35. Comparing compression at the client and compression at the device
For how to set up compression on the client, see “Node compression
considerations” on page 401 and “Registering nodes with the server” on page 400.
Chapter 11. Managing storage pools and volumes
Logical storage pools and storage volumes are the principal components in the
Tivoli Storage Manager model of data storage. By manipulating the properties of
these objects, you can optimize the usage of storage devices.
When you configure devices so that the server can use them to store client data,
you create storage pools and storage volumes. The procedures for configuring
devices create storage pools and volumes with a set of defaults. The defaults can
work well, but you might have specific requirements that the defaults do not
meet. There are three common reasons to change the defaults:
v Optimize and control storage device usage by arranging the storage hierarchy
and tuning migration through the hierarchy (next storage pool, migration
thresholds).
v Reuse tape volumes through reclamation. Reuse is also related to policy and
expiration.
v Keep a client’s files on a minimal number of volumes (collocation).
You can also make other adjustments to tune the server for your systems. See the
following sections to learn more. For some quick tips, see “Task tips for storage
pools” on page 287.
Concepts
“Storage pools” on page 276
“Storage pool volumes” on page 288
“Access modes for storage pool volumes” on page 294
“Storage pool hierarchies” on page 296
“Migrating files in a storage pool hierarchy” on page 307
“Caching in disk storage pools” on page 317
“Writing data simultaneously to primary, copy, and active-data pools” on page 329
“Keeping client files together using collocation” on page 340
“Reclaiming space in sequential-access storage pools” on page 350
“Estimating space needs for storage pools” on page 361
Tasks
“Defining storage pools” on page 281
“Preparing volumes for random-access storage pools” on page 290
“Preparing volumes for sequential-access storage pools” on page 291
“Defining storage pool volumes” on page 292
“Updating storage pool volumes” on page 293
“Setting up a storage pool hierarchy” on page 296
“Monitoring storage-pool and volume usage” on page 363
“Monitoring the use of storage pool volumes” on page 366
“Moving data from one volume to another volume” on page 381
“Moving data belonging to a client node” on page 386
“Renaming storage pools” on page 389
“Defining copy storage pools and active-data pools” on page 389
“Deleting storage pools” on page 393
“Deleting storage pool volumes” on page 393
For details about devices, see:
Chapter 5, “Magnetic disk devices,” on page 103
Chapter 7, “Configuring storage devices,” on page 121
The examples in these topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see
Administrator’s Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
You can also perform Tivoli Storage Manager tasks from the Administration
Center. For more information about using the Administration Center, see
“Managing servers with the Administration Center” on page 33.
Storage pools
A storage pool is a collection of storage volumes. A storage volume is the basic
unit of storage, such as allocated space on a disk or a single tape cartridge. The
server uses the storage volumes to store backed-up, archived, or space-managed
files.
The server provides three types of storage pools that serve different purposes:
primary storage pools, copy storage pools, and active-data pools. You can arrange
primary storage pools in a storage hierarchy. The group of storage pools that you set
up for the Tivoli Storage Manager server to use is called server storage.
Primary storage pools
When a user tries to restore, retrieve, recall, or export file data, the requested file is
obtained from a primary storage pool, if possible. Primary storage pool volumes
are always located on-site.
The server has a default DISKPOOL storage pool that uses random-access disk
storage. You can easily create other disk storage pools and storage pools that use
tape and other sequential-access media by using the Device Configuration Wizard in
the Tivoli Storage Manager Console.
To prevent a single point of failure, create separate storage pools for backed-up
and space-managed files, and do not share any storage pool between the two
storage pool hierarchies. Consider setting up a separate, random-access disk
storage pool to give clients fast access to their space-managed files.
Restriction: Backing up a migrated, space-managed file could result in an error if
the destination for the backup is the same storage pool as the storage pool where
the space-managed file currently exists.
A primary storage pool can use random-access storage (DISK device class) or
sequential-access storage (for example, tape or FILE device classes).
Copy storage pools
Copy storage pools contain active and inactive versions of data that is backed up
from primary storage pools. Copy storage pools provide a means of recovering
from disasters or media failures.
For example, when a client attempts to retrieve a file and the server detects an
error in the file copy in the primary storage pool, the server marks the file as
damaged. At the next attempt to access the file, the server can obtain the file from
a copy storage pool.
You can move copy storage pool volumes off-site and still have the server track the
volumes. Moving copy storage pool volumes off-site provides a means of
recovering from an on-site disaster.
A copy storage pool can use only sequential-access storage (for example, a tape
device class or FILE device class).
Remember:
v You can back up data from a primary storage pool defined with the NATIVE,
NONBLOCK, or any of the NDMP formats (NETAPPDUMP, CELERRADUMP,
or NDMPDUMP). The target copy storage pool must have the same data format
as the primary storage pool.
v You cannot back up data from a primary storage pool defined with a CENTERA
device class.
For details about copy storage pools, see:
v “Restoring storage pools” on page 791
v “Backing up storage pools” on page 774
v “Recovering a lost or damaged storage pool volume” on page 811
v “Ensuring the integrity of files” on page 806
v “Backing up the data in a storage hierarchy” on page 301
v “Setting up copy storage pools and active-data pools” on page 302
v “Backing up storage pools” on page 774
Active-data pools
An active-data pool contains only active versions of client backup data. Active-data
pools are useful for fast client restores, reducing the number of on-site or off-site
storage volumes, or reducing bandwidth when copying or restoring files that are
vaulted electronically in a remote location.
Data migrated by hierarchical storage management (HSM) clients and archive data
are not permitted in active-data pools. As updated versions of backup data
continue to be stored in active-data pools, older versions are deactivated and
removed during reclamation processing.
Restoring a primary storage pool from an active-data pool might cause some or all
inactive files to be deleted from the database if the server determines that an
inactive file needs to be replaced but cannot find it in the active-data pool. As a
best practice and to protect your inactive data, therefore, you should create a
minimum of two storage pools: one active-data pool, which contains only active
data, and one copy storage pool, which contains both active and inactive data. You
can use the active-data pool volumes to restore critical client node data, and
afterward you can restore the primary storage pools from the copy storage pool
volumes. Active-data pools should not be considered for recovery of a primary pool
or volume unless the loss of inactive data is acceptable.
Active-data pools can use any type of sequential-access storage (for example, a
tape device class or FILE device class). However, the precise benefits of an
active-data pool depend on the specific device type associated with the pool. For
example, active-data pools associated with a FILE device class are ideal for fast
client restores because FILE volumes do not have to be physically mounted and
because the server does not have to position past inactive files that do not have to
be restored. In addition, client sessions restoring from FILE volumes in an
active-data pool can access the volumes concurrently, which also improves restore
performance.
Active-data pools that use removable media, such as tape or optical, offer similar
benefits. Although tapes need to be mounted, the server does not have to position
past inactive files. However, the primary benefit of using removable media in
active-data pools is the reduction of the number of volumes used for on-site and
off-site storage. If you vault data electronically to a remote location, an active-data
pool associated with a SERVER device class lets you save bandwidth by copying
and restoring only active data.
Remember:
v The server will not attempt to retrieve client files from an active-data pool
during a point-in-time restore. Point-in-time restores require both active and
inactive file versions. Active-data pools contain only active file versions. For
optimal efficiency during point-in-time restores and to avoid switching between
active-data pools and primary or copy storage pools, the server retrieves both
active and inactive versions from the same storage pool and volumes.
v You cannot copy active data to an active-data pool from a primary storage pool
defined with the NETAPPDUMP, the CELERRADUMP, or the NDMPDUMP
data format.
v You cannot copy active data from a primary storage pool defined with a
CENTERA device class.
For details about active-data pools, see:
v “Backing up the data in a storage hierarchy” on page 301
v “Setting up copy storage pools and active-data pools” on page 302
v “Copying active versions of client backup data to active-data pools”
v “Active-data pools as sources of active file versions for server operations” on
page 279
Copying active versions of client backup data to active-data
pools
To copy active versions of client backup files from primary storage pools to
active-data pools, you can issue the COPY ACTIVEDATA command or you can use
simultaneous write. The simultaneous-write function automatically writes active
backup data to active-data pools at the same time that the backup data is written
to a primary storage pool.
You can issue the COPY ACTIVEDATA command either manually or in an
administrative schedule or maintenance script.
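For example, an administrative schedule that copies active data nightly might be defined as follows; the schedule name, storage pool names, and time values are illustrative:
define schedule copy_active type=administrative
cmd="copy activedata backuppool adppool" active=yes starttime=21:00
period=1 perunits=days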
Regardless of whether you use the COPY ACTIVEDATA command or simultaneous
write, the Tivoli Storage Manager server writes data to an active-data pool only if
the data belongs to a node that is a member of a policy domain that specifies the
active-data pool as the destination for active data.
Restriction: The BACKUP STGPOOL command is not supported for active-data
pools.
Active-data pools as sources of active file versions for server
operations
The Tivoli Storage Manager server uses a search order to locate active file versions.
During client sessions and processes that require active file versions, the server
searches the following types of storage pools, if they exist:
1. An active-data pool associated with a FILE device class
2. A random-access disk (DISK) storage pool
3. A primary or copy storage pool associated with a FILE device class
4. A primary, copy, or active-data pool associated with on-site or off-site
removable media (tape or optical)
Even though the list implies a selection order, the server might select a volume
with an active file version from a storage pool lower in the order if a volume
higher in the order cannot be accessed because of the requirements of the session
or process, volume availability, or contention for resources such as mount points,
drives, and data.
Example: Setting up server storage
All the data in four primary storage pools is backed up to one copy storage pool.
Active versions of data are stored in an active-data pool.
Figure 36 on page 280 shows one way to set up server storage. In this example, the
storage defined for the server includes:
v Three disk storage pools, which are primary storage pools: ARCHIVE, BACKUP,
and HSM
v One primary storage pool that consists of tape cartridges
v One copy storage pool that consists of tape cartridges
v One active-data pool that consists of FILE volumes for fast client restore
Policies defined in management classes direct the server to store files from clients
in the ARCHIVE, BACKUP, or HSM disk storage pools. An additional policy
specifies the following:
v A select group of client nodes that requires fast restore of active backup data
v The active-data pool as the destination for the active-data belonging to these
nodes
v The ARCHIVE, BACKUP, or HSM disk storage pools as destinations for archive,
backup (active and inactive versions), and space-managed data
For each of the three disk storage pools, the tape primary storage pool is next in
the hierarchy. As the disk storage pools fill, the server migrates files to tape to
make room for new files. Large files can go directly to tape. For more information
about setting up a storage hierarchy, see “Storage pool hierarchies” on page 296.
For more information about backing up primary storage pools, see “Backing up
storage pools” on page 774.
Figure 36. Example of server storage
To set up this server storage hierarchy, do the following:
1. Define the three disk storage pools, or use the three default storage pools that
are defined when you install the server. Add volumes to the disk storage pools
if you have not already done so.
For more information, see "Configuring random access volumes on disk devices" on page 108.
2. Define policies that direct the server to initially store files from clients in the
disk storage pools. To do this, you define or change management classes and
copy groups so that they point to the storage pools as destinations. Then
activate the changed policy. See “Changing policy” on page 457 for details.
Define an additional policy that specifies the active-data pool that you will
create as the destination for active data.
3. Assign nodes to the domains. Nodes whose active data you want to restore
quickly should be assigned to the domain that specifies the active-data pool.
4. Attach one or more tape devices, or a tape library, to your server system.
Use the Device Configuration Wizard in the Tivoli Storage Manager Console to
configure the device.
For more information, see:
“Defining storage pools” on page 281
Chapter 7, “Configuring storage devices,” on page 121
5. Update the disk storage pools so that they point to the tape storage pool as the
next storage pool in the hierarchy. See “Example: Updating storage pools” on
page 286.
6. Define a copy storage pool and an active-data pool. The copy storage pool can
use the same tape device or a different tape device as the primary tape storage
pool. The active-data pool uses sequential-access disk storage (a FILE-type
device class) for fast client restores. See “Defining copy storage pools and
active-data pools” on page 389.
7. Set up administrative schedules or a script to back up the disk storage pools
and the tape storage pool to the copy storage pool. Use the same or different
schedules or scripts to copy active data to the active-data pool. Send the copy
storage pool volumes off-site for safekeeping. See “Backing up storage pools”
on page 774.
Defining storage pools
To optimize data storage, you can specify various properties when you define or
update a storage pool using the DEFINE STGPOOL and UPDATE STGPOOL
commands.
For most purposes, you should be able to use the Device Configuration Wizard to
define storage pools. If you decide to set some parameters not handled by the
wizard or later change the storage pools, you can use either commands or the
Administration Center.
Tip: When defining or updating storage pools that use LTO Ultrium media, special
considerations might apply.
Task                      Required Privilege Class
Define storage pools      System
Update storage pools      System or unrestricted storage
Properties of storage pool definitions
You can define storage pools using a wide range of properties to control how data
is stored. Each storage pool represents one type of media as specified in the
device-class parameter.
When you define a primary storage pool, be prepared to specify some or all of the
information that is shown in Table 26. Most of the information is optional. Some
information applies only to random-access storage pools or only to
sequential-access storage pools. Required parameters are marked.
Table 26. Information for defining a storage pool

Storage pool name (Required)
    The name of the storage pool.
    Type of storage pool: random, sequential

Device class (Required)
    The name of the device class assigned for the storage pool.
    Type of storage pool: random, sequential

Pool type
    The type of storage pool (primary or copy). The default is to define a
    primary storage pool. A storage pool's type cannot be changed after it has
    been defined.
    Type of storage pool: random, sequential

Maximum number of scratch volumes (2) (Required for sequential access)
    When you specify a value greater than zero, the server dynamically
    acquires scratch volumes when needed, up to this maximum number. For
    automated libraries, set this value equal to the physical capacity of the
    library. For details, see "Adding scratch volumes to automated library
    devices" on page 185.
    Type of storage pool: sequential

Access mode
    Defines access to volumes in the storage pool for user operations (such as
    backup and restore) and system operations (such as reclamation and server
    migration). Possible values are:
    Read/Write
        User and system operations can read from or write to the volumes.
    Read-Only
        User operations can read from the volumes, but not write. Server
        processes can move files within the volumes in the storage pool.
        However, no new writes are permitted to volumes in the storage pool
        from volumes outside the storage pool.
    Unavailable
        User operations cannot get access to volumes in the storage pool. No
        new writes are permitted to volumes in the storage pool from other
        volumes outside the storage pool. However, system processes (like
        reclamation) are permitted to move files within the volumes in the
        storage pool.
    Type of storage pool: random, sequential

Maximum file size (1, 2)
    To exclude large files from a storage pool, set a maximum file size. The
    maximum file size applies to the size of a physical file (a single client
    file or an aggregate of client files). Do not set a maximum file size for
    the last storage pool in the hierarchy unless you want to exclude very
    large files from being stored in server storage.
    Type of storage pool: random, sequential

Cyclic Redundancy Check (CRC) (1)
    Specifies whether the server uses CRC to validate storage pool data during
    audit volume processing. For additional information see "Data validation
    during audit volume processing" on page 799. Using the CRC option in
    conjunction with scheduling audit volume processing continually ensures
    the integrity of data stored in your storage hierarchy. If you always want
    your storage pool data validated, set your primary storage pool CRCDATA
    definition to YES.
    Type of storage pool: random, sequential

Name of the next storage pool (1, 2)
    Specifies the name of the next storage pool in the storage pool hierarchy,
    where files can be migrated or where files are stored that exceed the
    maximum size for this storage pool. See "Storage pool hierarchies" on
    page 296.
    Type of storage pool: random, sequential

Migration thresholds (1, 2)
    Specifies a percentage of storage pool occupancy at which the server
    begins migrating files to the next storage pool (high threshold) and the
    percentage when migration stops (low threshold). See "Migrating files in a
    storage pool hierarchy" on page 307.
    Type of storage pool: random, sequential

Migration processes (1, 2)
    Specifies the number of concurrent processes to use for migrating files
    from this storage pool. See "Migrating disk storage pools" on page 308
    and "Specifying multiple concurrent migration processes" on page 316.
    Type of storage pool: random, sequential

Migration delay (1, 2)
    Specifies the minimum number of days a file must remain in a storage pool
    before it is eligible for migration. See "Keeping files in a storage pool"
    on page 312 and "How the server migrates files from sequential-access
    storage pools" on page 314.
    Type of storage pool: random, sequential

Continue migration process (1, 2)
    Specifies whether migration of files should continue even if files do not
    meet the requirement for migration delay. This setting is used only when
    the storage pool cannot go below the low migration threshold without
    moving additional files. See "Keeping files in a storage pool" on page 312
    and "How the server migrates files from sequential-access storage pools"
    on page 314.
    Type of storage pool: random, sequential

Cache
    Enables or disables cache. When cache is enabled, copies of files migrated
    by the server to the next storage pool are left on disk after the
    migration. In this way, a retrieval request can be satisfied quickly. See
    "Caching in disk storage pools" on page 317.
    Type of storage pool: random

Collocation (2)
    With collocation enabled, the server attempts to keep all files belonging
    to a group of client nodes, a single client node, or a client file space
    on a minimal number of sequential-access storage volumes. See "Keeping
    client files together using collocation" on page 340.
    Type of storage pool: sequential

Reclamation threshold (1, 2)
    Specifies what percentage of reclaimable space can accumulate on a volume
    before the server initiates a space reclamation process for the volume.
    See "Reclamation thresholds" on page 352.
    Type of storage pool: sequential

Reclamation processes (1, 2)
    Specifies the number of concurrent processes to use for reclaiming the
    volumes in a storage pool. See "Optimizing drive usage using multiple
    concurrent reclamation processes" on page 353.
    Type of storage pool: sequential

Off-site reclaim limit
    Specifies the number of off-site volumes to have their space reclaimed
    during reclamation for a storage pool. See "Reclamation of off-site
    volumes" on page 357.
    Type of storage pool: sequential

Reclamation storage pool (1, 2)
    Specifies the name of the storage pool to be used for storing data from
    volumes being reclaimed in this storage pool. Use for storage pools whose
    device class has only one drive or mount point. See "Reclaiming volumes in
    a storage pool with one drive" on page 354.
    Type of storage pool: sequential

Reuse delay period (2)
    Specifies the number of days that must elapse after all of the files have
    been deleted from a volume, before the volume can be rewritten or returned
    to the scratch pool. See "Delaying reuse of volumes for recovery purposes"
    on page 780.
    Type of storage pool: sequential

Overflow location (1, 2)
    Specifies the name of a location where volumes are stored when they are
    ejected from an automated library by the MOVE MEDIA command. Use for a
    storage pool that is associated with an automated library or an external
    library. For details, see "Returning reclaimed volumes to a library
    (Windows)" on page 184.
    Type of storage pool: sequential

Data format (2)
    The format in which data will be stored. NATIVE is the default data
    format. NETAPPDUMP and NONBLOCK are examples of other data formats.
    Type of storage pool: sequential

Copy storage pools (1, 2)
    Specifies the names of copy storage pools where the server simultaneously
    writes data when a client backup, archive, import, or migration operation
    stores data to the primary storage pool. The server writes the data
    simultaneously to all listed copy storage pools. This option is restricted
    to primary random-access storage pools or to primary sequential-access
    storage pools that use the NATIVE or NONBLOCK data format. See the Copy
    continue entry and "Writing data simultaneously to primary, copy, and
    active-data pools" on page 329 for related information.
    Attention: The COPYSTGPOOLS parameter is not intended to replace the
    BACKUP STGPOOL command. If you use the simultaneous-write function, ensure
    that the copy of the primary storage pool is complete by regularly issuing
    the BACKUP STGPOOL command. Failure to do so could result in the inability
    to recover the primary storage pool data if the primary storage pool
    becomes damaged or lost.
    Type of storage pool: random, sequential

Copy continue (1, 2)
    Specifies how the server should react to a copy storage pool write failure
    for any of the copy storage pools listed in the COPYSTGPOOLS parameter.
    With a value of YES, during a write failure, the server will exclude the
    failing copy storage pool from any further writes while that specific
    client session is active. With a value of NO, during a write failure, the
    server will fail the entire transaction, including the write to the
    primary storage pool. This option has no effect on active-data pools.
    Type of storage pool: sequential

Active-data pools (1, 2)
    Specifies the names of active-data pools where the server simultaneously
    writes active versions of client node data during backups. The server
    writes the data simultaneously to all listed active-data pools. This
    option is restricted to primary random-access storage pools or to primary
    sequential-access storage pools that use the NATIVE or NONBLOCK data
    format. Nodes whose data is to be written to an active-data pool during a
    simultaneous-write operation must be members of a policy domain that
    specifies the active-data pool as the destination for active backup data.
    Attention: The ACTIVEDATAPOOLS parameter is not intended to replace the
    COPY ACTIVEDATA command. If you use the simultaneous-write function,
    ensure that the copy of active backup data is complete by regularly
    issuing the COPY ACTIVEDATA command. If you do not issue the COPY
    ACTIVEDATA command regularly and you do not have copy storage pools, you
    might not be able to recover any of the data in a primary storage pool if
    the primary storage pool becomes damaged or lost.
    Type of storage pool: random, sequential

Shredding (1)
    Specifies whether data is physically overwritten when it is deleted. After
    client data is deleted, it might still be possible to recover it. For
    sensitive data, this condition is a potential security exposure. Shredding
    the deleted data increases the difficulty of discovering and
    reconstructing the data later. For more information, including how to set
    up shred pools and how shredding interacts with other command parameters,
    see "Securing sensitive client data" on page 519.
    Type of storage pool: random

Notes:
1. This information is not available for sequential-access storage pools that
   use the following data formats: CELERRADUMP, NDMPDUMP, NETAPPDUMP.
2. This information is not available or is ignored for Centera
   sequential-access storage pools.
Example: Defining storage pools
An engineering department requires a separate storage hierarchy. You want the
department’s backed-up files to go to a disk storage pool. When that pool fills, you
want the files to migrate to a tape storage pool.
You want the storage pools to have the following characteristics:
v Disk primary storage pool
– The pool named ENGBACK1 is the storage pool for the engineering
department.
– The size of the largest file that can be stored is five MB. Files larger than five
MB are stored in the tape storage pool.
– Files migrate from the disk storage pool to the tape storage pool when the
disk storage pool is 85% full. File migration to the tape storage pool stops
when the disk storage pool is down to 40% full.
– The access mode is the default, read/write.
– Cache is used.
v Tape primary storage pool
– The name of the pool is BACKTAPE.
– The pool uses the device class TAPE, which has already been defined.
– No limit is set for the maximum file size, because this is the last storage pool
in the hierarchy.
– To group files from the same client on a small number of volumes, use
collocation at the client node level.
– Use scratch volumes for this pool, with a maximum number of 100 volumes.
– The access mode is the default, read/write.
– Use the default for reclamation: Reclaim a partially full volume (to allow tape
reuse) when 60% of the volume’s space can be reclaimed.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up, perform the following steps:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description='disk storage pool for engineering backups'
maxsize=5m nextstgpool=backtape highmig=85 lowmig=40
Restrictions:
v You cannot establish a chain of storage pools that lead to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
v The storage pool hierarchy includes only primary storage pools, not copy
storage pools or active-data pools.
v If a storage pool uses the data format NETAPPDUMP, CELERRADUMP, or
NDMPDUMP, the server will not perform any of the following functions:
– Migration
– Reclamation
– Volume audits
– Data validation
– Simultaneous write
For more information about data formats, see Chapter 9, “Using NDMP for
operations with NAS file servers,” on page 219.
v The Tivoli Storage Manager server does not support the following functions for
Centera storage pools:
– Data-movement operations:
- Moving node data into or out of a Centera storage pool.
- Migrating data into or out of a Centera storage pool.
- Reclaiming a Centera storage pool.
– Backup operations:
- Backing up a Centera storage pool.
- Using a Centera device class to back up a database.
- Backing up a storage pool to a Centera storage pool.
- Copying active data to an active-data pool.
– Restore operations:
- Restoring data from a copy storage pool or an active-data pool to a Centera
storage pool.
- Restoring volumes in a Centera storage pool.
– Other:
- Exporting data to a Centera device class or importing data from a Centera
device class; however, files stored in Centera storage pools can be exported
and files being imported can be stored on Centera.
- Using a Centera device class for creating backup sets; however, files stored
in Centera storage pools can be sent to backup sets.
- Defining Centera volumes.
- Using a Centera device class as the target of volume history, device
configuration, trace logs, error logs, or query output files.
Example: Updating storage pools
You decide to increase the maximum size of a physical file that can be stored in
the ENGBACK1 disk storage pool.
In this example, the ENGBACK1 disk storage pool is defined as shown in
“Example: Defining storage pools” on page 285. To increase the maximum size of a
physical file that can be stored in the storage pool, use the following command:
update stgpool engback1 maxsize=100m
Restrictions:
v You cannot use this command to change the data format for a storage pool.
v For storage pools that have the NETAPPDUMP, the CELERRADUMP, or the
NDMPDUMP data format, you can modify the following parameters only:
– ACCESS
– COLLOCATE
– DESCRIPTION
– MAXSCRATCH
– REUSEDELAY
Task tips for storage pools
Tivoli Storage Manager provides many functions, such as migration and
reclamation, for optimizing data-storage operations. To take advantage of these
functions, you can create specialized storage pools or specify certain properties in
your storage pool definitions.
Table 27 gives tips on how to accomplish some tasks that are related to storage
pools.
Table 27. Task tips for storage pools

For this goal: Keep the data for a group of client nodes, a single client
node, or a client file space on as few volumes as possible.
    Do this: Enable collocation for the storage pool.
    For more information: "Keeping client files together using collocation"
    on page 340.

For this goal: Reduce the number of volume mounts needed to back up multiple
clients.
    Do this: Disable collocation for the storage pool.
    For more information: "Keeping client files together using collocation"
    on page 340.

For this goal: Perform simultaneous write to a primary storage pool and to
copy storage pools and active-data pools.
    Do this: Provide a list of copy storage pools and active-data pools when
    defining the primary storage pool.
    For more information: "Writing data simultaneously to primary, copy, and
    active-data pools" on page 329.

For this goal: Specify how the server reuses tapes.
    Do this: Set a reclamation threshold for the storage pool. Optional:
    Identify a reclamation storage pool.
    For more information: "Reclaiming space in sequential-access storage
    pools" on page 350.

For this goal: Move data from disk to tape automatically as needed.
    Do this: Set a migration threshold for the storage pool. Identify the
    next storage pool.
    For more information: "Migrating disk storage pools" on page 308.

For this goal: Move data from disk to tape automatically based on how
frequently users access the data or how long the data has been in the
storage pool.
    Do this: Set a migration threshold for the storage pool. Identify the
    next storage pool. Set the migration delay period.
    For more information: "Migrating disk storage pools" on page 308 and
    "Defining storage pools" on page 281.

For this goal: Improve client restore performance using concurrent access to
FILE volumes.
    Do this: Implement a storage pool associated with the FILE device type.
    For more information: "Setting up copy storage pools and active-data
    pools" on page 302.

For this goal: Back up your storage pools.
    Do this: Implement a copy storage pool.
    For more information: "Setting up copy storage pools and active-data
    pools" on page 302.

For this goal: Copy active data from a primary storage pool.
    Do this: Implement an active-data pool.
    For more information: "Setting up copy storage pools and active-data
    pools" on page 302.

For this goal: Have clients back up directly to a tape storage pool.
    Do this: Define a sequential-access storage pool that uses a tape device
    class. Change the policy that the clients use, so that the backup copy
    group points to the tape storage pool as the destination.
    For more information: "Defining storage pools" on page 281 and "Changing
    policy" on page 457.

For this goal: Make the best use of available tape drives and FILE volumes
during reclamation and migration.
    Do this: Specify multiple concurrent processes.
    For more information: "Optimizing drive usage using multiple concurrent
    reclamation processes" on page 353 and "Specifying multiple concurrent
    migration processes" on page 316.

For this goal: Ensure that reclamation completes within the desired amount
of time.
    Do this: Limit the number of off-site volumes to be reclaimed.
    For more information: "Reclamation of off-site volumes" on page 357 and
    "Starting reclamation manually or in a schedule" on page 353.

For this goal: For storage pools associated with random-access and
sequential-access disk (DISK and FILE device classes), automatically create
private volumes and preassign them to specified storage pools when
predetermined space utilization thresholds have been reached.
    Do this: Use the DEFINE SPACETRIGGER and UPDATE SPACETRIGGER commands to
    specify the number and size of volumes.
    For more information: "Preparing volumes for random-access storage pools"
    on page 290 and "Defining storage pool volumes" on page 292.

For this goal: For storage pools associated with random-access disk (DISK
device class) and sequential-access disk (FILE device class), create and
format volumes using one command.
    Do this: Use the DEFINE VOLUME command to specify the size and number of
    volumes to be created.
    For more information: "Preparing volumes for random-access storage pools"
    on page 290 and "Defining storage pool volumes" on page 292.
Storage pool volumes
Storage pool volumes are the physical media that are assigned to a storage pool.
Some examples of volumes are:
v Space allocated on a disk drive
v A tape cartridge
v An optical disk
Storage pools and their volumes are either random access or sequential access,
depending on the device type of the device class to which the pool is assigned.
Random-access storage pool volumes
Random-access storage pools consist of volumes on disk. Random-access storage
pools are always associated with the DISK device class. All volumes in this type of
storage pool have the same form.
A volume in a random-access storage pool is a fixed-size file that is created when
you define a volume for the storage pool or when you use space triggers to
automatically create volumes and assign them to specified storage pools.
For additional information, see:
“Preparing volumes for random-access storage pools” on page 290
“Requirements for disk subsystems” on page 103
Sequential-access storage pool volumes
Sequential-access volumes are volumes in which data is accessed sequentially, one block
at a time, one after the other. Each volume defined in a sequential-access storage
pool must be the same type as the device class associated with the storage pool.
You can define volumes in a sequential-access storage pool or you can specify that
the server dynamically acquire scratch volumes. You can also use a combination of
defined and scratch volumes. What you choose depends on the amount of control
you want over individual volumes.
For information about preparing sequential-access volumes, see “Preparing
volumes for sequential-access storage pools” on page 291.
Types of sequential-access volumes
Each Tivoli Storage Manager sequential-access device type is associated with a
particular type of storage pool volume.
Some examples of sequential-access volumes are:
v Tape cartridge
v Optical disk
v File
Table 28 lists the types of volumes associated with each device type.
Table 28. Volume types

Device Type     Volume Description                                              Label Required
3570            IBM 3570 tape cartridge                                         Yes
3590            IBM 3590 tape cartridge                                         Yes
3592            IBM 3592 tape cartridge                                         Yes
4MM             4 mm tape cartridge                                             Yes
8MM             8 mm tape cartridge                                             Yes
CENTERA         A logical collection of files stored on the Centera             No
                storage device
DLT             A digital linear tape                                           Yes
DTF             A digital tape format (DTF) tape                                Yes
ECARTRIDGE      A cartridge tape that is used by a tape drive such as           Yes
                the StorageTek SD-3 or 9490 tape drive
FILE            A file in the file system of the server machine                 No
GENERICTAPE     A tape that is compatible with the drives that are              Yes
                defined to the device class
LTO             IBM Ultrium tape cartridge                                      Yes
NAS             A tape drive that is used for NDMP backups by a                 Yes
                network-attached storage (NAS) file server
OPTICAL         A two-sided 5.25-inch rewritable optical cartridge              Yes
QIC             A 1/4-inch tape cartridge                                       Yes
REMOVABLEFILE   A file on a removable medium. If the medium has two             Yes
                sides, each side is a separate volume.
SERVER          One or more objects that are archived in the server             No
                storage of another server
VOLSAFE         A StorageTek cartridge tape that is for write-once use          No
                on tape drives that are enabled for VolSafe function.
WORM            A two-sided 5.25-inch write-once optical cartridge              Yes
WORM12          A two-sided 12-inch write-once optical cartridge                Yes
WORM14          A two-sided 14-inch write-once optical cartridge                Yes
Defined volumes
Use defined volumes when you want to control precisely which volumes are used
in the storage pool. Defined volumes can also be useful when you want to
establish a naming scheme for volumes.
You can also use defined volumes to reduce potential disk fragmentation and
maintenance overhead for storage pools associated with random-access and
sequential-access disk.
Scratch volumes
Use scratch volumes to enable the server to define a volume when needed and
delete the volume when it becomes empty. Using scratch volumes frees you from
the task of explicitly defining all of the volumes in a storage pool.
The server tracks whether a volume being used was originally a scratch volume.
Scratch volumes that the server acquired for a primary storage pool are deleted
from the server database when they become empty. The volumes are then available
for reuse by the server or other applications.
Scratch volumes in a copy storage pool or an active-data storage pool are handled
in the same way as scratch volumes in a primary storage pool, except for volumes
with the access value of off-site. If an off-site volume becomes empty, the server
does not immediately return the volume to the scratch pool. The delay prevents
the empty volumes from being deleted from the database, making it easier to
determine which volumes should be returned to the on-site location. The
administrator can query the server for empty off-site copy storage pool volumes or
active-data pool volumes, and return them to the on-site location. The volume is
returned to the scratch pool only when the access value is changed to
READWRITE, READONLY, or UNAVAILABLE.
For scratch volumes that were acquired in a FILE device class, the space that the
volumes occupied is freed by the server and returned to the file system.
Preparing volumes for random-access storage pools
Volumes in random-access storage pools must be defined before the server can
access them.
Task                                        Required Privilege Class
Define volumes in any storage pool          System or unrestricted storage
Define volumes in specific storage pools    System, unrestricted storage, or restricted
                                            storage for those pools
To prepare a volume for use in a random-access storage pool, you can use the Disk
Volume wizard in the Tivoli Storage Manager Console. The Formatter panels guide
you through the steps you need to take. If you choose not to use the Formatter,
you can instead define the volume. For example, suppose you want to define a 21 MB
volume named stgvol.001 for the BACKUPPOOL storage pool, located in the path
c:\program files\tivoli\tsm\server. Enter the
following command:
define volume backuppool 'c:\program files\tivoli\tsm\server\stgvol.001'
formatsize=21
If you do not specify a full path name for the volume name, the command uses the
path associated with the registry key of this server instance.
You can also define volumes in a single step using the DEFINE VOLUME
command. For example, to define ten 5000 MB volumes in a random-access
storage pool that uses a DISK device class, you would enter the following
command:
define volume diskpool diskvol numberofvolumes=10 formatsize=5000
Tips:
1. For important disk-related information, see “Requirements for disk
subsystems” on page 103.
2. The file system where storage pool volumes are allocated can have an effect on
performance and reliability. For better performance in backing up and restoring
large numbers of small files, allocate storage pool volumes on a FAT file
system. To take advantage of the ability of the operating system to recover from
problems that can occur during I/O to a disk, allocate storage pool volumes on
NTFS.
You can also use a space trigger to automatically create volumes assigned to a
particular storage pool.
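For example, the following command is a minimal sketch of a storage pool space trigger; the utilization threshold, expansion percentage, volume path, and pool name are illustrative. It directs the server to expand BACKUPPOOL by 20% of its current size when the pool reaches 80% utilization, creating the new volumes under the specified prefix:
define spacetrigger stg fullpct=80 spaceexpansion=20
 expansionprefix=c:\tsmstg\ stgpool=backuppool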
Preparing volumes for sequential-access storage pools
For most purposes, in a sequential-access storage pool, the server can use
dynamically acquired scratch volumes, volumes that you define, or a combination
of both.
For sequential-access storage pools with a FILE or SERVER device type, no labeling
or other preparation of volumes is necessary. For sequential-access storage pools
associated with device types other than a FILE or SERVER, you must prepare
volumes for use.
When the server accesses a sequential-access volume, it checks the volume name in
the header to ensure that the correct volume is being accessed. To prepare a
volume:
1. Label the volume. Table 28 on page 289 shows the types of volumes that
require labels. You must label those types of volumes before the server can use
them.
For details, see:
“Labeling media” on page 175
Tip: When you use the LABEL LIBVOLUME command with drives in an
automated library, you can label and check in the volumes with one command.
2. For storage pools in automated libraries, use the CHECKIN LIBVOLUME
command to check the volume into the library. For details, see:
“Checking media into automated library devices” on page 177.
3. If you have not allowed scratch volumes in the storage pool, you must identify
the volume, by name, to the server. For details, see “Defining storage pool
volumes.”
If you allowed scratch volumes in the storage pool by specifying a value
greater than zero for the MAXSCRATCH parameter, you can let the server use
scratch volumes, identify volumes by name, or do both. See “Acquiring scratch
volumes dynamically” on page 293 for information about scratch volumes.
Defining storage pool volumes
Defined volumes let you control precisely which volumes are used in the storage
pool. Using defined volumes can also be useful when you want to establish a
naming scheme for volumes.
Task                                        Required Privilege Class
Define volumes in any storage pool          System or unrestricted storage
Define volumes in specific storage pools    System, unrestricted storage, or restricted
                                            storage for those pools
When you define a storage pool volume, you inform the server that the volume is
available for storing backup, archive, or space-managed data.
For a sequential-access storage pool, the server can use dynamically acquired
scratch volumes, volumes that you define, or a combination.
To define a volume named VOL1 in the ENGBACK3 tape storage pool, enter:
define volume engback3 vol1
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries but that are used by the
same server.
For storage pools associated with FILE device classes, you can define private
volumes in a single step using the DEFINE VOLUME command. For example, to
define ten 5000 MB volumes in a sequential-access storage pool that uses a FILE
device class, you would enter the following command:
define volume filepool filevol numberofvolumes=10 formatsize=5000
For storage pools associated with the FILE device class, you can also use the
DEFINE SPACETRIGGER and UPDATE SPACETRIGGER commands to have the
server create volumes and assign them to a specified storage pool when
predetermined space-utilization thresholds have been exceeded. One volume must
be predefined.
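For example, the following command is a sketch only; the threshold, expansion percentage, volume directory, and pool name are illustrative. It creates additional FILE volumes in the FILEPOOL storage pool when utilization reaches 85%:
define spacetrigger stg fullpct=85 spaceexpansion=25
 expansionprefix=d:\tsmfile\ stgpool=filepool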
Remember: You cannot define volumes for storage pools defined with a Centera
device class.
Acquiring scratch volumes dynamically
If you allow sequential-access storage pools to use scratch volumes, you do not
need to define volumes. You can control the maximum number of scratch volumes
that the server can request using the MAXSCRATCH parameter on the DEFINE
STGPOOL and UPDATE STGPOOL command.
To allow the storage pool to acquire volumes as needed, set the MAXSCRATCH
parameter to a value greater than zero. The server automatically defines the
volumes as they are acquired. The server also automatically deletes scratch
volumes from the storage pool when the server no longer needs them.
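For example, to allow an existing tape storage pool to acquire as many as 50 scratch volumes (the pool name and value shown are illustrative), you might enter:
update stgpool tapepool maxscratch=50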
Before the server can use a scratch volume with a device type other than FILE or
SERVER, the volume must have a label.
Restriction: Tivoli Storage Manager accepts only tapes labeled with IBM standard
labels. IBM standard labels are similar to ANSI Standard X3.27 labels except that
the IBM standard labels are written in EBCDIC (extended binary coded decimal
interchange code). For a list of IBM media sales contacts who can provide
compatible tapes, go to the IBM Web site. If you are using non-IBM storage devices
and media, consult your tape-cartridge distributor.
For details about labeling, see “Preparing volumes for sequential-access storage
pools” on page 291.
Updating storage pool volumes
You can update a volume to reset an error state to an access mode of read/write.
You can also update a volume to change its location in a sequential-access
storage pool, or to change the access mode of the volume, for example, if a tape
cartridge is moved off-site or is damaged.
Task              Required Privilege Class
Update volumes    System or operator
To change the properties of a volume that has been defined to a storage pool, issue
the UPDATE VOLUME command. For example, suppose you accidentally damage
a volume named VOL1. To change the access mode to unavailable so that the
server does not try to write or read data from the volume, issue the following
command:
update volume vol1 access=unavailable
For details about access modes, see “Access modes for storage pool volumes” on
page 294.
Volume properties that you can update
Update volume properties by changing the values of those properties in the
volume definition.
Table 29 lists volume properties that you can update.
Table 29. Information for updating a storage pool volume

Volume name (Required)
     Specifies the name of the storage pool volume to be updated. You can
     specify a group of volumes to update by using wildcard characters in
     the volume name. You can also specify a group of volumes by
     specifying the storage pool, device class, current access mode, or status
     of the volumes you want to update. See the parameters that follow.

New access mode
     Specifies the new access mode for the volume (how users and server
     processes such as migration can access files in the storage pool volume).
     See “Access modes for storage pool volumes” for descriptions of access
     modes.
     A random-access volume must be varied offline before you can change
     its access mode to unavailable or destroyed. To vary a volume offline, use
     the VARY command. See “Varying disk volumes online or offline” on
     page 109.
     If a scratch volume that is empty and has an access mode of off-site is
     updated so that the access mode is read/write, read-only, or unavailable,
     the volume is deleted from the database.

Location
     Specifies the location of the volume. This parameter can be specified
     only for volumes in sequential-access storage pools.

Storage pool
     Restricts the update to volumes in the specified storage pool.

Device class
     Restricts the update to volumes in the specified device class.

Current access mode
     Restricts the update to volumes that currently have the specified access
     mode.

Status
     Restricts the update to volumes with the specified status (online, offline,
     empty, pending, filling, or full).

Preview
     Specifies whether you want to preview the update operation without
     actually performing the update.
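You can combine these parameters to update a group of volumes with one command. For example, the following sketch (the pool name and location text are illustrative) marks all read/write and read-only volumes in the COPYPOOL storage pool as off-site:
update volume * access=offsite location='vault' wherestgpool=copypool
 whereaccess=readwrite,readonly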
Access modes for storage pool volumes
Access to a volume in a storage pool is determined by the access mode assigned to
that volume. You can manually change the access mode of a volume, or the server
can change the access mode based on what happens when it tries to access a
volume.
For example, if the server cannot write to a volume having read/write access
mode, the server automatically changes the access mode to read-only.
The following access modes apply to storage pool volumes:
Read/write
Allows files to be read from or written to a volume in the storage pool.
If the server cannot write to a read/write access volume, the server
automatically changes the access mode to read-only.
If a scratch volume that is empty and has an access mode of off-site is
updated so that the access mode is read/write, the volume is deleted from
the database.
Read-only
Allows files to be read from but not written to a disk or tape volume.
If a scratch volume that is empty and has an access mode of off-site is
updated so that the access mode is read-only, the volume is deleted from
the database.
Unavailable
Specifies that the volume is not available for any type of access by the
server.
You must vary offline a random-access volume before you can change its
access mode to unavailable. To vary a volume offline, use the VARY
command. See “Varying disk volumes online or offline” on page 109.
If a scratch volume that is empty and has an access mode of off-site is
updated so that the access mode is unavailable, the volume is deleted from
the database.
Destroyed
Specifies that a primary storage pool volume has been permanently
damaged. Neither users nor system processes (like migration) can access
files stored on the volume.
This access mode is used to indicate an entire volume that should be
restored using the RESTORE STGPOOL or RESTORE VOLUME command.
After all files on a destroyed volume are restored to other volumes, the
destroyed volume is automatically deleted from the database. See “Storage
pool restore processing” on page 771 for more information.
Only volumes in primary storage pools can be updated to an access mode
of destroyed.
You must vary offline a random-access volume before you can change its
access mode to destroyed. To vary a volume offline, use the VARY
command. See “Varying disk volumes online or offline” on page 109. Once
you update a random-access storage pool volume to destroyed, you cannot
vary the volume online without first changing the access mode.
If you update a sequential-access storage pool volume to destroyed, the
server does not attempt to mount the volume.
If a volume contains no files and the UPDATE VOLUME command is used
to change the access mode to destroyed, the volume is deleted from the
database.
Offsite
Specifies that a copy storage pool volume or active-data pool volume is at
an off-site location and therefore cannot be mounted. Use this mode to
help you track volumes that are off-site. The server treats off-site volumes
differently, as follows:
v Mount requests are not generated for off-site volumes.
v Data can be reclaimed or moved from off-site volumes by retrieving files
from other storage pools.
v Empty, off-site scratch volumes are not deleted from the copy storage
pool or from the active-data pool.
You can only update volumes in a copy storage pool or an active-data pool
to off-site access mode. Volumes that have the device type of SERVER
(volumes that are actually archived objects stored on another Tivoli Storage
Manager server) cannot have an access mode of off-site.
Storage pool hierarchies
You can arrange storage pools in a storage hierarchy, which consists of at least one
primary storage pool to which a client node backs up, archives, or migrates data.
Typically, data is stored initially in a disk storage pool for fast client restores, and
then moved to a tape-based storage pool, which is slower to access but which has
greater capacity. The location of all data objects is automatically tracked within the
server database.
You can set up your devices so that the server automatically moves data from one
device to another, or one media type to another. The selection can be based on
characteristics such as file size or storage capacity. A typical implementation might
have a disk storage pool with a subordinate tape storage pool. When a client backs
up a file, the server might initially store the file on disk according to the policy for
that file. Later, the server might move the file to tape when the disk becomes full.
This action by the server is called migration. You can also place a size limit on files
that are stored on disk, so that large files are stored initially on tape instead of on
disk.
For example, your fastest devices are disks, but you do not have enough space on
these devices to store all data that needs to be backed up over the long term. You
have tape drives, which are slower to access, but have much greater capacity. You
define a hierarchy so that files are initially stored on the fast disk volumes in one
storage pool. This provides clients with quick response to backup requests and
some recall requests. As the disk storage pool becomes full, the server migrates, or
moves, data to volumes in the tape storage pool.
Another option to consider for your storage pool hierarchy is IBM 3592 tape
cartridges and drives, which can be configured for an optimal combination of
access time and storage capacity. For more information, see “Controlling
data-access speeds for 3592 volumes” on page 258.
Migration of files from disk to sequential storage pool volumes is particularly
useful because the server migrates all the files for a group of nodes or a single
node together. This gives you partial collocation for clients. Migration of files is
especially helpful if you decide not to enable collocation for sequential storage
pools. For details, see “Keeping client files together using collocation” on page 340.
Setting up a storage pool hierarchy
To establish a hierarchy, identify the next storage pool, sometimes called the
subordinate storage pool. The server migrates data to the next storage pool if the
original storage pool is full or unavailable.
You can set up a storage pool hierarchy when you configure devices by using the
Device Configuration Wizard. You can also go back to this wizard to change the
storage hierarchy.
Restrictions:
v You cannot establish a chain of storage pools that leads to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
v The storage pool hierarchy includes only primary storage pools, not copy
storage pools or active-data pools. See “Backing up the data in a storage
hierarchy” on page 301.
v A storage pool must use the NATIVE or NONBLOCK data formats to be part of
a storage pool hierarchy. For example, a storage pool using the NETAPPDUMP
data format cannot be part of a storage pool hierarchy.
For detailed information about how migration between storage pools works, see
“Migrating files in a storage pool hierarchy” on page 307.
Example: Defining a storage pool hierarchy
You determined that an engineering department requires a separate storage
hierarchy. You set up policy so that the server initially stores backed up files for
this department to a disk storage pool. When that pool fills, you want the server to
migrate files to a tape storage pool.
You want the storage pools to have the following characteristics:
v Primary storage pool on disk
– Name the storage pool ENGBACK1.
– Limit the size of the largest file that can be stored to 5 MB. The server stores
files that are larger than 5 MB in the tape storage pool.
– Files migrate from the disk storage pool to the tape storage pool when the
disk storage pool is 85% full. File migration to the tape storage pool stops
when the disk storage pool is down to 40% full.
– Use caching, so that migrated files stay on disk until the space is needed for
other files.
v Primary storage pool on tape:
– Name the storage pool BACKTAPE.
– Use the device class TAPE, which has already been defined, for this storage
pool.
– Do not set a limit for the maximum file size, because this is the last storage
pool in the hierarchy.
– Use scratch volumes for this pool, with a maximum number of 100 volumes.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description='disk storage pool for engineering backups'
maxsize=5M nextstgpool=backtape highmig=85 lowmig=40
You can set up a storage pool hierarchy when you are adding clients. You can also
use the Storage Pool Hierarchy wizard in the Tivoli Storage Manager Console.
1. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
2. Click Wizards, then double-click Storage Pool Hierarchy in the right pane. The
Storage Pool Hierarchy wizard appears.
3. Follow the instructions in the wizard to rearrange storage pools in the
hierarchy.
Example: Updating a storage pool hierarchy
You already defined the ENGBACK1 disk storage pool. Now you decide to set up
a tape storage pool to which files from ENGBACK1 can migrate.
If you have already defined the storage pool at the top of the hierarchy, you can
update the storage hierarchy to include a new storage pool. You can update the
storage pool by using the UPDATE STGPOOL command or by using the Tivoli
Storage Manager Console, which includes a wizard. The wizard allows you to
change your storage pool hierarchy by using a drag and drop interface.
To define the new tape storage pool and update the hierarchy:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Update the storage-pool definition for ENGBACK1 to specify that BACKTAPE
is the next storage pool defined in the storage hierarchy:
update stgpool engback1 nextstgpool=backtape
To use the Storage Pool Hierarchy wizard in the Tivoli Storage Manager Console:
1. From the Tivoli Storage Manager Console, expand the tree for the machine
you are configuring.
2. Click Wizards, then double-click Storage Pool Hierarchy in the right pane. The
Storage Pool Hierarchy wizard appears.
3. Follow the instructions in the wizard to rearrange storage pools in the
hierarchy.
How the server groups files before storing
When client files are backed up or archived, the server can group them into an
aggregate of files. By controlling the size of aggregates, you can control the
performance of client operations.
The size of the aggregate depends on the sizes of the client files being stored, and
the number of bytes and files allowed for a single transaction. Two options affect
the number of files and bytes allowed for a single transaction. TXNGROUPMAX,
located in the server options file, affects the number of files allowed.
TXNBYTELIMIT, located in the client options file, affects the number of bytes
allowed in the aggregate.
v The TXNGROUPMAX option in the server options file indicates the maximum
number of logical files (client files) that a client may send to the server in a
single transaction. The server might create multiple aggregates for a single
transaction, depending on how large the transaction is.
It is possible to affect the performance of client backup, archive, restore, and
retrieve operations by using a larger value for this option. When transferring
multiple small files, increasing the TXNGROUPMAX option can improve
throughput for operations to tape.
Important: If you increase the value of the TXNGROUPMAX option by a large
amount, watch for possible effects on the recovery log. A larger value for the
TXNGROUPMAX option can result in increased utilization of the recovery log,
as well as an increased length of time for a transaction to commit. If the effects
are severe enough, they can lead to problems with operation of the server. For
more information, see “Files moved as a group between client and server” on
page 628.
You can override the value of the TXNGROUPMAX server option for individual
client nodes by using the TXNGROUPMAX parameter in the REGISTER NODE
and UPDATE NODE commands.
v The TXNBYTELIMIT option in the client options file indicates the total number
of bytes that the client can send to the server in a single transaction. A sketch
showing both options follows this list.
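A minimal sketch of both options; the node name and the values are illustrative, and the TXNBYTELIMIT value is expressed in kilobytes. To override the server-wide TXNGROUPMAX setting for a single node:
update node node1 txngroupmax=4096
To raise the transaction byte limit for a client, add a line such as the following to the client options file (dsm.opt):
txnbytelimit 25600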
When a Tivoli Storage Manager for Space Management client (HSM client)
migrates files to the server, the files are not grouped into an aggregate.
Server file aggregation is disabled for client nodes storing data associated with a
management class that has a copy group whose destination is a Centera storage
pool.
Where the server stores files
When a client file is backed up, archived, or migrated, the server verifies the
management class that is bound to the file. The management class specifies the
destination storage pool in which to store the file.
The server checks the destination storage pool to determine:
v If it is possible to write file data to the storage pool (access mode).
v If the size of the physical file exceeds the maximum file size allowed in the
storage pool. For backup and archive operations, the physical file may be an
aggregate or a single client file.
v Whether sufficient space is available on the available volumes in the storage
pool.
v What the next storage pool is, if any of the previous conditions prevent the file
from being stored in the storage pool that is being checked.
Using these factors, the server determines if the file can be written to that storage
pool or the next storage pool in the hierarchy.
Subfile backups: When the client backs up a subfile, it still reports the size of the
entire file. Therefore, allocation requests against server storage and placement in
the storage hierarchy are based on the full size of the file. The server does not put
a subfile in an aggregate with other files if the size of the entire file is too large to
put in the aggregate. For example, the entire file is 8 MB, but the subfile is only 10
KB. The server does not typically put a large file in an aggregate, so the server
begins to store this file as a stand-alone file. However, the client sends only 10 KB,
and it is now too late for the server to put this 10 KB file with other files in an
aggregate. As a result, the benefits of aggregation are not always realized when
clients back up subfiles.
Example: How the server determines where to store files in a
hierarchy
The server determines where to store a file based upon the destination storage
pool specified in the copy group of the management class to which the file is
bound. The server also checks the capacity utilization of the storage pool and the
maximum file size allowed.
Assume a company has a storage pool hierarchy as shown in Figure 37 on page
300.
Figure 37. Storage hierarchy example. The figure shows DISKPOOL (read/write access, maximum file size 3 MB) at the top of the hierarchy, with TAPEPOOL (read/write access) below it.
The storage pool hierarchy consists of two storage pools:
DISKPOOL
The top of the storage hierarchy. It contains fast disk volumes for storing
data.
TAPEPOOL
The next storage pool in the hierarchy. It contains tape volumes accessed
by high-performance tape drives.
Assume a user wants to archive a 5 MB file that is named FileX. FileX is bound to
a management class that contains an archive copy group whose storage destination
is DISKPOOL (see Figure 37).
When the user archives the file, the server determines where to store the file based
on the following process:
1. The server selects DISKPOOL because it is the storage destination specified in
the archive copy group.
2. Because the access mode for DISKPOOL is read/write, the server checks the
maximum file size allowed in the storage pool.
The maximum file size applies to the physical file being stored, which may be a
single client file or an aggregate. The maximum file size allowed in DISKPOOL
is 3 MB. FileX is a 5 MB file and therefore cannot be stored in DISKPOOL.
3. The server searches for the next storage pool in the storage hierarchy.
If the DISKPOOL storage pool has no maximum file size specified, the server
checks for enough space in the pool to store the physical file. If there is not
enough space for the physical file, the server uses the next storage pool in the
storage hierarchy to store the file.
4. The server checks the access mode of TAPEPOOL, which is the next storage
pool in the storage hierarchy. The access mode for TAPEPOOL is read/write.
5. The server then checks the maximum file size allowed in the TAPEPOOL
storage pool. Because TAPEPOOL is the last storage pool in the storage
hierarchy, no maximum file size is specified. Therefore, if there is available
space in TAPEPOOL, FileX can be stored in it.
Backing up the data in a storage hierarchy
You can use copy storage pools and active-data pools to protect the data in
primary storage pools. Copy storage pools can contain any combination of active
and inactive data, archive data, or space-migrated data. Active-data pools contain
only active versions of client backup data.
Restoring a primary storage pool from an active-data pool might cause some or all
inactive files to be deleted from the database if the server determines that an
inactive file needs to be replaced but cannot find it in the active-data pool.
As a best practice, therefore, and to prevent the permanent loss of inactive versions
of client backup data, you should create a minimum of one active-data pool, which
contains active-data only, and one copy storage pool, which contains both active
and inactive data. To recover from a disaster, use the active-data pool to restore
critical client node data, and then restore the primary storage pools from the copy
storage pool. Do not use active-data pools for recovery of a primary pool or
volume unless the loss of inactive data is acceptable.
“Setting up copy storage pools and active-data pools” on page 302 describes the
high-level steps for implementation.
Neither copy storage pools nor active-data pools are part of a storage hierarchy,
which, by definition, consists only of primary storage pools. Data can be stored in
copy storage pools and active-data pools using the following methods:
v Including the BACKUP STGPOOL and COPY ACTIVEDATA commands in
administrative scripts or schedules so that data is automatically backed up or
copied at regular intervals.
v Enabling the simultaneous write function so that data is written to primary
storage pools, copy storage pools, and active-data pools during the same
transaction. Simultaneous write to copy storage pools is supported for backup,
archive, space-management, and import operations. Simultaneous write to
active-data pools is supported only for client backup operations and only for
active backup versions.
v (copy storage pools only) Manually issuing the BACKUP STGPOOL command,
specifying the primary storage pool as the source and a copy storage pool as the
target. The BACKUP STGPOOL command backs up whatever data is in the
primary storage pool (client backup data, archive data, and space-managed
data).
v (active-data pools only) Manually issuing the COPY ACTIVEDATA command,
specifying the primary storage pool as the source and an active-data pool as the
target. The COPY ACTIVEDATA command copies only the active versions of
client backup data. If an aggregate being copied contains all active files, then the
entire aggregate is copied to the active-data pool during command processing. If
an aggregate being copied contains some inactive files, the aggregate is
reconstructed during command processing into a new aggregate without the
inactive files.
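For example (the pool names are illustrative), the commands described in the last two methods can be issued manually as follows:
backup stgpool backuppool copypool
copy activedata backuppool adppool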
For efficiency, you can use a single copy storage pool and a single active-data pool
to back up all primary storage pools that are linked in a storage hierarchy. By
backing up all primary storage pools to one copy storage pool and one active-data
pool, you do not need to repeatedly copy a file when the file migrates from its
original primary storage pool to another primary storage pool in the storage
hierarchy.
In most cases, a single copy storage pool and a single active-data pool can be used
for backup of all primary storage pools. However, the number of copy storage
pools and active-data pools you actually need depends on whether you have more
than one primary storage pool hierarchy and on the type of disaster recovery
protection you want to implement. Multiple copy storage pools and active-data
pools might be needed to handle particular situations, including the following:
v Special processing of certain primary storage hierarchies (for example, archive
storage pools or storage pools dedicated to priority clients)
v Creation of multiple copies for multiple locations (for example, to keep one copy
on-site and one copy off-site)
v Rotation of full storage pool backups (See “Backing up storage pools” on page
774.)
Inactive files in volumes in an active-data pool are deleted by reclamation
processing. The rate at which reclaimable space accumulates in active-data pool
volumes is typically faster than the rate for volumes in non-active-data pools. If
reclamation of volumes in an active-data pool is occurring too frequently, requiring
extra resources such as tape drives and libraries to mount and dismount volumes,
you can adjust the reclamation threshold until the rate of reclamation is acceptable.
The default reclamation threshold for active-data pools is 60 percent, which means
that reclamation begins when the storage pool reaches 60 percent of capacity. Note
that accelerated reclamation of volumes has more of an effect on active-data pools
that use removable media and, in particular, on removable media that is taken
off-site.
Setting up copy storage pools and active-data pools
To back up the data in primary storage pools, use copy storage pools, active-data
pools, or a combination of the two.
To set up a copy storage pool or an active-data pool:
1. Define a copy storage pool or active-data pool. For details, see “Defining copy
storage pools and active-data pools” on page 389.
2. (active-data pools only) Create a policy domain, and specify the name of the
active-data pool as the value of the ACTIVEDATAPOOL parameter. To learn
more about creating domains and the ACTIVEDATAPOOL parameter, see
“Defining and updating a policy domain” on page 476.
3. (active-data pools only) Identify the nodes whose active backup data is to be
stored in the active-data pool, and then assign the nodes to the domain defined
in step 2. For details about assigning nodes to a domain, see “Assigning client
nodes to a policy domain” on page 490.
4. (optional) If you want to use simultaneous write, update the primary storage
pool definition, specifying the name of the copy storage pool and active-data
pool as the values of the COPYSTGPOOLS and ACTIVEDATAPOOLS
parameters, respectively. For details about the simultaneous-write function, see
“Writing data simultaneously to primary, copy, and active-data pools” on page
329.
5. Set up administrative schedules or scripts to automatically issue the BACKUP
STGPOOL and COPY ACTIVEDATA commands. See “Automating a basic
administrative command schedule” on page 590 and “IBM Tivoli Storage
Manager server scripts” on page 596.
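For example, the following administrative schedule is a sketch (the pool names and start time are illustrative) that backs up BACKUPPOOL to COPYPOOL every night at 10:00 p.m.:
define schedule backup_backuppool type=administrative
 cmd="backup stgpool backuppool copypool" active=yes
 starttime=22:00 period=1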
Example: Setting up an active-data pool for fast client restore:
A sequential-access disk (FILE) device class is used to set up an active-data pool
for fast restore of client-node data.
First, decide which client nodes have data that needs to be restored quickly if a disaster
occurs. Only the data belonging to those nodes should be stored in the active-data
pool.
For the purposes of this example, the following definitions already exist on the
server:
v The default STANDARD domain, STANDARD policy set, STANDARD
management class, and STANDARD copy group.
v A primary storage pool, BACKUPPOOL, and a copy storage pool, COPYPOOL.
BACKUPPOOL is specified in the STANDARD copy group as the storage pool
in which the server initially stores backup data. COPYPOOL contains copies of
all the active and inactive data in BACKUPPOOL.
v Three nodes that are assigned to the STANDARD domain (NODE1, NODE2, and
NODE3).
v A FILE device class named FILECLASS.
You have identified NODE2 as the only high-priority node, so you need to create a
new domain to direct the data belonging to that node to an active-data pool. To set
up and enable the active-data pool, follow these steps:
1. Define the active-data pool:
DEFINE STGPOOL ADPPOOL FILECLASS POOLTYPE=ACTIVEDATA MAXSCRATCH=1000
2. Define a new domain and specify the active-data pool in which you want to
store the data belonging to NODE2:
DEFINE DOMAIN ACTIVEDOMAIN ACTIVEDESTINATION=ADPPOOL
3. Define a new policy set:
DEFINE POLICYSET ACTIVEDOMAIN ACTIVEPOLICY
4. Define a new management class:
DEFINE MGMTCLASS ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT
5. Define a backup copy group:
DEFINE COPYGROUP ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT DESTINATION=BACKUPPOOL
This command specifies that the active and inactive data belonging to client
nodes that are members of ACTIVEDOMAIN will be backed up to
BACKUPPOOL. Note that this is the destination storage pool for data backed
up from nodes that are members of the STANDARD domain.
6. Assign the default management class for the active-data pool policy set:
ASSIGN DEFMGMTCLASS ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT
7. Activate the policy set for the active-data pool:
ACTIVATE POLICYSET ACTIVEDOMAIN ACTIVEPOLICY
8. Assign the high-priority node, NODE2, to the new domain:
UPDATE NODE NODE2 DOMAIN=ACTIVEDOMAIN
A node can belong to only one domain. When you update a node by changing
its domain, you remove it from its current domain.
9. (optional) Update the primary storage pool, BACKUPPOOL, with the name of
the active-data pool, ADPPOOL, where the server simultaneously will write
data during a client backup operation:
UPDATE STGPOOL BACKUPPOOL ACTIVEDATAPOOLS=ADPPOOL
Only active versions of backup data can be simultaneously written to
active-data pools.
10. To ensure that copies of active data are complete, define a schedule to copy
active data from BACKUPPOOL to ADPPOOL every day at 8:00 p.m.:
DEFINE SCHEDULE COPYACTIVE_BACKUPPOOL TYPE=ADMINISTRATIVE
CMD="COPY ACTIVEDATA BACKUPPOOL ADPPOOL" ACTIVE=YES
STARTTIME=20:00 PERIOD=1
Instead of defining a schedule, you can issue the COPY ACTIVEDATA
command manually whenever it is convenient to copy the active data.
Every time NODE2 stores data into BACKUPPOOL, the server simultaneously
writes the data to ADPPOOL. The schedule, COPYACTIVE_BACKUPPOOL,
ensures that any data that was not stored during simultaneous write is copied to
the active-data pool. When client nodes NODE1 and NODE3 are backed up, their
data is stored in BACKUPPOOL only, and not in ADPPOOL. When the
administrative schedule runs, only the data belonging to NODE2 is copied to the
active-data pool.
Remember: If you want all the nodes belonging to an existing domain to store
their data in the active-data pool, then you can skip steps 2 through 8. Use the
UPDATE DOMAIN command to update the STANDARD domain, specifying the
name of the active-data pool, ADPPOOL, as the value of the
ACTIVEDESTINATION parameter.
Example: Setting up an active-data pool to reduce media resources:
Backup data is simultaneously written to an active-data pool so that volumes in
the pool can be taken off-site.
In addition to using active-data pools for fast restore of client-node data, you can
also use active-data pools to reduce the number of tape volumes that are stored
either on-site or off-site for the purpose of disaster recovery. This example assumes
that, in your current configuration, all data is backed up to a copy storage pool
and taken off-site. However, your goal is to create an active-data pool, take the
volumes in that pool off-site, and maintain the copy storage pool on-site to recover
primary storage pools.
Attention: Active-data pools should not be considered for recovery of a primary
pool or volume unless the loss of inactive data is acceptable.
The following definitions already exist on the server:
v The default STANDARD domain, STANDARD policy set, STANDARD
management class, and STANDARD copy group.
v A primary storage pool, BACKUPPOOL, and a copy storage pool, COPYPOOL.
BACKUPPOOL is specified in the STANDARD copy group as the storage pool
in which the server initially stores backup data. COPYPOOL contains copies of
all the active and inactive data in BACKUPPOOL.
v An administrative schedule, named BACKUP_BACKUPPOOL, that issues a
BACKUP STGPOOL command to back up the data in BACKUPPOOL to
COPYPOOL. The schedule runs every day at 10:00 p.m.
v Three nodes that are assigned to the STANDARD domain (NODE1, NODE2, and
NODE3).
v A device class of type 3592 named 3592CLASS.
To set up and enable an active-data pool, follow these steps:
1. Define the active-data pool:
DEFINE STGPOOL ADPPOOL 3592CLASS POOLTYPE=ACTIVEDATA MAXSCRATCH=1000
2. Update the STANDARD domain to allow data from all nodes to be stored in
the active-data pool:
UPDATE DOMAIN STANDARD ACTIVEDESTINATION=ADPPOOL
3. (optional) Update the primary storage pool, BACKUPPOOL, with the name of
the active-data pool, ADPPOOL, where the server will write data
simultaneously during client backup operations:
UPDATE STGPOOL BACKUPPOOL ACTIVEDATAPOOLS=ADPPOOL
Only active versions of backup data can be simultaneously written to
active-data pools.
4. To ensure that copies of active data are complete, define a schedule to copy
active data from BACKUPPOOL to ADPPOOL every day at 8:00 p.m.:
DEFINE SCHEDULE COPYACTIVE_BACKUPPOOL TYPE=ADMINISTRATIVE
CMD="COPY ACTIVEDATA BACKUPPOOL ADPPOOL" ACTIVE=YES STARTTIME=20:00 PERIOD=1
Instead of defining a schedule, you can issue the COPY ACTIVEDATA
command manually whenever it is convenient to copy the active data.
Every time data is stored into BACKUPPOOL, the data is simultaneously written
to ADPPOOL. The schedule, COPYACTIVE_BACKUPPOOL, ensures that any data
that was not stored during simultaneous write is copied to the active-data pool.
You can now move the volumes in the active-data pool to a safe location off-site.
If your goal is to replace the copy storage pool with the active-data pool, follow
the steps below. As a best practice and to protect your inactive data, however, you
should maintain the copy storage pool so that you can restore inactive versions of
backup data if required. If the copy storage pool contains archive files or files that were
migrated by a Tivoli Storage Manager for Space Management client, do not delete
it.
1. Stop backing up to the copy storage pool:
DELETE SCHEDULE BACKUP_BACKUPPOOL
UPDATE STGPOOL BACKUPPOOL COPYSTGPOOLS=""
2. After all data has been copied to the active-data pool, delete the copy storage
pool and its volumes.
Staging client data from disk to tape
Typically, client backup data is stored initially in disk-based storage pools. To make
room for additional backups, you can migrate the older data to tape. If you are
using copy storage pools or active-data pools, store data in those pools before
beginning the migration process.
Typically, you need to ensure that you have enough disk storage to process one
night’s worth of the clients’ incremental backups. While not always possible, this
guideline proves to be valuable when considering storage pool backups.
For example, suppose you have enough disk space for nightly incremental backups
for clients, but not enough disk space for a FILE-type, active-data pool. Suppose
also that you have tape devices. With these resources, you can set up the following
pools:
v A primary storage pool on disk, with enough volumes assigned to contain the
nightly incremental backups for clients
v A primary storage pool on tape, which is identified as the next storage pool in
the hierarchy for the disk storage pool
v An active-data pool on tape
v A copy storage pool on tape
You can then schedule the following steps every night:
1. Perform an incremental backup of the clients to the disk storage pool.
2. After clients complete their backups, back up the active and inactive versions in
the disk primary storage pool (now containing the incremental backups) to the
copy storage pool. Then copy the active backup versions to the active-data
pool.
Backing up disk storage pools before migration processing allows you to copy
as many files as possible while they are still on disk. This saves mount requests
while performing your storage pool backups. If the migration process starts
while active data is being copied to active-data pools or while active and
inactive data is being backed up to copy storage pools, some files might be
migrated before they are copied or backed up.
3. Start the migration of the files in the disk primary storage pool to the tape
primary storage pool (the next pool in the hierarchy) by lowering the high
migration threshold. For example, lower the threshold to 40% (a command
sketch follows these steps).
When this migration completes, raise the high migration threshold back to
100%.
4. To ensure that all files are backed up, back up the tape primary storage pool to
the copy storage pool. In addition, copy the active backup data in the tape
primary storage pool to the active-data pool.
The tape primary storage pool must still be backed up (and active files copied)
to catch any files that might have been missed in the backup of the disk storage
pools (for example, large files that went directly to sequential media).
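A minimal sketch of step 3, assuming the disk storage pool is named DISKPOOL (the pool name and threshold values are illustrative):
update stgpool diskpool highmig=40
After the migration completes, restore the threshold:
update stgpool diskpool highmig=100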
For more information about storage pool space, see “Estimating space needs for
storage pools” on page 361.
Migrating files in a storage pool hierarchy
To maintain free space in primary storage pools, the Tivoli Storage Manager server
can automatically migrate data from one primary pool to the next storage pool in
the hierarchy. You can control when migration begins and ends, which files to
migrate, and whether to run concurrent migration processes.
The migration process helps to ensure that there is sufficient free space in the
storage pools at the top of the hierarchy, where faster devices can provide the most
benefit to clients. For example, the server can migrate data stored in a
random-access disk storage pool to a slower but less expensive sequential-access
storage pool.
You can control:
When migration begins and ends
Migration thresholds are used to control when migration begins and ends.
Thresholds are set as levels of the space that is used in a storage pool, and
expressed as a percent of total space available in the storage pool. For
random-access and sequential-access disk storage pools, the server
compares the threshold to the amount of data stored in the pool as a
percent of the total data capacity of the volumes in the pool. Total data
capacity for sequential-access disk storage pools includes the capacity of all
scratch volumes specified for the pool. For tape and optical storage pools,
the server compares the threshold to the number of volumes containing
data as a percent of the total number of volumes available to the pool,
including scratch volumes.
You can also schedule migration activities to occur when they are most
convenient to you. In addition, you can specify how long migration will
run before being automatically canceled, whether the server attempts
reclamation before migration, and whether the migration process runs in
the background or foreground.
How the server chooses files to migrate
By default, the server does not consider how long a file has been in a
storage pool or how long since a file was accessed before choosing files to
migrate. Optional parameters allow you to change the default. You can
ensure that files remain in a storage pool for a minimum number of days
before the server migrates them to another pool. To do this, you set a
migration delay period for a storage pool. Before the server can migrate a
file, the file must be stored in the storage pool at least as long as the
migration delay period. For random-access disk storage pools, the last time
the file was accessed is also considered for migration delay. For
sequential-access storage pools, including sequential-access disk storage
pools associated with a FILE device class, all files on a volume must
exceed the value specified as a migration delay before the server migrates
all of the files on the volume.
The number of concurrent migration processes
You can specify a single migration process or multiple concurrent
migration processes for a random-access or sequential-access storage pool.
Multiple concurrent processes let you make better use of your available
tape drives and FILE volumes. However, because you can perform
migration concurrently on different storage pools during auto-migration,
you must carefully consider the resources (for example, drives) you have
available for the operation.
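For example, to run as many as three concurrent migration processes for a storage pool (a sketch; the pool name and value are illustrative):
update stgpool diskpool migprocess=3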
Migration processing can differ for disk storage pools versus sequential-access
storage pools. If you plan to modify the default migration parameter settings for
storage pools or want to understand how migration works, read the following
topics:
v “Migrating disk storage pools”
v “Migrating sequential-access storage pools” on page 313
v “Starting migration manually or in a schedule” on page 316
Remember:
v Data cannot be migrated into or out of storage pools defined with a CENTERA
device class.
v If you receive an error message during the migration process, refer to IBM Tivoli
Storage Manager Messages, which can provide useful information for diagnosing
and fixing problems.
Migrating disk storage pools
Migration thresholds specify when the server should begin and stop migrating
data to the next storage pool in the storage hierarchy. Migration thresholds are
defined as a percentage of total storage-pool data capacity.
You can use the defaults for the migration thresholds, or you can change the
threshold values to identify the maximum and minimum amount of space for a
storage pool.
To control how long files must stay in a storage pool before they are eligible for
migration, specify a migration delay for a storage pool. For details, see “Keeping
files in a storage pool” on page 312.
If you decide to enable cache for disk storage pools, files can temporarily remain
on disks even after migration. When you use cache, you might want to set lower
migration thresholds.
For more information about migration thresholds, see “How the server selects files
to migrate” and “Migration thresholds” on page 310. For information about using
the cache, see “Minimizing access time to migrated files” on page 313 and
“Caching in disk storage pools” on page 317.
How the server selects files to migrate
When data in a storage pool comprises a percentage of the pool’s capacity that is
equal to the high migration threshold, the server migrates files from the pool to the
next storage pool. The process for selecting files to migrate is based on the space
consumed by a client node’s files and on the setting for migration delay.
The server selects the files to migrate as follows:
1. The server checks for the client node that has backed up or migrated the largest
single file space or has archived files that occupy the most space.
2. For all files from every file space belonging to the client node that was
identified, the server examines the number of days since the files were stored
in the storage pool and the number of days since they were last retrieved from
the storage pool. The server compares the lesser of the two numbers to the
migration delay that is set for the storage pool. The server migrates any of
these files for which the number is more than the migration delay set for the
storage pool.
3. After the server migrates the files for the first client node to the next storage
pool, the server checks the low migration threshold for the storage pool. If the
amount of space that is used in the storage pool is now below the low
migration threshold, migration ends. If not, the server chooses another client
node by using the same criteria as described above, and the migration process
continues.
The server may not be able to reach the low migration threshold for the pool by
migrating only files that have been stored longer than the migration delay period.
When this happens, the server checks the storage pool characteristic that
determines whether migration should stop even if the pool is still above the low
migration threshold. For more information, see “Keeping files in a storage pool” on
page 312.
If multiple migration processes are running (controlled by the MIGPROCESS
parameter of the DEFINE STGPOOL command), the server may choose the files
from more than one node for migration at the same time.
For example, Table 30 displays information that is contained in the database that is
used by the server to determine which files to migrate. This example assumes that
the storage pool contains no space-managed files. This example also assumes that
the migration delay period for the storage pool is set to zero, meaning any files can
be migrated regardless of time stored in the pool or the last time of access.
Table 30. Database information on files stored in DISKPOOL

Client Node   Backed-Up File Spaces and Sizes   Archived Files (All Client File Spaces)
TOMC          TOMC/C       200 MB               55 MB
              TOMC/D       100 MB
CAROL         CAROL         50 MB                5 MB
PEASE         PEASE/home   150 MB               40 MB
              PEASE/temp   175 MB

Figure 38. The migration process and migration thresholds. (The figure shows DISKPOOL before, during, and after migration to TAPEPOOL, with a high migration threshold of 80% and a low migration threshold of 20%.)
Figure 38 on page 309 shows what happens when the high migration threshold
defined for the disk storage pool DISKPOOL is exceeded. When the amount of
migratable data in DISKPOOL reaches 80%, the server performs the following
tasks:
1. Determines that the TOMC/C file space is taking up the most space in the
DISKPOOL storage pool, more than any other single backed-up or
space-managed file space and more than any client node’s archived files.
2. Locates all data belonging to node TOMC stored in DISKPOOL. In this
example, node TOMC has backed up or archived files from file spaces
TOMC/C and TOMC/D stored in the DISKPOOL storage pool.
3. Migrates all data from TOMC/C and TOMC/D to the next available storage
pool. In this example, the data is migrated to the tape storage pool,
TAPEPOOL.
The server migrates all of the data from both file spaces belonging to node
TOMC, even if the occupancy of the storage pool drops below the low
migration threshold before the second file space has been migrated.
If the cache option is enabled, files that are migrated remain on disk storage
(that is, the files are cached) until space is needed for new files. For more
information about using cache, see “Caching in disk storage pools” on page
317.
4. After all files that belong to TOMC are migrated to the next storage pool, the
server checks the low migration threshold. If the low migration threshold has
not been reached, then the server again determines which client node has
backed up or migrated the largest single file space or has archived files that
occupy the most space. The server begins migrating files belonging to that
node.
In this example, the server migrates all files that belong to the client node
named PEASE to the TAPEPOOL storage pool.
5. After all the files that belong to PEASE are migrated to the next storage pool,
the server checks the low migration threshold again. If the low migration
threshold has been reached or passed, then migration ends.
Migration thresholds
Migration thresholds specify when migration for a storage pool begins and ends.
Setting migration thresholds for disk storage pools ensures sufficient free space on
faster devices, which can lead to better performance.
Choosing thresholds appropriate for your situation takes some experimenting. Start
by using the default high and low values. You need to ensure that migration
occurs frequently enough to maintain some free space but not so frequently that
the device is unavailable for other use.
High-migration thresholds:
Before changing the high-migration threshold, you need to consider the amount of
storage capacity provided for each storage pool and the amount of free storage
space needed to store additional files, without having migration occur.
If you set the high-migration threshold too high, the pool may be just under the
high threshold, but not have enough space to store an additional, typical client file.
Or, with a high threshold of 100%, the pool may become full and a migration
process must start before clients can back up any additional data to the disk
storage pool. In either case, the server stores client files directly to tape until
migration completes, resulting in slower performance.
If you set the high-migration threshold too low, migration runs more frequently
and can interfere with other operations.
Keeping the high-migration threshold at a single value means that migration
processing could start at any time of day, whenever that threshold is exceeded. You
can control when migration occurs by using administrative command schedules to
change the threshold. For example, set the high-migration threshold to 95% during
the night when clients run their backup operations. Lower the high-migration
threshold to 50% during the time of day when you want migration to occur. By
scheduling when migration occurs, you can choose a time when your tape drives
and mount operators are available for the operation.
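For example, administrative schedules similar to the following sketch could make that change each day (the schedule names and start times are illustrative):
define schedule lowerhigh type=administrative cmd="update stgpool diskpool highmig=50" active=yes starttime=06:00 period=1 perunits=days
define schedule raisehigh type=administrative cmd="update stgpool diskpool highmig=95" active=yes starttime=20:00 period=1 perunits=days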
Low-migration thresholds:
Before setting the low-migration threshold, you need to consider the amount of
free disk storage space needed for normal daily processing, whether you use cache
on disk storage pools, how frequently you want migration to occur, and whether
data in the next storage pool is being collocated by group.
To choose the low-migration threshold, consider:
v The amount of free disk storage space needed for normal daily processing. If
you have disk space to spare, you can keep more data on the disk (a larger low
threshold). If clients’ daily backups are enough to fill the disk space every day,
you may need to empty the disk (a smaller low threshold).
If your disk space is limited, try setting the threshold so that migration frees
enough space for the pool to manage the amount of client data that is typically
stored every day. Migration then runs about every day, or you can force it to run
every day by lowering the high-migration threshold at a time you choose.
You may also want to identify clients that are transferring large amounts of data
daily. For these clients, you may want to set up policy (a new copy group or a
new policy domain) so that their data is stored directly to tape. Using a separate
policy in this way can optimize the use of disk for the majority of clients.
v Whether you use cache on disk storage pools to improve how quickly some files
are retrieved. If you use cache, you can set the low threshold lower, yet still
maintain faster retrieval for some data. Migrated data remains cached on the
disk until new client data pushes the data off the disk. Using cache requires
more disk space for the database, however, and can slow backup and archive
operations that use the storage pool.
If you do not use cache, you may want to keep the low threshold at a higher
number so that more data stays on the disk.
v How frequently you want migration to occur, based on the availability of
sequential-access storage devices and mount operators. The larger the low
threshold, the shorter time that a migration process runs (because there is less
data to migrate). But if the pool refills quickly, then migration occurs more
frequently. The smaller the low threshold, the longer time that a migration
process runs, but the process runs less frequently.
You may need to balance the costs of larger disk storage pools with the costs of
running migration (drives, tapes, and either operators or automated libraries).
v Whether data in the next storage pool is being collocated by group. During
migration from a disk storage pool, all the data for all nodes belonging to the
same collocation group is migrated by the same process. Migration continues
regardless of whether the low migration threshold has been reached or of the
amount of data that the group has to migrate.
Keeping files in a storage pool
For some applications, you might want to delay the migration of files in the
storage pool where they were initially stored by the server. You can delay
migration of files for a specified number of days.
For example, you might have backups of monthly summary data that you want to
keep in your disk storage pool for faster access until the data is 30 days old. After
the 30 days, the server moves the files to a tape storage pool.
To delay migration of files, set the MIGDELAY parameter when you define or
update a storage pool. The number of days is counted from the day that a file was
stored in the storage pool or accessed by a client, whichever is more recent. You
can set the migration delay separately for each storage pool. When you set the
delay to zero, the server can migrate any file from the storage pool, regardless of
how short a time the file has been in the storage pool. When you set the delay to
greater than zero, the server checks how long the file has been in the storage pool
and when it was last accessed by a client. If the number of days exceeds the
migration delay, the server migrates the file.
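For example, to prevent files in the disk storage pool DISKPOOL from being migrated until they have been in the pool for 30 days, you might issue:
update stgpool diskpool migdelay=30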
Note: If you want the number of days for migration delay to be counted based
only on when a file was stored and not when it was retrieved, use the
NORETRIEVEDATE server option. For more information about this option, see the
Administrator’s Reference.
If you set migration delay for a pool, you must decide what is more important:
either ensuring that files stay in the storage pool for the migration delay period, or
ensuring that there is enough space in the storage pool for new files. For each
storage pool that has a migration delay set, you can choose what happens as the
server tries to move enough data out of the storage pool to reach the low
migration threshold. If the server cannot reach the low migration threshold by
moving only files that have been stored longer than the migration delay, you can
choose one of the following:
v Allow the server to move files out of the storage pool even if they have not been
in the pool for the migration delay (MIGCONTINUE=YES). This is the default.
Allowing migration to continue ensures that space is made available in the
storage pool for new files that need to be stored there.
v Have the server stop migration without reaching the low migration threshold
(MIGCONTINUE=NO). Stopping migration ensures that files remain in the
storage pool for the time you specified with the migration delay. The
administrator must ensure that there is always enough space available in the
storage pool to hold the data for the required number of days.
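For example, a command similar to the following sets a 30-day migration delay and stops migration rather than move files that do not meet the delay:
update stgpool diskpool migdelay=30 migcontinue=no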
If you allow more than one migration process for the storage pool and allow the
server to move files that do not satisfy the migration delay time
(MIGCONTINUE=YES), some files that do not satisfy the migration delay time
may be migrated unnecessarily. As one process migrates files that satisfy the
migration delay time, a second process could begin migrating files that do not
satisfy the migration delay time to meet the low migration threshold. The first
process that is still migrating files that satisfy the migration delay time might have,
by itself, caused the storage pool to meet the low migration threshold.
Minimizing access time to migrated files
Caching is a method of minimizing access time to files on disk storage, even if the
server has migrated files to a tape storage pool. However, cached files are removed
from disk when the space they occupy is required. The files must then be obtained
from the storage pool to which they were migrated.
Important: For information about the disadvantages of using cache, see “Caching
in disk storage pools” on page 317.
To ensure that files remain on disk storage and do not migrate to other storage
pools, use one of the following methods:
v Do not define the next storage pool.
A disadvantage of using this method is that if the file exceeds the space
available in the storage pool, the operation to store the file fails.
v Set the high-migration threshold to 100%.
When you set the high migration threshold to 100%, files will not migrate at all.
You can still define the next storage pool in the storage hierarchy, and set the
maximum file size so that large files are stored in the next storage pool in the
hierarchy.
A disadvantage of setting the high threshold to 100% is that after the pool
becomes full, client files are stored directly to tape instead of to disk.
Performance may be affected as a result.
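For example, a command similar to the following keeps files in DISKPOOL by disabling migration while directing files larger than 500 MB to the next storage pool (the size value is illustrative):
update stgpool diskpool highmig=100 maxsize=500m nextstgpool=tapepool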
Migrating sequential-access storage pools
You can set up migration thresholds for sequential-access storage pools. Migrating
data from one sequential-access storage pool to another might be appropriate in
some cases, for example, when you install a tape drive that uses a different type of
tape and want to move data to that tape.
You probably will not want the server to migrate sequential-access storage pools
on a regular basis. An operation such as tape-to-tape migration has limited benefits
compared to disk-to-tape migration, and requires at least two tape drives.
You can migrate data from a sequential-access storage pool only to another
sequential-access storage pool. You cannot migrate data from a sequential-access
storage pool to a disk storage pool. If you need to move data from a
sequential-access storage pool to a disk storage pool, use the MOVE DATA
command. See “Moving data from one volume to another volume” on page 381.
To control the migration process, set migration thresholds and migration delays for
each storage pool using the DEFINE STGPOOL and UPDATE STGPOOL
commands. You can also specify multiple concurrent migration processes to better
use your available tape drives or FILE volumes. (For details, see “Specifying
multiple concurrent migration processes” on page 316.) Using the MIGRATE
STGPOOL command, you can control the duration of the migration process and
whether reclamation is attempted prior to migration. For additional information,
see “Starting migration manually or in a schedule” on page 316.
Tip: Data in storage pools that have an NDMP format (NETAPPDUMP,
CELERRADUMP, or NDMPDUMP) cannot be migrated. However, in primary
storage pools that have an NDMP format, you can make space available by using
the MOVE DATA command. The target storage pool must have the same data
format as the source storage pool.
How the server migrates files from sequential-access storage
pools
The server migrates files by volume from sequential-access storage pools. Volumes
that exceed the reclamation threshold are reclaimed first. Files in the least
recently referenced volumes are migrated next. Before files are migrated, the
server checks the migration delay for the storage pool.
For tape and optical storage pools, the server begins the migration process when
the ratio of volumes containing data to the total number of volumes in the storage
pool, including scratch volumes, reaches the high migration threshold. For
sequential-access disk (FILE) storage pools, the server starts the migration process
when the ratio of data in a storage pool to the pool’s total estimated data capacity
reaches the high migration threshold. The calculation of data capacity includes the
capacity of all the scratch volumes specified for the pool.
Tip: When Tivoli Storage Manager calculates the capacity for a sequential-access
disk storage pool, it takes into consideration the amount of disk space available in
the file system. For this reason, be sure that you have enough disk space in the file
system to hold all the defined and scratch volumes specified for the storage pool.
For example, suppose that the capacity of all the scratch volumes specified for a
storage pool is 10 TB. (There are no predefined volumes.) However, only 9 TB of
disk space is available in the file system. The capacity value used in the migration
threshold is 9 TB, not 10 TB. If the high migration threshold is set to 70%,
migration will begin when the storage pool contains 6.3 TB of data, not 7 TB.
When migrating files by volume from sequential-access storage pools, including
sequential-access disk storage pools associated with a FILE device class, the server
performs the following procedure:
1. The server first reclaims volumes that have exceeded the reclamation threshold.
Reclamation is a server process of consolidating files from several volumes onto
one volume. (See “Reclaiming space in sequential-access storage pools” on page
350.)
2. After reclamation processing, the server compares the space used in the storage
pool to the low migration threshold.
3. If the space used is now below the low migration threshold, the server stops
processing. If the space used is still above the low migration threshold, the
server determines which volume is the least recently referenced volume.
4. If the amount of time a file has been in the storage pool exceeds the amount of
time specified as the migration delay for the storage pool, the file is eligible for
migration. The server selects the volume for migration only when all files on
the volume are eligible for migration.
5. The server repeats steps 3 and 4 until the storage pool reaches the low
migration threshold.
Because migration delay can prevent volumes from being migrated, the server can
migrate files from all eligible volumes but still find that the storage pool is above
the low migration threshold. If you set migration delay for a pool, you need to
decide what is more important: either ensuring that files stay in the storage pool
for as long as the migration delay, or ensuring there is enough space in the storage
pool for new files. For each storage pool that has a migration delay set, you can
choose what happens as the server tries to move enough files out of the storage
pool to reach the low migration threshold. If the server cannot reach the low
migration threshold by migrating only volumes that meet the migration delay
requirement, you can choose one of the following:
v Allow the server to migrate volumes from the storage pool even if they do not
meet the migration delay criteria (MIGCONTINUE=YES). This is the default.
Allowing migration to continue ensures that space is made available in the
storage pool for new files that need to be stored there.
v Have the server stop migration without reaching the low migration threshold
(MIGCONTINUE=NO). Stopping migration ensures that volumes are not
migrated for the time you specified with the migration delay. The administrator
must ensure that there is always enough space available in the storage pool to
hold the data for the required number of days.
Migration criteria for sequential-access storage pools
If you are planning to use migration for sequential-access storage pools, you need
to consider a number of factors, including the time required to mount tapes into
drives and whether collocation is enabled.
When defining migration criteria for sequential-access storage pools, consider:
v The capacity of the volumes in the storage pool
v The time required to migrate data to the next storage pool
v The speed of the devices that the storage pool uses
v The time required to mount media, such as tape volumes, into drives
v Whether operator presence is required
v The number of concurrent migration processes
If you decide to migrate data from one sequential-access storage pool to another,
ensure that:
v Two drives (mount points) are available, one in each storage pool.
v The access mode for the next storage pool in the storage hierarchy is set to
read/write.
For information about setting an access mode for sequential-access storage pools,
see “Defining storage pools” on page 281.
v Collocation is set the same in both storage pools. For example, if collocation is
set to NODE in the first storage pool, then collocation should be set to NODE in
the next storage pool.
When you enable collocation for a storage pool, the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client file
space on a minimal number of volumes. For information about collocation for
sequential-access storage pools, see “Keeping client files together using
collocation” on page 340.
v You have sufficient resources (for example, staff) available to manage any
necessary media mount and dismount operations. (This is especially true for
multiple concurrent processing. For details, see “Specifying multiple concurrent
migration processes” on page 316.) More mount operations occur because the
server attempts to reclaim space from sequential-access storage pool volumes
before it migrates files to the next storage pool.
If you want to limit migration from a sequential-access storage pool to another
storage pool, set the high-migration threshold to a high percentage, such as 95%.
For information about setting a reclamation threshold for tape storage pools, see
“Reclaiming space in sequential-access storage pools” on page 350.
There is no straightforward way to selectively migrate data for a specific node
from one sequential storage pool to another. You can use the MOVE NODEDATA
command to move file spaces for a node from one storage pool to another. See
“Moving data belonging to a client node” on page 386.
Starting migration manually or in a schedule
To gain more control over how and when the migration process occurs, you can
use the MIGRATE STGPOOL command. Issuing this command starts migration
from one storage pool to the next storage pool in the hierarchy, regardless of the
value of the HIGHMIG parameter of the storage pool definition.
You can specify the maximum number of minutes the migration will run before
automatically cancelling. If you prefer, you can include this command in a
schedule to perform migration when it is least intrusive to normal production
needs.
For example, to migrate data from a storage pool named ALTPOOL to the next
storage pool, and specify that it end as soon as possible after one hour, issue the
following command:
migrate stgpool altpool duration=60
Do not use this command if you are going to use automatic migration. To prevent
automatic migration from running, set the HIGHMIG parameter of the storage
pool definition to 100. For details about the MIGRATE STGPOOL command, refer
to the Administrator’s Reference.
Restriction: Data cannot be migrated into or out of storage pools defined with a
CENTERA device class.
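For example, to disable automatic migration for ALTPOOL and instead run migration in a nightly administrative schedule, you might issue commands similar to the following (the schedule name and start time are illustrative):
update stgpool altpool highmig=100
define schedule nightlymig type=administrative cmd="migrate stgpool altpool duration=60" active=yes starttime=02:00 period=1 perunits=days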
Specifying multiple concurrent migration processes
Running multiple migration processes concurrently lets you make better use of
your available tape drives or FILE volumes. When calculating the number of
concurrent processes to run, you must carefully consider available resources.
Each migration process requires at least two simultaneous volume mounts (at least
two mount points) and, if the device type is not FILE, at least two drives. One of
the drives is for the input volume in the storage pool from which files are being
migrated. The other drive is for the output volume in the storage pool to which
files are being migrated.
When calculating the number of concurrent processes to run, carefully consider the
resources you have available, including the number of storage pools that will be
involved with the migration, the number of mount points, the number of drives
that can be dedicated to the operation, and (if appropriate) the number of mount
operators available to manage migration requests. The number of available mount
points and drives depends on other Tivoli Storage Manager and system activity
and on the mount limits of the device classes for the storage pools that are
involved in the migration. For more information about mount limit, see:
“Controlling the number of simultaneously mounted volumes” on page 255
For example, suppose that you want to migrate data on volumes in two sequential
storage pools simultaneously and that all storage pools involved have the same
device class. Each process requires two mount points and, if the device type is not
FILE, two drives. To run four migration processes simultaneously (two for each
storage pool), you need a total of at least eight mount points and eight drives if
the device type is not FILE. The device class must have a mount limit of at least
eight.
If the number of migration processes you specify is more than the number of
available mount points or drives, the processes that do not obtain mount points or
drives will wait indefinitely or until the other migration processes complete and
mount points or drives become available.
To specify one or more migration processes for each primary sequential-access
storage pool, use the MIGPROCESS parameter on the DEFINE STGPOOL and
UPDATE STGPOOL commands.
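For example, to allow two concurrent migration processes for the storage pool TAPEPOOL, you might issue:
update stgpool tapepool migprocess=2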
The Tivoli Storage Manager server starts the specified number of migration
processes regardless of the number of volumes that are eligible for migration. For
example, if you specify ten migration processes and only six volumes are eligible
for migration, the server will start ten processes and four of them will complete
without processing a volume.
Multiple concurrent migration processing does not affect collocation. If you specify
collocation and multiple concurrent processes, the Tivoli Storage Manager server
attempts to migrate the files for each collocation group, client node, or client file
space onto as few volumes as possible. If files are collocated by group, each
process can migrate only one group at a single time. In addition, if files belonging
to a single collocation group (or node or file space) are on different volumes and
are being migrated at the same time by different processes, the files could be
migrated to separate output volumes.
The effect of migration on copy storage pools and active-data
pools
Files in copy storage pools and active-data pools cannot be migrated. Migration of
files between primary storage pools does not affect copy storage pool files or
active-data pool files. Neither copy storage pool files nor active-data pool files
move when primary storage pool files move.
For example, suppose a copy of a file is made while it is in a disk storage pool.
The file then migrates to a primary tape storage pool. If you then back up the
primary tape storage pool to the same copy storage pool, a new copy of the file is
not needed. The server knows it already has a valid copy of the file.
The only way to store files in copy storage pools is by backing up (the BACKUP
STGPOOL command) or by simultaneous write. The only way to store files in
active-data pools is by copying active data (the COPY ACTIVEDATA command) or
by simultaneous write.
Caching in disk storage pools
When cache is enabled, the migration process leaves behind duplicate copies of
files after the server migrates these files to the next storage pool in the storage
hierarchy. Using cache can improve the speed with which the server retrieves some
files. Consider enabling cache for space-managed files that are frequently accessed
by clients.
If space is needed to store new data in the disk storage pool, cached files are
erased and the space they occupied is used for the new data.
Using cache has some important disadvantages:
v Using cache can increase the time required for client backup operations to
complete. Performance is affected because, as part of the backup operation, the
server must erase cached files to make room for storing new files. The effect can
be severe when the server is storing a very large file and must erase cached files.
For the best performance for client backup operations to disk storage pools, do
not use cache.
v Using cache can require more space for the server database. When you use
cache, more database space is needed because the server has to keep track of
both the cached copy of the file and the new copy in the next storage pool.
v If you want to use caching, you cannot also enable shredding for that disk
storage pool. See “Securing sensitive client data” on page 519 for more
information about shredding.
When cache is disabled and migration occurs, the server migrates the files to the
next storage pool and erases the files from the disk storage pool. By default, the
system disables caching for each disk storage pool because of the potential effects
of cache on backup performance. If you leave cache disabled, consider higher
migration thresholds for the disk storage pool. A higher migration threshold keeps
files on disk longer because migration occurs less frequently.
If fast restore of active client data is your objective, you can also use active-data
pools, which are storage pools containing only active versions of client backup
data. For details, see “Active-data pools” on page 277.
To enable cache, specify CACHE=YES when defining or updating a storage pool.
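For example, to enable caching for the disk storage pool DISKPOOL, you might issue:
update stgpool diskpool cache=yes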
How the server removes cached files
When space is needed, the server reclaims space occupied by cached files. Files
that have the oldest retrieval date are overwritten first.
For example, assume that two files, File A and File B, are cached files that are the
same size. If File A was last retrieved on 05/16/08 and File B was last retrieved on
06/19/08, then File A is deleted to reclaim space first.
If you do not want the server to update the retrieval date for files when a client
restores or retrieves the file, specify the server option NORETRIEVEDATE in the
server options file. If you specify this option, the server removes copies of files in
cache regardless of how recently the files were retrieved.
Effect of caching on storage pool statistics
The space-utilization statistic for the pool (Pct Util) includes the space used by any
cached copies of files in the storage pool. The migratable-data statistic (Pct Migr)
does not include space occupied by cached copies of files.
The server compares the migratable-data statistic with migration-threshold
parameters to determine when migration should begin or end. For more
information about storage pool statistics, see “Monitoring storage-pool and volume
usage” on page 363.
Deduplicating data
Data deduplication eliminates redundant data in sequential-access disk (FILE)
primary, copy, and active-data storage pools. One unique instance of the data is
retained on storage media, and redundant data is replaced with a pointer to a
unique data copy.
Note: You can use the data-deduplication feature with Tivoli Storage Manager
Extended Edition only.
Data deduplication overview
The Tivoli Storage Manager server can deduplicate any type of data, except
encrypted data. For example, the server can deduplicate unencrypted client
backup-and-archive data, Tivoli Data Protection data, and so on.
Using deduplication can reduce the overall amount of time that is required to
retrieve data because you can store more data on disk rather than on tape.
In addition to whole files, Tivoli Storage Manager can also deduplicate parts of
files that are common with parts of other files. If you update a storage pool for
deduplication, Tivoli Storage Manager deduplicates the data that has already been
stored. No additional backup, archive, or migration is required.
Data deduplication in Tivoli Storage Manager is a two-phase process. In the first
phase, the server identifies the duplicate data in the storage pool. As volumes in
the storage pool are filled, data becomes eligible for duplicate identification. A
volume does not have to be full before duplicate identification starts. In the second
phase, duplicate data is removed by any of the following processes:
v Reclaiming volumes in the primary-storage pool, copy-storage pool, or
active-data pool
v Backing up a primary-storage pool to a copy-storage pool that is also set up for
deduplication
v Copying active data in the primary-storage pool to an active-data pool that is
also set up for deduplication
v Migrating data from the primary-storage pool to another primary-storage pool
that is also set up for deduplication
v Moving data from the primary-storage pool to a different primary-storage pool
that is also set up for deduplication, moving data within the same copy-storage
pool, or moving data within the same active-data pool
Important:
v Restore operations from a sequential-access disk (FILE) storage pool that is set
up for deduplication have different performance characteristics than restore
operations from a FILE storage pool that is not set up for deduplication.
In a FILE storage pool that is not set up for deduplication, files are typically
restored in a mainly sequential process. In a FILE storage pool that is set up for
deduplication, however, data is distributed throughout the storage pool. As a
result, the I/O is more random, which can lead to slower restore times. This
behavior occurs more often with small (less than 100 KB) files. In addition, more
server processor resources are consumed when restoring from a deduplicated
storage pool. This occurs because the data is checked to ensure that it has been
reassembled properly.
Although small-file restore operations from a deduplicated storage pool might
be relatively slow, these operations are still typically faster than small-file restore
operations from tape because of the added tape mount-and-locate time. As a
best practice, test your restore scenarios to ensure that performance objectives
will be met.
v For optimal efficiency when deduplicating, upgrade to the version 6.1
backup-archive client.
For more information, see the following topics:
v “Planning for deduplication” on page 322
v “Setting up storage pools for deduplication” on page 323
v “Controlling duplicate-identification processing” on page 324
v “Displaying statistics about deduplication” on page 326
v “Effects on deduplication when moving or copying data” on page 327
v “Improving performance when reading from deduplicated storage pools” on page 328
Effect of deduplication on data collocation
You can use collocation for storage pools that are set up for deduplication.
However, collocation might not have the same benefit as it does for storage pools
that are not set up for deduplication.
By using collocation with storage pools that are set up for deduplication, you can
control the placement of data on volumes. However, the physical location of
duplicate data might be on different volumes. No-query restore and other
processes remain efficient in selecting volumes that contain non-deduplicated data.
However, the efficiency declines when additional volumes are required to provide
the duplicate data.
Estimating space savings from deduplication
Before setting up deduplication in your production environment, you can estimate
the amount of storage space that is saved by backing up the data in a
primary-storage pool to a temporary copy-storage pool that is set up for
deduplication.
To estimate space savings (a command-level sketch follows this procedure):
1. Create a sequential-access disk (FILE) copy-storage pool and specify that the
pool will deduplicate data.
2. Back up the contents of the primary-storage pool that you want to test to the
copy-storage pool.
3. Run the duplicate-identification processes against the volumes in the
copy-storage pool.
If you specified one or more duplicate-identification processes when you
created the copy-storage pool, those processes will start automatically. If you
did not specify any processes, you must specify and start duplicate-identification
processes manually.
4. After all the data in the copy-storage pool is identified, start reclamation by
changing the reclamation percentage on the copy-storage pool to 1%.
5. When reclamation finishes, use the QUERY STGPOOL command to check the
copy storage-pool statistics to determine the amount of space that was saved.
If the results are satisfactory, perform one of the following tasks:
v If the primary-storage pool is a sequential-access disk storage pool, update the
storage pool, specifying deduplication.
v If the primary-storage pool is not a sequential-access disk storage pool, create a
new primary sequential-access disk-storage pool, specifying deduplication. Move
the data or migrate the data from the original storage pool to the new storage
pool.
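As a sketch, the estimation procedure might use commands similar to the following, assuming a primary storage pool named DISKPOOL, a FILE device class named FILEDEV, and a temporary copy-storage pool named TESTDEDUP (the device-class and pool names are hypothetical):
define stgpool testdedup filedev pooltype=copy maxscratch=100 deduplicate=yes numprocesses=3
backup stgpool diskpool testdedup
update stgpool testdedup reclaim=1
query stgpool testdedup format=detailed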
Duplicate-identification processing states
Duplicate-identification processes are different than other server processes. When
other server processes finish a task, they end. When duplicate-identification
processes finish processing available files, they quiesce and go into an idle state.
Duplicate-identification processes can be either active or idle. Processes that are
currently working on files are active. Processes that are waiting for files to work on
are idle. Processes remain idle until volumes with data to be deduplicated become
available. Processes end only when cancelled or when you change the number of
duplicate-identification processes for the storage pool to a value less than that
currently specified.
The output of the QUERY PROCESS command for a duplicate-identification
process includes the total number of bytes and files that have been processed since
the process first started. (For example, if a duplicate-identification process
processes four files, idles, and then processes five more files, the total number of
files processed is nine.)
Protecting data in primary storage pools set up for deduplication
By default, primary sequential-access storage pools that are set up for
deduplication must be backed up to a copy-storage pool before they can be
reclaimed and duplicate data can be removed. To minimize the potential of data
loss, do not change the default setting.
To protect the data in primary-storage pools, issue the BACKUP STGPOOL
command to copy the data to copy-storage pools. Ensure that the copy storage
pools are not set up for deduplication.
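For example, assuming a deduplicated primary pool named FILEDEDUP and a copy-storage pool named COPYPOOL (both names are illustrative), you might issue:
backup stgpool filededup copypool maxprocess=2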
Copying active data to an active-data pool does not qualify as a valid backup for
the purpose of protecting data. Data must be backed up to a copy-storage pool that
is not set up for deduplication.
Attention: You can change the default setting to permit reclamation of
primary-storage pools that are not backed up. However, there is a remote
possibility that changing the default can result in unrecoverable data loss if a
data-integrity error occurs. To change the default and permit reclamation of
primary sequential-access storage pools that are not backed up, set the value of the
DEDUPREQUIRESBACKUP server option to NO. Changing the default does not
change the reclamation criteria that you specified for a storage pool.
Use the DEDUPREQUIRESBACKUP server option only for primary-storage pools.
Do not use the option for copy-storage pools or active-data pools.
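If you accept the risk described above, the option might be set with a line similar to the following in the server options file:
DEDUPREQUIRESBACKUP NO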
Reclamation of a volume in a storage pool that is set up for deduplication might
not occur when the volume first becomes eligible. The server makes additional
checks to ensure that data from a storage pool that is set up for deduplication has
been backed up to a copy-storage pool. These checks require more than one
BACKUP STGPOOL instance before the server reclaims a volume. After the server
verifies that the data was backed up, the volume is reclaimed.
Planning for deduplication
Careful planning for deduplication can increase the efficiency of the setup process.
Before setting up storage pools for deduplication:
v Determine which client nodes have data that you want to deduplicate.
v Decide whether you want to define a new storage pool exclusively for
deduplication or update an existing storage pool. The storage pool must be a
sequential-access disk (FILE) pool. Deduplication occurs at the storage pool
level, and all data within a storage pool, except encrypted data, is deduplicated.
v Decide how you want to control duplicate-identification processes. For example,
you might want to run duplicate-identification processes automatically all of the
time. Alternatively, you might want to start and stop duplicate-identification
processes manually. You can also start duplicate-identification processes
automatically and then increase or decrease the number of processes depending
on your server’s workload. Whatever you decide, you can always change the
settings later, after the initial setup, to meet the requirements of your operations.
The following table can help in the planning process.
Table 31. Options for controlling duplicate-identification processes

If you create a new storage pool for deduplication...
   You can specify 1 - 20 duplicate-identification processes to start
   automatically. The Tivoli Storage Manager server does not start any processes
   if you specify zero.
   If you are creating a primary sequential-access storage pool and you do not
   specify a value, the server starts one process automatically. If you are
   creating a copy storage pool or an active-data pool and you do not specify a
   value, the server does not start any processes automatically.
   After the storage pool has been created, you can increase and decrease the
   number of duplicate-identification processes manually. You can also start,
   stop, and restart duplicate-identification processes manually.

If you update an existing storage pool...
   You can specify 0 - 20 duplicate-identification processes to start
   automatically. If you do not specify any duplicate-identification processes,
   you must start and stop processes manually. The Tivoli Storage Manager server
   does not start any duplicate-identification processes automatically by default.
v Decide whether to define or update a storage pool for deduplication, but not
actually perform deduplication. For example, if you have a primary
sequential-access disk storage pool and a copy sequential-access disk storage
pool, and both pools are set up for deduplication, you might want to run
duplicate-identification processes for the primary storage pool only. In this way,
only the primary storage pool will read and deduplicate data. However, when
the data is moved to the copy storage pool, the deduplication is preserved, and
no duplicate identification is required.
To estimate space savings from deduplication, see “Estimating space savings from
deduplication” on page 320
For more information, see the following topics:
v “Data deduplication overview” on page 319
v “Setting up storage pools for deduplication”
v “Controlling duplicate-identification processing” on page 324
v “Displaying statistics about deduplication” on page 326
v “Effects on deduplication when moving or copying data” on page 327
Setting up storage pools for deduplication
You can create a new storage pool for deduplication or you can upgrade an
existing storage pool. In either case, Tivoli Storage Manager provides the option of
running duplicate-identification processes automatically or manually.
Before setting up:
v Determine which client nodes have data that you want to deduplicate.
v Decide whether you want to define a new storage pool exclusively for
deduplication or update an existing storage pool. You can also define or update
a storage pool for deduplication, but not actually perform deduplication.
v Decide how you want to control duplicate-identification processes.
You can create a new storage pool for deduplication or update an existing storage
pool for deduplication. To set up a storage pool for deduplication:
v If you are defining a new storage pool:
1. Use the DEFINE STGPOOL command and specify the DEDUPLICATE=YES
parameter.
2. Define a new policy domain to direct eligible client-node data to the storage
pool.
v If you are updating an existing storage pool:
1. Determine whether the storage pool contains data from one or more client
nodes that you want to exclude from deduplication. If it does:
a. Using the MOVE DATA command, move the excluded nodes’ data from
the storage pool to be converted to another storage pool.
b. Direct data belonging to the excluded nodes to the other storage pool.
The easiest way to do this is to create another policy domain and
designate the other storage pool as the destination storage pool.
2. Change the storage-pool definition using the UPDATE STGPOOL command.
Specify the DEDUPLICATE and NUMPROCESSES parameters.
As data is stored in the pool, the duplicates are identified. When the reclamation
threshold for the storage pool is reached, reclamation begins and the space that is
occupied by duplicate data is reclaimed.
In the storage pool definition, you can specify as many as 20
duplicate-identification processes to start automatically. If you do not specify any
duplicate-identification processes in the storage pool definition, you must control
deduplication manually. Duplicate identification requires extra disk I/O and
processor resources. To mitigate the effects on server workload, you can manually
increase or decrease the number of duplicate-identification processes, as well as
their duration.
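For example, a command similar to the following defines a primary FILE storage pool that deduplicates data and starts three duplicate-identification processes automatically (the pool and device-class names are hypothetical):
define stgpool filededup filedev maxscratch=200 deduplicate=yes numprocesses=3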
Attention: By default, the Tivoli Storage Manager server requires that you back
up primary storage pools that are set up for deduplication before volumes in the
storage pool are reclaimed and before duplicate data is discarded. The copy
storage pools and active-data pools to which you back up data and copy active
data must not be set up for deduplication. As a best practice and to prevent
possible data loss, do not change the default. If you do change the default, the
reclamation criteria that you specified remain unchanged.
For more information, see the following topics:
v “Data deduplication overview” on page 319
v “Planning for deduplication” on page 322
v “Controlling duplicate-identification processing”
v “Displaying statistics about deduplication” on page 326
v “Effects on deduplication when moving or copying data” on page 327
Controlling duplicate-identification processing
When you define or update a storage pool for deduplication, you can specify 0 - 20
duplicate-identification processes to start automatically and run indefinitely. To
avoid resource impacts during server operations (for example, client backups), you
can also control deduplication processing manually.
For more information, see the following topics:
v “Data deduplication overview” on page 319
v “Planning for deduplication” on page 322
v “Setting up storage pools for deduplication” on page 323
v “Displaying statistics about deduplication” on page 326
v “Effects on deduplication when moving or copying data” on page 327
Interaction of manual deduplication controls (IDENTIFY
DUPLICATES command)
You can change the number of duplicate-identification processes and the length of
time that processes are allowed to run by using the IDENTIFY DUPLICATES
command. You can change those settings as often as you want.
Table 32 on page 325 shows how these two controls (number and duration of
processes) interact for a particular storage pool.
Remember:
v When the amount of time that you specify as a duration expires, the number of
duplicate-identification processes always reverts back to the number of processes
specified in the storage pool definition.
v When the server stops a duplicate-identification process, the process completes
the current physical file and then stops. As a result, it might take several
minutes to reach the value that you specify as a duration.
v To change the number of duplicate-identification processes, you can also update
the storage pool definition using the UPDATE STGPOOL command. However,
when you update a storage pool definition, you cannot specify a duration. The
processes that you specify in the storage pool definition run indefinitely, or until
you issue the IDENTIFY DUPLICATES command, update the storage pool
definition again, or cancel a process.
In this example, you specified three duplicate-identification processes in the
storage pool definition. You use the IDENTIFY DUPLICATES command to change
the number of processes and to specify the amount of time the change is to remain
in effect.
Table 32. Controlling duplicate-identification processes manually

Using the IDENTIFY DUPLICATES command, you specify 2 duplicate-identification
processes and no duration:
   One duplicate-identification process finishes the file it is working on, if
   any, and then stops. Two processes run indefinitely, or until you reissue the
   IDENTIFY DUPLICATES command, update the storage pool definition, or cancel a
   process.

You specify 2 duplicate-identification processes and a duration of 60 minutes:
   One duplicate-identification process finishes the file it is working on, if
   any, and then stops. After 60 minutes, the server starts one process so that
   three are running.

You specify 4 duplicate-identification processes and no duration:
   The server starts one duplicate-identification process. Four processes run
   indefinitely, or until you reissue the IDENTIFY DUPLICATES command, update the
   storage pool definition, or cancel a process.

You specify 4 duplicate-identification processes and a duration of 60 minutes:
   The server starts one duplicate-identification process. At the end of 60
   minutes, one process finishes the file it is working on, if any, and then
   stops. The additional process started by this command might not be the one
   that stops when the duration has expired.

You specify 0 duplicate-identification processes and no duration:
   All duplicate-identification processes finish the files that they are working
   on, if any, and stop. This change lasts indefinitely, or until you reissue the
   IDENTIFY DUPLICATES command, update the storage pool definition, or cancel a
   process.

You specify 0 duplicate-identification processes and a duration of 60 minutes:
   All duplicate-identification processes finish the files that they are working
   on, if any, and stop. At the end of 60 minutes, the server starts three
   processes.

You specify no number of processes (a duration is not available):
   The number of duplicate-identification processes resets to the number of
   processes specified in the storage pool definition. This change lasts
   indefinitely, or until you reissue the IDENTIFY DUPLICATES command, update the
   storage pool definition, or cancel a process.
Starting and stopping duplicate-identification processes
You can start additional duplicate-identification processes, stop some or all active
processes, and specify an amount of time that the change remains in effect. If you
did not specify any duplicate-identification processes in the storage pool definition,
you can start new processes and stop them manually.
To specify the number and duration of duplicate-identification processes for a
storage pool, issue the IDENTIFY DUPLICATES command.
For example, suppose that you have four storage pools (stgpoolA, stgpoolB,
stgpoolC, and stgpoolD), all of which are associated with a particular Tivoli
Storage Manager server. Storage pools A and B are each running one
duplicate-identification process, and storage pools C and D are each running two.
A 60-minute client backup is scheduled to take place, and you want to reduce the
server workload from these processes by two-thirds.
Issue the following commands:
identify duplicates stgpoola duration=60 numprocess=0
identify duplicates stgpoolb duration=60 numprocess=0
identify duplicates stgpoolc duration=60 numprocess=1
identify duplicates stgpoold duration=60 numprocess=1
Now two processes are running for 60 minutes, one third of the number running
before the change. At the end of 60 minutes, the Tivoli Storage Manager server
automatically restarts one duplicate-identification process in storage pools A and B,
and one process in storage pools C and D.
Turning deduplication on or off
If you turn deduplication off for a storage pool by updating the storage pool
definition, new data that enters the storage pool is not deduplicated.
Deduplicated data, which was in the storage pool before you turned deduplication
off, is not reassembled. Deduplicated data continues to be removed due to normal
reclamation and deletion. All information about deduplication for the storage pool
is retained.
To turn deduplication off for a storage pool, use the UPDATE STGPOOL command
and specify DEDUPLICATE=NO.
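For example, assuming a pool named FILEDEDUP (a hypothetical name):
update stgpool filededup deduplicate=no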
If you turn deduplication on for the same storage pool, duplicate-identification
processes resume, skipping any files that have already been processed.
Displaying statistics about deduplication
Important statistics about deduplication are available by querying the server for
information about storage pools or duplicate-identification processes.
For more information, see the following topics:
v “Data deduplication overview” on page 319
v “Planning for deduplication” on page 322
v “Setting up storage pools for deduplication” on page 323
v “Controlling duplicate-identification processing” on page 324
v “Effects on deduplication when moving or copying data” on page 327
Querying a storage pool for statistics about deduplication
You can query a storage pool to determine if a storage pool has been set up for
deduplication, the default number of duplicate-identification processes specified
when the storage pool was created, and the amount of data that was removed
from the storage pool by reclamation processing.
To query a storage pool for statistics about deduplication, issue the QUERY
STGPOOL command.
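For example, to display detailed deduplication statistics for a hypothetical pool named FILEDEDUP:
query stgpool filededup format=detailed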
You might notice a discrepancy between the number of duplicate-identification
processes specified as the default for a storage pool and the number of
duplicate-identification processes currently running. This discrepancy occurs when
you manually increase or decrease the number of duplicate-identification processes
for the storage pool.
Remember: Querying a storage pool displays storage-pool utilization as a
percentage of its assigned capacity. (Storage-pool utilization is expressed as Pct Util
in the command output.) This field does not display a value for storage pools that
are set up for deduplication. If you turn off deduplication for a storage pool, a
value for percentage utilized is not displayed until all duplicate data is removed
from the storage pool.
Querying a duplicate-identification process
Querying a duplicate-identification process displays the total number of bytes and
total number of files processed.
To query a duplicate-identification process, issue the QUERY PROCESS command.
Effects on deduplication when moving or copying data
You can move or copy data between storage pools regardless of whether they are
set up for deduplication.
The following table illustrates what happens to deduplication when data objects
are moved or copied.
Table 33. Effects when moving or copying data

If the source storage pool is set up for deduplication and the target storage
pool is set up for deduplication:
   All data objects in the source pool are examined for existence in the target
   pool. If an object exists in the target pool, information about deduplication
   is preserved so that the data does not need to be deduplicated again. If an
   object does not exist in the target pool, it is moved or copied.

If the source storage pool is set up for deduplication and the target storage
pool is not set up for deduplication:
   The data is not deduplicated in the target pool.

If the source storage pool is not set up for deduplication and the target
storage pool is set up for deduplication:
   Normal deduplication processing takes place after the data is moved or copied.

If the source storage pool is not set up for deduplication and the target
storage pool is not set up for deduplication:
   No deduplication occurs.
For more information, see the following topics:
v “Data deduplication overview” on page 319
v “Planning for deduplication” on page 322
v “Setting up storage pools for deduplication” on page 323
v “Controlling duplicate-identification processing” on page 324
v “Displaying statistics about deduplication” on page 326
Improving performance when reading from deduplicated
storage pools
To obtain the different extents that make up a file from a deduplicated storage
pool, client restore operations and certain server processes might require opening
and closing FILE volumes multiple times. The frequency with which FILE volumes
are opened and closed during a session can severely affect performance.
Opening and closing volumes multiple times can affect the following server
processes that read data from a deduplicated storage pool:
v Volume reclamation
v MOVE DATA or MOVE NODEDATA
v EXPORT
v AUDIT VOLUME
v Storage-pool restore operation
v Volume restore operation
v Data migration
To reduce the number of times a volume is opened and closed, Tivoli Storage
Manager allows multiple input FILE volumes in a deduplicated storage pool to
remain open at the same time during a session. To specify the number of open
FILE volumes in deduplicated storage pools that can remain open, use the
NUMOPENVOLSALLOWED server option. Set this option in the server options
file or by using the SETOPT command.
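For example, to allow each session to keep as many as 20 FILE volumes open (the value is illustrative), you might issue:
setopt numopenvolsallowed 20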
Each session within a client operation or server process can have as many open
FILE volumes as specified by this option. A session is initiated by a client
operation or by a server process. Multiple sessions can be started within each.
During a client-restore operation, volumes can remain open for the duration of the operation, as long as the client session is active. During a no-query restore operation, the volumes remain open until the no-query restore completes. At that time, all volumes are closed and released. However, for a classic restore operation started in interactive mode, the volumes might remain open at the end of the restore operation. The volumes are closed and released when the next classic restore operation is requested.
Tip: This option can significantly increase the number of volumes and mount
points in use at any one time. To optimize performance, follow these steps:
- To set NUMOPENVOLSALLOWED, select a beginning value (the default is recommended). Monitor client sessions and server processes, and note the highest number of volumes open for a single session or process. Increase the setting of NUMOPENVOLSALLOWED if the highest number of open volumes is equal to the value specified by NUMOPENVOLSALLOWED.
- To prevent sessions or processes from having to wait for a mount point, increase the value of the MOUNTLIMIT parameter in the device-class definition. Set the value of the MOUNTLIMIT parameter high enough to allow all client sessions and server processes that use deduplicated storage pools to open the number of volumes specified by the NUMOPENVOLSALLOWED option. For client sessions, check the destination in the copy group definition to determine how many nodes are storing data in the deduplicated storage pool. For server processes, check the number of processes allowed for each process for the storage pool.
- For any node backing up or archiving data into a deduplicated storage pool, set the value of the MAXNUMMP parameter in the client-node definition to a value at least as high as the NUMOPENVOLSALLOWED option. Increase this value if you notice that the node is failing client operations because the MAXNUMMP value is being exceeded. A hedged example follows this list.
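For example, a hedged sketch (the node name is hypothetical) that sets a node's MAXNUMMP at least as high as a NUMOPENVOLSALLOWED value of 4:
update node workstn1 maxnummp=4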
Writing data simultaneously to primary, copy, and active-data pools
The simultaneous-write function increases your level of data protection and
reduces the amount of time required for storage pool backup by letting you write
data simultaneously to a primary storage pool, copy storage pools, and active-data
pools.
The maximum number of copy storage pools and active-data pools to which data can be simultaneously written is three. For example, you can write data simultaneously to three copy storage pools, or to two copy storage pools and one active-data pool, and so on.
Simultaneous write is supported for the following operations:
- Backup and archive operations by Tivoli Storage Manager backup-archive clients or application clients that use the Tivoli Storage Manager API. Only active versions of backup data can be simultaneously written to active-data pools.
- Migration operations by hierarchical storage management (HSM) clients. Migrated data can be simultaneously written to copy storage pools only; migrated data is not permitted in active-data pools.
- Import operations that involve copying exported file data from external media to a primary storage pool that is configured for simultaneous write. Imported data can be simultaneously written to copy storage pools, but not to active-data pools. Use the COPY ACTIVEDATA command to store the newly imported data into an active-data pool.
Simultaneous-write overview
You control the simultaneous-write function to copy storage pools and active-data
pools by specifying certain parameters when you define or update primary storage
pools. Certain rules apply when a store operation has to switch primary storage
pools or if a write failure occurs.
The parameters used to control the simultaneous-write function to copy storage
pools are the COPYSTGPOOLS and the COPYCONTINUE parameters. The
parameter used to control the simultaneous-write function to active-data pools is
ACTIVEDATAPOOLS. (The COPYCONTINUE parameter only applies to copy
storage pools and is not supported for active-data pools.) You can specify these
parameters in a primary storage pool definition or update using the DEFINE
STGPOOL or UPDATE STGPOOL commands. For details about these commands,
refer to the Administrator’s Reference.
When a client backs up, archives, or migrates a file or when the server imports
data, the data is written to the primary storage pool specified by the copy group of
the management class that is bound to the data. If a data storage operation or a
server import operation switches from the primary storage pool at the top of a
storage hierarchy to a next primary storage pool in the hierarchy, the next storage
pool inherits the list of copy storage pools, the list of active-data pools, and the
value of the COPYCONTINUE parameter from the primary storage pool at the top
of the storage pool hierarchy.
The following rules apply during a store operation when the server has to switch
primary storage pools:
- If the destination primary storage pool has one or more copy storage pools or active-data pools defined with the COPYSTGPOOLS or ACTIVEDATAPOOLS parameters, the server writes the data to the next storage pool and to the copy storage pools and active-data pools that are defined to the destination primary pool, regardless of whether the next pool has copy pools defined. The next primary storage pool inherits the COPYCONTINUE setting of the destination primary storage pool. If the next pool has its own copy storage pools or active-data pools defined, they are ignored, as is its own value of the COPYCONTINUE parameter.
- If no copy storage pools or active-data pools are defined in the destination primary storage pool, the server writes the data to the next primary storage pool. If the next pool has copy storage pools or active-data pools defined, they are ignored.
These rules apply to all the primary storage pools within the storage pool
hierarchy.
If a write failure occurs for any of the copy storage pools, the setting of the
COPYCONTINUE parameter determines how the server will react.
- If the COPYCONTINUE parameter is set to YES, the server stops writing to the failing copy pools for the remainder of the session, but continues storing files into the primary pool and any remaining copy pools or active-data pools. The copy storage pool list is active only for the life of the session and applies to all the primary storage pools in a particular storage pool hierarchy.
- If the COPYCONTINUE parameter is set to NO, the server fails the current transaction and discontinues the store operation.
The setting of the COPYCONTINUE parameter has no effect on active-data pools.
If a write failure occurs for any of the active-data pools, the server will stop
writing to the failing active-data pool for the remainder of the session, but
continue storing files into the primary pool and any remaining active-data pools
and copy storage pools. The active-data pool list is active only for the life of the
session and applies to all the primary storage pools in a particular storage pool
hierarchy.
Notes:
- Simultaneous write to copy storage pools and active-data pools is not supported for data movements performed by the server, such as server migration, reclamation, moving data from one storage pool to another storage pool, or backing up a storage pool.
- Simultaneous write takes precedence over LAN-free operations. The operations go over the LAN, and the simultaneous-write configuration is honored.
- Create current backup and archive versions of files before the Tivoli Storage Manager for Space Management client migrates them. If you back up or archive a copy of a migrated file to the same Tivoli Storage Manager server to which it was migrated, the file is stored only in the primary storage pool.
- Target storage pools used for simultaneous-write operations can have different device classes. Performance is limited by the speed of the slowest device.
- You cannot use the simultaneous-write function with Centera storage devices.
- The COPYSTGPOOLS parameter is available only to primary storage pools that use the NATIVE or NONBLOCK data format. This parameter is not available for storage pools that use the following data formats:
  - NETAPPDUMP
  - CELERRADUMP
  - NDMPDUMP
- When a NAS backup operation is writing a TOC file, if the primary storage pool specified in the TOCDESTINATION in the copy group of the management class has copy storage pools or active-data pools defined, the copy storage pools and active-data pools are ignored, and the data is stored into the primary storage pool only.
Attention: Use of the simultaneous write function is not intended to replace
regular backup of storage pools. If you use the function to simultaneously write to
copy storage pools, active-data pools, or both, ensure that the copy of each primary
storage pool is complete by regularly issuing the BACKUP STGPOOL command
and the COPY ACTIVEDATA command. If you fail to perform regular storage pool
backups, you could lose the ability to recover primary storage pool data. For
example, if a copy storage pool fails during a write operation and the
COPYCONTINUE parameter is set to YES, the Tivoli Storage Manager server will
remove the failed copy storage pool from the copy pool list for the remainder of
the client session. After the copy storage pool is removed, the Tivoli Storage
Manager server will continue to write to the primary storage pool and to any
remaining copy storage pools and active-data pools. If these pools become
damaged or lost and you did not issue the BACKUP STGPOOL command for the copy storage pool that failed, you might not be able to recover your data.
How simultaneous write works
Three examples show how simultaneous write works. In all three examples, client
nodes whose files require fast restore are members of a policy domain that
specifies an active-data pool.
For these examples, assume the following:
- Primary storage pools DISKPOOL and TAPEPOOL are linked to form a storage hierarchy. DISKPOOL is at the top of the storage hierarchy, and TAPEPOOL is the next pool in the storage hierarchy.
- The active backup data belonging to certain clients must be restored as quickly as possible if a disaster occurs. These clients are members of policy domain FASTRESTORE, which specifies an active-data pool as the destination for active backup data. Files A and B belong to a node in this domain and are bound to management class STANDARD. The destination specified in its backup copy group is DISKPOOL. (For detailed information about creating policies, see Chapter 14, “Implementing policies for client data,” on page 455.)
- The data belonging to other nodes is less critical. Restore times are flexible. These nodes are assigned to policy domain NORMAL, which does not have an active-data pool specified. Files C, D, and E belong to one of the nodes in this domain and are bound to management class STANDARD. The destination specified in its backup copy group is DISKPOOL.
- DISKPOOL has enough space to store only files C and D, but its next pool (TAPEPOOL) has enough space for file E.
Example: Simultaneous write to copy storage pools and an
active-data pool
The simultaneous write function automatically copies client data to two copy
storage pools, COPYPOOL1 and COPYPOOL2, and an active-data pool,
ACTIVEDATAPOOL, during a backup operation. If a write failure occurs to any of
the storage pools, the server stops writing to the failing pools for the remainder of
the session but continues to store files into the primary pool and any remaining
copy storage pools and the active-data pool.
With DISKPOOL and TAPEPOOL already defined as your storage pool hierarchy,
issue the following commands to enable simultaneous write:
define stgpool copypool1 mytapedevice pooltype=copy
define stgpool copypool2 mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
update stgpool diskpool copystgpools=copypool1,copypool2 copycontinue=yes
activedatapools=activedatapool
where MYTAPEDEVICE is the device-class name associated with the copy storage
pools and MYDISKDEVICE is the device-class name associated with the
active-data pool.
The storage pool hierarchy and the copy storage pools and active-data pool
associated with DISKPOOL are displayed in Figure 39.
[Figure 39. Example of storage pool hierarchy with copy storage pools defined for DISKPOOL. The backup copy groups of the STANDARD management classes in the NORMAL and FASTRESTORE policy domains point to DISKPOOL, whose next pool is TAPEPOOL. COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL are defined to DISKPOOL.]
During a simultaneous write operation, the next storage pool TAPEPOOL inherits the list of copy storage pools (COPYPOOL1 and COPYPOOL2) and the value of the COPYCONTINUE parameter from DISKPOOL, the primary pool at the top of the storage pool hierarchy. TAPEPOOL also inherits the list of active-data pools (ACTIVEDATAPOOL). When files A, B, C, D, and E are backed up, the following events occur:
- Files A and B are written to DISKPOOL, COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL.
- Files C and D are written to DISKPOOL, COPYPOOL1, and COPYPOOL2.
- File E is written to TAPEPOOL, COPYPOOL1, and COPYPOOL2.
See Figure 40.
[Figure 40. Inheriting a list of copy storage pools. Files A and B from the client in the FASTRESTORE domain are written to DISKPOOL, COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL; files C and D from the client in the NORMAL domain are written to DISKPOOL, COPYPOOL1, and COPYPOOL2; file E is written to TAPEPOOL, COPYPOOL1, and COPYPOOL2.]
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA
commands after the backup operation has completed.
Example: Simultaneous write not used by the next storage pool
in a hierarchy
The next storage pool in a hierarchy inherits empty copy storage pool and
active-data pool lists from the primary storage pool at the top of the storage
hierarchy.
You do not specify a list of copy storage pools for DISKPOOL. However, you do
specify copy storage pools for TAPEPOOL (COPYPOOL1 and COPYPOOL2) and
an active-data pool (ACTIVEDATAPOOL). You also specify a value of YES for the
COPYCONTINUE parameter. Issue the following commands to enable
simultaneous write:
define stgpool copypool1 mytapedevice pooltype=copy
define stgpool copypool2 mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
update stgpool tapepool copystgpools=copypool1,copypool2
copycontinue=yes activedatapools=activedatapool
where MYTAPEDEVICE is the device-class name associated with the copy storage
pools and MYDISKDEVICE is the device-class name associated with the
active-data pool. Figure 41 on page 334 displays this configuration:
[Figure 41. Example of storage pool hierarchy with copy storage pools defined for TAPEPOOL. The backup copy groups of the STANDARD management classes in the NORMAL and FASTRESTORE policy domains point to DISKPOOL. COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL are defined to TAPEPOOL, the next pool.]
When files A, B, C, D, and E are backed up, the following events occur:
- Files A, B, C, and D are written to DISKPOOL.
- File E is written to TAPEPOOL.
See Figure 42 on page 335.
[Figure 42. Inheriting an empty copy storage pool list. Files A, B, C, and D are stored only in DISKPOOL, and file E only in TAPEPOOL; no copies are written to COPYPOOL1, COPYPOOL2, or ACTIVEDATAPOOL.]
Although TAPEPOOL has copy storage pools and an active-data pool defined, file
E is not copied because TAPEPOOL inherits empty copy storage pool and
active-data pool lists from DISKPOOL.
Example: An error during a simultaneous write
An error occurs during a simultaneous write operation and data is not written to
one copy storage pool.
You specify COPYPOOL1 and COPYPOOL2 as copy storage pools for DISKPOOL
and you set the value of the COPYCONTINUE parameter to YES. You also specify
ACTIVEDATAPOOL as the active-data pool for DISKPOOL. This configuration is
identical to that in the first example.
When files A, B, C, D, and E are backed up, the following events occur. (See
Figure 43 on page 336.)
- An error occurs while writing to COPYPOOL1, and it is removed from the copy storage pool list held in memory by the server. The transaction fails.
- Because the value of the COPYCONTINUE parameter is YES, the client retries the backup operation. The in-memory copy storage pool list, which is retained by the server for the duration of the client session, no longer contains COPYPOOL1.
- Files A and B are simultaneously written to DISKPOOL, ACTIVEDATAPOOL, and COPYPOOL2.
- Files C and D are simultaneously written to DISKPOOL and COPYPOOL2.
- File E is simultaneously written to TAPEPOOL and COPYPOOL2.
See Figure 39 on page 332.
[Figure 43. Inheriting a list of copy storage pools. After the write error, COPYPOOL1 is removed for the duration of the session and the transaction is retried: files A and B are stored in DISKPOOL, ACTIVEDATAPOOL, and COPYPOOL2; files C and D in DISKPOOL and COPYPOOL2; file E in TAPEPOOL and COPYPOOL2.]
In this scenario, if the primary storage pools and COPYPOOL2 become damaged
or lost, you might not be able to recover your data. For this reason, issue the
following BACKUP STGPOOL commands for the copy storage pool that failed:
backup stgpool diskpool copypool1
backup stgpool tapepool copypool1
Suppose, in this scenario, that an error occurred while writing to
ACTIVEDATAPOOL, rather than COPYPOOL1. In this situation,
ACTIVEDATAPOOL would be removed from the active-data pool list held in
memory by the server, and the transaction would fail. The client would retry the
backup operation. The in-memory active-data pool list would not contain
ACTIVEDATAPOOL. Files A, B, C, and D would be written simultaneously to
DISKPOOL, COPYPOOL1, and COPYPOOL2. File E would be written to
TAPEPOOL, COPYPOOL1, and COPYPOOL2. However, files A and B would not
be written to the active-data pool.
You can still recover your primary storage pools from COPYPOOL1 and, if
necessary, COPYPOOL2. However, if you want active backup data available in the
active-data pool for fast client restores, you must issue the following command:
copy activedata diskpool activedatapool
Implementing simultaneous write
Before implementing simultaneous write, you need to consider available resources
and configuration settings. As a best practice, you also need to consider separating
your data into discrete storage hierarchies.
Controlling the number of client mount points
During simultaneous write, a client session requires a mount point for each
sequential-access storage pool to which data will be written. A transaction will fail
if the number of mount points required for a client session is insufficient.
Give careful consideration to the number of mount points available for a
simultaneous write operation. A client session requires a mount point in order to
store data to a sequential-access storage pool. For example, if a storage pool
hierarchy includes a sequential primary storage pool, the client node requires one
mount point for that pool plus one mount point for each copy storage pool and
active-data pool.
Suppose, for example, you create a storage pool hierarchy like that shown in
Figure 39 on page 332. DISKPOOL is a random-access storage pool, and
TAPEPOOL, COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL are
sequential-access storage pools. For each client backup session, the client might
have to acquire four mount points if it has to write data to TAPEPOOL. To run
two backup sessions concurrently, the client requires a total of eight mount points.
To indicate the number of mount points a client can have, specify a value for the
MAXNUMMP parameter on the REGISTER NODE or UPDATE NODE commands.
Be sure to check the value of the MAXNUMMP parameter and, if necessary,
update it if you want to enable simultaneous write. A value of 3 for the
MAXNUMMP parameter might be sufficient if, during a client session, all the data
is stored in DISKPOOL, COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL.
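For example, a hedged sketch (the node name and password are hypothetical) that gives a node the four mount points described above:
register node client_a pa55word maxnummp=4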
If the number of mount points required for a client session exceeds the value of the
client’s MAXNUMMP parameter, the transaction fails. If the transaction involves
an active-data pool, all the active-data pools are removed from the active-data pool
list for the duration of the client’s session, and the client retries the operation. If
the transaction involves a copy storage pool, the setting of the COPYCONTINUE
parameter determines whether the transaction is retried:
- If the value of the COPYCONTINUE parameter is NO, the client does not retry the operation.
- If the value of the COPYCONTINUE parameter is YES, all the copy storage pools are removed from the copy storage pool list for the duration of the client's session, and the client retries the operation.
Controlling the number of mount points for a device class
If the number of sequential-access volumes that need to be mounted for a
simultaneous write operation exceeds the maximum number of mount points
specified for a device class, the server will not be able to acquire the mount points
and the operation will fail.
To specify the maximum number of sequential-access volumes that can be
simultaneously mounted, use the MOUNTLIMIT parameter in the device class
definition.
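For example, a minimal sketch (the limit of 8 is illustrative) for the device class used by the copy storage pools in these examples:
update devclass mytapedevice mountlimit=8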
If the simultaneous write operation involves an active-data pool, the Tivoli Storage
Manager server attempts to remove the active-data pools that use this device class
until enough mount points can be acquired. The transaction fails, and the client
retries the operation. If sufficient mount points can be acquired when the operation
is retried, the data is written into the primary storage pool, any remaining
active-data pools, and any copy storage pools, if they exist.
If the operation involves a copy storage pool, the value of the COPYCONTINUE
parameter determines whether the client retries the operation:
- If the value of the COPYCONTINUE parameter is NO, the client does not retry the operation.
- If the value of the COPYCONTINUE parameter is YES, the server attempts to remove the copy storage pools that use this device class until enough mount points can be acquired. The transaction fails, and the client retries the operation. If sufficient mount points can be acquired when the operation is retried, the data is written into the primary storage pool, any remaining copy storage pools, and any active-data pools, if they exist.
Storing data without using simultaneous write
Using simultaneous write to copy storage pools and active-data pools might not be
an efficient solution for every primary storage pool. When simultaneous write is
impractical, use the BACKUP STGPOOL and COPY ACTIVEDATA commands to
store data in copy storage pools and active-data pools.
Suppose you use a DISK primary storage pool that is accessed by a large number
of clients at the same time during client data-storage operations. If this storage
pool is associated with copy storage pools, active-data pools, or both, the clients
might have to wait until enough tape drives are available to perform the store
operation. In this scenario, simultaneous write could extend the amount of time
required for client data-storage operations. It might be more efficient, then, to store
the data in the primary storage pool and use the BACKUP STGPOOL command to
back up the DISK storage pool to the copy storage pools and the COPY
ACTIVEDATA command to copy active backup data from the DISK storage pool to
the active-data pools.
Reducing the potential for switching storage pools
Switching primary storage pools can delay the completion of a simultaneous write
operation. To reduce the potential for switching, ensure that enough space is
available in the primary storage pools and that the pools can accommodate files of
any size.
Resources such as disk space, tape drives, and tapes are allocated at the beginning
of a simultaneous write operation, and typically remain allocated during the entire
operation. If, for any reason, the current destination primary pool cannot contain
the data being stored, the Tivoli Storage Manager server attempts to store the data
into a next storage pool in the storage hierarchy. This next storage pool normally
uses a sequential-access device class. If new resources have to be acquired for the
next storage pool, or the allocated resources have to be released because the server
has to wait to acquire the new resources, the client session will have to wait until
the resources are available.
To reduce the potential for switching storage pools, follow these guidelines:
- Ensure that enough space is available in the primary storage pools that are targets for the simultaneous write operation. For example, to make space available, run the server migration operation before backing up or archiving client data and before HSM migrations.
- The MAXSIZE parameter on the DEFINE STGPOOL and UPDATE STGPOOL commands limits the size of the files that the Tivoli Storage Manager server can store in the primary storage pools during client operations. If a file exceeds the MAXSIZE limit of a storage pool during a store operation, the server switches pools. To prevent switching pools, avoid using this parameter if possible.
Separate storage hierarchies for simultaneous write
When considering simultaneous write as part of your backup strategy, separate your data into different storage pool hierarchies as a best practice.
For example, you can configure your production servers to store mission-critical data in one storage pool hierarchy and use simultaneous write to back up the data to copy storage pools and an active-data pool. (See Figure 44.) In addition, you can configure the servers to store noncritical, workstation data in another storage pool hierarchy and back up that data using the BACKUP STGPOOL command.
[Figure 44. Separate storage pool hierarchies for different types of data. The STANDARD backup copy group points to DISKPOOL A, with next pool TAPEPOOL A. The Mission Critical backup copy group points to DISKPOOL B, with next pool TAPEPOOL B, copy storage pools COPYPOOL B1 and COPYPOOL B2, and active-data pool ACTIVEDATAPOOL B.]
Example: Making simultaneous write part of a backup strategy
Simultaneous write is used to create on-site backups of a storage pool for easy
availability. The BACKUP STGPOOL command is used to create storage pool
backups and database backups that are moved off-site to provide data protection
in case a disaster occurs.
This example also shows how to use the COPY ACTIVEDATA command to copy
active data from primary storage pools to an on-site sequential-access disk (FILE)
active-data pool. This example is provided for illustrative purposes only. When
designing a backup strategy, you should carefully consider your own system, data
storage, and disaster-recovery requirements.
1. Define the following storage pools:
- Two copy storage pools, ONSITECOPYPOOL and DRCOPYPOOL
- One active-data pool, ACTIVEDATAPOOL
- Two primary storage pools, DISKPOOL and TAPEPOOL
As part of the storage pool definition for DISKPOOL, specify TAPEPOOL as the next storage pool, ONSITECOPYPOOL as the copy storage pool, and ACTIVEDATAPOOL as the active-data pool. Set the COPYCONTINUE parameter for copy storage pools to YES so that if an error occurs writing to a copy storage pool, the operation continues storing data into the primary pool, the remaining copy storage pool, and the active-data pool.
define stgpool tapepool mytapedevice
define stgpool onsitecopypool mytapedevice pooltype=copy
define stgpool drcopypool mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
define stgpool diskpool mydiskdevice nextstgpool=tapepool
copystgpools=onsitecopypool copycontinue=yes activedatapools=activedatapool
This basic configuration is similar to that shown in Figure 39 on page 332.
2. Schedule or issue the following commands to ensure that all the files are
backed up:
backup stgpool diskpool onsitecopypool
backup stgpool tapepool onsitecopypool
copy activedata diskpool activedatapool
copy activedata tapepool activedatapool
3. To create the storage pool backup volumes that will be moved off-site, schedule
the following two commands to run every night:
backup stgpool diskpool drcopypool
backup stgpool tapepool drcopypool
4. Every night, after the storage pool backups have completed, back up the
database.
5. To process the database and storage pool backups for off-site storage, issue the
following command every night:
move drmedia copystgpool=drcopypool wherestate=mountable tostate=vault wait=yes
6. Start migration of the files in the DISKPOOL to ensure that sufficient space will
be available in DISKPOOL in preparation for the next storage operations:
migrate stgpool diskpool
Keeping client files together using collocation
With collocation enabled, the server attempts to keep files belonging to a group of
client nodes, a single client node, or client file space on a minimal number of
sequential-access storage volumes. Collocation reduces the number of volume
mounts required when users restore, retrieve, or recall a large number of files from
the storage pool. Collocation thus reduces the amount of time required for these
operations.
You can set collocation for each sequential-access storage pool when you define or
update the pool.
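For example, a minimal sketch (using the TAPEPOOL pool from earlier examples) that enables collocation by group for a sequential-access storage pool:
update stgpool tapepool collocate=group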
Figure 45 on page 341 shows an example of collocation by client node with three
clients, each having a separate volume containing that client’s data.
[Figure 45. Example of collocation enabled. Each of three clients has a separate volume containing that client's data.]
Figure 46 shows an example of collocation by group of client nodes. Three groups
have been defined, and the data for each group is stored on separate volumes.
[Figure 46. Example of collocation enabled by group. Three groups of client nodes have been defined, and the data for each group is stored on separate volumes.]
When collocation is disabled, the server attempts to use all available space on each
volume before selecting a new volume. While this process provides better
utilization of individual volumes, user files can become scattered across many
volumes. Figure 47 on page 342 shows an example of collocation disabled, with
three clients sharing space on single volume.
[Figure 47. Example of collocation disabled. Three clients share space on a single volume.]
With collocation disabled, more media mount operations might be required to
mount volumes when users restore, retrieve, or recall a large number of files.
Collocation by group is the Tivoli Storage Manager system default for primary
sequential-access storage pools. The default for copy storage pools and active-data
pools is no collocation.
The effects of collocation on operations
The effect of collocation on resources and system performance depends on the type
of operation that is being performed.
Table 34 summarizes the effects of collocation on operations.
Table 34. Effect of collocation on operations

Backing up, archiving, or migrating client files
  Collocation enabled: More media mounts to collocate files.
  Collocation disabled: Usually fewer media mounts are required.

Restoring, retrieving, or recalling client files
  Collocation enabled: Large numbers of files can be restored, retrieved, or recalled more quickly because files are located on fewer volumes.
  Collocation disabled: Multiple mounts of media might be required for a single user because files can be spread across multiple volumes. More than one user's files can be stored on the same sequential-access storage volume. For example, if two users attempt to recover a file that resides on the same volume, the second user is forced to wait until the first user's files are recovered.

Storing data on tape
  Collocation enabled: The server attempts to use all available tape volumes to separate user files before it uses all available space on every tape volume.
  Collocation disabled: The server attempts to use all available space on each tape volume before using another tape volume.

Media mount operations
  Collocation enabled: More mount operations when user files are backed up, archived, or migrated from client nodes directly to sequential-access volumes. More mount operations during reclamation and storage pool migration. More volumes to manage because volumes are not fully used.
  Collocation disabled: More mount operations required during restore, retrieve, and recall of client files.

Generating backup sets
  Collocation enabled: Less time spent searching database entries and fewer mount operations.
  Collocation disabled: More time spent searching database entries and more mount operations.
During the following server operations, all the data belonging to a collocation group, a single client node, or a single client file space is moved or copied by one process (for example, if data is collocated by group, all data for all nodes belonging to the same collocation group is migrated by the same process):
1. Moving data from random-access and sequential-access volumes
2. Moving node data from sequential-access volumes
3. Backing up a random-access or sequential-access storage pool
4. Restoring a sequential-access storage pool
5. Reclamation of a sequential-access storage pool or off-site volumes
6. Migration from a random-access storage pool
When collocating node data, the Tivoli Storage Manager server attempts to keep
files together on a minimal number of sequential-access storage volumes. However,
when the server is backing up data to volumes in a sequential-access storage pool,
the backup process has priority over collocation settings. As a result, the server
completes the backup, but might not be able to collocate the data. For example,
suppose you are collocating by node, and you specify that a node can use two
mount points on the server. Suppose also that the data being backed up from the
node could easily fit on one tape volume. During backup, the server might mount
two tape volumes, and the node’s data might be distributed across two tapes,
rather than one.
If collocation is by node or file space, nodes or file spaces are selected for
migration based on the amount of data to be migrated. The node or file space with
the most data is migrated first. If collocation is by group, all nodes in the storage
pool are first evaluated to determine which node has the most data. The node with
the most data is migrated first along with all the data for all the nodes belonging
to that collocation group regardless of the amount of data in the nodes’ file spaces
or whether the low migration threshold has been reached.
One reason to collocate by group is that individual client nodes often do not have
sufficient data to fill high-capacity tape volumes. Collocating data by groups of
nodes can reduce unused tape capacity by putting more collocated data on
individual tapes. In addition, because all data belonging to all nodes in the same
collocation group are migrated by the same process, collocation by group can
reduce the number of times a volume containing data to be migrated needs to be
mounted. Collocation by group can also minimize database scanning and reduce
tape passes during data transfer from one sequential-access storage pool to
another.
How the server selects volumes with collocation enabled
Volume selection depends on whether collocation is by group, by node, or by file
space.
Table 35 shows how the Tivoli Storage Manager server selects the first volume
when collocation is enabled for a storage pool at the client-node, collocation-group,
and file-space level.
Table 35. How the server selects volumes when collocation is enabled

When collocation is by group, the selection order is:
1. A volume that already contains files from the collocation group to which the client belongs
2. An empty predefined volume
3. An empty scratch volume
4. A volume with the most available free space among volumes that already contain data

When collocation is by node, the selection order is:
1. A volume that already contains files from the same client node
2. An empty predefined volume
3. An empty scratch volume
4. A volume with the most available free space among volumes that already contain data

When collocation is by file space, the selection order is:
1. A volume that already contains files from the same file space of that client node
2. An empty predefined volume
3. An empty scratch volume
4. A volume containing data from the same client node
5. A volume with the most available free space among volumes that already contain data
When the server needs to continue to store data on a second volume, it uses the
following selection order to acquire additional space:
1. An empty predefined volume
2. An empty scratch volume
3. A volume with the most available free space among volumes that already
contain data
4. Any available volume in the storage pool
When collocation is by client node or file space, the server attempts to provide the
best use of individual volumes while minimizing the mixing of files from different
clients or file spaces on volumes. This is depicted in Figure 48 on page 345, which
shows that volume selection is horizontal, where all available volumes are used
before all available space on each volume is used. A, B, C, and D represent files
from four different client nodes.
Remember:
1. If collocation is by node and the node has multiple file spaces, the server does
not attempt to collocate those file spaces.
2. If collocation is by file space and a node has multiple file spaces, the server
attempts to put data for different file spaces on different volumes.
[Figure 48. Using all available sequential access storage volumes with collocation enabled at the group or file space level. Files A, B, C, and D from four client nodes are spread across volumes VOL1 through VOL5: volume selection is horizontal, and all available volumes are used before all available space on each volume is used.]
When collocation is by group, the server attempts to collocate data from nodes belonging to the same collocation group. As shown in Figure 49, data for the following groups of nodes has been collocated:
- Group 1 consists of nodes A, B, and C
- Group 2 consists of nodes D and E
- Group 3 consists of nodes F, G, H, and I
Whenever possible, the Tivoli Storage Manager server collocates data belonging to a group of nodes on a single tape, as represented by Group 2 in the figure. Data for a single node can also be spread across several tapes associated with a group (Groups 1 and 2). If the nodes in the collocation group have multiple file spaces, the server does not attempt to collocate those file spaces.
[Figure 49. Using all available sequential access storage volumes with collocation enabled at the group level. Data for Group 2 (nodes D and E) fits on a single tape, while data for Group 1 (nodes A, B, and C) and Group 3 (nodes F, G, H, and I) is spread across several tapes, with each tape holding data from only one group.]
Remember: Normally, the Tivoli Storage Manager server always writes data to the
current filling volume for the operation being performed. Occasionally, however,
you might notice more than one filling volume in a collocated storage pool. This
can occur if different server processes or client sessions attempt to store data into
the collocated pool at the same time. In this situation, Tivoli Storage Manager will
allocate a volume for each process or session needing a volume so that both
operations complete as quickly as possible.
How the server selects volumes with collocation disabled
When collocation is disabled, the server attempts to use all available space in a
storage volume before it accesses another volume.
When storing client files in a sequential-access storage pool where collocation is
disabled, the server selects a volume using the following selection order:
1. A previously used sequential volume with available space (the volume with the most data is selected first)
2. An empty volume
When the server needs to continue to store data on a second volume, it attempts to
select an empty volume. If none exists, the server attempts to select any remaining
available volume in the storage pool.
Figure 50 shows that volume utilization is vertical when collocation is disabled. In
this example, fewer volumes are used because the server attempts to use all
available space by mixing client files on individual volumes. A, B, C, and D
represent files from four different client nodes.
[Figure 50. Using all available space on sequential volumes with collocation disabled. Files A, B, C, and D from four client nodes are mixed on individual volumes: volume utilization is vertical, and fewer volumes are used because the server uses all available space on each volume before selecting another.]
Collocation on or off settings
After you define a storage pool, you can change the collocation setting by
updating the storage pool. The change in collocation for the pool does not affect
files that are already stored in the pool.
For example, if collocation is off for a storage pool and you turn it on, from then on
client files stored in the pool are collocated. Files that had previously been stored
in the pool are not moved to collocate them. As volumes are reclaimed, however,
the data in the pool tends to become more collocated. You can also use the MOVE
DATA or MOVE NODEDATA commands to move data to new volumes to increase
collocation. However, this causes an increase in the processing time and the
volume mount activity.
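For example, a hedged sketch (node and pool names are hypothetical) that moves one node's data to new volumes to increase collocation:
move nodedata node1 fromstgpool=tapepool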
Remember: A mount wait can occur or increase when collocation by file space is
enabled and a node has a volume containing multiple file spaces. If a volume is
eligible to receive data, Tivoli Storage Manager will wait for that volume.
Collocation of copy storage pools and active-data pools
Using collocation on copy storage pools and active-data pools requires special
consideration. Collocation of copy storage pools and active-data pools, especially
by node or file space, results in more partially filled volumes and potentially
unnecessary off-site reclamation activity.
Primary storage pools perform a different recovery role than those performed by
copy storage pools and active-data pools. Normally you use primary storage pools
(or active-data pools) to recover data to clients directly. In a disaster, when both
clients and the server are lost, you might use off-site active-data pool volumes to
recover data directly to clients and the copy storage pool volumes to recover the
primary storage pools. The types of recovery scenarios that concern you the most
will help you to determine whether to use collocation on your copy storage pools
and active-data pools.
Collocation typically results in partially filled volumes when you collocate by node
or by file space. (Partially filled volumes are less prevalent, however, when you
collocate by group.) Partially filled volumes might be acceptable for primary
storage pools because the volumes remain available and can be filled during the
next migration process. However, this may be unacceptable for copy storage pools
and active-data pools whose storage pool volumes are taken off-site immediately. If
you use collocation for copy storage pools or active-data pools, you must decide among the following options:
- Taking more partially filled volumes off-site, thereby increasing the reclamation activity when the reclamation threshold is lowered or reached. Remember that the rate of reclamation for volumes in an active-data pool is typically faster than the rate for volumes in other types of storage pools.
- Leaving these partially filled volumes on-site until they fill, and risking not having an off-site copy of the data on these volumes.
- Collocating by group in order to use as much tape capacity as possible.
With collocation disabled for a copy storage pool or an active-data pool, typically
there will be only a few partially filled volumes after data is backed up to the copy
storage pool or copied to the active-data pool.
Consider carefully before using collocation for copy storage pools and active-data
pools. Even if you use collocation for your primary storage pools, you may want
to disable collocation for copy storage pools and active-data pools. Collocation on
copy storage pools or active-data pools might be desirable if you have few clients,
but each of them has large amounts of incremental backup data each day.
Planning for and enabling collocation
Understanding the effects of collocation can help reduce the number of media
mounts, make better use of space on sequential volumes, and improve the
efficiency of server operations.
Table 36 on page 348 lists the four collocation options that you can specify on the
DEFINE STGPOOL and UPDATE STGPOOL commands. The table also describes
the effects of collocation on data belonging to nodes that are members of
collocation groups and nodes that are not members of any collocation group.
Table 36. Collocation options and effects on node data

No
  Node not in a collocation group: The node’s data is not collocated.
  Node in a collocation group: The node’s data is not collocated.

Group
  Node not in a collocation group: The server stores the node’s data on as few volumes in the storage pool as possible.
  Node in a collocation group: The server stores the data for the node and for other nodes that belong to the same collocation group on as few volumes as possible.

Node
  Node not in a collocation group: The server stores the node’s data on as few volumes as possible.
  Node in a collocation group: The server stores the node’s data on as few volumes as possible.

Filespace
  Node not in a collocation group: The server stores the data for the node’s file space on as few volumes as possible. If a node has multiple file spaces, the server stores the data for different file spaces on different volumes in the storage pool.
  Node in a collocation group: The server stores the data for the node’s file space on as few volumes as possible. If a node has multiple file spaces, the server stores the data for different file spaces on different volumes in the storage pool.
When deciding whether and how to collocate data, do the following:
1. Familiarize yourself with the potential advantages and disadvantages of
collocation, in general. For a summary of effects of collocation on operations,
see Table 34 on page 342.
2. If the decision is to collocate, determine how data should be organized,
whether by client node, group of client nodes, or file space. If the decision is to
collocate by group, you need to decide how to group nodes:
- If the goal is to save space, you might want to group small nodes together to better use tapes.
- If the goal is potentially faster client restores, group nodes together so that they fill as many tapes as possible. Doing so increases the probability that individual node data will be distributed across two or more tapes and that more tapes can be mounted simultaneously during a multi-session no-query restore operation.
- If the goal is to departmentalize data, you can group nodes by department.
3. If collocation by group is the desired result:
a. Define collocation groups using the DEFINE COLLOCGROUP command.
b. Add client nodes to the collocation groups using the DEFINE
COLLOCGROUPMEMBER command.
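For example, a minimal sketch (the group and node names are hypothetical):
define collocgroup dept_eng
define collocgroupmember dept_eng node1
define collocgroupmember dept_eng node2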
The following query commands are available to help in collocating groups:
QUERY COLLOCGROUP
Displays the collocation groups defined on the server.
QUERY NODE
Displays the collocation group, if any, to which a node belongs.
QUERY NODEDATA
Displays information about the data for one or more nodes in a
sequential-access storage pool.
QUERY STGPOOL
Displays information about the location of client data in a
sequential-access storage pool and the amount of space a node occupies
in a volume.
For more information about these commands, refer to the Administrator’s
Reference.
You can also use Tivoli Storage Manager server scripts or PERL scripts to
display information that can be useful in defining collocation groups.
4. Specify how data is to be collocated in a storage pool using the COLLOCATE
parameter on the DEFINE STGPOOL or UPDATE STGPOOL command.
5. If you decide later that you want to delete members of a collocation group, you
can use the DELETE COLLOCMEMBER command. You can also update the
description of a collocation group using the UPDATE COLLOCGROUP
command and delete entire collocation groups using the DELETE
COLLOCGROUP command.
Tip: If you use collocation, but want to reduce the number of media mounts and
use space on sequential volumes more efficiently, you can do the following:
- Define a storage pool hierarchy and policy to require that backed-up, archived, or space-managed files are stored initially in disk storage pools. When files are migrated from a disk storage pool, the server attempts to migrate all files belonging to the client node or collocation group that is using the most disk space in the storage pool. This process works well with the collocation option because the server tries to place all of the files from a given client on the same sequential-access storage volume.
- Use scratch volumes for sequential-access storage pools to allow the server to select new volumes for collocation.
- Specify the client option COLLOCATEBYFILESPEC to limit the number of tapes to which objects associated with one file specification are written. This option makes collocation by the server more efficient; it does not override collocation by file space or collocation by node. For general information about client options, see “Managing client option files” on page 436. For details about the COLLOCATEBYFILESPEC option, refer to the Backup-Archive Clients Installation and User’s Guide. A hedged example follows this list.
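For example, a minimal sketch of a client options file (dsm.opt) entry; the setting shown is illustrative, and whether it helps depends on your file specifications:
collocatebyfilespec yes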
When creating collocation groups, keep in mind that the ultimate destination of the
data belonging to nodes in a collocation group depends on the policy domain to
which nodes belong. For example, suppose you create a collocation group
consisting of nodes that belong to Policy Domain A. Policy Domain A specifies an
active-data pool as the destination of active data only and has a backup copy
group that specifies a primary storage pool, Primary1, as the destination for active
and inactive data. Other nodes in the same collocation group belong to a domain,
Policy Domain B, that does not specify an active-data pool, but that has a backup
copy group that specifies Primary1 as the destination for active and inactive data.
Primary1 has a designated copy storage pool. The collocation setting on
PRIMARY1, the copy storage pool, and the active-data pool is GROUP.
When the nodes’ data is backed up and simultaneous write occurs, active and
inactive data is stored in Primary1 and the copy storage pool. Note, however, that
although all the nodes belong to a single collocation group, only the active data
belonging to nodes in Domain A are stored in the active-data pool. The data in
Primary1 and the copy storage pool is collocated by group. The data in the
active-data pool is also collocated by group, but the “group” consists only of nodes that are members of Policy Domain A.
Reclaiming space in sequential-access storage pools
Space on a sequential-access storage volume becomes reclaimable as files expire or
are deleted from the volume. Reclamation processing involves consolidating the
remaining data from many sequential-access volumes onto fewer new
sequential-access volumes.
Files become obsolete because of aging or limits on the number of versions of a
file. Space in volumes in active-data pools also becomes reclaimable as updated
files are added to the pools and as older file versions are deactivated. In
reclamation processing, the server rewrites files on the volume being reclaimed to
other volumes in the storage pool, making the reclaimed volume available for
reuse.
The server reclaims the space in storage pools based on a reclamation threshold that
you can set for each sequential-access storage pool. When the percentage of space
that can be reclaimed on a volume rises above the reclamation threshold, the
server reclaims the volume.
Restrictions:
- Storage pools defined with the NETAPPDUMP, CELERRADUMP, or NDMPDUMP data format cannot be reclaimed. However, you can use the MOVE DATA command to move data out of a volume so that the volume can be reused. The volumes in the target storage pool must have the same data format as the volumes in the source storage pool.
- Storage pools defined with a CENTERA device class cannot be reclaimed.
How Tivoli Storage Manager reclamation works
You can set a reclamation threshold for a sequential-access storage pool when you
define or update the pool. When the percentage of reclaimable space on a volume
exceeds the reclamation threshold set for the storage pool, the volume is eligible
for reclamation.
The server checks whether reclamation is needed at least once per hour and begins
space reclamation for eligible volumes. During space reclamation, the server copies
files that remain on eligible volumes to other volumes. For example, Figure 51 on
page 351 shows that the server consolidates the files from tapes 1, 2, and 3 on tape
4. During reclamation, the server copies the files to volumes in the same storage
pool unless you have specified a reclamation storage pool. Use a reclamation
storage pool to allow automatic reclamation for a storage pool with only one drive.
Remember: To prevent contention for the same tapes, the server does not allow a reclamation process to start if a DELETE FILESPACE process is active. The server checks every hour whether the DELETE FILESPACE process has completed so that the reclamation process can start. After the DELETE FILESPACE process has completed, reclamation begins within one hour.
The server also reclaims space within an aggregate. An aggregate is a physical file
that contains multiple logical files that are backed up or archived from a client in a
single transaction. Space within the aggregate becomes reclaimable space as logical
files in the aggregate expire, as files are deleted by the client, or as files become
deactivated in active-data pools. The server removes unused space as the server
copies the aggregate to another volume during reclamation processing. However,
reclamation does not aggregate files that were originally stored in non-aggregated
form. Reclamation also does not combine aggregates to make new aggregates. You
can also reclaim space in an aggregate by issuing the MOVE DATA command. See
“Reclaiming space in aggregates by moving data” on page 385 for details.
[Figure 51. Tape reclamation. The server consolidates the valid data from tapes 1, 2, and 3 onto tape 4.]
After the server moves all readable files to other volumes, one of the following occurs for the reclaimed volume:
- If you explicitly defined the volume to the storage pool, the volume becomes available for reuse by that storage pool.
- If the server acquired the volume as a scratch volume, the server deletes the volume from the Tivoli Storage Manager database.
Volumes that have a device type of SERVER are reclaimed in the same way as
other sequential-access volumes. However, because the volumes are actually data
stored in the storage of another Tivoli Storage Manager server, the reclamation
process can consume network resources. See “Controlling reclamation of virtual
volumes” on page 356 for details about how the server reclaims these types of
volumes.
Volumes in a copy storage pool and active-data pools are reclaimed in the same manner as a primary storage pool, except for the following differences:
- Off-site volumes are handled differently.
- The server copies active files from the candidate volume only to other volumes in the same storage pool.
For details, see “Reclaiming copy storage pools and active-data pools” on page 356.
Reclamation thresholds
Space is reclaimable because it is occupied by files that have been expired or
deleted from the Tivoli Storage Manager database, or because the space has never
been used. The reclamation threshold indicates how much reclaimable space a
volume must have before the server reclaims the volume.
The server checks whether reclamation is needed at least once per hour. The lower
the reclamation threshold, the more frequently the server tries to reclaim space.
Frequent reclamation optimizes the use of a sequential-access storage pool’s space,
but can interfere with other processes, such as backups from clients.
If the reclamation threshold is high, reclamation occurs less frequently. A high
reclamation threshold is useful if mounting a volume is a manual operation and
operations staff is limited. Setting the reclamation threshold to 100%
prevents automatic reclamation from occurring. You might want to do this to
control when reclamation occurs and to prevent it from interfering with other
server processes.
When it is convenient for you and your users, you can use the RECLAIM
STGPOOL command to invoke reclamation, or you can lower the reclamation
threshold to cause reclamation to begin.
If you set the reclamation threshold to 50% or greater, the server can combine the
usable files from two or more volumes onto a single new volume.
Reclamation of volumes in an active-data pool usually returns volumes to scratch
status more frequently than reclamation of volumes in non-active-data pools. This
is because the percentage of reclaimable space for sequential volumes in
active-data pools reflects not only the space of deleted files, but also the space of
inactive files. Frequent reclamation requires more resources such as tape drives and
libraries to mount and dismount volumes.
If reclamation is occurring too frequently in your active-data pools, you can
increase the reclamation thresholds until the rate of reclamation is acceptable.
Accelerated reclamation of volumes has more of an effect on active-data pools that
use removable media and, in particular, on removable media that is taken off-site.
Reclaiming volumes with the most reclaimable space
If you have been running with a high reclamation threshold and decide you need
to reclaim volumes, you can lower the threshold in several steps. Lowering the
threshold in steps ensures that volumes with the most reclaimable space are
reclaimed first.
For example, if you set the reclamation threshold to 100%, first lower the threshold
to 98%. Volumes that have reclaimable space of 98% or greater are reclaimed by
the server. Lower the threshold again to reclaim more volumes.
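A minimal sketch of this stepwise approach, again assuming a placeholder pool name of TAPEPOOL:
update stgpool tapepool reclaim=98
After the resulting reclamation processes complete, lower the threshold again:
update stgpool tapepool reclaim=95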
If you lower the reclamation threshold while a reclamation process is active, the
reclamation process does not immediately stop. If an on-site volume is being
reclaimed, the server uses the new threshold setting when the process begins to
reclaim the next volume. If off-site volumes are being reclaimed, the server does
not use the new threshold setting during the process that is running (because all
eligible off-site volumes are reclaimed at the same time).
Use the CANCEL PROCESS command to stop a reclamation process.
Starting reclamation manually or in a schedule
To gain more control over how and when the reclamation process occurs, you can
use the RECLAIM STGPOOL command. You can also specify the maximum
amount of time a reclamation process will take before it is automatically canceled.
To perform reclamation when it is least intrusive to normal production needs,
include the RECLAIM STGPOOL command in a schedule. For example, to start
reclamation in a storage pool named ALTPOOL, and to have reclamation end as
soon as possible after one hour, you would issue the following command:
reclaim stgpool altpool duration=60
For copy storage pools and active-data pools, you can also use the RECLAIM
STGPOOL command to specify the maximum number of off-site storage pool
volumes the server should attempt to reclaim:
reclaim stgpool altpool duration=60 offsitereclaimlimit=230
Do not use this command if you are going to use automatic reclamation for the
storage pool. To prevent automatic reclamation from running, set the RECLAIM
parameter of the storage pool definition to 100.
For details about the RECLAIM STGPOOL command, refer to the Administrator’s
Reference.
Restriction: Storage pools defined with a CENTERA device class cannot be
reclaimed.
Optimizing drive usage using multiple concurrent reclamation
processes
Multiple reclamation processes run concurrently, allowing you to make better use
of your available tape drives or FILE volumes.
You can specify one or more reclamation processes for each primary
sequential-access storage pool, copy storage pool, or active-data pool using the
RECLAIMPROCESS parameter on the DEFINE STGPOOL and UPDATE STGPOOL
commands.
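For example, to run two concurrent reclamation processes for a hypothetical storage pool named TAPEPOOL, you could issue:
update stgpool tapepool reclaimprocess=2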
Each reclamation process requires at least two simultaneous volume mounts (at
least two mount points) and, if the device type is not FILE, at least two drives.
One of the drives is for the input volume in the storage pool being reclaimed. The
other drive is for the output volume in the storage pool to which files are being
moved.
When calculating the number of concurrent processes to run, you must carefully
consider the resources you have available, including the number of storage pools
that will be involved with the reclamation, the number of mount points, the
number of drives that can be dedicated to the operation, and (if appropriate) the
number of mount operators available to manage reclamation requests. The number
of available mount points and drives depends on other Tivoli Storage Manager and
system activity and on the mount limits of the device classes for the storage pools
that are involved in the reclamation. For more information about mount limit, see:
“Controlling the number of simultaneously mounted volumes” on page 255
For example, suppose that you want to reclaim the volumes from two sequential
storage pools simultaneously and that all storage pools involved have the same
device class. Each process requires two mount points and, if the device type is not
FILE, two drives. To run four reclamation processes simultaneously (two for each
storage pool), you need a total of at least eight mount points and eight drives. The
device class for each storage pool must have a mount limit of at least eight.
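Continuing this example, the mount limit of the shared device class (assumed here to be named TAPECLASS) could be raised accordingly:
update devclass tapeclass mountlimit=8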
If the device class for the storage pools being reclaimed does not have enough
mount points or drives, you can use the RECLAIMSTGPOOL parameter to direct
the reclamation to a storage pool with a different device class that has the
additional mount points or drives.
If the number of reclamation processes you specify is more than the number of
available mount points or drives, the processes that do not obtain mount points or
drives wait until mount points or drives become available, which might not
happen until other reclamation processes complete.
The Tivoli Storage Manager server will start the specified number of reclamation
processes regardless of the number of volumes that are eligible for reclamation. For
example, if you specify ten reclamation processes and only six volumes are eligible
for reclamation, the server will start ten processes and four of them will complete
without processing a volume.
Multiple concurrent reclamation processing does not affect collocation. For
additional information, see “How collocation affects reclamation” on page 360.
Reclaiming volumes in a storage pool with one drive
When a storage pool has only one mount point (that is, just one drive) available to
it through the device class, data cannot be reclaimed from one volume to another
within that same storage pool. To reclaim volumes in a storage pool that has only
one drive, you can define a reclamation storage pool and use it for temporary
storage of reclaimed data.
When the server reclaims volumes, the server moves the data from volumes in the
original storage pool to volumes in the reclamation storage pool. The server always
uses the reclamation storage pool when one is defined, even when the mount limit
is greater than one.
If the reclamation storage pool does not have enough space to hold all of the data
being reclaimed, the server moves as much of the data as possible into the
reclamation storage pool. Any data that could not be moved to volumes in the
reclamation storage pool still remains on volumes in the original storage pool.
The pool identified as the reclamation storage pool must be a primary sequential
storage pool. The primary purpose of the reclamation storage pool is for temporary
storage of reclaimed data. To ensure that data moved to the reclamation storage
pool eventually moves back into the original storage pool, specify the original
storage pool as the next pool in the storage hierarchy for the reclamation storage
pool. For example, if you have a tape library with one drive, you can define a
storage pool to be used for reclamation using a device class with a device type of
FILE:
define stgpool reclaimpool fileclass maxscratch=100
Define the storage pool for the tape drive as follows:
define stgpool tapepool1 tapeclass maxscratch=100
reclaimstgpool=reclaimpool
Finally, update the reclamation storage pool so that data migrates back to the tape
storage pool:
update stgpool reclaimpool nextstgpool=tapepool1
Tip:
v You can specify multiple concurrent reclamation processes for a primary storage
pool with one drive by using the RECLAIMSTGPOOL parameter. If multiple
concurrent processing is not desired, specify a value of 1 for the
RECLAIMPROCESS parameter on the DEFINE STGPOOL or UPDATE
STGPOOL commands.
v In a mixed-media library, reclaiming volumes in a storage pool defined with a
device class with a single mount point (that is, a single drive) requires one of the
following:
– At least one other drive with a compatible read/write format
– Enough disk space to create a storage pool with a device type of FILE
Reducing the time to reclaim tape volumes with high capacity
When a storage pool uses tape volumes with high capacity, reclamation processes
might run for a long time if the drives are relatively slow at positioning tapes.
There are steps that you can take to reduce overall process time.
To help reduce overall process time:
1. Set up the storage pool hierarchy so that the tape storage pool is the next
storage pool for a storage pool that uses either a DISK device type or a FILE
device type.
2. When you need to reclaim volumes, move data from the tape storage pool to
the DISK or FILE storage pool.
3. Allow the data to migrate from the DISK or FILE storage pool back to the tape
storage pool by adjusting the migration thresholds.
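A sketch of these steps, using placeholder names (a FILE device class named FILECLASS, pools named FILEPOOL and TAPEPOOL, and a tape volume named TAPE01):
define stgpool filepool fileclass maxscratch=200 nextstgpool=tapepool
move data tape01 stgpool=filepool
update stgpool filepool highmig=0 lowmig=0
The last command lowers the migration thresholds so that the reclaimed data migrates from FILEPOOL back to TAPEPOOL.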
Reclamation of write-once, read-many (WORM) media
Reclamation of WORM volumes does not mean that you can reuse this write-once
media. However, reclamation does allow you to make more library space available.
Reclamation of WORM volumes consolidates data from partially filled volumes to
other WORM volumes. You can then eject the empty, used WORM volumes and
add new volumes.
To prevent reclamation of WORM media, storage pools that are assigned to device
classes with a device type of WORM have a default reclamation value of 100.
To allow reclamation, you can set the reclamation value to something lower when
defining or updating the storage pool.
Controlling reclamation of virtual volumes
When virtual volumes (volumes with the device type of SERVER) in a primary
storage pool are reclaimed, the client data stored on those volumes is sent across
the network between the source server and the target server. As a result, the
reclamation process can tie up your network resources.
To control when reclamation starts for these volumes, consider setting the
reclamation threshold to 100% for any primary storage pool that uses virtual
volumes. Lower the reclamation threshold at a time when your network is less
busy, so that the server can reclaim volumes.
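For example, assuming a primary storage pool named VIRTPOOL that uses a SERVER device class, reclamation could be deferred and then triggered during off-peak hours:
update stgpool virtpool reclaim=100
Later, when the network is less busy:
update stgpool virtpool reclaim=60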
For virtual volumes in a copy storage pool or an active-data pool, the server
reclaims a volume as follows:
1. The source server determines which files on the volume are still valid.
2. The source server obtains these valid files from volumes in a primary storage
pool, or if necessary, from removable-media volumes in an on-site copy storage
pool or in an on-site active-data pool. The server can also obtain files from
virtual volumes in a copy storage pool or an active-data pool.
3. The source server writes the files to one or more new virtual volumes in the
copy storage pool or active-data pool and updates its database.
4. The server issues a message indicating that the volume was reclaimed.
Tip: You can specify multiple concurrent reclamation processes for a primary
storage pool with a device type of SERVER. However, running multiple concurrent
processes for this type of storage pool can tie up network resources because the
data is sent across the network between the source server and target server.
Therefore, if you want to run multiple concurrent processes, do so when the
network is less busy. If multiple concurrent processing is not desired, specify a
value of 1 for the RECLAIMPROCESS parameter on the DEFINE STGPOOL or
UPDATE STGPOOL commands.
For information about using the SERVER device type, see “Using virtual volumes
to store data on another server” on page 730.
Reclaiming copy storage pools and active-data pools
On-site and off-site volumes in copy storage pools and active-data pools are
reclaimed when the amount of unused space exceeds the reclamation threshold.
When reclamation occurs and how reclamation processing is done depends on
whether the volumes are marked as off-site.
Reclamation of volumes in copy storage pools and active-data pools is similar to
reclamation in primary storage pools. For volumes that are on-site, reclamation
usually occurs after the volume is full and then begins to empty because of file
deletion, expiration, or, in the case of active-data pools, deactivation. When the
percentage of reclaimable space on a volume rises above the reclamation threshold,
the server reclaims the volume. Active files on the volume are rewritten to other
volumes in the storage pool, making the original volume available for new files.
For off-site volumes, reclamation can occur when the percentage of unused space
on the volume is greater than the reclaim parameter value. The unused space in
copy storage pool volumes includes both space that has never been used on the
volume and space that has become empty because of file deletion or expiration.
For volumes in active-data pools, reclaimable space also includes inactive versions
of files. Most volumes in copy storage pools and active-data pools might be set to
an access mode of off-site, making them ineligible to be mounted. During
reclamation, the server copies valid files on off-site volumes from the original files
in the primary storage pools. In this way, the server copies valid files on off-site
volumes without having to mount these volumes. For more information, see
“Reclamation of off-site volumes.”
Reclamation of copy storage pool volumes and active-data pool volumes should be
done periodically to allow the reuse of partially filled volumes that are off-site.
Reclamation can be done automatically by setting the reclamation threshold for the
copy storage pool or the active-data pool to less than 100%. However, you need to
consider controlling when reclamation occurs because of how off-site volumes are
treated. For more information, see “Controlling when reclamation occurs for
off-site volumes” on page 358.
Virtual Volumes: Virtual volumes (volumes that are stored on another Tivoli
Storage Manager server through the use of a device type of SERVER) cannot be set
to the off-site access mode.
Using the RECLAIMPROCESS parameter on the DEFINE STGPOOL or UPDATE
STGPOOL command, you can specify multiple concurrent reclamation processes
for a single copy storage pool or active-data pool. Doing so will let you make
better use of your available tape drives or FILE volumes. The principles underlying
multiple concurrent reclamation processes for copy storage pools and active-data
pools are the same principles as those for primary sequential-access storage pools.
In particular, you need to carefully consider available resources (for example, the
number of mount points) when calculating how many processes you can run
concurrently. For details, see “Optimizing drive usage using multiple concurrent
reclamation processes” on page 353.
Reclamation of primary storage pool volumes does not affect copy storage pool
files or files in active-data pools.
Reclamation of off-site volumes
Volumes with the access value of off-site are eligible for reclamation if the amount
of empty space on a volume exceeds the reclamation threshold for the copy storage
pool or active-data pool. The default reclamation threshold for copy storage pools
and active-data pools is 100%, which means that reclamation is not performed.
When an off-site volume is reclaimed, the files on the volume are rewritten to a
read/write volume. Effectively, these files are moved back to the on-site location.
The files may be obtained from the off-site volume after a disaster, if the volume
has not been reused and the database backup that you use for recovery references
the files on the off-site volume.
The server reclaims an off-site volume as follows:
1. The server determines which files on the volume are still valid.
2. The server obtains these valid files from a primary storage pool or, if necessary,
from an on-site volume of a copy storage pool or active-data pool.
3. The server writes the files to one or more volumes in the copy storage pool or
active-data pool and then updates the database. If a file is an aggregate with
unused space, the unused space is removed during this process.
4. A message is issued indicating that the off-site volume was reclaimed.
For a single storage pool, the server reclaims all off-site volumes that are
eligible for reclamation at the same time. Reclaiming all the eligible volumes at
the same time minimizes the tape mounts for primary storage pool volumes.
If you are using the disaster recovery manager, see "Moving copy storage pool
and active-data pool volumes on-site" on page 836.
Controlling when reclamation occurs for off-site volumes
If you send copy storage pool volumes off-site, you can control reclamation by
adjusting the reclamation threshold.
Suppose you plan to make daily storage pool backups to a copy storage pool, then
mark all new volumes in the copy storage pool as off-site and send them to the
off-site storage location. This strategy works well, with one consideration, if you
are using automatic reclamation (that is, if the reclamation threshold is less than 100%).
Each day’s storage pool backups will create a number of new copy-storage pool
volumes, the last one being only partially filled. If the percentage of empty space
on this partially filled volume is higher than the reclaim percentage, this volume
becomes eligible for reclamation as soon as you mark it off-site. The reclamation
process would cause a new volume to be created with the same files on it. The
volume you take off-site would then be empty according to the Tivoli Storage
Manager database. If you do not recognize what is happening, you could
perpetuate this process by marking the new partially filled volume off-site.
One way to resolve this situation is to keep partially filled volumes on-site until
they fill up. However, this would mean a small amount of your data would be
without an off-site copy for another day.
If you send copy storage pool volumes off-site, it is recommended that you control
reclamation by using the default threshold value of 100. This turns reclamation off for the
copy storage pool. You can start reclamation processing at desired times by
changing the reclamation threshold for the storage pool. To monitor off-site volume
utilization and help you decide what reclamation threshold to use, enter the
following command:
query volume * access=offsite format=detailed
Depending on your data expiration patterns, you may not need to do reclamation
of off-site volumes each day. You may choose to perform off-site reclamation on a
less frequent basis. For example, suppose you ship copy-storage pool volumes to
and from your off-site storage location once a week. You can run reclamation for
the copy-storage pool weekly, so that as off-site volumes become empty they are
sent back for reuse.
When you do perform reclamation for off-site volumes, the following sequence is
recommended:
1. Back up your primary-storage pools to copy-storage pools or copy the active
data in primary-storage pools to active-data pools.
2. Turn on reclamation for copy-storage pools and active-data pools by lowering
the reclamation threshold for copy-storage pools below 100%. The default for
active-data pools is 60.
3. When reclamation processing completes, turn off reclamation by raising the
reclamation thresholds to 100%.
4. Mark any newly created copy-storage pool volumes and active-data pool
volumes as off-site, and then move them to the off-site location.
This sequence ensures that the files on the new copy-storage pool volumes and
active-data pool volumes are sent off-site, and are not inadvertently kept on-site
because of reclamation.
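A sketch of this sequence, using placeholder pool names (BACKUPPOOL and COPYPOOL) and a placeholder vault location:
backup stgpool backuppool copypool
update stgpool copypool reclaim=95
After reclamation processing completes:
update stgpool copypool reclaim=100
update volume * access=offsite location=vault wherestgpool=copypool wherestatus=filling,full whereaccess=readwrite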
Preventing off-site marking of partially-filled copy storage pool and active-data
pool volumes:
To prevent marking partially-filled copy storage pool or active-data pool volumes
as off-site, you can use storage on another Tivoli Storage Manager server (device
type of SERVER) for storage-pool backups.
If the other server is at a different site, the copy-storage pool volumes or
active-data pool volumes are already off-site, with no moving of physical volumes
between the sites. See “Using virtual volumes to store data on another server” on
page 730 for more information.
Limiting the number of off-site volumes to be reclaimed
To ensure that reclamation completes within the desired amount of time, you can
use the OFFSITERECLAIMLIMIT parameter on the DEFINE STGPOOL or UPDATE
STGPOOL command to limit the number of off-site volumes to be reclaimed.
When determining the value for the OFFSITERECLAIMLIMIT parameter,
consider using the statistical information in the message issued at the end of the
off-site volume reclamation operation.
Alternatively, you can use the following Tivoli Storage Manager SQL SELECT
command to obtain records from the SUMMARY table for the off-site volume
reclamation operation:
select * from summary where activity='OFFSITE RECLAMATION'
Two kinds of records are displayed for the off-site reclamation process. One
volume record is displayed for each reclaimed off-site volume. However, the
volume record does not display the following items:
v The number of examined files.
v The number of affected files.
v The total bytes involved in the operation.
This information is summarized in the statistical summary record for the off-site
reclamation. The statistical summary record displays the following items:
v The number of examined files.
v The number of affected files.
v The total bytes involved in the operation.
v The number of off-site volumes that were processed.
v The number of parallel processes that were used.
v The total amount of time required for the processing.
The order in which off-site volumes are reclaimed is based on the amount of
unused space in a volume. (Unused space includes both space that has never been
used on the volume and space that has become empty because of file deletion.)
Volumes with the largest amount of unused space are reclaimed first.
For example, suppose a copy storage pool contains three volumes: VOL1, VOL2,
and VOL3. VOL1 has the largest amount of unused space, and VOL3 has the least
amount of unused space. Suppose further that the percentage of unused space in
Chapter 11. Managing storage pools and volumes
359
each of the three volumes is greater than the value of the RECLAIM parameter. If
you do not specify a value for the OFFSITERECLAIMLIMIT parameter, all three
volumes will be reclaimed when the reclamation runs. If you specify a value of 2,
only VOL1 and VOL2 will be reclaimed when the reclamation runs. If you specify
a value of 1, only VOL1 will be reclaimed.
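In that case, the limit of 2 could be set on the hypothetical copy storage pool as follows:
update stgpool copypool offsitereclaimlimit=2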
Delayed reuse of reclaimed volumes
Delaying reuse may help you to recover data under certain conditions during
recovery from a disaster.
As a best practice, delay the reuse of any reclaimed volumes in copy storage pools
and active-data pools for as long as you keep your oldest database backup. For
more information about delaying volume reuse, see “Delaying reuse of volumes
for recovery purposes” on page 780.
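For example, if your oldest retained database backup is seven days old, reuse could be delayed for seven days (COPYPOOL is a placeholder name):
update stgpool copypool reusedelay=7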
Reclamation of volumes in active-data pools
Inactive files in volumes in an active-data pool are deleted by reclamation
processing. The rate at which reclaimable space accumulates in active-data pool
volumes is typically faster than the rate for volumes in non-active-data pools.
If reclamation of volumes in an active-data pool is occurring too frequently,
requiring extra resources such as tape drives and libraries to mount and dismount
volumes, you can adjust the reclamation threshold until the rate of reclamation is
acceptable. The default reclamation threshold for active-data pools is 60 percent,
which means that a volume becomes eligible for reclamation when 60 percent of
its space is reclaimable. Accelerated reclamation of volumes has more of an effect
on active-data pools that use removable media and, in particular, on removable
media that is taken off-site.
How collocation affects reclamation
If collocation is enabled and reclamation occurs, the server tries to move the files
for each client node, group of client nodes or client file space onto a minimal
number of volumes.
If the volumes are manually mounted, the mount operators must:
v Be aware that a tape volume may be rewound more than once if the server
completes a separate pass to move the data for each client node or client file
space.
v Mount and dismount multiple volumes to allow the server to select the most
appropriate volume on which to move data for each client node or client file
space. The server tries to select a volume in the following order:
1. A volume that already contains files belonging to the client file space or
client node
2. An empty volume
3. The volume with the most available space
4. Any available volume
If collocation is disabled and reclamation occurs, the server tries to move usable
data to new volumes by using the following volume selection criteria, in the order
shown:
1. The volume that contains the most data
2. Any partially full volume
3. An empty predefined volume
4. An empty scratch volume
If you specify collocation and multiple concurrent processes, the server attempts to
move the files for each collocation group, client node, or client file space onto as
few volumes as possible. However, if files belonging to a single collocation group
(or node or file space) are on different volumes to begin with and are being moved
at the same time by different processes, the files could be moved to separate
output volumes. For details about multiple concurrent reclamation processing, see
“Optimizing drive usage using multiple concurrent reclamation processes” on page
353.
See also “Reducing the time to reclaim tape volumes with high capacity” on page
355.
Estimating space needs for storage pools
Three default random-access disk storage pools are provided at installation. You
can add space to these storage pools by adding volumes, or you can define
additional storage pools.
The following default random-access disk storage pools are available at
installation:
v BACKUPPOOL for backed-up files
v ARCHIVEPOOL for archived files
v SPACEMGPOOL for files migrated from client nodes (space-managed files)
As your storage environment grows, you may want to consider how policy and
storage pool definitions affect where workstation files are stored. Then you can
define and maintain multiple storage pools in a hierarchy that allows you to
control storage costs by using sequential-access storage pools in addition to disk
storage pools, and still provide appropriate levels of service to users.
To help you determine how to adjust your policies and storage pools, get
information about how much storage is being used (by client node) and for what
purposes in your existing storage pools. For more information on how to do this,
see “Obtaining information about the use of storage space” on page 377.
Estimating space requirements in random-access storage
pools
The amount of storage space required for each random-access disk storage pool is
based on your storage needs for backup, archive, and space-management
operations.
To estimate the amount of storage space required for each random-access disk
storage pool:
v Determine the amount of disk space needed for different purposes:
– For backup storage pools, provide enough disk space to support efficient
daily incremental backups.
– For archive storage pools, provide sufficient space for a user to archive a
moderate size file system without causing migration from the disk storage
pool to occur.
– For storage pools for space-managed files, provide enough disk space to
support the daily space-management load from HSM clients, without causing
migration from the disk storage pool to occur.
v Decide what percentage of this data you want to keep on disk storage space.
Establish migration thresholds to have the server automatically migrate the
remainder of the data to less expensive storage media in sequential-access
storage pools.
See “Migration thresholds” on page 310 for recommendations on setting
migration thresholds.
Estimating space for backed-up files in random-access storage
pools
Space requirements for backed-up files stored in a single random-access storage
pool are based on the total number of workstations, the average data capacity of a
workstation, the fraction of each workstation disk space used, and the number of
backup versions you will keep.
To estimate the total amount of space needed for all backed-up files stored in a
single random-access (disk) storage pool, use the following formula:
Backup space = WkstSize * Utilization * VersionExpansion * NumWkst
where:
Backup Space
The total amount of storage pool disk space needed.
WkstSize
The average data storage capacity of a workstation. For example, if the
typical workstation at your installation has a 4 GB hard drive, then the
average workstation storage capacity is 4 GB.
Utilization
An estimate of the fraction of each workstation disk space used, in the
range 0 to 1. For example, if you expect that disks on workstations are 75%
full, then use 0.75.
VersionExpansion
An expansion factor (greater than 1) that takes into account the additional
backup versions, as defined in the copy group. A rough estimate allows 5%
additional files for each backup copy. For example, for a version limit of 2,
use 1.05, and for a version limit of 3, use 1.10.
NumWkst
The estimated total number of workstations that the server supports.
If clients use compression, the amount of space required may be less than the
amount calculated, depending on whether the data is compressible.
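As an illustrative calculation only (the numbers are assumptions, not recommendations): for 100 workstations with 4 GB drives that are 75% full and a backup version limit of 2, the formula gives:
Backup space = 4 GB * 0.75 * 1.05 * 100 = 315 GB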
Estimating space for archived files in random-access storage
pools
The number of archived files generated by users is not necessarily related to the
amount of data stored on their workstations. To estimate the total amount of space
needed for all archived files in a single random-access (disk) storage pool,
determine what percentage of user files are typically archived.
Work with policy administrators to calculate this percentage based on the number
and type of archive copy groups defined. For example, if policy administrators
have defined archive copy groups for only half of the policy domains in your
enterprise, then estimate that you need less than 50% of the amount of space you
have defined for backed-up files.
Because additional storage space can be added at any time, you can start with a
modest amount of storage space and increase the space by adding storage volumes
to the archive storage pool, as required.
Estimating space needs in sequential-access storage pools
Estimating the space needs in sequential-access storage pools is a relatively
complex calculation based upon multiple considerations.
To estimate the amount of space required for sequential-access storage pools,
consider:
v The amount of data being migrated from disk storage pools
v The length of time backed-up files are retained, as defined in backup copy
groups
v The length of time archived files are retained, as defined in archive copy groups
v How frequently you reclaim unused space on sequential volumes
See “Reclaiming space in sequential-access storage pools” on page 350 for
information about setting a reclamation threshold.
v Whether or not you use collocation to reduce the number of volume mounts
required when restoring or retrieving large numbers of files from sequential
volumes
If you use collocation, you may need additional tape drives and volumes.
See “Keeping client files together using collocation” on page 340 for information
about using collocation for your storage pools.
v The type of storage devices and sequential volumes supported at your
installation
Monitoring storage-pool and volume usage
Monitor your storage pools and volumes to determine space requirements, the
status of data migration from one storage pool to the next storage pool in the
storage hierarchy, and the use of disk space by cached copies of files that have
been migrated to the next storage pool.
Monitoring space available in a storage pool
Monitoring the space available in storage pools is important to ensure that client
operations such as backup can complete successfully. To make more space
available, you might need to define more volumes for disk storage pools, or add
more volumes for sequential-access storage pools such as tape.
For more information about maintaining a supply of volumes in libraries, see:
“Managing volumes” on page 174
Obtaining capacity estimates and utilization percentages of
storage pools
Standard reports about storage pools list basic information, such as the estimated
capacity and utilization percentage of all storage pools defined to the system.
To obtain a standard report, issue the following command:
query stgpool
Figure 52 shows a standard report with all storage pools defined to the system. To
monitor the use of storage pool space, review the Estimated Capacity and Pct Util
columns.
Storage      Device       Estimated   Pct    Pct    High  Low   Next
Pool Name    Class Name   Capacity    Util   Migr   Mig   Mig   Storage
                                                    Pct   Pct   Pool
-----------  -----------  ----------  -----  -----  ----  ----  -----------
ARCHIVEPOOL  DISK         0.0 M       0.0    0.0    90    70
BACKTAPE     TAPE         180.0 M     85.0   100.0  90    70
BACKUPPOOL   DISK         80.0 M      51.6   51.6   50    30    BACKTAPE
COPYPOOL     TAPE         300.0 M     42.0
ENGBACK1     DISK         0.0 M       0.0    0.0    85    40    BACKTAPE

Figure 52. Information about storage pools
Estimated Capacity
Specifies the space available in the storage pool in megabytes (M) or
gigabytes (G).
For a disk storage pool, this value reflects the total amount of available
space in the storage pool, including any volumes that are varied offline.
For a sequential-access storage pool, this value is an estimate of the total
amount of available space on all volumes in the storage pool. The total
includes volumes with any access mode (read-write, unavailable, read-only,
off-site, or destroyed). The total includes scratch volumes that the storage
pool can acquire only when the storage pool is using at least one scratch
volume for data.
Volumes in a sequential-access storage pool, unlike those in a disk storage
pool, do not contain a precisely known amount of space. Data is written to
a volume as necessary until the end of the volume is reached. For this
reason, the estimated capacity is truly an estimate of the amount of
available space in a sequential-access storage pool.
Pct Util
Specifies, as a percentage, the space used in each storage pool.
For disk storage pools, this value reflects the total number of disk blocks
currently allocated by Tivoli Storage Manager. Space is allocated for
backed-up, archived, or space-managed files that are eligible for server
migration, cached files that are copies of server-migrated files, and files
that reside on any volumes that are varied offline.
Note: The value for Pct Util can be higher than the value for Pct Migr if
you query for storage pool information while a client transaction (such as a
backup) is in progress. The value for Pct Util is determined by the amount
of space actually allocated (while the transaction is in progress). The value
for Pct Migr represents only the space occupied by committed files. At the
end of the transaction, Pct Util and Pct Migr become synchronized.
For sequential-access storage pools, this value is the percentage of the total
bytes of storage available that are currently being used to store active data
(data that is not expired). Because the server can only estimate the
available capacity of a sequential-access storage pool, this percentage also
reflects an estimate of the actual utilization of the storage pool.
Figure 52 on page 364 shows that the estimated capacity for a disk storage pool
named BACKUPPOOL is 80 MB, which is the amount of available space on disk
storage. More than half (51.6%) of the available space is occupied by either backup
files or cached copies of backup files.
The estimated capacity for the tape storage pool named BACKTAPE is 180 MB,
which is the total estimated space available on all tape volumes in the storage
pool. This report shows that 85% of the estimated space is currently being used to
store workstation files.
Note: This report also shows that volumes have not yet been defined to the
ARCHIVEPOOL and ENGBACK1 storage pools, because the storage pools show
an estimated capacity of 0.0 MB.
Obtaining statistics about space-trigger and scratch-volume
utilization in storage pools
Detailed reports about storage pools list not only estimated capacity and
utilization percentage, but also space-trigger and scratch-volume utilization.
To obtain a detailed report, issue the following command:
query stgpool format=detailed
Space Trigger Utilization
Specifies the utilization of a storage pool, as calculated by the storage pool
space trigger, if any, for the storage pool. You can define space triggers
only for storage pools associated with DISK or FILE device types.
For sequential-access devices, space trigger utilization is expressed as a
percentage of the number of used bytes on each sequential-access volume
relative to the size of the volume, and the estimated capacity of all existing
volumes in the storage pool. It does not include potential scratch volumes.
Unlike the calculation for percent utilization (Pct Util), the calculation for
space trigger utilization favors creation of new private file volumes by the
space trigger over usage of additional scratch volumes.
For disk devices, space trigger utilization is expressed as a percentage of
the estimated capacity, including cached data and deleted data that is
waiting to be shredded. However, it excludes data that resides on any
volumes that are varied offline. If you issue QUERY STGPOOL while a file
creation is in progress, the value for space trigger utilization can be higher
than the value for percent migration (Pct Migr). The value for space trigger
utilization is determined by the amount of space actually allocated while
the transaction is in progress. The value for percent migration represents
only the space occupied by committed files. At the end of the transaction,
these values are synchronized.
The value for space trigger utilization includes cached data on disk
volumes. Therefore, when cache is enabled and migration occurs, the value
remains the same because the migrated data remains on the volume as
cached data. The value decreases only when the cached data expires or
when the space that cached files occupy needs to be used for non-cached
files.
Number of Scratch Volumes Used
Specifies the number of scratch volumes used in a sequential-access storage
pool. You can use this value, along with the value of the field Maximum
Scratch Volumes Allowed, to determine the remaining number of scratch
volumes that the server can request for a storage pool.
Monitoring the use of storage pool volumes
Monitoring how storage pool volumes are used lets you make the most efficient
use of available storage.
Task                                  Required Privilege Class
------------------------------------  ------------------------
Display information about volumes     Any administrator
You can query the server for information about storage pool volumes:
v General information about a volume, for example:
– Current access mode and status of the volume
– Amount of available space on the volume
– Location
v Contents of a storage pool volume (user files on the volume)
v The volumes that are used by a client node
Obtaining information about storage pool volumes
Standard reports provide a quick overview of basic information about storage pool
volumes. More information is available in detailed reports.
To request general information about all volumes defined to the server, enter:
query volume
Figure 53 on page 367 shows an example of the output of this standard query. The
example illustrates that data is being stored on the 8 mm tape volume named
WREN01, as well as on several other volumes in various storage pools.
Volume Name               Storage      Device      Estimated  Pct    Volume
                          Pool Name    Class Name  Capacity   Util   Status
------------------------  -----------  ----------  ---------  -----  --------
D:\STOR\AIXVOL.1          AIXPOOL1     DISK        240.0 M    26.3   On-Line
D:\STOR\AIXVOL.2          AIXPOOL2     DISK        240.0 M    36.9   On-Line
D:\STOR\DOSVOL.1          DOSPOOL1     DISK        240.0 M    72.2   On-Line
D:\STOR\DOSVOL.2          DOSPOOL2     DISK        240.0 M    74.1   On-Line
D:\STOR\OS2VOL.1          OS2POOL1     DISK        240.0 M    55.7   On-Line
D:\STOR\OS2VOL.2          OS2POOL2     DISK        240.0 M    51.0   On-Line
WREN00                    TAPEPOOL     TAPE8MM     2.4 G      0.0    Filling
WREN01                    TAPEPOOL     TAPE8MM     2.4 G      2.2    Filling

Figure 53. Information about storage pool volumes
To query the server for a detailed report on volume WREN01 in the storage pool
named TAPEPOOL, enter:
query volume wren01 format=detailed
Figure 54 shows the output of this detailed query. Table 37 gives some suggestions
on how you can use the information.
                   Volume Name: WREN01
             Storage Pool Name: TAPEPOOL
             Device Class Name: TAPE8MM
            Estimated Capacity: 2.4 G
                      Pct Util: 26.3
                 Volume Status: Filling
                        Access: Read/Write
        Pct. Reclaimable Space: 5.3
               Scratch Volume?: No
               In Error State?: No
      Number of Writable Sides: 1
       Number of Times Mounted: 4
             Write Pass Number: 2
     Approx. Date Last Written: 09/04/2002 11:33:26
        Approx. Date Last Read: 09/03/2002 16:42:55
           Date Became Pending:
        Number of Write Errors: 0
         Number of Read Errors: 0
               Volume Location:
Last Update by (administrator): TANAGER
         Last Update Date/Time: 09/04/2002 11:33:26

Figure 54. Detailed information for a storage pool volume
Table 37. Using the detailed report for a volume

Task: Ensure the volume is available.
Fields: Volume Status, Access
    Check the Volume Status to see if a disk volume has been varied offline, or if a
    sequential-access volume is currently being filled with data. Check the Access to
    determine whether files can be read from or written to this volume.

Task: Monitor the use of storage space.
Fields: Estimated Capacity, Pct Util
    The Estimated Capacity is determined by the device class associated with the
    storage pool to which this volume belongs. Based on the estimated capacity, the
    system tracks the percentage of space occupied by client files (Pct Util). In this
    example, 26.3% of the estimated capacity is currently in use.

Task: Monitor the error status of the volume.
Fields: Number of Write Errors, Number of Read Errors
    The server reports when the volume is in an error state and automatically
    updates the access mode of the volume to read-only. The Number of Write Errors
    and Number of Read Errors indicate the type and severity of the problem. Audit a
    volume when it is placed in error state. See "Auditing storage pool volumes" on
    page 797 for information about auditing a volume.

Task: Monitor the life of sequential-access volumes that you have defined to the
storage pool.
Fields: Scratch Volume?, Write Pass Number, Number of Times Mounted, Approx.
Date Last Written, Approx. Date Last Read
    The server maintains usage statistics on volumes that are defined to storage
    pools. Statistics on a volume explicitly defined by an administrator remain for as
    long as the volume is defined to the storage pool. The server continues to
    maintain the statistics on defined volumes even as the volume is reclaimed and
    reused. However, the server deletes the statistics on the usage of a scratch
    volume when the volume returns to scratch status (after reclamation or after all
    files are deleted from the volume).
    In this example, WREN01 is a volume defined to the server by an administrator,
    not a scratch volume (Scratch Volume? is No).
    The Write Pass Number indicates the number of times the volume has been
    written to, starting from the beginning of the volume. A value of one indicates
    that a volume is being used for the first time. In this example, WREN01 has a
    write pass number of two, which indicates space on this volume may have been
    reclaimed or deleted once before.
    Compare this value to the specifications provided with the media that you are
    using. The manufacturer may recommend a maximum number of write passes
    for some types of tape media. You may need to retire your tape volumes after
    reaching the maximum passes to better ensure the integrity of your data. To
    retire a volume, move the data off the volume by using the MOVE DATA
    command. See "Moving data from one volume to another volume" on page 381.
    Use the Number of Times Mounted, the Approx. Date Last Written, and the Approx.
    Date Last Read to help you estimate the life of the volume. For example, if more
    than six months have passed since the last time this volume has been written to
    or read from, audit the volume to ensure that files can still be accessed. See
    "Auditing storage pool volumes" on page 797 for information about auditing a
    volume.
    The number given in the field, Number of Times Mounted, is a count of the
    number of times that the server has opened the volume for use. The number of
    times that the server has opened the volume is not always the same as the
    number of times that the volume has been physically mounted in a drive. After a
    volume is physically mounted, the server can open the same volume multiple
    times for different operations, for example for different client backup sessions.

Task: Determine the location of a volume in a sequential-access storage pool.
Fields: Location
    When you define or update a sequential-access volume, you can give location
    information for the volume. The detailed query displays this location name. The
    location information can be useful to help you track volumes (for example,
    off-site volumes in copy storage pools or active-data pools).

Task: Determine if a volume in a sequential-access storage pool is waiting for the
reuse delay period to expire.
Fields: Date Became Pending
    A sequential-access volume is placed in the pending state after the last file is
    deleted or moved from the volume. All the files that the pending volume had
    contained were expired or deleted, or were moved from the volume. Volumes
    remain in the pending state for as long as specified with the REUSEDELAY
    parameter for the storage pool to which the volume belongs.
Whether or not a volume is full, at times the Pct Util (percent of the volume
utilized) plus the Pct Reclaimable Space (percent of the volume that can be
reclaimed) may add up to more than 100 percent. This can happen when a volume
contains aggregates that have empty space because of files in the aggregates that
have expired or been deleted. The Pct Util field shows all space occupied by both
non-aggregated files and aggregates, including empty space within aggregates. The
Pct Reclaimable Space field includes any space that is reclaimable on the volume,
also including empty space within aggregates. Because both fields include the
empty space within aggregates, these values may add up to more than 100 percent.
For more information about aggregates, see “How the server groups files before
storing” on page 298 and “Obtaining information about the use of storage space”
on page 377.
Obtaining information about the contents of a storage pool
volume
Any administrator can request information about the contents of a storage pool
volume. Viewing the contents of a storage volume is useful when a volume is
damaged or before you request the server to correct inconsistencies in the volume,
move files from one volume to another, or delete a volume from a storage pool.
Because the server tracks the contents of a storage volume through its database,
the server does not need to access the requested volume to determine its contents.
To produce a report that shows the contents of a volume, issue the QUERY
CONTENT command.
This report can be extremely large and may take a long time to produce. To reduce
the size of this report, narrow your search by selecting one or all of the following
search criteria:
Node name
Name of the node whose files you want to include in the query.
File space name
Names of file spaces to include in the query. File space names are
case-sensitive and must be entered exactly as they are known to the server.
Use the QUERY FILESPACE command to find the correct capitalization.
Number of files to be displayed
Enter a positive integer, such as 10, to list the first ten files stored on the
volume. Enter a negative integer, such as -15, to list the last fifteen files
stored on the volume.
Filetype
Specifies which types of files to include: backup versions, archive copies,
space-managed files, or a combination of these. If the volume being
queried is assigned to an active-data pool, the only valid values are ANY
and Backup.
Format of how the information is displayed
Standard or detailed information for the specified volume.
Damaged
Specifies whether to restrict the query output either to files that are known
to be damaged, or to files that are not known to be damaged.
Copied
Specifies whether to restrict the query output to either files that are backed
up to a copy storage pool, or to files that are not backed up to a copy
storage pool. Whether files are stored in an active-data pool does not affect
the output.
Note: There are several reasons why a file might have no usable copy in a
copy storage pool:
The file was recently added to the volume and has not yet been backed
up to a copy storage pool
The file should be copied the next time the storage pool is backed
up.
The file is damaged
To determine whether the file is damaged, issue the QUERY
CONTENT command, specifying the DAMAGED=YES parameter.
The volume that contains the files is damaged
To determine which volumes contain damaged files, issue the
following command:
select * from contents where damaged=yes
The file is segmented across multiple volumes, and one or more of the
other volumes is damaged
To determine whether the file is segmented, issue the QUERY
CONTENT command, specifying the FORMAT=DETAILED
parameter. If the file is segmented, issue the following command to
determine whether any of the volumes containing the additional
file segments are damaged:
select volume_name from contents where damaged=yes and
file_name like '%filename%'
For more information about using the SELECT command, see the
Administrator’s Reference.
Example: Generating a standard report about the contents of a volume:
A standard report about the contents of a volume displays basic information such
as the names of files.
To view the first seven backup files on volume WREN01 from file space /usr on
client node TOMC, for example, enter:
query content wren01 node=tomc filespace=/usr count=7 type=backup
Figure 55 displays a standard report which shows the first seven files from file
space /usr on TOMC stored in WREN01.
Node Name                 Type  Filespace   Client's Name for File
                                Name
------------------------  ----  ----------  --------------------------------------
TOMC                      Bkup  /usr        /bin/ acctcom
TOMC                      Bkup  /usr        /bin/ acledit
TOMC                      Bkup  /usr        /bin/ aclput
TOMC                      Bkup  /usr        /bin/ admin
TOMC                      Bkup  /usr        /bin/ ar
TOMC                      Bkup  /usr        /bin/ arcv
TOMC                      Bkup  /usr        /bin/ banner

Figure 55. A standard report on the contents of a volume
The report lists logical files on the volume. If a file on the volume is an aggregate
of logical files (backed-up or archived client files), all logical files that are part of
the aggregate are included in the report. An aggregate can be stored on more than
one volume, and therefore not all of the logical files in the report may actually be
stored on the volume being queried.
Example: Generating a detailed report about the contents of a volume:
A detailed report about volume contents provides basic information as well as
information about whether the file is stored across multiple volumes, whether the
file is part of an aggregate, and whether the file is a cached copy of a file that has
been migrated to the next storage pool in the hierarchy.
To display detailed information about the files stored on volume VOL1, enter:
query content vol1 format=detailed
Figure 56 on page 373 displays a detailed report that shows the files stored on
VOL1. The report lists logical files and shows whether each file is part of an
aggregate. If a logical file is stored as part of an aggregate, the information in the
Segment Number, Stored Size, and Cached Copy? fields apply to the aggregate,
not to the individual logical file.
If a logical file is part of an aggregate, the Aggregated? field shows the sequence
number of the logical file within the aggregate. For example, the Aggregated? field
contains the value 2/4 for the file AB0CTGLO.IDE, meaning that this file is the
second of four files in the aggregate. All logical files that are part of an aggregate
are included in the report. An aggregate can be stored on more than one volume,
and therefore not all of the logical files in the report may actually be stored on the
volume being queried.
For disk volumes, the Cached Copy? field identifies whether the file is a cached
copy of a file that has been migrated to the next storage pool in the hierarchy.
             Node Name: DWE
                  Type: Bkup
        Filespace Name: OS2
Client's Name for File: \ README
           Aggregated?: No
           Stored Size: 27,089
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTCOM.ENT
           Aggregated?: 1/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTGLO.IDE
           Aggregated?: 2/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTTRD.IDE
           Aggregated?: 3/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTSYM.ENT
           Aggregated?: 4/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

Figure 56. Viewing a detailed report of the contents of a volume
Identifying the volumes used by a client node
To identify the sequential volumes used by a client node, you can use the server’s
SELECT command.
The SELECT command queries the VOLUMEUSAGE table in the Tivoli Storage
Manager database. For example, to get a list of volumes used by the EXCH1 client
node in the TAPEPOOL storage pool, enter the following command:
select volume_name from volumeusage where node_name='EXCH1' and
stgpool_name='TAPEPOOL'
The results are something like the following:
VOLUME_NAME
-----------------TAPE01
TAPE08
TAPE13
TAPE21
For more information about using the SELECT command, see the Administrator’s
Reference.
Monitoring migration processes
To obtain information about migration processing, you can request a standard
storage-pool report.
Four fields in the standard storage-pool report provide information about the
migration process:
Pct Migr
Specifies the percentage of data in each storage pool that can be migrated.
This value is used to determine when to start or stop migration.
For random-access and sequential-access disk storage pools, this value
represents the amount of disk space occupied by backed-up, archived, or
space-managed files that can be migrated to another storage pool. The
calculation for random-access disk storage pools excludes cached data, but
includes files on volumes that are varied offline.
For sequential-access tape and optical storage pools, this value is the
percentage of the total volumes in the storage pool that actually contain
data at the moment. For example, assume a storage pool has four explicitly
defined volumes, and a maximum scratch value of six volumes. If only
two volumes actually contain data at the moment, then Pct Migr is 20%.
This field is blank for copy storage pools and active-data pools.
High Mig Pct
Specifies when the server can begin migrating data from this storage pool.
Migration can begin when the percentage of data that can be migrated
reaches this threshold. (This field is blank for copy storage pools and
active-data pools.)
Low Mig Pct
Specifies when the server can stop migrating data from this storage pool.
Migration can end when the percentage of data that can be migrated falls
below this threshold. (This field is blank for copy storage pools and
active-data pools.)
Next Storage Pool
Specifies the primary storage pool destination to which data is migrated.
(This field is blank for copy storage pools and active-data pools.)
Example: Monitoring data migration between storage pools
A storage pool is queried to determine high and low migration thresholds. The
server is queried to monitor the migration process.
Figure 52 on page 364 shows that the migration thresholds for BACKUPPOOL
storage pool are set to 50% for the high migration threshold and 30% for the low
migration threshold.
When the amount of migratable data stored in the BACKUPPOOL storage pool
reaches 50%, the server can begin to migrate files to BACKTAPE.
To monitor the migration of files from BACKUPPOOL to BACKTAPE, enter:
query stgpool back*
See Figure 57 on page 375 for an example of the results of this command.
If caching is on for a disk storage pool and files are migrated, the Pct Util value
does not change because the cached files still occupy space in the disk storage
pool. However, the Pct Migr value decreases because the space occupied by cached
files is no longer migratable.
Storage      Device       Estimated   Pct    Pct   High  Low   Next
Pool Name    Class Name    Capacity   Util   Migr   Mig  Mig   Storage
                                                    Pct  Pct   Pool
-----------  -----------  ----------  -----  -----  ---- ----  -----------
BACKTAPE     TAPE            180.0 M   95.2  100.0    90   70
BACKUPPOOL   DISK             80.0 M   51.6   28.8    50   30  BACKTAPE

Figure 57. Information on backup storage pools
You can query the server to monitor the migration process by entering:
query process
A message similar to Figure 58 is displayed:
 Process  Process Description       Status
  Number
--------  ------------------------  ---------------------------------------------
       2  Migration                 Disk Storage Pool BACKUPPOOL, Moved Files:
                                    1086, Moved Bytes: 25555579, Unreadable
                                    Files: 0, Unreadable Bytes: 0
Figure 58. Information on the migration process
When migration is finished, the server displays the following message:
ANR1101I Migration ended for storage pool BACKUPPOOL.
Managing problems during migration processes
Migration processes can be suspended if a problem occurs. If migration is
suspended, you can retry the process, cancel the process, end the migration process
by changing the attributes of the storage pool from which data is being migrated,
or provide additional space.
Canceling migration processes
To stop server migration when a problem occurs or when you need the resources
the process is using, you can cancel the process.
First determine the identification number of the migration process by entering:
query process
A message similar to Figure 59 is displayed:
 Process  Process Description       Status
  Number
--------  ------------------------  ---------------------------------------------
       1  Migration                 ANR1113W Migration suspended for storage pool
                                    BACKUPPOOL - insufficient space in
                                    subordinate storage pool.
Figure 59. Getting the identification number of the migration process
Then you can cancel the migration process by entering:
cancel process 1
Stopping repeated attempts by the server to restart migration
Some errors cause the server to continue attempting to restart the migration
process after 60 seconds. (If the problem still exists after several minutes, the
migration process ends.) To stop the repeated attempts at restart, you can change
some characteristics of the storage pool from which data is being migrated.
Depending on your environment, you can:
v Set higher migration thresholds for the storage pool from which data is being
migrated. The higher threshold means the storage pool must have more
migratable data before migration starts. This change delays migration.
In the example in “Example: Monitoring data migration between storage pools”
on page 374, you could update the disk storage pool BACKUPPOOL, as shown in
the sample command after this list.
v Add volumes to the pool from which data is being migrated. Adding volumes
decreases the percentage of data that is migratable (Pct Migr).
In the example in “Example: Monitoring data migration between storage pools”
on page 374, you could add volumes to the disk storage pool BACKUPPOOL to
increase its storage capacity.
Tip: Do this only if you received an out-of-space message for the storage pool to
which data is being migrated.
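For example, to raise the migration thresholds for BACKUPPOOL above the
values of 50 and 30 used earlier, you could issue a command like the following
(the values 70 and 50 are only illustrative):
update stgpool backuppool highmig=70 lowmig=50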
Providing additional space for the migration process
A migration process can be suspended because of insufficient space in the storage
pool to which data is being migrated. To allow the migration process to complete,
you can provide additional storage volumes for that storage pool.
In the example in “Example: Monitoring data migration between storage pools” on
page 374, you can add volumes to the BACKTAPE storage pool or increase the
maximum number of scratch tapes allowed for it. Either way, you increase the
storage capacity of BACKTAPE.
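For example, to raise the scratch-volume limit for BACKTAPE, you could issue a
command like the following (the value 20 is only illustrative):
update stgpool backtape maxscratch=20
Alternatively, you could define additional volumes to the pool with the DEFINE
VOLUME command.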
Monitoring the use of cache space on disk storage
To determine whether cache is being used on disk storage and to monitor how
much space is being used by cached copies, query the server for a detailed storage
pool report.
The Pct Util value includes cached data on a volume (when cache is enabled) and
the Pct Migr value excludes cached data. Therefore, when cache is enabled and
migration occurs, the Pct Migr value decreases while the Pct Util value remains the
same. The Pct Util value remains the same because the migrated data remains on
the volume as cached data. In this case, the Pct Util value only decreases when the
cached data expires.
If you update a storage pool from CACHE=YES to CACHE=NO, the cached files
will not disappear immediately. The Pct Util value will be unchanged. The cache
space will be reclaimed over time as the server needs the space, and no additional
cached files will be created.
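For example, to turn off caching for BACKUPPOOL, you could enter:
update stgpool backuppool cache=no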
For example, to request a detailed report for BACKUPPOOL, enter:
query stgpool backuppool format=detailed
Figure 60 on page 377 displays a detailed report for the storage pool.
Storage Pool Name: BACKUPPOOL
Storage Pool Type: PRIMARY
Device Class Name: DISK
Estimated Capacity: 80.0 M
Space Trigger Util: 0.0
Pct Util: 42.0
Pct Migr: 29.6
Pct Logical: 82.1
High Mig Pct: 50
Low Mig Pct: 30
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes:
Next Storage Pool: BACKTAPE
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description:
Overflow Location:
Cache Migrated Files?: Yes
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: Yes
Amount Migrated (MB): 0.10
Elapsed Migration Time (seconds): 5
Reclamation in Progress?:
Last Update by (administrator): SERVER_CONSOLE
Last Update Date/Time: 09/04/2002 16:47:49
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active-data Pool(s):
Continue Copy on Error?:
CRC Data: No
Reclamation Type:
Overwrite Data when Deleted: 2 Time(s)

Figure 60. Detailed storage pool report
When Cache Migrated Files? is set to Yes, the value for Pct Util should not change
because of migration, because cached copies of files migrated to the next storage
pool remain in disk storage.
This example shows that utilization remains at 42%, even after files have been
migrated to the BACKTAPE storage pool, and the current amount of data eligible
for migration is 29.6%.
When Cache Migrated Files? is set to No, the value for Pct Util more closely
matches the value for Pct Migr because cached copies are not retained in disk
storage.
Obtaining information about the use of storage space
You can generate reports to determine the amount of space used by client nodes
and file spaces, storage pools and device classes, or types of data (backup, archive,
or space-managed). Generating occupancy reports on a regular basis can help you
with capacity planning.
Task                                            Required Privilege Class
----------------------------------------------  ------------------------
Query the server for information about server   Any administrator
storage
To obtain reports with information broken out by node or file space, issue the
QUERY OCCUPANCY command.
Each report gives two measures of the space in use by a storage pool:
v Logical space occupied
The amount of space used for logical files. A logical file is a client file. A logical
file is stored either as a single physical file, or in an aggregate with other logical
files. The logical space occupied in active-data pools includes the space occupied
by inactive logical files. Inactive logical files in active-data pools are removed by
reclamation.
v Physical space occupied
The amount of space used for physical files. A physical file is either a single
logical file, or an aggregate composed of logical files.
An aggregate might contain empty space that was used by logical files that are
now expired or deleted, or that were deactivated in active-data pools. Therefore,
the amount of space used by physical files is equal to or greater than the space
used by logical files. The difference gives you a measure of how much unused
space any aggregates may have. The unused space can be reclaimed in
sequential storage pools.
You can also use this report to evaluate the average size of workstation files stored
in server storage.
Obtaining information about space used by client nodes
You can request information about how much data a client has backed up,
archived, or migrated to server storage. You can also request information about the
amount of storage space used by each client node and file space, as well as the
number of files that are in server storage that were backed up to a copy storage
pool or an active-data pool.
To determine the amount of server storage space used by the /home file space
belonging to the client node MIKE, for example, enter:
query occupancy mike /home
File space names are case-sensitive and must be entered exactly as they are known
to the server. To determine the correct capitalization, issue the QUERY FILESPACE
command. For more information, see “Managing file spaces” on page 423.
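For example, to display the file spaces, with their exact capitalization, that are
defined for the node MIKE, you could enter:
query filespace mike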
Figure 61 shows the results of the query. The report shows the number of files
backed up, archived, or migrated from the /home file space belonging to MIKE.
The report also shows how much space is occupied in each storage pool.
If you back up the ENGBACK1 storage pool to a copy storage pool, the copy
storage pool would also be listed in the report. To determine how many of the
client node’s files in the primary storage pool have been backed up to a copy
storage pool, compare the number of files in each pool type for the client node.
Node Name        Type  Filespace    Storage      Number of   Physical    Logical
                       Name         Pool Name        Files      Space      Space
                                                              Occupied   Occupied
                                                                  (MB)       (MB)
---------------  ----  -----------  -----------  ---------  ---------  ---------
MIKE             Bkup  /home        ENGBACK1           513       3.52       3.01

Figure 61. A report of the occupancy of storage pools by client node
You can also use the QUERY NODEDATA command to display information about
the data for one or more nodes in a sequential-access storage pool. (The command
is not supported for random-access storage pools.) The output of the QUERY
NODEDATA command displays the name of the volume on which a node’s data is
written, the name of the storage pool in which the volume is located, and the
amount of space occupied by the data on the volume. For example, to display
information about the data for nodes whose names begin with the letter “e,” you
would enter the following command using a wildcard character:
query nodedata e*
Node Name   Volume Name                      Storage Pool   Physical
                                             Name              Space
                                                             Occupied
                                                                 (MB)
---------   ------------------------------   ------------   --------
EDU_J2      E:\tsm\server\00000117.BFS       EDU512             0.01
EDU_J2      E:\tsm\server\00000122.BFS       EDU319             0.01
EDU_J3      E:\tsm\server\00000116.BFS       EDU512             0.01
For details about the QUERY NODEDATA command, refer to the Administrator’s
Reference.
Obtaining information about space utilization of storage pools
You can monitor the amount of space being used by an individual storage pool or
a group of storage pools.
To query the server for the amount of data stored in backup tape storage pools
belonging to the TAPECLASS device class, for example, enter:
query occupancy devclass=tapeclass
Figure 62 displays a report on the occupancy of tape storage pools assigned to the
TAPECLASS device class.
Node Name        Type  Filespace           Storage     Number of   Physical    Logical
                       Name                Pool Name       Files      Space      Space
                                                                   Occupied   Occupied
                                                                       (MB)       (MB)
---------------  ----  ------------------  ----------  ---------  ---------  ---------
CAROL            Arch  OS2C                ARCHTAPE            5        .92        .89
CAROL            Bkup  OS2C                BACKTAPE           21       1.02       1.02
PEASE            Arch  /home/pease/dir     ARCHTAPE          492      18.40      18.40
PEASE            Bkup  /home/pease/dir     BACKTAPE           33       7.60       7.38
PEASE            Bkup  /home/pease/dir1    BACKTAPE            2        .80        .80
TOMC             Arch  /home/tomc/driver5  ARCHTAPE          573      20.85      19.27
TOMC             Bkup  /home               BACKTAPE           13       2.02       1.88

Figure 62. A report on the occupancy of storage pools by device class
Tip: For archived data, you might see “(archive)” in the Filespace Name column
instead of a file space name. This means that the data was archived before
collocation by file space was supported by the server.
Requesting information about space used by backed-up,
archived, and space-managed files
You can query the server for the amount of space used by backed-up, archived,
and space-managed files. By determining the average size of workstation files
stored in server storage, you can estimate how much storage capacity you might
need when registering new client nodes to the server.
For example, to request a report about backup versions stored in the disk storage
pool named BACKUPPOOL, enter:
query occupancy stgpool=backuppool type=backup
Figure 63 displays a report on the amount of server storage used for backed-up
files.
Node Name        Type  Filespace   Storage      Number of   Physical    Logical
                       Name        Pool Name        Files      Space      Space
                                                             Occupied   Occupied
                                                                 (MB)       (MB)
---------------  ----  ----------  -----------  ---------  ---------  ---------
CAROL            Bkup  OS2C        BACKUPPOOL         513      23.52      23.52
CAROL            Bkup  OS2D        BACKUPPOOL         573      20.85      20.85
PEASE            Bkup  /marketing  BACKUPPOOL         132      12.90       9.01
PEASE            Bkup  /business   BACKUPPOOL         365      13.68       6.18
TOMC             Bkup  /           BACKUPPOOL         177      21.27      21.27

Figure 63. A report of the occupancy of backed-up files in storage pools
To determine the average size of backup versions stored in BACKUPPOOL,
complete the following steps using the data provided in Figure 63:
1. Add the number of megabytes of space occupied by backup versions. In this
example, backup versions occupy 92.22 MB of space in BACKUPPOOL.
2. Add the number of files stored in the storage pool. In this example, 1760
backup versions reside in BACKUPPOOL.
3. Divide the space occupied by the number of files to determine the average size
of each file backed up to the BACKUPPOOL. In this example, the average size
of each workstation file backed up to BACKUPPOOL is about 0.05 MB, or
approximately 50 KB.
You can use this average to estimate the capacity required for additional storage
pools that are defined to the server.
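You can also have the server total these values for you by querying the
OCCUPANCY table with the SELECT command. The following statement is a
sketch that assumes the standard NUM_FILES and PHYSICAL_MB columns of
that table:
select sum(num_files), sum(physical_mb) from occupancy where
stgpool_name='BACKUPPOOL' and type='Bkup'
Dividing the second result by the first gives the average file size directly.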
For information about planning storage space, see “Estimating space needs for
storage pools” on page 361 and “Estimating space for archived files in
random-access storage pools” on page 363.
Obtaining information about free disk space in FILE device
classes
You can monitor the amount of free disk space in directories associated with FILE
device classes. The Tivoli Storage Manager server uses directories as the location
for files that represent storage-pool volumes.
To request information about the amount of free disk space in each directory for
all device classes with a device type of FILE, issue the QUERY DIRSPACE
command.
Figure 64 on page 381 displays the output for this command.
Device       Directory                       Estimated  Estimated
Class                                         Capacity  Available
-----------  ------------------------------  ---------  ---------
DBBKUP       G:\This\is\a\large\directory     13,000 M    5,543 M
DBBKUP       G:\This\is\directory2            13,000 M    7,123 M
DBBKUP2      G:\This\is\a\huge\directory       2,256 G    2,200 G

Figure 64. A report of the free disk space for all device classes of device type FILE
To obtain the amount of free space associated with a particular device class, issue
the following command:
query dirspace device_class_name
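For example, to display the free space only for the DBBKUP device class shown
in Figure 64, you would enter:
query dirspace dbbkup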
Moving data from one volume to another volume
You might need to move data in some situations, for example, when you need to
salvage readable data from a damaged volume. To move data (files) from one
volume to another volume in the same or a different storage pool, use the MOVE
DATA command. The volumes can be on-site volumes or off-site volumes.
Task                                            Required Privilege Class
----------------------------------------------  ------------------------
Move files from a volume in any storage pool    System or unrestricted
to an available volume in any storage pool      storage
Move files from one volume to an available      Restricted storage
volume in any storage pool to which you are
authorized
During the data movement process, the server:
v Moves any readable files to available volumes in the specified destination
storage pool
v Deletes any cached copies from a disk volume
v Attempts to bypass any files that previously were marked as damaged
During the data movement process, users cannot access the volume to restore or
retrieve files, and no new files can be written to the volume.
Remember:
v Files in a copy storage pool or an active-data pool do not move when primary
files are moved.
v You cannot move data into or out of a storage pool defined with a CENTERA
device class.
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool. If you are moving data out of a storage pool for the
purpose of upgrading to new tape technology, the target primary storage pool
must be associated with a library that has the new device for the tape drives.
Data movement within the same storage pool
Moving files from one volume to other volumes in the same storage pool provides
a number of benefits.
This type of move operation is useful:
v When you want to free up all space on a volume so that it can be deleted from
the Tivoli Storage Manager server
See “Deleting storage pool volumes” on page 393 for information about deleting
backed-up, archived, or space-managed data before you delete a volume from a
storage pool.
v When you need to salvage readable files from a volume that has been damaged
v When you want to delete cached files from disk volumes
If you want to force the removal of cached files, you can delete them by moving
data from one volume to another volume. During the move process, the server
deletes cached files remaining on disk volumes.
If you move data between volumes within the same storage pool and you run out
of space in the storage pool before all data is moved, you cannot completely
empty the volume from which the data is being moved. In this case, consider
moving data to available space in another storage pool as described in “Data
movement to a different storage pool.”
Data movement to a different storage pool
You can move all data from a volume in one storage pool to volumes in another
storage pool. When you specify a target storage pool that is different than the
source storage pool, the server uses the storage hierarchy to move data if more
space is required.
Remember: Data cannot be moved from a primary storage pool to a copy storage
pool or to an active-data pool. Data in a copy storage pool or an active-data pool
cannot be moved to another storage pool.
You can move data from random-access storage pools to sequential-access storage
pools. For example, if you have a damaged disk volume and you have a limited
amount of disk storage space, you could move all files from the disk volume to a
tape storage pool. Moving files from a disk volume to a sequential storage pool
may require many volume mount operations if the target storage pool is
collocated. Ensure that you have sufficient personnel and media to move files from
disk to sequential storage.
When a data move from a shred pool is complete, the original data is shredded.
However, if the destination is not another shred pool, you must set the
SHREDTONOSHRED parameter to YES to force the movement to occur. If this
value is not specified, the server issues an error message and does not allow the
data to be moved. See “Securing sensitive client data” on page 519 for more
information about shredding.
Data movement from off-site volumes in copy storage pools
or active-data pools
You can move data from off-site volumes without bringing the volumes on-site.
Processing of the MOVE DATA command for volumes in copy storage pools and
active-data pools is similar to the processing for volumes in primary storage
pools, with the following exceptions:
v Volumes in copy-storage pools and active-data pools might be set to an access
mode of offsite, making them ineligible to be mounted. During processing of the
MOVE DATA command, valid files on off-site volumes are copied from the
original files in the primary-storage pools. In this way, valid files on off-site
volumes are copied without having to mount these volumes. These new copies
of the files are written to another volume in the copy-storage pool or active-data
pool.
v With the MOVE DATA command, you can move data from any primary-storage
pool volume to any primary-storage pool. However, you can move data from a
copy-storage pool volume only to another volume within the same copy-storage
pool. Similarly, you can move data from an active-data pool volume only to
another volume within the same active-data pool.
When you move files from a volume marked as off-site, the server performs the
following actions:
1. Determines which files are still active on the volume from which you are
moving data
2. Obtains these active files from a primary-storage pool or from another
copy-storage pool or active-data pool
3. Copies the files to one or more volumes in the destination copy-storage pool or
active-data pool
Processing of the MOVE DATA command for primary-storage pool volumes does
not affect copy-storage pool or active-data pool files.
Moving data
You can move data using the MOVE DATA command. Before moving data,
however, take steps to ensure that the move operation succeeds.
Before beginning this procedure:
v If you want to ensure that no new files are written to a volume after you move
data from it, change the volume’s access mode to read-only. This prevents the
server from filling the volume with data again as soon as data is moved. You
might want to do this if you want to delete the volume.
See “Updating storage pool volumes” on page 293 for information about
updating volumes.
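For example, to prevent new data from being written to a volume named
STGVOL.1 (a hypothetical volume name) and then move its files, you could enter:
update volume stgvol.1 access=readonly
move data stgvol.1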