Chapter 5. Specifying commands that define your DFSMShsm environment
You can specify SETSYS commands in the ARCCMDxx member to define your site’s DFSMShsm environment. The command options are described along with the reasons for choosing a command.
The starter set creates a basic (and somewhat generic) DFSMShsm environment. If you choose not to begin with the starter set or you want to expand or customize the starter set functions, the information you need is in this section.
Regardless of the DFSMShsm functions you choose to implement, you must establish the DFSMShsm environment for those functions. Your site’s DFSMShsm environment is established when you perform the following tasks:
v “Defining the DFSMShsm startup environment”
v “Defining storage administrators to DFSMShsm” on page 74
v “Defining the DFSMShsm MVS environment” on page 75
v “Defining the DFSMShsm security environment for DFSMShsm-owned data sets” on page 83
v “Defining data formats for DFSMShsm operations” on page 86
v “Defining DFSMShsm reporting and monitoring” on page 90
v “Defining the tape environment” on page 92
v “Defining the installation exits that DFSMShsm invokes” on page 92
v “Controlling DFSMShsm control data set recoverability” on page 92
v “Defining migration level 1 volumes to DFSMShsm” on page 93
v “Defining the common recall queue environment” on page 95
v “Defining common SETSYS commands” on page 98
Defining the DFSMShsm startup environment
Before starting DFSMShsm, you must prepare the system by performing the following tasks:
v “Allocating DFSMShsm data sets”
v “Establishing the DFSMShsm startup procedures” on page 68
v “Establishing the START command in the COMMNDnn member” on page 71
v “Establishing SMS-related conditions in storage groups and management classes” on page 71
v “Writing an ACS routine that directs DFSMShsm-owned data sets to non-SMS-managed storage” on page 71
v “Directing DFSMShsm temporary tape data sets to tape” on page 72
v “Establishing the ARCCMDxx member of a PARMLIB” on page 73
Allocating DFSMShsm data sets
The DFSMShsm data sets are the data sets DFSMShsm requires for full-function processing. The DFSMShsm data sets are not user data sets and they are not
DFSMShsm-managed data sets. Rather, they are the following DFSMShsm record keeping, reporting, and problem determination data sets:
v DFSMShsm control data sets
v DFSMShsm control data set copies
v Journal data set
v Log data sets
v Problem determination aid (PDA) log data sets
v SDSP data sets (if used)
You, or the person who installed DFSMShsm on your system, probably have allocated these data sets during installation or testing of DFSMShsm. The data sets are required for the DFSMShsm starter set. For SMS environments, you must associate the DFSMShsm data sets with a storage class having the GUARANTEED
SPACE=YES attribute so that you can control their placement. Data sets having the guaranteed space attribute are allocated differently than non-guaranteed space data sets, especially if candidate volumes are specified. Refer to z/OS DFSMShsm Storage
Administration for a discussion of the guaranteed space attribute and for information about establishing storage classes.
You must prevent the following DFSMShsm data sets from migrating:
v Control data sets
v DFSMShsm log data sets
v Journal
v Problem determination aid logs
For more information about preventing DFSMShsm data sets from migrating, see “Storage guidance for control data set and journal data set backup copies” on page 28 and “Migration considerations for the control data sets and the journal”.
Establishing the DFSMShsm startup procedures
If you specify an HSMPARM DD, it will take precedence over MVS concatenated
PARMLIB support. However, if you are using MVS concatenated PARMLIB support, DFSMShsm uses the PARMLIB data set containing the ARCCMDxx member and the (possibly different) PARMLIB data set containing the ARCSTRxx member (if any) that is indicated in the startup procedure.
When ABARS is used, its address space (one or more) is termed ‘secondary’ to a
‘primary address space’. That primary address space must have
HOSTMODE=MAIN; you must start it with a startup procedure in SYS1.PROCLIB
(similar to the startup procedure in Figure 13 on page 69). If your disaster recovery
policy includes aggregate backup and recovery support (ABARS), also include a second startup procedure in SYS1.PROCLIB for the DFSMShsm secondary address space.
Primary address space startup procedure
Figure 13 on page 69 is a sample DFSMShsm primary address space startup
procedure.
//**********************************************************************/
//* SAMPLE DFSMSHSM STARTUP PROCEDURE THAT STARTS THE DFSMSHSM PRIMARY */
//* ADDRESS SPACE.                                                     */
//**********************************************************************/
//*
//DFSMSHSM PROC CMD=00,              USE PARMLIB MEMBER ARCCMD00
//         EMERG=NO,                 ALLOW ALL DFSMSHSM FUNCTIONS
//         LOGSW=YES,                SWITCH LOGS AT STARTUP
//         STARTUP=YES,              STARTUP INFO PRINTED AT STARTUP
//         UID=HSM,                  DFSMSHSM-AUTHORIZED USER ID
//         SIZE=0M,                  REGION SIZE FOR DFSMSHSM
//         DDD=50,                   MAX DYNAMICALLY ALLOCATED DATA SETS
//         HOST=?HOST,               PROC.UNIT ID AND LEVEL FUNCTIONS
//         PRIMARY=?PRIMARY,         LEVEL FUNCTIONS
//         PDA=YES,                  BEGIN PDA TRACING AT STARTUP
//         CDSR=YES                  RESERVE CONTROL DATA SET VOLUMES
//DFSMSHSM EXEC PGM=ARCCTL,DYNAMNBR=&DDD,REGION=&SIZE,TIME=1440,
// PARM=('EMERG=&EMERG','LOGSW=&LOGSW','CMD=&CMD','UID=&UID',
// 'STARTUP=&STARTUP','HOST=&HOST','PRIMARY=&PRIMARY',
// 'PDA=&PDA','CDSR=&CDSR')
//*****************************************************************/
//* HSMPARM DD must be deleted from the JCL or made into a        */
//* comment to use Concatenated Parmlib Support                   */
//*****************************************************************/
//HSMPARM  DD DSN=SYS1.PARMLIB,DISP=SHR
//MSYSOUT  DD SYSOUT=A
//MSYSIN   DD DUMMY
//SYSPRINT DD SYSOUT=A,FREE=CLOSE
//SYSUDUMP DD SYSOUT=A
//*
//*****************************************************************/
//* THIS PROCEDURE ASSUMES A SINGLE CLUSTER MCDS. IF MORE THAN    */
//* ONE VOLUME IS DESIRED, FOLLOW THE RULES FOR A MULTICLUSTER    */
//* CDS.                                                          */
//*****************************************************************/
//*
//MIGCAT   DD DSN=HSM.MCDS,DISP=SHR
//JOURNAL  DD DSN=HSM.JRNL,DISP=SHR
//ARCLOGX  DD DSN=HSM.HSMLOGX1,DISP=OLD
//ARCLOGY  DD DSN=HSM.HSMLOGY1,DISP=OLD
//ARCPDOX  DD DSN=HSM.HSMPDOX,DISP=OLD
//ARCPDOY  DD DSN=HSM.HSMPDOY,DISP=OLD
//*
Figure 13. Sample Startup Procedure for the DFSMShsm Primary Address Space
Figure 14 is a sample startup procedure using STR.
Example of a startup procedure:
//DFSMSHSM PROC CMD=00,              USE PARMLIB MEMBER ARCCMD00
//         STR=00,                   STARTUP PARMS IN ARCSTR00
//         HOST=?HOST,               PROC UNIT AND LEVEL FUNCTIONS
//         PRIMARY=?PRIMARY,         LEVEL FUNCTIONS
//         DDD=50,                   MAX DYNAMICALLY ALLOCATED DS
//         SIZE=0M                   REGION SIZE FOR DFSMSHSM
//DFSMSHSM EXEC PGM=ARCCTL,DYNAMNBR=&DDD,REGION=&SIZE,TIME=1440,
// PARM=('STR=&STR','CMD=&CMD','HOST=&HOST',
// 'PRIMARY=&PRIMARY')
//HSMPARM  DD DSN=SYS1.PARMLIB,DISP=SHR
//MSYSOUT  DD SYSOUT=A
//MSYSIN   DD DUMMY
//SYSPRINT DD SYSOUT=A,FREE=CLOSE
. . .
PARMLIB member ARCSTR00 contains 4 records:
1st record: EMERG=NO,CDSQ=YES,STARTUP=YES
2nd record: /* This is a comment. */
3rd record: /* This is another comment. */
4th record: PDA=YES,LOGSW=YES
Figure 14. Sample of STR Usage
For an explanation of the keywords, see “Startup procedure keywords”.
The CMD=00 keyword refers to the ARCCMD00 member of PARMLIBs discussed
in “Parameter libraries (PARMLIB)” on page 303. You can have as many
ARCCMDxx and ARCSTRxx members as you need in the PARMLIBs. DFSMShsm does not require the values of CMD= and STR= to be the same, but you may want to use the same values to indicate a given configuration. In this publication, the
ARCCMD member is referred to generically as ARCCMDxx because each different
ARCCMDxx member can be identified by a different number.
Much of the rest of this discussion pertains to what to put into the ARCCMDxx member.
In a multiple DFSMShsm-host environment, you can use a single ARCCMDxx member and a single ARCSTRxx member for all DFSMShsm hosts sharing a common set of control data sets in an HSMplex.
Secondary address space startup procedure
Figure 15 is a sample DFSMShsm secondary address space startup procedure.
//**********************************************************************/
//* SAMPLE AGGREGATE BACKUP AND RECOVERY STARTUP PROCEDURE THAT STARTS */
//* THE ABARS SECONDARY ADDRESS SPACE.                                 */
//**********************************************************************/
//*
//DFHSMABR PROC
//DFHSMABR EXEC PGM=ARCWCTL,REGION=0M
//SYSUDUMP DD SYSOUT=A
//MSYSOUT  DD SYSOUT=A
//MSYSIN   DD DUMMY
//*
Figure 15. Sample Aggregate Backup and Recovery Startup Procedure
The private (24-bit) and extended private (31-bit) address space requirements for
DFSMShsm are dynamic. DFSMShsm’s region size should normally default to the private virtual address space (REGION=0).
To run ABARS processing, each secondary address space for aggregate backup or aggregate recovery requires 6 megabytes (MB). Three MBs of this ABARS secondary address space are above the line (in 31-bit extended private address space). The other three MBs are below the line (in 24-bit address space). An option that can directly increase this requirement is the specification of SETSYS
ABARSBUFFERS(n). If this is specified with an ‘n’ value greater than one, use the following quick calculation to determine the approximate storage above the line you will need:
2MB + (n * 1MB)

where n is the number specified in SETSYS ABARSBUFFERS.
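For example, SETSYS ABARSBUFFERS(4) requires approximately 2MB + (4 * 1MB) = 6MB of extended private storage for each ABARS secondary address space; a single buffer yields the 3MB figure cited above.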
As you add more functions and options to the DFSMShsm base product, the region-size requirement increases. You should therefore include the maximum region size in your startup procedure.
For a detailed discussion of the DFSMShsm primary address space startup procedure, the ABARS secondary address space startup procedure, and the startup
procedure keywords, see “DFSMShsm procedures” on page 307.
Establishing the START command in the COMMNDnn member
When you initialize the MVS operating system, you want DFSMShsm to start automatically. You direct DFSMShsm to start when the MVS operating system is initialized by adding the following command to the COMMNDnn member of SYS1.PARMLIB:
COM='S DFSMSHSM parameters'
You can also start DFSMShsm from the console. DFSMShsm can be run only as a started task and never as a batch job.
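For example, assuming the startup procedure is named DFSMSHSM and uses the symbolic parameters shown in Figure 13 (the CMD and HOST values here are illustrative and must match your installation), the COMMNDnn entry might be:

COM='S DFSMSHSM,CMD=00,HOST=1'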
DFSMShsm can run concurrently with another space-management product. This can be useful if you are switching from another product to DFSMShsm, and do not want to recall many years’ worth of data just to switch to the new product over a short period like a weekend. By running the two products in parallel, you can recall data automatically from the old product, and migrate all new data with
DFSMShsm.
What makes this possible is that the other product usually provides a module that must be renamed to IGG026DU to serve as the automatic locate intercept for recall.
Instead, rename this module to $IGG26DU, and link-edit this module to the existing IGG026DU which DFSMS ships for DFSMShsm. In this manner, for each locate request, DFSMShsm’s IGG026DU gives the other product control via
$IGG26DU, providing it a chance to perform the recall if the data was migrated by that product. After control returns, DFSMShsm then proceeds to recall the data set if it is still migrated.
Establishing SMS-related conditions in storage groups and management classes
For your SMS-managed data sets, you must establish a DFSMShsm environment that coordinates the activities of both DFSMShsm and SMS. You can define your storage groups and management classes at one time and can modify the appropriate attributes for DFSMShsm management of data sets at another time.
The storage group contains one attribute that applies to all DFSMShsm functions, the status attribute. DFSMShsm can process volumes in storage groups having a status of ENABLE, DISNEW (disable new for new data set allocations), or
QUINEW (quiesce new for new data set allocations). The other status attributes
QUIALL (quiesce for all allocations), DISALL (disable all for all data set allocations), and NOTCON (not connected) prevent DFSMShsm from processing any volumes in the storage group so designated. Refer to z/OS DFSMSdfp Storage
Administration for an explanation of the status attribute and how to define storage groups.
Writing an ACS routine that directs DFSMShsm-owned data sets to non-SMS-managed storage
Programming Interface Information
DFSMShsm must be able to direct allocation of data sets it manages to its owned storage devices so that backup versions of data sets go to backup volumes, migration copies go to migration volumes, and so forth. DFSMShsm-owned DASD volumes are not SMS-managed. If SMS were allowed to select volumes for
DFSMShsm-owned data sets, DFSMShsm could not control which volumes were selected. If SMS is allowed to allocate the DFSMShsm-owned data sets to a volume
other than the one selected by DFSMShsm, DFSMShsm detects that the data set is allocated to the wrong volume and fails the function being performed. Therefore,
include a filter routine (similar to the sample routine in Figure 16) within your
automatic class selection (ACS) routine that filters DFSMShsm-owned data sets to non-SMS managed volumes. For information on the SMS-management of
DFSMShsm-owned tape volumes, see Chapter 10, “Implementing DFSMShsm tape environments,” on page 189.
End Programming Interface Information
/***********************************************************************/
/* SAMPLE ACS ROUTINE THAT ASSIGNS A NULL STORAGE CLASS TO             */
/* DFSMSHSM-OWNED DATA SETS INDICATING THAT THE DATA SET SHOULD NOT BE */
/* SMS-MANAGED.                                                        */
/***********************************************************************/
PROC &STORCLAS
  SET &STORCLAS = 'SCLASS2'
  FILTLIST &HSMLQ1 INCLUDE('DFHSM','HSM')
  FILTLIST &HSMLQ2 INCLUDE('HMIG','BACK','VCAT','SMALLDS','VTOC',
                           'DUMPVTOC','MDB')
  IF &DSN(1) = &HSMLQ1 AND
     &DSN(2) = &HSMLQ2 THEN
    SET &STORCLAS = ''
END
Figure 16. Sample ACS Routine that Directs DFSMShsm-Owned Data Sets to Non-SMS-Managed Storage
The high-level qualifiers for &HSMLQ1 and &HSMLQ2 are the prefixes that you specify with the BACKUPPREFIX (for backup and dump data set names) and the
MIGRATEPREFIX (for migrated copy data set names). If you do not specify prefixes, specify the user ID from the UID parameter of the DFSMShsm startup
procedure (shown in topic “Starter set example” on page 109). These prefixes and
how to specify them are discussed in the z/OS DFSMShsm Storage Administration.
Directing DFSMShsm temporary tape data sets to tape
Programming Interface Information
It is often efficient to direct tape allocation requests to DASD when the tapes being requested are for temporary data sets. However, DFSMShsm’s internal naming conventions request temporary tape allocations for backup of DFSMShsm control data sets. Therefore, it is important to direct DFSMShsm tape requests to tape.
End Programming Interface Information
If your ACS routines direct temporary data sets to DASD, DFSMShsm allocation requests for temporary tape data sets should be allowed to be directed to tape as
requested (see the sample ACS routine in Figure 17 on page 73). To identify
temporary tape data sets, test the &DSTYPE variable for “TEMP”, and test the
&PGM variable for “ARCCTL”.
/***********************************************************************/
/* SAMPLE ACS ROUTINE THAT PREVENTS DFSMSHSM TEMPORARY (SCRATCH TAPE)  */
/* TAPE REQUESTS FROM BEING REDIRECTED TO DASD.                        */
/***********************************************************************/
 :
 :
/***********************************************************************/
/* SET FILTLIST FOR PRODUCTION DATA SETS                               */
/***********************************************************************/
FILTLIST EXPGMGRP INCLUDE('ARCCTL')
 :
 :
/***********************************************************************/
/* FILTER TEMPORARY (SCRATCH TAPE) TAPE REQUESTS INTO DFSMSHSM         */
/* REQUESTS AND NON-DFSMSHSM REQUESTS. SEND DFSMSHSM REQUESTS TO TAPE  */
/* AS REQUESTED. SEND NON-DFSMSHSM REQUESTS TO DASD.                   */
/***********************************************************************/
IF (&DSTYPE = 'TEMP' && &UNIT = &TAPE_UNITS)
  THEN DO
    IF (&PGM ^= &EXPGMGRP) THEN DO
      SET &STORCLAS = 'DASD'
      WRITE '******************************************************'
      WRITE '* NON-DFSMSHSM TEMPORARY DATA SET REDIRECTED TO DISK *'
      WRITE '******************************************************'
    END
    ELSE DO
      WRITE '************************************************'
      WRITE '* DFSMSHSM TEMPORARY DATA SET DIRECTED TO TAPE *'
      WRITE '************************************************'
    END
  END
Figure 17. Sample ACS Routine That Prevents DFSMShsm Temporary Tape Requests from Being Redirected to DASD
Establishing the ARCCMDxx member of a PARMLIB
At DFSMShsm startup, DFSMShsm reads the ARCCMDxx parameter library
(PARMLIB) member that is pointed to by the DFSMShsm startup procedure or is found in the MVS concatenated PARMLIB data sets.
An ARCCMDxx member consisting of DFSMShsm commands that define your site’s DFSMShsm processing environment must exist in a PARMLIB data set. (The
PARMLIB containing the ARCCMDxx member may be defined in the startup procedure.) An example of the ARCCMDxx member can be seen starting at
“Starter set example” on page 109.
Modifying the ARCCMDxx member
In most cases, adding a command to the ARCCMDxx member provides an addendum to any similar command that already exists in the member. For example, the ARCCMDxx member that exists from the starter set contains a set of commands with their parameters. You can remove commands that do not meet your needs from the ARCCMDxx member and replace them with commands that do meet your needs.
ARCCMDxx member for the starter set
The ARCCMDxx member provided with the starter set is written to accommodate any system, so some commands are intentionally allowed to default and others specify parameters that are not necessarily optimal. Because the starter set does not provide an explanation of parameter options, we discuss the implications of choosing SETSYS parameters in this section.
Issuing DFSMShsm commands
DFSMShsm commands can be issued from the operator’s console, from a TSO terminal, as a CLIST from a TSO terminal, as a job (when properly surrounded by
JCL) from the batch reader, or from a PARMLIB member. DFSMShsm commands can be up to 1024 bytes long. The z/OS DFSMShsm Storage Administration explains how to issue the DFSMShsm commands and why to issue them.
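As an illustration of the batch route, a job can issue a DFSMShsm command through the TSO terminal monitor program. This is a minimal sketch; the job card is site-specific and the issuing user ID must have the appropriate DFSMShsm authority:

//HSMQRY   JOB (ACCT),'HSM QUERY',CLASS=A,MSGCLASS=A
//TSOBATCH EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 HSENDCMD QUERY SETSYS
/*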
Implementing new DFSMShsm ARCCMDxx functions
If you have DFSMShsm running with an established ARCCMDxx member, for example ARCCMD00, you can copy the ARCCMDxx member to a member with another name, for example, ARCCMD01. You can then modify the new
ARCCMDxx member by adding and deleting parameters.
To determine how the new parameters affect DFSMShsm’s automatic processes, see z/OS DFSMShsm Storage Administration for an explanation of running DFSMShsm in DEBUG mode.
Defining storage administrators to DFSMShsm
As part of defining your DFSMShsm environment, you must designate storage administrators and define their authority to issue authorized DFSMShsm commands. The authority to issue authorized commands is granted either through
RACF FACILITY class profiles or the DFSMShsm AUTH command.
Because DFSMShsm operates as an MVS-authorized task, it can manage data sets automatically, regardless of their security protection. DFSMShsm allows an installation to control the authorization of its commands through the use of either
RACF FACILITY class profiles or the AUTH command.
If the RACF FACILITY class is active, DFSMShsm always uses it to protect all
DFSMShsm commands. If the RACF FACILITY class is not active, DFSMShsm uses the AUTH command to protect storage administrator DFSMShsm commands.
There is no protection of user commands in this environment.
The RACF FACILITY class environment
DFSMShsm provides a way to protect all DFSMShsm command access through the use of RACF FACILITY class profiles. An active RACF FACILITY class establishes the security environment.
An individual, such as a security administrator, defines RACF FACILITY class profiles to grant or deny permission to issue individual DFSMShsm commands.
For more information about establishing the RACF FACILITY class environment, see z/OS DFSMShsm Storage Administration.
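For illustration only, a security administrator might define a FACILITY class profile for one authorized command and permit a storage administrator user ID to it; the user ID shown is hypothetical:

RDEFINE FACILITY STGADMIN.ARC.QUERY UACC(NONE)
PERMIT STGADMIN.ARC.QUERY CLASS(FACILITY) ID(SYSADM1) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH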
The DFSMShsm AUTH command environment
If you are not using the RACF FACILITY class to protect all DFSMShsm commands, the AUTH command is used to protect DFSMShsm-authorized commands.
To prevent unwanted changes to the parameters that control all data sets, commands within DFSMShsm are classified as authorized and nonauthorized.
Authorized commands can be issued only by a user specifically authorized by a storage administrator. Generally, authorized commands can affect data sets not owned by the person issuing the command and should, therefore, be limited to only those whom you want to have that level of control.
Nonauthorized commands can be issued by any user, but they generally affect only those data sets for which the user has appropriate security access. Nonauthorized commands are usually issued by system users who want to manage their own data sets with DFSMShsm user commands.
DFSMShsm has two categories of authorization: USER and CONTROL.
If you specify AUTH U012345 DATABASEAUTHORITY(USER) . . .
Then user U012345 can issue any DFSMShsm command except the command that authorizes other users.

If you specify AUTH U012345 DATABASEAUTHORITY(CONTROL) . . .
Then DFSMShsm gives user U012345 authority to issue the AUTH command to authorize other users. User U012345 can then issue the AUTH command with the DATABASEAUTHORITY(USER) parameter to authorize other storage administrators who can issue authorized commands.
Anyone can issue authorized commands from the system console, but they cannot authorize other users. The ARCCMDxx member must contain an AUTH command granting CONTROL authority to a storage administrator. That storage administrator can then authorize or revoke the authority of other users as necessary. If no AUTH command grants CONTROL authority to any user, no storage administrator can authorize any other user. If the ARCCMDxx member does not contain any AUTH command, authorized commands can be issued only at the operator’s console.
Defining the DFSMShsm MVS environment
You define the MVS environment to DFSMShsm when you specify:
v The job entry subsystem
v The amount of common service area storage
v The sizes of cell pools
v Operator intervention in DFSMShsm automatic operation
v Data set serialization
v Swap capability of DFSMShsm’s address space
v Maximum secondary address space
Each of the preceding tasks relates to a SETSYS command in the ARCCMDxx member.
Figure 18 on page 76 is an example of the commands that define an MVS
environment:
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DEFAULT MVS ENVIRONMENT */
/***********************************************************************/
/*
SETSYS JES2
SETSYS CSALIMITS(MAXIMUM(100) ACTIVE(90) INACTIVE(30) MWE(4))
SETSYS NOREQUEST
SETSYS USERDATASETSERIALIZATION
SETSYS NOSWAP
SETSYS MAXABARSADDRESSSPACE(1)
/*
Figure 18. Sample SETSYS Commands That Define the Default MVS Environment
Specifying the job entry subsystem
As part of defining your MVS environment to DFSMShsm, you must identify the job entry subsystem (JES) at your site as either JES2 or JES3 by specifying the SETSYS JES2 or SETSYS JES3 command in the ARCCMDxx member. The ARCCMDxx member is located in a PARMLIB.
JES3 considerations
When you implement DFSMShsm in a JES3 environment, you must observe certain practices and restrictions to ensure correct operation:
v For a period of time after the initialization of JES3 and before the initialization of DFSMShsm, all JES3 locates will fail. To reduce this exposure:
  – Start DFSMShsm as early as possible after the initialization of JES3.
  – Specify the SETSYS JES3 command as early as possible in the startup procedure and before any ADDVOL commands.
v Specify JES3 during DFSMShsm startup when DFSMShsm is started in a JES3 system. This avoids an error message being written when DFSMShsm receives the first locate request from the JES3 converter/interpreter.
v Depend on the computing system catalog to determine the locations of data sets.
v Do not allocate the control data sets and the JES3 spool data set on the same volume because you could prevent DFSMShsm from starting on a JES3 local processor.
v All devices that contain volumes automatically managed or processed by DFSMShsm must be controlled by JES3. All volumes managed by DFSMShsm (even those managed by command) should be used on devices controlled by JES3.
v DFSMShsm must be active on the processing units that use volumes managed by DFSMShsm and on any processing unit where JES3 can issue the locate request for the setup of jobs that use volumes managed by DFSMShsm.
The specification of JES3 places a constraint on issuing certain DFSMShsm commands. When you use JES2, you can issue ADDVOL, DEFINE, and SETSYS commands at any time. When you specify JES3, you must issue ADDVOL commands for primary volumes, DEFINE commands for pools (except aggregate recovery pools), and the SETSYS JES2 or SETSYS JES3 commands in the
ARCCMDxx member. In addition, if you are naming tape devices with esoteric names, you must include the SETSYS USERUNITTABLE command in the
ARCCMDxx member before the ADDVOL command for any of the tapes that are in groups defined with esoteric names.
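To illustrate that ordering, an ARCCMDxx fragment for a JES3 system might look like the following; the esoteric unit name and volume serials are hypothetical:

SETSYS JES3
SETSYS USERUNITTABLE(HSMTAPE)
ADDVOL HSM001 UNIT(HSMTAPE) MIGRATION(MIGRATIONLEVEL2)
ADDVOL PRIM01 UNIT(3390) PRIMARY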
If you specify JES3 but the operating system uses JES2, DFSMShsm is not notified of the error. However, DFSMShsm uses the rules that govern pool configuration for
JES3, and one or both of the following situations can occur: v Some ADDVOL, SETSYS, and DEFINE commands fail if they are issued when they are not acceptable in a JES3 system.
v Volumes eligible for recall in a JES2 system might not qualify for the DFSMShsm general pool and, in some cases, are not available for recall in the JES3 system.
When you use DFSMShsm and JES3, the usual configuration is a symmetric configuration. A symmetric configuration is one where the primary volumes are added to DFSMShsm in all processing units and the hardware is connected in all processing units. Because of the dynamic reconfiguration of JES3, you should use a symmetric JES3 configuration.
If your device types are 3490, define the special esoteric names SYS3480R and
SYS348XR to JES3. This may only be done after the system software support (JES3,
DFP, and MVS) for 3490 is available on all processing units.
The main reason for this is conversion from 3480s, to allow DFSMShsm to convert the following generic unit names to the special esoteric names: v 3480 (used for output) is changed to SYS3480R for input drive selection.
SYS3480R is a special esoteric name that is associated with all 3480, 3480X, and
3490 devices. Any device in this esoteric is capable of reading a cartridge written by a 3480 device.
v 3480X (used for output) is changed to SYS348XR for input drive selection.
SYS348XR is a special esoteric name that is associated with all 3480X and 3490 devices. Any device in this esoteric is capable of reading a cartridge written by a
3480X device.
Note:
1. Because of the DFSMShsm use of the S99DYNDI field in the SVC99 parameter list, the JES3 exit IATUX32 is not invoked when DFSMShsm is active.
2. By default, JES3 support is not enabled for DFSMShsm hosts defined using HOSTMODE=AUX. Contact IBM support if you require JES3 support for AUX DFSMShsm hosts. When JES3 for AUX DFSMShsm hosts is enabled, you should start the main DFSMShsm host before starting any AUX hosts and stop all AUX hosts before stopping the main host.
Specifying the amount of common service area storage
Common Service Area (CSA) storage is cross-memory storage (accessible to any address space in the system) for management work elements (MWEs). The SETSYS
CSALIMITS command determines the amount of common service area (CSA) storage that DFSMShsm is allowed for its management work elements. The subparameters of the CSALIMITS parameter specify how CSA is divided among the MWEs issued to DFSMShsm. Unless almost all of DFSMShsm’s workload is
initiated from an external source, the defaults are satisfactory. Figure 18 on page 76
specifies the same values as the defaults.
One MWE is generated for each request for service that is issued to DFSMShsm.
Requests for service that generate MWEs include:
v Batch jobs that need migrated data sets
v Both authorized and nonauthorized DFSMShsm commands, including TSO requests to migrate, recall, and back up data sets
Two types of MWEs can be issued: wait and nowait. A WAIT MWE remains in
CSA until DFSMShsm finishes acting on the request. A NOWAIT MWE remains in
CSA under control of the MWE subparameter until DFSMShsm accepts it for processing. The NOWAIT MWE is then purged from CSA unless the MWE subparameter of CSALIMITS specifies that some number of NOWAIT MWEs are to be retained in CSA.
Note:
If you are running more than one DFSMShsm host in a z/OS image, the
CSALIMITS values used are those associated with the host with
HOSTMODE=MAIN. Any CSALIMITS values specified for an AUX host are ignored.
Selecting values for the SETSYS CSA command subparameters
DFSMShsm can control the amount of common-service-area (CSA) storage for management work elements (MWEs) whether or not DFSMShsm has been active during the current system initial program load (IPL). When DFSMShsm has not been active during the current IPL, DFSMShsm defaults control the amount of
CSA. When DFSMShsm has been active, either the DFSMShsm defaults or SETSYS values control the amount of CSA. The DFSMShsm defaults for CSA are shown in
Figure 18 on page 76. The subparameters of the SETSYS CSA command are
discussed in the following:
Selecting the value for the MAXIMUM subparameter:
The MAXIMUM subparameter determines the upper limit of CSA storage for cross-memory communication of MWEs. After this amount of CSA has been used, additional MWEs cannot be stored. The average MWE is 400 bytes. The DFSMShsm default for this subparameter is 100KB (1KB equals 1024 bytes), enough for roughly 250 average-sized MWEs.
Limiting CSA has two potential uses in most data centers; protecting other application systems from excessive CSA use by DFSMShsm or serving as an early-warning sign of a DFSMShsm problem.
Setting CSALIMIT to protect other applications: Setting CSALIMITs to protect other applications depends on the amount of CSA available in the “steady-state” condition when you know the amount of CSA left over after the other application is active. This method measures the CSA usage of applications other than
DFSMShsm.
1. Run the system without DFSMShsm active.
2. Issue the QUERY CSALIMIT command to determine DFSMShsm’s CSA use.
3. Set the MAXIMUM CSA subparameter to a value less than the “steady-state” amount available for the CSA.
4. Think of DFSMShsm as a critical application with high availability requirements to set the remaining CSALIMITs.
Setting CSALIMIT as an early warning: Setting CSALIMITs as an early warning is different. Rather than measuring the CSA usage of some other application, you measure DFSMShsm’s CSA use. This method uses DFSMShsm CSALIMITS as an alarm system that notifies the console operator if DFSMShsm’s CSA usage deviates from normal.
1. Run the system for a week or two with CSALIMIT inactive or set to a very high value.
2. Issue the QUERY CSALIMIT command periodically to determine DFSMShsm’s CSA use.
3. Identify peak periods of CSA use.
4. Select a maximum value based on the peak, multiplied by a safety margin that is within the constraints of normally available CSA.
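For example, the operator can issue the query from the system console with the MVS MODIFY command, assuming the started task is named DFSMSHSM:

F DFSMSHSM,QUERY CSALIMIT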
Selecting the value for the ACTIVE subparameter:
The ACTIVE subparameter specifies the percentage of maximum CSA available to DFSMShsm for both WAIT and NOWAIT MWEs when DFSMShsm is active. Until this limit is reached, all
MWEs are accepted. After this limit has been reached, only WAIT MWEs from batch jobs are accepted. The active limit is a percentage of the DFSMShsm maximum limit; the DFSMShsm default is 90%.
Selecting the value for the INACTIVE subparameter:
The INACTIVE subparameter specifies the percentage of CSA that is available to DFSMShsm for
NOWAIT MWEs when DFSMShsm is inactive. This prevents the CSA from being filled with NOWAIT MWEs when DFSMShsm is inactive.
Both the ACTIVE and INACTIVE CSALIMITs are expressed as percentages of the maximum amount of CSA DFSMShsm is limited to. Both specifications (ACTIVE and INACTIVE) affect the management of NOWAIT MWEs, which are ordinarily a small part of the total DFSMShsm workload.
The DFSMShsm default is 30%. When you start DFSMShsm, this limit changes to the active limit.
Selecting the value for the MWE subparameter:
The MWE subparameter specifies the number of NOWAIT MWEs from each user address space that are kept in CSA until they are completed.
The MWE subparameter can be set to 0 if DFSMShsm is solely responsible for making storage management decisions. The benefit of setting the MWE subparameter to zero (the default is four) is that the CSA an MWE consumes is freed immediately after the MWE has been copied into DFSMShsm’s address space, making room for additional MWEs in CSA. Furthermore, if DFSMShsm is solely responsible for storage management decisions, the loss of one or more NOWAIT MWEs (such as a migration copy that is not deleted) when DFSMShsm is stopped could be viewed as insignificant.
The benefit of setting the MWE subparameter to a nonzero quantity is that MWEs remain in CSA until the function completes, so if DFSMShsm stops, the function is continued after DFSMShsm is restarted. The default value of 4 is sufficient to restart almost all requests; however, a larger value provides for situations where users issue many commands. MWEs are not retained across system outages; therefore, this parameter is valuable only in situations where DFSMShsm is stopped and restarted.
Restartable MWEs are valuable when a source external to DFSMShsm is generating critical work that would be lost if DFSMShsm failed. Under such conditions, an installation would want those MWEs retained in CSA until they had completed.
The decision for the storage administrator is whether to retain NOWAIT MWEs in
CSA. No method exists to selectively discriminate between MWEs that should be
retained and other MWEs unworthy of being held in CSA. Figure 19 on page 80
shows the three storage limits in the common service area storage.
Figure 19. Overview of Common Service Area Storage. The figure depicts MVS memory with three thresholds: the maximum limit, the active limit at 90% of the maximum, and the inactive limit at 30% of the maximum.
WAIT and NOWAIT MWE considerations:
DFSMShsm keeps up to four
NOWAIT MWEs on the CSA queue for each address space. Subsequent MWEs from the same address space are deleted from CSA when the MWE is copied to the
DFSMShsm address space. When the number of MWEs per address space falls under four, MWEs are again kept in CSA until the maximum of four is reached.
Table 6 shows the types of requests and how the different limits affect these
requests.
Table 6. How Common Service Area Storage Limits Affect WAIT and NOWAIT Requests

Batch WAIT
  When DFSMShsm is active: If the current CSA storage is less than the maximum limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
  When DFSMShsm is inactive: If the current CSA storage is less than the maximum limit, the operator is required to either start DFSMShsm or cancel the request.

TSO WAIT
  When DFSMShsm is active: If the current CSA storage is less than the active limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
  When DFSMShsm is inactive: The operator is prompted to start DFSMShsm, but the request fails.

NOWAIT
  When DFSMShsm is active: If the current CSA storage is less than the active limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
  When DFSMShsm is inactive: If the current CSA storage is less than the inactive limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
A system programmer can use the SETSYS command to change any one of these values. The SETSYS command is described in z/OS DFSMShsm Storage
Administration.
Specifying the size of cell pools
DFSMShsm uses cell pools (the MVS CPOOL function) to allocate virtual storage for frequently used modules and control blocks. Cell pool storage used for control blocks is extendable, while cell pool storage used by modules is not. Using cell pools reduces DFSMShsm CPU usage and improves DFSMShsm performance. The
DFSMShsm startup procedure specifies the size (in number of cells) of five cell pools used by DFSMShsm.
DFSMShsm is configured with a default size for each cell pool. You can change these sizes by changing the CELLS keyword in the startup procedure for the
DFSMShsm primary address space. Typically the default values are acceptable.
However, if you run many concurrent DFSMShsm tasks, you may receive an
ARC0019I message, which identifies a cell pool that has run out of cells. If you receive this message, you should increase the size of the indicated cell pool by at least the number of cells specified in the message.
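For example, if message ARC0019I identifies the first cell pool, a fragment of the startup procedure might enlarge it by overriding the CELLS keyword; the values shown are illustrative:

//DFSMSHSM PROC CMD=00,
//         CELLS=(400,100,100,50,20)    ENLARGED FIRST CELL POOL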
Related reading:
v “Adjusting the size of cell pools” on page 301
v “DFSMShsm startup procedure” on page 307
v “CELLS (default = (200,100,100,50,20))” on page 311
Specifying operator intervention in DFSMShsm automatic operations
The SETSYS(REQUEST|NOREQUEST) command determines whether DFSMShsm prompts the operator before beginning its automatic functions.
If you specify SETSYS NOREQUEST . . .
Then DFSMShsm begins its automatic functions without asking the operator.

If you specify SETSYS REQUEST . . .
Then DFSMShsm prompts the operator for permission to begin its automatic functions by issuing message ARC0505D. You can write code for the MVS message exit IEAVMXIT to respond to the ARC0505D message automatically. The code could query the status of various other jobs in the system and make a decision to start or not to start the DFSMShsm automatic function, based on the workload in the system at the time.
Specifying data set serialization
When DFSMShsm is backing up or migrating data sets, it must prevent those data sets from being changed. It does this by serialization. Serialization is the process of controlling access to a resource to protect the integrity of the resource. DFSMShsm serialization is determined by the SETSYS DFHSMDATASETSERIALIZATION |
USERDATASETSERIALIZATION command.
Note:
In DFSMS/MVS Version 1 Release 5, the incremental backup function has been restructured in order to improve the performance of that function. The
SETSYS DFHSMDATASETSERIALIZATION command disables that improvement.
Only use the SETSYS DFHSMDATASETSERIALIZATION command if your environment requires it. Otherwise, it is recommended that you use the SETSYS
USERDATASETSERIALIZATION command.
If you specify SETSYS DFHSMDATASETSERIALIZATION . . .
Then DFSMShsm issues a RESERVE command that prevents other processing units from accessing the volume while DFSMShsm is copying a data set during volume migration. To prevent system interlock, DFSMShsm releases the reserve on the volume to update the control data sets and the catalog. After the control data sets have been updated, DFSMShsm reads the data set VTOC entry for the data set that was migrated to ensure that no other processing unit has updated the data set while the control data sets were being updated. If the data set has not been updated, it is scratched. If the data set has been updated, DFSMShsm scratches the migration copy of the data set and again updates the control data sets and the catalog to reflect the current location of the data set. Multivolume non-VSAM data sets are not supported by this serialization option because of possible deadlock situations. For more information about volume reserve serialization, see “DFHSMDATASETSERIALIZATION”.

If you specify SETSYS USERDATASETSERIALIZATION . . .
Then serialization is maintained throughout the complete migration operation, including the scratch of the copy on the user volume. No other processing unit can update the data set while DFSMShsm is performing its operations, and no second read of the data set VTOC entry is required for checking. Also, because no volume reserve is held while copying the data set, other data sets on the volume remain accessible to users. Therefore, USERDATASETSERIALIZATION provides a performance advantage to DFSMShsm and users in those systems equipped to use it.
You may use SETSYS USERDATASETSERIALIZATION if:
v The data sets being processed are accessible only to a single z/OS image, even if you are running multiple DFSMShsm hosts in that single z/OS image, OR
v The data sets can be accessed from multiple z/OS images, and a global serialization product such as GRS is active. Such a product is required in a multiple-image environment.
Specifying the swap capability of the DFSMShsm address space
The SETSYS SWAP|NOSWAP command determines whether the DFSMShsm address space can be swapped out of real storage.
If you specify SETSYS SWAP . . .
Then the DFSMShsm address space can be swapped out of real storage.

If you specify SETSYS NOSWAP . . .
Then the DFSMShsm address space cannot be swapped out of real storage.
Guideline:
The NOSWAP option is recommended. DFSMShsm always sets the option to NOSWAP when the ABARS secondary address space is active.
In a multisystem environment, DFSMShsm also always sets the option to
NOSWAP so that cross-system coupling facility (XCF) functions are available. See
Chapter 13, “DFSMShsm in a sysplex environment,” on page 283 for the definition
of a multisystem (or a sysplex) environment.
Specifying maximum secondary address space
The SETSYS MAXABARSADDRESSSPACE (number) command specifies the maximum number of aggregate backup and recovery secondary address spaces that DFSMShsm allows to process concurrently. The SETSYS
ABARSPROCNAME(name) command specifies the name of the procedure that starts an ABARS secondary address space.
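For example, the following ARCCMDxx commands allow two concurrent ABARS secondary address spaces and name the startup procedure shown in Figure 15 (the value 2 is illustrative):

SETSYS MAXABARSADDRESSSPACE(2)
SETSYS ABARSPROCNAME(DFHSMABR)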
Defining the DFSMShsm security environment for DFSMShsm-owned data sets
The SETSYS commands control the relationship of DFSMShsm to RACF and control the way DFSMShsm prevents unauthorized access to DFSMShsm-owned data sets. You can use SETSYS commands to define the following aspects of your security environment:
v How DFSMShsm determines the user ID when RACF is not installed and active
v Whether to indicate that migration copies and backup versions of data sets are RACF protected
v How DFSMShsm protects scratched data sets
Figure 20 is an example of a typical DFSMShsm security environment.
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DFSMSHSM SECURITY ENVIRONMENT*/
/***********************************************************************/
/*
SETSYS NOACCEPTPSCBUSERID
SETSYS NOERASEONSCRATCH
SETSYS NORACFIND
/*
Figure 20. Sample SETSYS Commands to Define the Security Environment for DFSMShsm
DFSMShsm maintains the security of those data sets that are RACF protected.
DFSMShsm does not check data set security for:
v Automatic volume space management
v Automatic dump
v Automatic backup
v Automatic recall
v Operator commands entered at the system console
v Commands issued by a DFSMShsm-authorized user
DFSMShsm checks security for data sets when a user who is not
DFSMShsm-authorized issues a nonauthorized user command (HALTERDS,
HBDELETE, HMIGRATE, HDELETE, HBACKDS, HRECALL, or HRECOVER).
Security checking is not done when DFSMShsm-authorized users issue the
DFSMShsm user commands. If users are not authorized to manipulate data,
DFSMShsm does not permit them to alter the backup parameters of a data set, delete backup versions, migrate data, delete migrated data, make backup versions of data, recall data sets, or recover data sets.
Authorization checking is done for the HCANCEL and CANCEL commands.
However, the checking does not include verifying the user’s authority to access a data set. Whether a user has comprehensive or restricted command authority controls whether RACF authority checking is performed for each data set processed by the ABACKUP command. Refer to z/OS DFSMShsm Storage
Administration for more information about authorization checking during aggregate backup.
Determining batch TSO user IDs
When a TSO batch job issues a DFSMShsm-authorized command, DFSMShsm must be able to verify the authority of the TSO user ID to issue the command. For authorization checking purposes when processing batch TSO requests, DFSMShsm obtains a user ID as follows:
v If RACF is active, the user ID is taken from the access control environment element (ACEE), a RACF control block.
v If RACF is not active and the SETSYS ACCEPTPSCBUSERID command has been specified, the user ID is taken from the TSO-protected step control block (PSCB). If no user ID is present in the PSCB, the user ID is set to **BATCH*. It is the installation’s responsibility to ensure that a valid user ID is present in the PSCB.
v If RACF is not active and the SETSYS ACCEPTPSCBUSERID command has not been specified or if the default (NOACCEPTPSCBUSERID) has been specified, the user ID is set to **BATCH* for authorization checking purposes.
If you have RACF installed and active, you can specify that RACF protect resources; therefore, specify NOACCEPTPSCBUSERID. (In that environment the ACCEPTPSCBUSERID parameter has no relevance but is included for completeness.) However, if your system does not have RACF installed and active, you should use ACCEPTPSCBUSERID.
The NOACCEPTPSCBUSERID parameter specifies how DFSMShsm determines the user ID for TSO submission of DFSMShsm-authorized commands in systems that do not have RACF installed and active.
Specifying whether to indicate RACF protection of migration copies and backup versions of data sets
When DFSMShsm migrates or backs up a data set, it can indicate that the copy is protected by a RACF discrete profile. Such a data set, when its indicator is on, is called RACF-indicated. RACF indication provides protection only for data sets that are RACF-indicated on the level 0 volume and it allows only the RACF security administrator to directly access the migration and backup copies.
For a non-VSAM data set, the RACF indicator is a bit in the volume table of contents (VTOC) of the DASD volume on which the data set resides.
For a VSAM data set, the RACF indicator is a bit in the catalog record. The indicator remains with the data set even if the data set is moved to another system. However, if the data set profile fails to move or is somehow lost, a RACF security administrator must take action before anyone can access the data set.
The SETSYS RACFIND|NORACFIND command determines whether
DFSMShsm-owned data sets are RACF-indicated.
If you specify SETSYS RACFIND . . .
Then DFSMShsm sets the RACF indicator in the data set VTOC entry for migration copies and backup versions. The RACFIND option is recommended for systems that do not have an always-call environment, do not have generic profiles enabled, but do have RACF discrete data set profiles.

If you specify SETSYS NORACFIND . . .
Then DFSMShsm does not perform I/O operations to turn on the RACF indicator for migration copies and backup versions when RACF-indicated data sets are migrated and backed up to DASD.
Before specifying the SETSYS NORACFIND command, ensure that you:
v Define a generic profile for the prefixes of DFSMShsm-owned data sets
v Enable generic DATASET profiles
The preferred implementation is to create an environment in which you can specify the NORACFIND option. Generic profiles enhance DFSMShsm performance because DFSMShsm does not perform I/O operations to turn on the
RACF-indicated bit.
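A minimal sketch of that environment, assuming the DFSMShsm prefix is DFHSM (your prefixes depend on the SETSYS MIGRATEPREFIX and BACKUPPREFIX values or the UID):

SETROPTS GENERIC(DATASET)
ADDSD 'DFHSM.**' UACC(NONE)

and, in the ARCCMDxx member:

SETSYS NORACFIND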
For a discussion of RACF environments and profiles, refer to z/OS DFSMShsm
Storage Administration.
Specifying security for scratched DFSMShsm-owned DASD data sets
Some data sets are so sensitive that you must ensure that DASD residual data cannot be accessed after they have been scratched. RACF has a feature to erase the space occupied by a data set when the data set is scratched from a DASD device.
This feature, called erase-on-scratch, causes overwriting of the DASD residual data by data management when a data set is deleted.
If you specify SETSYS ERASEONSCRATCH . . .
Then erase-on-scratch processing is requested only for DFSMShsm-owned DASD data sets.
When the ERASEONSCRATCH parameter is in effect, DFSMShsm queries RACF for the erase status of the user’s data set for use with the backup version or the migration copy. If the erase status from the RACF profile is ERASE when the backup version or the migration copy is scratched, the DASD residual data is overwritten by data management. If the erase status from the RACF profile is
NOERASE when the backup version or the migration copy is scratched, the DASD residual data is not overwritten by data management.
The ERASEONSCRATCH parameter has no effect on data sets on level 0 volumes on which the RACF erase attribute is supported. The ERASEONSCRATCH parameter allows the erase attribute to be carried over to migration copies and backup versions.
Note:
Records making up a data set in a small-data-set-packing (SDSP) data set are not erased. Refer to z/OS DFSMShsm Storage Administration for information about small-data-set-packing data set security.
If you specify SETSYS NOERASEONSCRATCH . . .
Then no erase-on-scratch processing is requested for DFSMShsm-owned volumes.
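As a sketch, the erase attribute is carried in the RACF data set profile and honored by DFSMShsm for migration copies and backup versions; the profile name is hypothetical:

ALTDSD 'PAYROLL.**' ERASE

and, in the ARCCMDxx member:

SETSYS ERASEONSCRATCH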
Erase-on-scratch considerations
Before you specify the erase-on-scratch option for integrated catalog facility (ICF) cataloged VSAM data sets that have the ERASE attribute and have backup profiles, consider the following results:
v DFSMShsm copies of ICF cataloged VSAM data sets with the ERASE attribute indicated in the RACF profile are erased with the same erase-on-scratch support as for all other data sets. DFSMShsm does not migrate ICF cataloged VSAM data sets that have the ERASE attribute in the catalog record. The migration fails with a return code 99 and a reason code 2, indicating that the user can remove the ERASE attribute from the catalog record and can specify the attribute in the RACF profile to obtain DFSMShsm migration and erase-on-scratch support of the data set.
v ERASE status is obtained only from the original RACF profile. Backup profiles created by DFSMShsm (refer to z/OS DFSMShsm Storage Administration) are not checked. The original ERASE attribute is saved in the backup version (C) record at the time of backup and is checked at recovery time if the original RACF profile is missing.
v The records in an SDSP data set are not overwritten on recall even if the SETSYS ERASEONSCRATCH command has been specified. When a data set is recalled from an SDSP data set, the records are read from the control interval and returned as a data set to the level 0 volume. When migration cleanup is next performed, the VSAM erase process reformats the control interval but does not overwrite any residual data. Erase-on-scratch is effective for SDSP data sets only when the SDSP data set itself is scratched. Refer to z/OS DFSMShsm Storage Administration for a discussion of protecting small-data-set-packing data sets.
Defining data formats for DFSMShsm operations
Because DFSMShsm moves data between different device types with different device geometries, the format of data can change as it moves from one device to another.
There are three data formats for DFSMShsm operations:
v The format of the data on DFSMShsm-owned volumes
v The blocking of the data on DFSMShsm-owned DASD volumes
v The blocking of data sets that are recalled and recovered
You can control each of these format options by using SETSYS command parameters. The parameters control the data compaction option, the optimum
DASD blocking option (see “Optimum DASD blocking option” on page 90), the
use of the tape device improved data recording capability, and the conversion option. You can also use DFSMSdss dump COMPRESS for improved tape utilization. Refer to z/OS DFSMShsm Storage Administration for additional
information about invoking full-volume dump compression. Figure 21 lists sample
SETSYS commands for defining data formats.
/***********************************************************************/
/* SAMPLE DFSMSHSM DATA FORMAT DEFINITIONS */
/***********************************************************************/
/*
SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP NOTAPEBACKUP)
SETSYS COMPACTPERCENT(30)
SETSYS OBJECTNAMES(OBJECT,LINKLIB)
SETSYS SOURCENAMES(ASM,PROJECT)
SETSYS OPTIMUMDASDBLOCKING
SETSYS CONVERSION(REBLOCKTOANY)
SETSYS TAPEHARDWARECOMPACT
/*
Figure 21. Sample Data Format Definitions for a Typical DFSMShsm Environment
Data compaction option
The data compaction option can save space on migration and backup volumes by encoding each block of each data set that DFSMShsm migrates or backs up.
DFSMShsm compacts data with the Huffman Frequency Encoding compaction algorithm. The compacted output blocks can vary in size. An input block consisting of many least-used EBCDIC characters can be even longer after being encoded. If this occurs, DFSMShsm passes the original data block without compaction to the output routine.
Whether DFSMShsm compacts each block of data as the data set is backed up or migrated from a level 0 volume is determined by the SETSYS COMPACT command. Compaction or decompaction never occurs when a data set moves from one migration volume to another or from one backup volume to another. DFSMShsm does not compact data sets when they are migrated for extent reduction, are in compressed format, or during DASD conversion.
If you specify SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP NOTAPEBACKUP) . . .
Then every block of data that migrates or is backed up to DASD is a candidate for compaction.
When DFSMShsm recalls or recovers a compacted data set, DFSMShsm automatically decodes and expands the data set. DFSMShsm decompacts encoded data even if you later run with SETSYS COMPACT(NONE).
If you do not want a specific data set to be compacted during volume migration or volume backup, invoke the data set migration exit (ARCMDEXT) or the data set backup exit (ARCBDEXT) to prevent compaction of that data set. For more information about the data set migration exit and the data set backup exit, refer to
z/OS DFSMS Installation Exits.
Compaction tables
When choosing an algorithm for compacting a data set, DFSMShsm selects either the unique source or object compaction table or selects the default general compaction table. You can identify data sets that you want to compact with unique
source or object compaction tables by specifying the low-level qualifiers for those data sets when you specify the SETSYS SOURCENAMES and SETSYS
OBJECTNAMES commands.
For generation data groups, DFSMShsm uses the next-to-the-last qualifier of the data set name. DFSMShsm uses the same compaction table for all blocks in each data set. The source compaction table is designed to compact data sets that contain programming language source code. The object compaction table is designed to compact data sets containing object code and is based on an expected frequency distribution of byte values.
Compaction percentage
When compacting a data set during migration or backup, DFSMShsm keeps a running total of the number of bytes in each compacted block that is written to the migration or backup volume. DFSMShsm also keeps a running total of the number of bytes that were in the blocks before compaction. With these values, DFSMShsm determines the space savings value, expressed as a percentage.
                Total Bytes Before Compaction - Total Bytes After Compaction
Space Savings = ------------------------------------------------------------- x 100
                              Total Bytes Before Compaction
DFSMShsm uses the space savings percentage to determine if it should compact recalled or recovered data sets when they are subsequently backed up or migrated again. You specify this space saving percentage when you specify the SETSYS
COMPACTPERCENT command.
In addition, compaction must save at least one track on the DASD migration or backup volume; otherwise, the data set is not eligible for compaction when it is subsequently migrated or backed up.
Note: For SDSP data sets, DFSMShsm considers only the space savings percentage, because small-data-set packing is intended for small user data sets where the space savings is typically less than a track.
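As a worked example with hypothetical byte counts, suppose a data set contained 1,200,000 bytes before compaction and DFSMShsm wrote 780,000 bytes of compacted blocks:

  Space Savings = (1,200,000 - 780,000) / 1,200,000 x 100 = 35%

With SETSYS COMPACTPERCENT(30) in effect, the recorded 35% savings exceeds the 30% threshold, so the data set remains a candidate for compaction the next time it is migrated or backed up from a level 0 volume (provided the one-track savings requirement is also met).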
If you specify . . .
  SETSYS COMPACT(DASDMIGRATE | TAPEMIGRATE)
Then . . .
  DFSMShsm compacts each record of a data set on a level 0 volume the first time it migrates the data set. During subsequent migrations from level 0 volumes (as a result of recall), DFSMShsm performs additional compaction of the data set only if the percentage of space savings (as indicated from the original backup or migration) exceeds the value specified with the SETSYS COMPACTPERCENT command.

If you specify . . .
  SETSYS COMPACT(DASDBACKUP | TAPEBACKUP)
Then . . .
  DFSMShsm compacts each record of a data set on a level 0 volume the first time it backs up the data set. During subsequent backups (as a result of recovery), DFSMShsm performs additional compaction of the data set only if the percentage of space savings (as indicated by the original backup) exceeds the value specified with the SETSYS COMPACTPERCENT command.
DFSMShsm stores the space saving percentage in the MCDS data set (MCD record) or the BCDS data set (MCB record). If the MCD or MCB record is deleted (for example, during migration cleanup or expiration of backup versions), the previous
savings from compaction are lost and cannot affect whether DFSMShsm compacts the data set during subsequent migrations or backups.
Compaction considerations
Data sets sometimes exist on volumes in a format (compacted or uncompacted) that seems to conflict with the type of compaction specified with the SETSYS command. The following examples illustrate how this occurs.
DFSMShsm compacts data sets only when it copies them onto a
DFSMShsm-owned volume from a level 0 volume.
If you specify . . .
  SETSYS TAPEMIGRATION(ML2TAPE) and SETSYS COMPACT(DASDMIGRATE TAPEMIGRATE)
Then . . .
  DFSMShsm compacts data sets that migrate from level 0 volumes, whether they migrate to DASD or directly to migration level 2 tape. DFSMShsm retains the compacted form when it moves data sets from migration level 1 DASD to migration level 2 tape.

If you specify . . .
  SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE)
Then . . .
  DFSMShsm places both compacted and uncompacted data sets on migration level 2 tapes.

If you specify . . .
  SETSYS COMPACT(DASDMIGRATE)
Then . . .
  DFSMShsm compacts any data set migrating to migration level 1 DASD (or migration level 2 DASD, if DASD are used for ML2 volumes).

If you specify . . .
  SETSYS COMPACT(NOTAPEMIGRATE)
Then . . .
  DFSMShsm does not compact data sets that migrate from level 0 volumes directly to migration level 2 tapes. However, data sets migrating from level 1 volumes to level 2 tapes remain compacted; therefore, both compacted and uncompacted data sets can be on the tape.
Similarly, if you are not compacting data sets that migrate to DASD and you are compacting data sets that migrate directly to tape, both compacted and uncompacted data sets can migrate to level 2 tapes. The uncompacted data sets occur because the data sets are not compacted when they migrate to the migration level 1 DASD and the compaction is not changed when they later migrate to a migration level 2 tape. However, data sets migrating directly to tape are compacted.
If you specify . . .
  SETSYS TAPEMIGRATION(DIRECT)
Then . . .
  The DASDMIGRATE or NODASDMIGRATE subparameter of the SETSYS COMPACT command has no effect on DFSMShsm processing.
You can also have mixed compacted and uncompacted backup data sets and they, too, can be on either DASD or tape.
If you specify compaction for data sets backed up to DASD but no compaction for migrated data sets, any data set that migrates when it needs to be backed up is uncompacted when it is backed up from the migration volume.
Similarly, if you specify compaction for migrated data sets but no compaction for backed up data sets, a data set that migrates when it needs to be backed up
migrates in compacted form. When the data set is backed up from the migration volume, it is backed up in its compacted form even though you specified no compaction for backup.
Data sets that are backed up to DASD volumes retain their compaction characteristic when they are spilled to tape. Thus, if you are not compacting data sets backed up to tape but you are compacting data sets backed up to DASD, you can have both compacted and uncompacted data sets on the same tapes. Data sets that are compacted and backed up to tape, likewise, can share tapes with uncompacted data sets that were backed up to DASD.
Optimum DASD blocking option
Each DASD device has an optimum block size for storing the maximum
DFSMShsm data on each track. The default block size for DFSMShsm when it is storing data on its owned DASD devices is determined by the device type for each of the DFSMShsm-owned DASD devices to ensure that the maximum data is stored on each track of the device. For example, all models of 3390 DASD have the same track length, and therefore an optimum block size of 18KB (1KB equals 1024 bytes).
If you specify (not recommended) . . .
  SETSYS NOOPTIMUMDASDBLOCKING
Then . . .
  DFSMShsm uses a block size of 2KB for storing data on its owned DASD.
Data set reblocking
The purpose of reblocking is to make the most efficient use of available space.
If you specify . . .
  SETSYS CONVERSION(REBLOCKTOANY)
Then . . .
  DFSMShsm allows reblocking during recall or recovery to any device type supported by DFSMShsm, including target volumes of the same type as the source volume. REBLOCKTOANY is the only CONVERSION subparameter that DFSMSdss uses.
Defining DFSMShsm reporting and monitoring
DFSMShsm produces information that can make the storage administrator, the operator, and the system programmer aware of what is occurring in the system.
This information is provided in the form of activity logs, system console output, and entries in the System Management Facility (SMF) logs. You can specify a SETSYS command to control:
v The information that is stored in the activity logs
v The device type for the activity logs
v The messages that appear on the system console
v The type of output device for listings and reports
v Whether entries are made in the SMF logs
Figure 22 on page 91 is an example of the SETSYS commands that define a typical
DFSMShsm reporting and monitoring environment.
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE A TYPICAL DFSMSHSM REPORTING */
/* AND MONITORING ENVIRONMENT */
/***********************************************************************/
/*
SETSYS ACTLOGMSGLVL(EXCEPTIONONLY)
SETSYS ACTLOGTYPE(DASD)
SETSYS MONITOR(BACKUPCONTROLDATASET(80) -
       JOURNAL(80) -
       MIGRATIONCONTROLDATASET(80) -
       OFFLINECONTROLDATASET(80) -
       NOSPACE NOSTARTUP NOVOLUME)
SETSYS SYSOUT(A 1)
SETSYS SMF
/*
Figure 22. Sample Reporting and Monitoring Environment Definition for Typical DFSMShsm Environment
The activity logs are discussed in detail in Chapter 3, “DFSMShsm data sets,” on page 9.
Controlling messages that appear on the system console
You can control the types of messages that appear at the system console by selecting the options for the SETSYS MONITOR command.
If you specify . . .
  SETSYS MONITOR(MIGRATIONCONTROLDATASET(threshold))
  SETSYS MONITOR(BACKUPCONTROLDATASET(threshold))
  SETSYS MONITOR(OFFLINECONTROLDATASET(threshold))
  SETSYS MONITOR(JOURNAL(threshold))
Then . . .
  DFSMShsm notifies the operator when a control data set or the journal is becoming full. You specify the threshold (percentage) of the allocated data set space that triggers a message.

If you specify . . .
  SETSYS MONITOR(NOSPACE)
Then . . .
  DFSMShsm does not issue volume space usage messages.

If you specify . . .
  SETSYS MONITOR(NOSTARTUP)
Then . . .
  DFSMShsm does not issue informational messages for startup progress.

If you specify . . .
  SETSYS MONITOR(NOVOLUME)
Then . . .
  DFSMShsm does not issue messages about data set activity on volumes it is processing.
For more information about the SETSYS command, see z/OS DFSMShsm Storage
Administration.
Controlling the output device for listings and reports
The SYSOUT parameter controls where lists and reports are printed if the command that causes the list or report does not specify where it is to be printed.
The default for this parameter is SYSOUT(A 1).
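For example, this sketch routes DFSMShsm listings and reports to SYSOUT class B with two copies; both the class and the copy count are illustrative values:

SETSYS SYSOUT(B 2)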
Controlling entries for the SMF logs
You determine if DFSMShsm writes System Management Facility (SMF) records to the SYS1.MANX and SYS1.MANY system data sets when you specify the SETSYS
SMF or SETSYS NOSMF commands.
If you specify . . .
  SETSYS SMF
Then . . .
  DFSMShsm writes daily statistics records, function statistics records, and volume statistics records to the SYS1.MANX and SYS1.MANY system data sets.

If you specify . . .
  SETSYS NOSMF
Then . . .
  DFSMShsm does not write daily statistics records (DSRs), function statistics records (FSRs), or volume statistics records (VSRs) to the system data sets. For the formats of the FSR, DSR, and VSR records, see z/OS DFSMShsm Diagnosis.
Defining the tape environment
Chapter 10, “Implementing DFSMShsm tape environments,” on page 189, contains
information about setting up your tape environment, including discussions about
SMS-managed tape libraries, tape management policies, device management policies, and performance management policies.
Defining the installation exits that DFSMShsm invokes
You determine the installation exits that DFSMShsm invokes when you specify the SETSYS EXITON or SETSYS EXITOFF commands. The installation exits can be dynamically loaded at startup by specifying them in your ARCCMDxx member in a PARMLIB.
Note: Examples of the DFSMShsm installation exits can be found in SYS1.SAMPLIB.
If you specify . . .
  SETSYS EXITON(exit,exit,exit)
Then . . .
  The specified installation exits are immediately loaded and activated.

If you specify . . .
  SETSYS EXITOFF(exit,exit,exit)
Then . . .
  The specified installation exits are immediately disabled and the storage is freed.

If you specify . . .
  SETSYS EXITOFF(exit1), modify and link-edit exit1, and then specify SETSYS EXITON(exit1)
Then . . .
  DFSMShsm replaces the original exit1 with the newly modified exit1.
z/OS DFSMS Installation Exits describes the installation exits and what each exit accomplishes.
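For instance, because SETSYS commands can also be issued through the MVS MODIFY command, an exit can be refreshed without restarting DFSMShsm. A sketch, assuming the started task is named DFSMSHSM and the ARCMDEXT exit (identifier MD) has just been re-link-edited:

F DFSMSHSM,SETSYS EXITOFF(MD)
F DFSMSHSM,SETSYS EXITON(MD)

The EXITOFF command frees the old copy of the module; the subsequent EXITON loads and activates the newly link-edited copy.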
Controlling DFSMShsm control data set recoverability
The DFSMShsm journal data set records any activity that occurs to the DFSMShsm control data sets. By maintaining a journal, you ensure that damaged control data sets can be recovered by processing the journal against the latest backup copies of the control data sets.
If you specify . . .
  SETSYS JOURNAL(RECOVERY)
Then . . .
  DFSMShsm waits until the journal entry has been written into the journal before it updates the control data sets and continues processing.

If you specify . . .
  SETSYS JOURNAL(SPEED)
Then . . .
  DFSMShsm continues with its processing as soon as the journaling request has been added to the journaling queue. (Not recommended.)
For examples of data loss and recovery situations, refer to z/OS DFSMShsm Storage
Administration.
Defining migration level 1 volumes to DFSMShsm
Whether you are implementing space management or availability management, you need migration level 1 volumes. Migration processing requires migration level
1 volumes as targets for data set migration. Backup processing requires migration level 1 volumes to store incremental backup and dump VTOC copy data sets. They may also be used as intermediate storage for data sets that are backed up by data set command backup.
Fast Replication backup requires migration level 1 volumes to store Catalog
Information Data Sets. They may also be used as intermediate storage for data sets that are backed up by data set command backup.
Ensure that you include the ADDVOL command specifications for migration level
1 volumes in the ARCCMDxx member located in a PARMLIB, so that DFSMShsm recognizes the volumes at each startup. If ADDVOL commands for migration level
1 volumes are not in the ARCCMDxx member, DFSMShsm does not recognize that they are available unless you issue an ADDVOL command at the terminal for each
migration level 1 volume. Figure 23 shows the sample ADDVOL commands for
adding migration level 1 volumes to DFSMShsm control.
/***********************************************************************/
/* SAMPLE ADDVOL COMMANDS FOR ADDING MIGRATION LEVEL 1 VOLUMES TO      */
/* DFSMSHSM CONTROL                                                    */
/***********************************************************************/
/*
ADDVOL ML1001 UNIT(3390) -
MIGRATION(MIGRATIONLEVEL1 -
SMALLDATASETPACKING) THRESHOLD(90)
ADDVOL ML1002 UNIT(3390) -
MIGRATION(MIGRATIONLEVEL1 -
SMALLDATASETPACKING) THRESHOLD(90)
ADDVOL ML1003 UNIT(3390) -
MIGRATION(MIGRATIONLEVEL1 -
NOSMALLDATASETPACKING) THRESHOLD(90)
Figure 23. Example ADDVOL Commands for Adding Migration Level 1 Volumes to DFSMShsm Control
Parameters for the migration level 1 ADDVOL commands
The following example shows the parameters that are used with the MIGRATIONLEVEL1 parameter:
ADDVOL
▌1▐ML1001 ▌2▐UNIT(3390) -
▌3▐MIGRATION(MIGRATIONLEVEL1 -
SMALLDATASETPACKING) THRESHOLD(90)
v ▌1▐ - The first parameter of the ADDVOL command is a positional required parameter that specifies the volume serial number of the volume being added to DFSMShsm. In Figure 23, migration level 1 volumes are identified by volume serial numbers that start with ML1.
v ▌2▐ - The second parameter of the ADDVOL command is a required parameter that specifies the unit type of the volume. For our example, all ML1 volumes are 3390s.
v ▌3▐ - The third parameter is a required parameter that specifies that the volume is being added as a migration volume. This parameter has subparameters that
specify the kind of migration volume and the presence of a small-data-set-packing (SDSP) data set on the volume. If you specify SMALLDATASETPACKING, the volume must contain a VSAM key-sequenced data set (the SDSP data set); a hedged allocation sketch appears at the end of this topic. The number of SDSP data sets defined must be at least equal to the maximum number of concurrent volume migration tasks that could be executing in your complex. Additional SDSPs are recommended to support recall and ABARS processing, and to allow for SDSPs that become full during migration.
v The THRESHOLD parameter in our ADDVOL command examples specifies the level of occupancy that signals the system to migrate data sets from migration level 1 volumes to migration level 2 volumes. If you want DFSMShsm to do automatic migration from level 1 to level 2 volumes, you must specify the occupancy thresholds for the migration level 1 volumes.
Note:
1. Automatic secondary space management determines whether to perform level 1 to level 2 migration by checking whether any migration level 1 volume has an occupancy that is equal to or greater than its threshold. If so, DFSMShsm migrates all eligible data sets from all migration level 1 volumes to migration level 2 volumes.
2. If the volume is being defined as a migration level 1 OVERFLOW volume, the threshold parameter is ignored. Use the SETSYS ML1OVERFLOW(THRESHOLD(nn)) command to specify the threshold for the entire OVERFLOW volume pool.
3. If you are adding volumes in an HSMplex environment, and those volumes will be managed by each host in the HSMplex, be sure to issue the ADDVOL command on each system that will manage the volume.
For more information about level 1 to level 2 migration, see z/OS DFSMShsm
Storage Administration.
In specifying the threshold parameter, you want to maintain equal free space on all of your migration level 1 volumes. If you use different device types for migration level 1 volumes, you must calculate the appropriate percentages that will make the same amount of free space available on each device type. For example, if you have a mixture of 3390 models 1 and 2, you might specify 88% for model 1 (92M) and
94% for model 2 (96M).
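The following IDCAMS sketch allocates the SDSP data set that must exist on a volume added with SMALLDATASETPACKING. SDSP data sets are named uid.SMALLDS.Vvolser; here the prefix HSM, the volume serial ML1001, and the space quantity are assumptions, and the KEYS and RECORDSIZE values should be verified against the ALLOSDSP member that the starter set provides:

//SDSPA    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    /* SDSP DATA SET NAME MUST FOLLOW uid.SMALLDS.Vvolser */
    DEFINE CLUSTER -
      (NAME(HSM.SMALLDS.VML1001) -
       VOLUMES(ML1001) -
       CYLINDERS(100) -
       KEYS(45 0) -
       RECORDSIZE(2093 2093) -
       FREESPACE(0 0) -
       INDEXED)
/*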
Using migration level 1 OVERFLOW volumes for migration and backup
An optional OVERFLOW parameter of the ADDVOL command lets you specify that OVERFLOW volumes are to be considered for backup or migration to migration level 1 when both of the following are true:
v The file you are migrating or backing up is larger than a given size, as specified on the SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) command
v DFSMShsm cannot allocate enough space on a NOOVERFLOW volume by selecting either the least used volume or the volume with the most free space
Note that DFSMShsm will use OVERFLOW ML1 volumes for the following backup functions:
v Inline backup
v HBACKDS and BACKDS commands
v ARCHBACK macro for data sets larger than dssize K bytes
You can specify the OVERFLOW parameter as follows:
ADDVOL ML1003 UNIT(3390) -
MIGRATION(MIGRATIONLEVEL1 OVERFLOW)
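As a combined sketch, the ADDVOL definition above pairs with the SETSYS ML1OVERFLOW controls shown later in this chapter; the size and threshold values here are illustrative, and the two subparameters can also be specified on separate SETSYS ML1OVERFLOW commands:

SETSYS ML1OVERFLOW(DATASETSIZE(2000000) THRESHOLD(80))
ADDVOL ML1003 UNIT(3390) -
  MIGRATION(MIGRATIONLEVEL1 OVERFLOW)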
Related reading: For more information about the ADDVOL command and the SETSYS command, see z/OS DFSMShsm Storage Administration.
User or system data on migration level 1 volumes
Migration level 1 volumes, once defined to DFSMShsm, are known and used as
DFSMShsm-owned volumes. That expression implies, among other things, that when DFSMShsm starts using such a volume, it determines the space available and creates its own record of that free space. For reasons of performance, DFSMShsm maintains that record as it creates and deletes its own migration copies, backup versions, and so on; DFSMShsm does not keep scanning the VTOC to see what
other data sets may have been added or deleted.
Restrictions: Your installation can store certain types of user or system data on migration level 1 volumes as long as you keep these restrictions in mind:
v Such data sets cannot be SMS-managed, because these volumes cannot be SMS-managed.
v Once such a data set is allocated, do not change its size while DFSMShsm is started.
v Do not request DFSMShsm to migrate or (except perhaps as part of a full dump of such a volume) back up such data sets.
Given that you maintain these restrictions, you can gain certain advantages by sharing these volumes with non-DFSMShsm data:
v A given amount of level 1 storage for DFSMShsm can be spread across more volumes, reducing volume contention.
v Because only one SDSP data set can be defined per volume, the number of such data sets can be increased.
Defining the common recall queue environment
DFSMShsm supports an HSMplex-wide common recall queue (CRQ). This CRQ balances recall workload across the HSMplex. This queue is implemented through the use of a coupling facility (CF) list structure. For an overview of the CRQ environment, refer to the z/OS DFSMShsm Storage Administration.
Updating the coupling facility resource manager policy for the common recall queue
The CRQ function requires that the HSMplex reside in a Parallel Sysplex® configuration. To fully utilize this function, allocate the list structure in a CF that supports the system-managed duplexing rebuild function. Before DFSMShsm can use the common recall queue, the active coupling facility resource management (CFRM) policy must be updated to include the CRQ definition. You can use the following information (see Table 7 on page 96) to define the CRQ and update the CFRM policy:
Table 7. Information that can be used to define the CRQ and update the CFRM policy

Requirements:
v The structure name that must be defined in the active CFRM policy is 'SYSARC_'basename'_RCL', where basename is the base name specified in SETSYS COMMONQUEUE(RECALL(CONNECT(basename))). basename must be exactly five characters.
v The minimum CFLEVEL is eight. If the installation indicates that the structure must be duplexed, the system attempts to allocate the structure on a CF with a minimum of CFLEVEL=12.
v DFSMShsm does not specify size parameters when it connects to the CRQ. Size parameters must be specified in the CFRM policy, based on the expected number of concurrent recalls.
v Because the list structure implements locks, the CF maintains an additional cross-system coupling facility (XCF) group in relation to this structure. Make sure that your XCF configuration can support the addition of another group.

Recommendations:
v When implementing a CRQ environment, all hosts sharing a unique queue should be within the same SMSplex, have access to the same catalogs and DASD, and have common RACF configurations. The system administrator must ensure that all hosts connected to the CRQ are capable of recalling any migrated data set that originated from any other host that is connected to the same CRQ.
v Nonvolatility is recommended, but not required. For error recovery purposes, each host maintains a local copy of each recall MWE that it places on the CRQ.
v CF failure independence is strongly recommended. For example, do not allocate the CRQ in a CF that is on the same processor as a z/OS image running a DFSMShsm host that uses that CRQ.

Useful information:
v Each CRQ is contained within a single list structure.
v A host can only connect to one CRQ at a time.
v DFSMShsm supports the alter function, including RATIO alters, and system-managed rebuilds. DFSMShsm does not support user-managed rebuilds. Note: System-managed rebuilds do not support the REBUILDPERCENT option.
v DFSMShsm supports the system-managed duplexing rebuild function.
v The CRQ is a persistent structure with nonpersistent connections. The structure remains allocated even if all connections have been deleted.
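For illustration, a CFRM policy fragment, as coded for the IXCMIAPU administrative data utility, that defines a CRQ structure might look like the following sketch. The base name PLEX1, the coupling facility names, and the duplexing choice are assumptions; the sizes follow the guideline in the next topic:

STRUCTURE NAME(SYSARC_PLEX1_RCL)
          SIZE(10240)
          INITSIZE(5120)
          PREFLIST(CF01,CF02)
          DUPLEX(ALLOWED)

After the updated policy is activated, each host connects with SETSYS COMMONQUEUE(RECALL(CONNECT(PLEX1))).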
Determining the structure size of the common recall queue
The common recall queue needs to be sized such that it can contain the maximum number of concurrent recalls that may occur. Due to the dynamic nature of recall activity, there is no exact way to determine what the maximum number of concurrent recall requests may be.
Guideline: Use an INITSIZE value of 5120KB and a SIZE value of 10240KB.
A structure of this initial size is large enough to manage up to 3900 concurrent recall requests with growth up to 8400 concurrent recalls. These values should be
large enough for most environments. Table 8 shows the maximum number of
recalls that may be contained in structures of various sizes. No structure of less than 2560KB should be used.
Note:
The maximum number of recall requests that may be contained within a structure is dependent on the number of requests that are from a unique Migration
Level 2 tape. The figures shown in Table 8 are based on 33% of the recall requests
requiring a unique ML2 tape. If fewer tapes are needed, then the structure will be able to contain more recall requests than is indicated.
Table 8. Maximum Concurrent Recalls

Structure Size     Maximum Concurrent Recalls
2560KB             1700
5120KB             3900
10240KB            8400
15360KB            12900
The utilization percentage of the common recall queue will be low most of the time, because the average number of concurrent requests is much lower than the maximum number of concurrent requests. To be prepared for a high volume of unexpected recall activity, the common recall queue structure size must be larger than the size needed to contain the average number of recall requests.
Altering the list structure size
DFSMShsm monitors how full a list structure has become. When the structure becomes 95% full, DFSMShsm no longer places recall requests onto the CRQ, but routes all new requests to the local queues. Routing recall requests to the CRQ resumes once the structure drops below 85% full. The structure is not allowed to become 100% full so that requests that are in-process can be moved between lists within the structure without failure. When the structure reaches maximum capacity, the storage administrator can increase the size by altering the structure to a larger size or by rebuilding it. A rebuild must be done if the maximum size has already been reached. (The maximum size limit specified in the CFRM policy must be increased before the structure is rebuilt). You can use the CF services structure full monitoring feature to monitor the structure utilization of the common recall queue.
How to alter the common recall queue list structure size
Initiate alter processing using the SETXCF START,ALTER command. Altering is a nondisruptive method for changing the size of the list structure. Alter processing can increase the size of the structure up to the maximum size specified in the
CFRM policy. The SETXCF START,ALTER command can also decrease the size of a structure to the specified MINSIZE or default to the value specified in the CFRM policy.
How to rebuild the common recall queue list structure size
DFSMShsm supports the system-managed duplexing rebuild function. DFSMShsm does not support user-managed rebuilds.
Note: The coupling facility auto rebuild function does not support the use of REBUILDPERCENT. If the system-managed rebuild function is not available because the structure was not allocated on a coupling facility that supports it, and a user needs to increase the maximum size of the structure or remedy a number of lost connections, the user must reallocate the structure.
Perform the following steps to reallocate the structure:
1. Disconnect all of the hosts from the structure by using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command.
2. Deallocate the structure by using the SETXCF FORCE command.
3. Reallocate the structure by using the SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command.
Rule: If the intent of the rebuild is to increase the maximum structure size, you must update the CFRM policy before you perform these steps.
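A concrete sketch of the reallocation sequence, again assuming the base name PLEX1 and structure name SYSARC_PLEX1_RCL (issue the SETSYS commands on every connected host):

SETSYS COMMONQUEUE(RECALL(DISCONNECT))
SETXCF FORCE,STRUCTURE,STRNAME=SYSARC_PLEX1_RCL
SETSYS COMMONQUEUE(RECALL(CONNECT(PLEX1)))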
Defining the common dump queue environment
DFSMShsm supports an HSMplex-wide common dump queue (CDQ). With CDQ, dump requests are distributed to a group of hosts for processing. This increases the number of available tasks to perform the work and improves throughput by distributing the workload rather than concentrating it on a single host's address space. For an overview of the CDQ environment and a description of how to define it, refer to Common dump queue in z/OS DFSMShsm Storage Administration.
Defining the common recover queue environment
DFSMShsm supports an HSMplex-wide common recover queue (CVQ). With CVQ, volume restore requests are distributed to a group of hosts for processing. This increases the number of available tasks to perform the work and improves throughput by distributing the workload instead of concentrating it on a single host's address space. For an overview of the CVQ environment and how to define it, refer to Common recover queue in z/OS DFSMShsm Storage Administration.
Defining common SETSYS commands
The following example shows typical SETSYS commands for an example system.
Each of the parameters in the following example can be treated as a separate SETSYS command, with the cumulative effect of a single SETSYS command. This set of SETSYS commands becomes part of the ARCCMDxx member pointed to by the DFSMShsm startup procedure.
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE MVS ENVIRONMENT */
/***********************************************************************/
/*
SETSYS JES2
SETSYS CSALIMITS(MAXIMUM(100) ACTIVE(90) INACTIVE(30) MWE(4))
SETSYS NOREQUEST
SETSYS USERDATASETSERIALIZATION
SETSYS NOSWAP
SETSYS MAXABARSADDRESSSPACE(1)
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DFSMSHSM SECURITY */
/***********************************************************************/
/*
SETSYS NOACCEPTPSCBUSERID
SETSYS NOERASEONSCRATCH
SETSYS NORACFIND
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DATA FORMATS */
/***********************************************************************/
/*
SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP NOTAPEBACKUP)
SETSYS COMPACTPERCENT(30)
SETSYS OBJECTNAMES(OBJECT,LINKLIB)
SETSYS SOURCENAMES(ASM,PROJECT)
SETSYS OPTIMUMDASDBLOCKING
SETSYS CONVERSION(REBLOCKTOANY)
SETSYS TAPEHARDWARECOMPACT
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE DFSMSHSM REPORTING AND          */
/* MONITORING ENVIRONMENT                                             */
/***********************************************************************/
/*
SETSYS ACTLOGMSGLVL(EXCEPTIONONLY)
SETSYS ACTLOGTYPE(DASD)
SETSYS MONITOR(BACKUPCONTROLDATASET(80) -
       JOURNAL(80) -
       MIGRATIONCONTROLDATASET(80) -
       OFFLINECONTROLDATASET(80) -
       NOSPACE NOSTARTUP NOVOLUME)
SETSYS SYSOUT(A 1)
SETSYS SMF
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE EXITS DFSMSHSM USES */
/***********************************************************************/
/*
SETSYS EXITON(CD)
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DETERMINE DFSMSHSM RECOVERABILITY */
/***********************************************************************/
/*
SETSYS JOURNAL(RECOVERY)
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMAND TO CONNECT TO A COMMON RECALL QUEUE LIST     */
/* STRUCTURE. TEST1 IS THE BASE NAME OF THE CRQ LIST STRUCTURE        */
/***********************************************************************/
/*
SETSYS COMMONQUEUE(RECALL(CONNECT(TEST1)))
/***********************************************************************/
/* SAMPLE SETSYS COMMAND THAT SPECIFIES DATA SET SIZE AT WHICH AN     */
/* OVERFLOW VOLUME IS PREFERRED FOR MIGRATION OR BACKUP               */
/***********************************************************************/
/*
SETSYS ML1OVERFLOW(DATASETSIZE(2000000))
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMAND THAT SPECIFIES THE THRESHOLD OF ML1 OVERFLOW */
/* VOLUME POOL SPACE FILLED BEFORE MIGRATION TO ML2 DURING SECONDARY */
/* SPACE MANAGEMENT */
/***********************************************************************/
/*
SETSYS ML1OVERFLOW(THRESHOLD(80))
/*
advertisement
* Your assessment is very important for improving the workof artificial intelligence, which forms the content of this project
Related manuals
advertisement
Table of contents
- 3 Contents
- 9 Figures
- 11 Tables
- 13 About this document
- 13 Who should read this document
- 13 Major divisions of this document
- 14 Required product knowledge
- 14 z/OS information
- 15 How to send your comments to IBM
- 15 If you have a technical problem
- 17 Summary of changes
- 17 Summary of changes for z/OS Version 2 Release 3 (V2R3)
- 17 Summary of changes for z/OS Version 2 Release 2 (V2R2)
- 17 Summary of changes for z/OS Version 2 Release 1 (V2R1) as updated September 2014
- 18 z/OS Version 2 Release 1 summary of changes
- 19 Part 1. Implementing DFSMShsm
- 21 Chapter 1. Introduction
- 21 How to implement DFSMShsm
- 22 Starter set
- 22 Starter set adaptation jobs
- 23 Functional verification procedure (FVP)
- 25 Chapter 2. Installation verification procedure
- 27 Chapter 3. DFSMShsm data sets
- 27 Control and journal data sets
- 28 Preventing interlock of control data sets
- 28 Migration control data set
- 29 Migration control data set size
- 31 Backup control data set
- 32 Backup control data set size
- 34 Offline control data set
- 35 Offline control data set size
- 37 Enabling the use of extended TTOCs
- 37 Journal data set
- 38 Journal data set size
- 39 Migrating the journal to a large format data set
- 39 Updating IFGPSEDI for the enhanced data integrity function
- 40 Specifying the names of the backup data sets
- 40 Defining the backup environment for control data sets
- 41 Steps for defining the CDS and journal backup environment
- 43 Improving performance of CDS and journal backup
- 44 Monitoring the control and journal data sets
- 45 Reorganizing the control data sets
- 45 Control data set and journal data set backup copies
- 46 Storage guidance for control data set and journal data set backup copies
- 46 Size of the control data set and journal data set backup copies
- 47 Considerations for DFSMShsm control data sets and the journal
- 47 Backup considerations for the control data sets and the journal
- 47 Migration considerations for the control data sets and the journal
- 48 Volume allocation considerations for the control data sets and the journal
- 50 RACF considerations for the control data sets and the journal
- 50 Translating resource names in a GRSplex
- 50 Determining the CDS serialization technique
- 50 Using VSAM record level sharing
- 50 Requirements for CDS RLS serialization
- 51 Multiple host considerations
- 51 Required changes to the CDS before accessing RLS
- 51 VSAM RLS coupling facility structures recommendations
- 51 Required CDS version backup parameters
- 51 Invoking CDS RLS serialization
- 52 Using multicluster control data sets
- 52 Considerations for using multicluster control data sets
- 53 Converting a multicluster control data set from VSAM key range to non-key-range
- 53 Determining key ranges for a multicluster control data set
- 54 Multicluster control data set conversion
- 56 Changing the number of clusters of a multicluster control data set
- 56 Changing only the key boundaries of a multicluster control data set
- 57 Updating the startup procedure for multicluster control data sets
- 57 Updating the DCOLLECT JCL for multicluster control data sets
- 58 Using VSAM extended addressability capabilities
- 59 Converting control data sets to extended addressability with either CDSQ or CDSR serialization
- 59 DFSMShsm problem determination aid facility
- 59 Problem determination aid log data sets
- 60 Planning to use the problem determination aid (PDA) facility
- 60 Determining how long to keep trace information
- 60 Problem determination aid (PDA) log data set size requirements
- 61 Controlling the problem determination aid (PDA) facility
- 61 Allocating the problem determination aid (PDA) log data sets
- 62 Printing the problem determination aid (PDA) log data sets
- 62 DFSMShsm logs
- 63 DFSMShsm log data set
- 64 DFSMShsm log size
- 64 Optionally disabling logging
- 64 Printing the DFSMShsm log
- 64 Edit log data sets
- 65 Printing the edit log
- 65 Activity log data sets
- 66 Activity log information for the Storage Administrator
- 66 Activity log information for the System Programmer
- 66 Controlling the amount of information written to the activity logs
- 68 Controlling the device type for the activity logs
- 69 Considerations for creating log data sets
- 69 DFSMShsm small-data-set-packing data set facility
- 70 Preparing to implement small-data-set-packing data sets
- 70 Defining the size of a small user data set
- 70 Allocating SDSP data sets
- 70 Specifying the SDSP parameter on the ADDVOL statement
- 70 Data mover considerations for SDSP data sets
- 71 VSAM considerations for SDSP data sets
- 72 Multitasking considerations for SDSP data sets
- 76 System data sets
- 76 Data set with DDNAME of MSYSIN
- 76 Data set with DDNAME of MSYSOUT
- 77 Chapter 4. User data sets
- 77 User data sets supported by DFSMShsm
- 77 Physical sequential data sets
- 78 Physical sequential data sets and OSAM
- 78 Direct access data sets and OSAM
- 78 Direct access data sets
- 79 Hierarchical file system data sets
- 79 zFS data sets
- 79 Exceptions to the standard MVS access methods support
- 79 Size limit on DFSMShsm DASD copies
- 79 Supported data set types
- 79 Data set type support for space management functions
- 82 Data set type support for availability management functions
- 85 Chapter 5. Specifying commands that define your DFSMShsm environment
- 85 Defining the DFSMShsm startup environment
- 85 Allocating DFSMShsm data sets
- 86 Establishing the DFSMShsm startup procedures
- 86 Primary address space startup procedure
- 88 Secondary address space startup procedure
- 89 Establishing the START command in the COMMNDnn member
- 89 Establishing SMS-related conditions in storage groups and management classes
- 89 Writing an ACS routine that directs DFSMShsm-owned data sets to non-SMS-managed storage
- 90 Directing DFSMShsm temporary tape data sets to tape
- 91 Establishing the ARCCMDxx member of a PARMLIB
- 91 Modifying the ARCCMDxx member
- 91 ARCCMDxx member for the starter set
- 92 Issuing DFSMShsm commands
- 92 Implementing new DFSMShsm ARCCMDxx functions
- 92 Defining storage administrators to DFSMShsm
- 92 The RACF FACILITY class environment
- 92 The DFSMShsm AUTH command environment
- 93 Defining the DFSMShsm MVS environment
- 94 Specifying the job entry subsystem
- 94 JES3 considerations
- 95 Specifying the amount of common service area storage
- 96 Selecting values for the SETSYS CSA command subparameters
- 99 Specifying the size of cell pools
- 99 Specifying operator intervention in DFSMShsm automatic operations
- 99 Specifying data set serialization
- 100 Specifying the swap capability of the DFSMShsm address space
- 101 Specifying maximum secondary address space
- 101 Defining the DFSMShsm security environment for DFSMShsm-owned data sets
- 102 Determining batch TSO user IDs
- 102 Specifying whether to indicate RACF protection of migration copies and backup versions of data sets
- 103 Specifying security for scratched DFSMShsm-owned DASD data sets
- 104 Erase-on-scratch considerations
- 104 Defining data formats for DFSMShsm operations
- 105 Data compaction option
- 105 Compaction tables
- 106 Compaction percentage
- 107 Compaction considerations
- 108 Optimum DASD blocking option
- 108 Data Set Reblocking
- 108 Defining DFSMShsm reporting and monitoring
- 109 Controlling messages that appear on the system console
- 109 Controlling the output device for listings and reports
- 109 Controlling entries for the SMF logs
- 110 Defining the tape environment
- 110 Defining the installation exits that DFSMShsm invokes
- 110 Controlling DFSMShsm control data set recoverability
- 111 Defining migration level 1 volumes to DFSMShsm
- 111 Parameters for the migration level 1 ADDVOL commands
- 112 Using migration level 1 OVERFLOW volumes for migration and backup
- 113 User or system data on migration level 1 volumes
- 113 Defining the common recall queue environment
- 113 Updating the coupling facility resource manager policy for the common recall queue
- 114 Determining the structure size of the common recall queue
- 115 Altering the list structure size
- 115 How to alter the common recall queue list structure size
- 115 How to rebuild the common recall queue list structure size
- 116 Defining the common dump queue environment
- 116 Defining the common recover queue environment
- 116 Defining common SETSYS commands
- 119 Chapter 6. DFSMShsm starter set
- 119 Basic starter set jobs
- 119 Starter set objectives
- 119 Starter set configuration considerations
- 119 Setup requirements
- 120 Steps for running the starter set
- 124 HSMSTPDS
- 124 Member HSM.SAMPLE.CNTL
- 125 STARTER
- 127 Starter set example
- 142 Adapting and using the starter set
- 143 ARCCMD90
- 144 ARCCMD01
- 145 ARCCMD91
- 146 HSMHELP
- 159 HSMLOG
- 160 HSMEDIT
- 160 ALLOCBK1
- 163 ALLOSDSP
- 165 HSMPRESS
- 169 Chapter 7. DFSMShsm sample tools
- 169 ARCTOOLS job and sample tool members
- 171 Chapter 8. Functional verification procedure
- 171 Preparing to run the functional verification procedure
- 171 Steps for running the functional verification procedure
- 173 FVP parameters
- 173 Small-data-set-packing parameters
- 174 VSAM data set migration parameter
- 174 Tape support parameter
- 174 Dump function parameter
- 174 Jobs and job steps that comprise the functional verification procedure
- 174 Cleanup job
- 175 Job step 1: Allocate a non-VSAM data set and a data set to prime VSAM data sets
- 176 Job step SDSPA: Create small-data-set-packing (SDSP) data set
- 177 Job step 2: Print the data sets created in STEP 1 of job ?AUTHIDA
- 178 Job step 3 (JOB B): Performing data set backup, migration, and recall
- 179 Job step 4: IDCAMS creates two VSAM data sets
- 180 Job step 5 (JOB C): Performing backup, migration, and recovery
- 181 Job steps 6, 7, and 8: Deleting and re-creating data sets
- 182 Job step 9 (JOB D): Recovering data sets
- 183 Job steps 10 and 11 (JOB E): Listing recovered data sets and recalling with JCL
- 184 Job step 12 (JOB F): Tape support
- 185 Job step 13 (JOB G): Dump function
- 185 FVPCLEAN job
- 187 Chapter 9. Authorizing and protecting DFSMShsm commands and resources
- 187 Identifying DFSMShsm to RACF
- 188 Creating user IDs
- 188 Specifying a RACF user ID for DFSMShsm
- 188 Specifying a RACF user ID for ABARS
- 188 Associating a user ID with a started task
- 188 Method 1–RACF started procedures table (ICHRIN03)
- 190 Method 2–RACF STARTED class
- 190 Configuring DFSMShsm to invoke DFSMSdss as a started task
- 191 Identifying DFSMShsm to z/OS UNIX System Services
- 191 Authorizing and protecting DFSMShsm commands in the RACF FACILITY class environment
- 191 RACF FACILITY class profiles for DFSMShsm
- 192 Protecting DFSMShsm commands with RACF FACILITY class profiles
- 192 Protecting DFSMShsm storage administrator commands with RACF FACILITY class profiles
- 194 Protecting DFSMShsm user commands with RACF FACILITY class profiles
- 194 Protecting DFSMShsm user macros with RACF FACILITY class profiles
- 195 Creating the RACF FACILITY class profiles for ABARS
- 195 ABARS comprehensive RACF FACILITY class authorization
- 196 ABARS restricted RACF FACILITY class authorization
- 197 Creating RACF FACILITY class profiles for concurrent copy
- 197 Activating the RACF FACILITY class profiles
- 198 Authorizing commands issued by an operator
- 198 Protecting DFSMShsm resources
- 198 Protecting DFSMShsm data sets
- 199 Protecting DFSMShsm activity logs
- 199 Protecting DFSMShsm tapes
- 199 Defining RACF TAPEVOL resource classes
- 201 Defining the RACF environment to DFSMShsm
- 202 User-protecting tapes with RACF
- 203 Protecting scratched DFSMShsm-owned data sets
- 203 Authorizing users to access DFSMShsm resources
- 204 Protecting DFSMShsm commands in a nonsecurity environment
- 204 Authorizing and protecting DFSMShsm resources in a nonsecurity environment
- 207 Chapter 10. Implementing DFSMShsm tape environments
- 208 Tape device naming conventions
- 210 SMS-managed tape libraries
- 210 Steps for defining an SMS-managed tape library
- 211 Determine which functions to process in a tape library
- 211 Set up a global scratch pool
- 211 Define a storage class
- 211 Define a data class
- 212 Define a storage group
- 212 Set up the ACS routines
- 213 Define the DFSMShsm tape library environment in the ARCCMDxx PARMLIB member
- 213 Example: Defining a DFSMShsm environment for SMS-managed tape libraries
- 215 Converting to an SMS-managed tape library environment
- 216 Inserting DFSMShsm tapes into a tape library
- 216 Identifying tape library tapes
- 217 Introducing tape processing functions to the library
- 218 Scenario 1–Implementing migration processing in an automated tape library
- 219 Implementing migration processing in a manual tape library
- 219 Scenario 2–Implementing backup processing in an automated tape library
- 220 Implementing backup processing in a manual tape library
- 221 Scenario 3–Implementing control data set backup in an automated tape library
- 221 Implementing control data set backup in a manual tape library
- 222 Defining the tape management policies for your site
- 224 DFSMShsm tape media
- 224 Single file cartridge-type tapes
- 224 Multiple file reel-type tapes
- 225 Obtaining empty tapes from scratch pools
- 225 Global scratch pools
- 225 Specific scratch pools
- 227 Selecting output tape
- 227 Tape hardware emulation
- 228 Initial tape selection for migration and backup tapes
- 229 Subsequent tape selection for migration and backup tapes
- 229 Initial and subsequent selection of dump tapes
- 229 Selecting a scratch pool environment
- 230 Performance tape environment with global scratch pool for library and nonlibrary environments
- 231 Tape-capacity optimization tape environment with global scratch pool for library and nonlibrary environments
- 232 Media optimization for DFSMShsm-managed nonlibrary tape environment
- 233 Performance optimization for DFSMShsm-managed nonlibrary tape environment
- 234 Implementing a recycle schedule for backup and migration tapes
- 236 When to initiate recycle processing
- 237 How long to run recycle processing
- 238 Returning empty tapes to the scratch pool
- 238 Reducing the number of partially full tapes
- 239 Protecting tapes
- 240 RACF protection
- 241 Conversion from RACF TAPEVOL to RACF DATASET class profiles
- 241 Expiration date protection
- 242 Password protection
- 242 Dump tape security considerations
- 243 Removing the security on tapes returning to the scratch pool
- 243 Communicating with the tape management system
- 243 Defining the environment for the tape management system
- 245 Data set naming conventions
- 245 Managing tapes with DFSMShsm installation exits
- 245 Defining the device management policy for your site
- 245 Tape device selection
- 246 Nonlibrary tape device selection
- 246 Library tape device selection
- 246 Restricting tape device selection
- 248 Optimizing cartridge loaders by restricting output to devices with cartridge loaders
- 248 Summary of esoteric translation results for various tape devices
- 250 Tape device conversion
- 250 Reading existing data on new device types after a device conversion in a nonlibrary environment
- 251 Specifying whether to suspend system activity for device allocations
- 251 Specifying the WAIT option
- 252 Specifying the NOWAIT option
- 253 Specifying how long to allow for tape mounts
- 253 Specifying the tape mount parameters
- 254 Implementing the performance management policies for your site
- 254 Reducing tape mounts with tape mount management
- 254 Doubling storage capacity with enhanced capacity cartridge system tape
- 255 Doubling storage capacity with extended high performance cartridge tape
- 255 Defining the environment for enhanced capacity and extended high performance cartridge system tape
- 256 Utilizing the capacity of IBM tape drives that emulate IBM 3490 tape drives
- 256 Defining the environment for utilizing the capacity of IBM tape drives that emulate IBM 3490 tape drives
- 258 IBM 3590 capacity utilization considerations
- 258 Specifying how much of a tape DFSMShsm uses
- 260 Implementing partially unattended operation with cartridge loaders in a nonlibrary environment
- 260 Defining the environment for partially unattended operation
- 262 Improving device performance with hardware compaction algorithms
- 262 Specifying compaction for non-SMS-managed tape library data
- 263 Specifying compaction for tape library data
- 263 Creating concurrent tapes for on-site and offsite storage
- 264 Duplex tape creation
- 264 Duplex tape status
- 264 Duplex tape supported functions
- 265 Considerations for duplicating backup tapes
- 265 TAPECOPY of specific backup tapes
- 265 TAPECOPY of nonspecific backup tapes
- 265 Initial device selection
- 266 Tape eligibility when output is restricted to a specific nonlibrary device type
- 266 Tape eligibility when output is not restricted to a specific nonlibrary device type
- 267 Tape eligibility when output is restricted to specific device types
- 268 Allowing DFSMShsm to back up data sets to tape
- 269 Switching data set backup tapes
- 270 Fast subsequent migration
- 271 Chapter 11. DFSMShsm in a multiple-image environment
- 272 Multiple DFSMShsm host environment configurations
- 272 Example of a multiple DFSMShsm host environment
- 273 Defining a multiple DFSMShsm host environment
- 273 Defining a primary DFSMShsm host
- 273 Defining all DFSMShsm hosts in a multiple-host environment
- 274 DFSMShsm system resources and serialization attributes in a multiple DFSMShsm host environment
- 278 Resource serialization in a multiple DFSMShsm host environment
- 278 Global resource serialization
- 279 Serialization of user data sets
- 280 Serialization of control data sets
- 280 Serialization of control data sets with global resource serialization
- 282 Serialization of control data sets without global resource serialization
- 282 Serialization of DFSMShsm functional processing
- 282 Choosing a serialization method for user data sets
- 283 DFHSMDATASETSERIALIZATION
- 283 Performance considerations
- 283 Volume reserve considerations
- 283 USERDATASETSERIALIZATION
- 283 Converting from volume reserves to global resource serialization
- 284 Setting up the GRS resource name lists
- 284 Example: DFSMShsm serialization configuration
- 285 Alternate example: DFSMShsm serialization configuration
- 286 DFSMSdss Considerations for dumping the journal volume
- 287 DFSMShsm data sets in a multiple DFSMShsm host environment
- 287 CDS considerations in a multiple DFSMShsm host environment
- 288 Preventing interlock of DFSMShsm control data sets
- 288 VSAM SHAREOPTIONS parameters for control data sets
- 289 CDS backup version considerations in a multiple DFSMShsm host environment
- 289 DASD CDS backup versions
- 290 Tape CDS backup versions
- 290 Journal considerations in a multiple DFSMShsm host environment
- 290 Monitoring the control and journal data sets in a multiple DFSMShsm host environment
- 290 Problem determination aid log data sets in a multiple DFSMShsm host environment
- 290 DFSMShsm log considerations in a multiple DFSMShsm host environment
- 291 Edit log data set considerations in a multiple DFSMShsm host environment
- 291 Small-data-set-packing data set considerations in a multiple DFSMShsm host environment
- 291 SDSP data set share options
- 291 Maintaining data set integrity
- 292 Serialization of resources
- 292 Volume considerations in a multiple DFSMShsm host environment
- 292 JES3 considerations
- 293 Running automatic processes concurrently in a multiple DFSMShsm host environment
- 293 Multitasking considerations in a multiple DFSMShsm host environment
- 293 Performance considerations in a multiple DFSMShsm host environment
- 294 MASH configuration considerations
- 295 Chapter 12. DFSMShsm and cloud storage
- 295 Communicating the cloud password to DFSMShsm
- 296 Changing the cloud password
- 296 Cleaning up the cloud password
- 297 Enabling fast subsequent migration to cloud
- 297 OVMS Segment for DFSMShsm
- 299 Part 2. Customizing DFSMShsm
- 301 Chapter 13. DFSMShsm in a sysplex environment
- 301 Types of sysplex
- 301 Sysplex support
- 302 Single GRSplex serialization in a sysplex environment
- 302 Resource serialization in an HSMplex environment
- 302 Enabling single GRSplex serialization
- 302 Identifying static resources
- 303 Translating static resources into dynamic resources
- 304 Compatibility considerations
- 305 Secondary host promotion
- 305 Enabling secondary host promotion from the SETSYS command
- 306 Configuring multiple HSMplexes in a sysplex
- 307 Additional configuration requirements for using secondary host promotion
- 307 When a host is eligible for demotion
- 307 How secondary host promotion works
- 308 Promotion of primary host responsibilities
- 308 How auto functions affect secondary host promotion
- 309 Promotion of SSM host responsibilities
- 310 How the take back function works
- 310 Emergency mode considerations
- 311 Considerations for implementing XCF for secondary host promotion
- 311 Control data set extended addressability in a sysplex environment
- 311 Using VSAM extended addressability in a sysplex
- 311 Extended addressability considerations in a sysplex
- 311 Common recall queue configurations
- 313 Common dump queue configurations
- 315 Common recover queue configurations
- 317 Chapter 14. Calculating DFSMShsm storage requirements
- 317 DFSMShsm address spaces
- 318 Storage estimating considerations
- 318 Storage guidelines
- 319 Adjusting the size of cell pools
- 321 Chapter 15. DFSMShsm libraries and procedures
- 321 DFSMShsm libraries
- 321 Procedure libraries (PROCLIB)
- 321 Parameter libraries (PARMLIB)
- 322 Creating alternate DFSMShsm parameter library members
- 322 Commands for PARMLIB member ARCCMDxx
- 324 Command sequence for PARMLIB member ARCCMDxx
- 324 Sample libraries (SAMPLIB)
- 325 DFSMShsm procedures
- 325 DFSMShsm startup procedure
- 326 Startup procedure keywords
- 333 Startup procedure DD statements
- 334 ABARS secondary address space startup procedure
- 335 DFSMShsm installation verification procedure (IVP) startup procedure
- 335 DFSMShsm functional verification procedure (FVP)
- 335 HSMLOG procedure
- 335 HSMEDIT procedure
- 337 Chapter 16. User application interfaces
- 337 Data collection
- 337 Planning to use data collection
- 338 Choosing a data collection method
- 338 Choosing a report creation method
- 340 Choosing the type of report you want
- 342 The data collection environment
- 342 Data sets used for data collection
- 342 MCDS
- 342 BCDS
- 342 Snap processing data set
- 342 Collection data set
- 343 Data collection records
- 343 Data collection record header
- 344 Migrated data set information record
- 344 Backup version information record
- 344 DASD capacity planning records
- 344 Tape capacity planning records
- 344 Related reading
- 344 Invoking the DFSMShsm data collection interface
- 344 Invoking the ARCUTIL load module with the access method services (AMS) DCOLLECT function
- 345 Direct invocation of ARCUTIL load module
- 346 Invoking the ARCUTIL load module with a user-written program
- 350 Required parameters
- 350 Optional parameters
- 351 Output parameters
- 351 ARCUTIL return codes and reason codes
- 353 Chapter 17. Tuning DFSMShsm
- 353 Tuning patches supported by DFSMShsm
- 353 Changing DFSMShsm backup and migration generated data set names to reduce contention for similar names and eliminating a possible performance degradation
- 354 Migrating and scratching generation data sets
- 354 Migrating generation data sets
- 355 Scratching of rolled-off generation data sets
- 356 Disabling backup and migration of data sets that are larger than 64K tracks to ML1 volumes
- 356 Disabling, in JES3, the delay in issuing PARTREL (partial release) for generation data sets
- 357 Using DFSMShsm in a JES3 environment that performs main device scheduling only for tapes
- 357 Shortening the prevent-migration activity for JES3 setups
- 358 Replacing HSMACT as the high-level qualifier for activity logs
- 359 Changing the allocation parameters for an output data set
- 359 Changing the unit name
- 359 Changing the primary space quantity
- 359 Changing the secondary space quantity
- 360 Changing the limiting of SYSOUT lines
- 360 Using the DFSMShsm startup PARMLIB member
- 360 Buffering of user data sets on DFSMShsm-owned DASD using optimum DASD blocking
- 360 Changing parameters passed to DFSMSdss
- 361 Allowing DFSMSdss to load in its own address space through the cross memory interface
- 361 Invoking DFSMSdss for a full-volume dump
- 363 Processing partitioned data sets with AX cells
- 364 Enabling ABARS ABACKUP and ARECOVER to wait for a tape unit allocation
- 364 Changing the RACF FACILITY CLASS ID for the console operator’s terminal
- 365 Handling independent-software-vendor data in the data set VTOC entry
- 366 Allowing DFSMShsm automatic functions to process volumes other than once per day
- 366 Running automatic primary space management multiple times a day in a test environment
- 368 Running automatic backup multiple times a day
- 371 Running automatic dump multiple times a day in a test environment
- 372 Changing the frequency of running interval migration
- 373 Making the interval less frequent than one hour
- 373 Making the interval more frequent than one hour
- 374 Changing the frequency of running on-demand migration again on a volume that remains at or above the high threshold
- 375 Reducing enqueue times on the GDG base or on ARCENQG and the fully qualified GDS name
- 375 Modifying the migration queue limit value
- 376 Changing the default tape data set names that DFSMShsm uses for tape copy and full volume dump
- 376 Default tape data set names for DFHSM Version 2 Release 6.0
- 376 Default tape data set names for DFHSM releases prior to Version 2 Release 6.0
- 376 Preventing interactive TSO users from being placed in a wait state during a data set recall
- 377 Preventing ABARS ABACKUP processing from creating an extra tape volume for the instruction data set and activity log files
- 378 Preventing ABARS ABACKUP processing from including multivolume BDAM data sets
- 379 Specifying the amount of time to wait for an ABARS secondary address space to initialize
- 379 Patching ABARS to use NOVALIDATE when invoking DFSMSdss
- 379 Patching ABARS to provide dumps whenever specific errors occur during DFSMSdss processing during ABACKUP and ARECOVER
- 380 Routing ABARS ARC6030I message to the operator console
- 380 Filtering storage group and copy pool ARC0570I messages (return codes 17 and 36)
- 381 Allowing DFSMShsm to issue serialization error messages for class transitions
- 381 Enabling ARC1901I messages to go to the operator console
- 381 Changing the notification limit percentage value to issue ARC1901I messages
- 381 Patching to prevent ABARS from automatically cleaning up residual versions of ABACKUP output files
- 382 Enabling the serialization of ML2 data sets between RECYCLE and ABACKUP
- 382 Changing the default number of recall retries for a data set residing on a volume in use by RECYCLE or TAPECOPY processing
- 383 Changing the default number of buffers that DFSMShsm uses to back up and migrate data sets
- 383 Changing the compaction-ratio estimate for data written to tape
- 383 Enabling the takeaway function during TAPECOPY processing
- 384 Changing the delay by recall before taking away a needed ML2 tape from ABACKUP
- 385 Disabling delete-if-backed-up (DBU) processing for SMS data sets
- 385 Requesting that the message issued for SETSYS TAPEOUTPUTPROMPT processing be WTOR instead of the default WTO
- 385 Removing ACL as a condition for D/T3480 esoteric unit name translation
- 385 Restricting non-SMS ML2 tape volume table tape selection to the SETSYS unit name of a function
- 386 Changing the amount of time ABACKUP waits for an ML2 volume to become available
- 387 Changing the amount of time an ABACKUP or ARECOVER waits for a resource in use by another task
- 387 Preventing deadlocks during volume dumps
- 388 Modifying the number of elapsed days for a checkpointed data set
- 388 Running multiple concurrent recycles within a single GRSplex
- 389 Patching to force UCBs to be OBTAINed each time a volume is space checked
- 389 Running conditional tracing
- 389 Using the tape span size value regardless of data set size
- 390 Updating MC1 free space information for ML1 volumes after a return code 37 in a multi-host environment
- 390 Allowing DFSMShsm to use the 3590-1 generic unit when it contains mixed track technology drives
- 391 Allowing functions to release ARCENQG and ARCCAT or ARCGPA and ARCCAT for CDS backup to continue
- 392 Suppressing SYNCDEV for alternate tapes during duplex migration
- 392 Patching to allow building of a dummy MCD record for large data sets whose estimated compacted size exceeds the 64K-track DASD limit
- 393 Allowing DFSMShsm to honor an explicit expiration date even if the current management class retention limit equals 0
- 393 Using the generic rather than the esoteric unit name for duplex generated tape copies
- 393 Modifying the allocation quantities for catalog information data sets
- 394 Enabling volume backup to process data sets with names ending with .LIST, .OUTLIST, or .LINKLIST
- 394 Prompting before removing volumes in an HSMplex environment
- 394 Returning to the previous method of serializing on a GDS data set during migration
- 395 Allowing DFSMShsm to create backup copies older than the latest retained-days copy
- 395 Enabling or disabling ARCMDEXT return codes 20 through 40 for transitions
- 396 Enabling FSR records to be recorded for errors (reported by message ARC0734I) found during SMS data set eligibility checking for primary space management
- 396 Patch for FRRECOV COPYPOOL FROMDUMP performance to bypass the EXCLUSIVE NONSPEC ENQ
- 396 Issuing the ABARS ARC6055I ABACKUP ending message as a single-line WTO
- 397 Chapter 18. Special considerations
- 397 Backup profiles and the RACF data set
- 397 Increasing VTOC size for large capacity devices
- 397 DFSMShsm command processor performance considerations
- 398 Incompatibilities caused by DFSMShsm
- 398 Volume serial number of MIGRAT or PRIVAT
- 398 IEHMOVE utility
- 399 TSO ALTER command and access method services ALTER command
- 399 TSO DELETE command and access method services DELETE command
- 399 Data set VTOC entry date-last-referenced field
- 399 VSAM migration (non-SMS)
- 399 RACF volume authority checking
- 400 Accessing data without allocation or OPEN (non-SMS)
- 400 RACF program resource profile
- 400 Update password for integrated catalog facility user catalogs
- 400 Processing while DFSMShsm is inactive
- 401 DFSMShsm abnormal end considerations
- 401 Recovering DFSMShsm after an abnormal end
- 401 Recovering from an abnormal end of a DFSMShsm subtask
- 401 Recovering from an abnormal end of the DFSMShsm main task
- 401 Restarting DFSMShsm after an abnormal end
- 402 Suppressing duplicate dumps
- 402 Duplicate data set names
- 402 Debug mode of operation for gradual conversion to DFSMShsm
- 403 Generation data groups
- 403 Handling of generation data sets
- 404 Access method services DELETE GDG FORCE command
- 404 ISPF validation
- 404 Preventing migration of data sets required for long-running jobs
- 405 SMF considerations
- 405 DFSMSdss address spaces started by DFSMShsm
- 406 Read-only volumes
- 407 Chapter 19. Health Checker for DFSMShsm
- 409 Part 3. Appendixes
- 411 Appendix A. DFSMShsm work sheets
- 411 MCDS size work sheet
- 412 BCDS size work sheet
- 413 OCDS size work sheet
- 414 Problem determination aid log data set size work sheet—Short-term trace history
- 416 Problem determination aid log data set size work sheet—Long-term trace history
- 417 Collection data set size work sheet
- 419 Appendix B. Accessibility
- 419 Accessibility features
- 419 Consult assistive technologies
- 419 Keyboard navigation of the user interface
- 419 Dotted decimal syntax diagrams
- 423 Notices
- 425 Terms and conditions for product documentation
- 426 IBM Online Privacy Statement
- 426 Policy for unsupported hardware
- 426 Minimum supported hardware
- 427 Programming interface information
- 427 Trademarks
- 429 Index