z/OS DFSMShsm Implementation and Customization Guide



Chapter 5. Specifying commands that define your DFSMShsm environment

You can specify SETSYS commands in the ARCCMDxx member to define your site’s DFSMShsm environment. The command options are described along with the reasons for choosing a command.

The starter set creates a basic (and somewhat generic) DFSMShsm environment. If you choose not to begin with the starter set or you want to expand or customize the starter set functions, the information you need is in this section.

Regardless of the DFSMShsm functions you choose to implement, you must establish the DFSMShsm environment for those functions. Your site’s DFSMShsm environment is established when you perform the following tasks:
v “Defining the DFSMShsm startup environment”
v “Defining storage administrators to DFSMShsm” on page 74
v “Defining the DFSMShsm MVS environment” on page 75
v “Defining the DFSMShsm security environment for DFSMShsm-owned data sets” on page 83
v “Defining data formats for DFSMShsm operations” on page 86
v “Defining DFSMShsm reporting and monitoring” on page 90
v “Defining the tape environment” on page 92
v “Defining the installation exits that DFSMShsm invokes” on page 92
v “Controlling DFSMShsm control data set recoverability” on page 92
v “Defining migration level 1 volumes to DFSMShsm” on page 93
v “Defining the common recall queue environment” on page 95
v “Defining common SETSYS commands” on page 98

Defining the DFSMShsm startup environment

Before starting DFSMShsm, you must prepare the system by performing the following tasks:
v “Allocating DFSMShsm data sets”
v “Establishing the DFSMShsm startup procedures” on page 68
v “Establishing the START command in the COMMNDnn member” on page 71
v “Establishing SMS-related conditions in storage groups and management classes” on page 71
v “Writing an ACS routine that directs DFSMShsm-owned data sets to non-SMS-managed storage” on page 71
v “Directing DFSMShsm temporary tape data sets to tape” on page 72
v “Establishing the ARCCMDxx member of a PARMLIB” on page 73

Allocating DFSMShsm data sets

The DFSMShsm data sets are the data sets DFSMShsm requires for full-function processing. The DFSMShsm data sets are not user data sets and they are not DFSMShsm-managed data sets. Rather, they are the following DFSMShsm record keeping, reporting, and problem determination data sets:
v DFSMShsm control data sets
v DFSMShsm control data set copies
v Journal data set
v Log data sets
v Problem determination aid (PDA) log data sets
v SDSP data sets (if used)

You, or the person who installed DFSMShsm on your system, probably have allocated these data sets during installation or testing of DFSMShsm. The data sets are required for the DFSMShsm starter set. For SMS environments, you must associate the DFSMShsm data sets with a storage class having the GUARANTEED SPACE=YES attribute so that you can control their placement. Data sets having the guaranteed space attribute are allocated differently than non-guaranteed space data sets, especially if candidate volumes are specified. Refer to z/OS DFSMShsm Storage Administration for a discussion of the guaranteed space attribute and for information about establishing storage classes.

You must prevent the following DFSMShsm data sets from migrating:
v Control data sets
v DFSMShsm log data sets
v Journal
v Problem determination aid logs

For more information about preventing DFSMShsm data sets from migrating, see “Storage guidance for control data set and journal data set backup copies” on page 28 and “Migration considerations for the control data sets and the journal” on page 29.

Establishing the DFSMShsm startup procedures

If you specify an HSMPARM DD, it will take precedence over MVS concatenated PARMLIB support. However, if you are using MVS concatenated PARMLIB support, DFSMShsm uses the PARMLIB data set containing the ARCCMDxx member and the (possibly different) PARMLIB data set containing the ARCSTRxx member (if any) that is indicated in the startup procedure.

When ABARS is used, its address space (one or more) is termed ‘secondary’ to a ‘primary address space’. That primary address space must have HOSTMODE=MAIN; you must start it with a startup procedure in SYS1.PROCLIB (similar to the startup procedure in Figure 13 on page 69). If your disaster recovery policy includes aggregate backup and recovery support (ABARS), also include a second startup procedure in SYS1.PROCLIB for the DFSMShsm secondary address space.

Primary address space startup procedure

Figure 13 on page 69 is a sample DFSMShsm primary address space startup procedure.


//**********************************************************************/
//* SAMPLE DFSMSHSM STARTUP PROCEDURE THAT STARTS THE DFSMSHSM PRIMARY */
//* ADDRESS SPACE.                                                     */
//**********************************************************************/
//*
//DFSMSHSM PROC CMD=00,           USE PARMLIB MEMBER ARCCMD00
//         EMERG=NO,              ALLOW ALL DFSMSHSM FUNCTIONS
//         LOGSW=YES,             SWITCH LOGS AT STARTUP
//         STARTUP=YES,           STARTUP INFO PRINTED AT STARTUP
//         UID=HSM,               DFSMSHSM-AUTHORIZED USER ID
//         SIZE=0M,               REGION SIZE FOR DFSMSHSM
//         DDD=50,                MAX DYNAMICALLY ALLOCATED DATA SETS
//         HOST=?HOST,            PROC.UNIT ID AND LEVEL FUNCTIONS
//         PRIMARY=?PRIMARY,      LEVEL FUNCTIONS
//         PDA=YES,               BEGIN PDA TRACING AT STARTUP
//         CDSR=YES               RESERVE CONTROL DATA SET VOLUMES
//DFSMSHSM EXEC PGM=ARCCTL,DYNAMNBR=&DDD,REGION=&SIZE,TIME=1440,
// PARM=('EMERG=&EMERG','LOGSW=&LOGSW','CMD=&CMD','UID=&UID',
// 'STARTUP=&STARTUP','HOST=&HOST','PRIMARY=&PRIMARY',
// 'PDA=&PDA','CDSR=&CDSR')
//*****************************************************************/
//* HSMPARM DD must be deleted from the JCL or made into a        */
//* comment to use Concatenated Parmlib Support                   */
//*****************************************************************/
//HSMPARM  DD DSN=SYS1.PARMLIB,DISP=SHR
//MSYSOUT  DD SYSOUT=A
//MSYSIN   DD DUMMY
//SYSPRINT DD SYSOUT=A,FREE=CLOSE
//SYSUDUMP DD SYSOUT=A
//*
//*****************************************************************/
//* THIS PROCEDURE ASSUMES A SINGLE CLUSTER MCDS. IF MORE THAN    */
//* ONE VOLUME IS DESIRED, FOLLOW THE RULES FOR A MULTICLUSTER    */
//* CDS.                                                          */
//*****************************************************************/
//*
//MIGCAT   DD DSN=HSM.MCDS,DISP=SHR
//JOURNAL  DD DSN=HSM.JRNL,DISP=SHR
//ARCLOGX  DD DSN=HSM.HSMLOGX1,DISP=OLD
//ARCLOGY  DD DSN=HSM.HSMLOGY1,DISP=OLD
//ARCPDOX  DD DSN=HSM.HSMPDOX,DISP=OLD
//ARCPDOY  DD DSN=HSM.HSMPDOY,DISP=OLD
//*

Figure 13. Sample Startup Procedure for the DFSMShsm Primary Address Space

Figure 14 is a sample startup procedure using STR.

Example of a startup procedure:

//DFSMSHSM PROC CMD=00,           USE PARMLIB MEMBER ARCCMD00
//         STR=00,                STARTUP PARMS IN ARCSTR00
//         HOST=?HOST,            PROC UNIT AND LEVEL FUNCTIONS
//         PRIMARY=?PRIMARY,      LEVEL FUNCTIONS
//         DDD=50,                MAX DYNAMICALLY ALLOCATED DS
//         SIZE=0M                REGION SIZE FOR DFSMSHSM
//DFSMSHSM EXEC PGM=ARCCTL,DYNAMNBR=&DDD,REGION=&SIZE,TIME=1440,
// PARM=('STR=&STR','CMD=&CMD','HOST=&HOST',
// 'PRIMARY=&PRIMARY')
//HSMPARM  DD DSN=SYS1.PARMLIB,DISP=SHR
//MSYSOUT  DD SYSOUT=A
//MSYSIN   DD DUMMY
//SYSPRINT DD SYSOUT=A,FREE=CLOSE
 . . .

PARMLIB member ARCSTR00 contains 4 records:
1st record: EMERG=NO,CDSQ=YES,STARTUP=YES
2nd record: /* This is a comment.
3rd record: /* This is another comment. */
4th record: PDA=YES,LOGSW=YES

Figure 14. Sample of STR Usage


For an explanation of the keywords, see “Startup procedure keywords” on page 308.

The CMD=00 keyword refers to the ARCCMD00 member of PARMLIBs discussed in “Parameter libraries (PARMLIB)” on page 303. You can have as many ARCCMDxx and ARCSTRxx members as you need in the PARMLIBs. DFSMShsm does not require the values of CMD= and STR= to be the same, but you may want to use the same values to indicate a given configuration. In this publication, the ARCCMD member is referred to generically as ARCCMDxx because each different ARCCMDxx member can be identified by a different number.

Much of the rest of this discussion pertains to what to put into the ARCCMDxx member.

For information about the ARCCMDxx member in a multiple DFSMShsm-host environment, see “Defining all DFSMShsm hosts in a multiple-host environment” on page 255. To minimize administration, we suggest that you use a single ARCCMDxx and a single ARCSTRxx member for all DFSMShsm hosts sharing a common set of control data sets in an HSMplex.

Secondary address space startup procedure

Figure 15 is a sample DFSMShsm secondary address space startup procedure.

//**********************************************************************/
//* SAMPLE AGGREGATE BACKUP AND RECOVERY STARTUP PROCEDURE THAT STARTS */
//* THE ABARS SECONDARY ADDRESS SPACE.                                 */
//**********************************************************************/
//*
//DFHSMABR PROC
//DFHSMABR EXEC PGM=ARCWCTL,REGION=0M
//SYSUDUMP DD SYSOUT=A
//MSYSOUT  DD SYSOUT=A
//MSYSIN   DD DUMMY
//*

Figure 15. Sample Aggregate Backup and Recovery Startup Procedure

The private (24-bit) and extended private (31-bit) address space requirements for DFSMShsm are dynamic. DFSMShsm’s region size should normally default to the private virtual address space (REGION=0).

To run ABARS processing, each secondary address space for aggregate backup or aggregate recovery requires 6 megabytes (MB): three MB above the line (in 31-bit extended private address space) and three MB below the line (in 24-bit address space). An option that can directly increase this requirement is the specification of SETSYS ABARSBUFFERS(n). If this is specified with an n value greater than one, use the following quick calculation to determine the approximate storage above the line that you will need:

   storage above the line = 2MB + (n x 1MB)

where n is the number specified in SETSYS ABARSBUFFERS.

As you add more functions and options to the DFSMShsm base product, the region-size requirement increases. You should therefore specify the maximum region size in your startup procedure.

For a detailed discussion of the DFSMShsm primary address space startup procedure, the ABARS secondary address space startup procedure, and the startup procedure keywords, see “DFSMShsm procedures” on page 307.


Establishing the START command in the COMMNDnn member

When you initialize the MVS operating system, you want DFSMShsm to start automatically. You direct DFSMShsm to start when the MVS operating system is initialized by adding the following command to the COMMNDnn member of SYS1.PARMLIB:

COM='S DFSMSHSM parameters'

You can also start DFSMShsm from the console. DFSMShsm can be run only as a started task and never as a batch job.
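For example, a minimal COMMNDnn entry; this sketch assumes the procedure name DFSMSHSM from Figure 13, and the CMD=01 override is illustrative:

COM='S DFSMSHSM,CMD=01'

Any keyword on the PROC statement (CMD, EMERG, HOST, and so on) can be overridden on the START command in this way.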

DFSMShsm can run concurrently with another space-management product. This can be useful if you are switching from another product to DFSMShsm and do not want to recall many years’ worth of data just to switch to the new product over a short period like a weekend. By running the two products in parallel, you can recall data automatically from the old product, and migrate all new data with DFSMShsm.

What makes this possible is that the other product usually provides a module that must be renamed to IGG026DU to serve as the automatic locate intercept for recall. Instead, rename this module to $IGG26DU, and link-edit it with the existing IGG026DU module that DFSMS ships for DFSMShsm. In this manner, for each locate request, DFSMShsm’s IGG026DU gives the other product control via $IGG26DU, providing it a chance to perform the recall if the data set was migrated by that product. After control returns, DFSMShsm then proceeds to recall the data set if it is still migrated.
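A hedged binder sketch of that rename-and-link-edit step follows; the library names are hypothetical, and the assumption that a simple INCLUDE of both modules suffices should be checked against the other product's own integration instructions:

//* ILLUSTRATIVE ONLY: BIND THE OTHER PRODUCT'S INTERCEPT (ALREADY
//* RENAMED TO $IGG26DU) TOGETHER WITH THE IGG026DU THAT DFSMS SHIPS.
//LINK     EXEC PGM=IEWBLINK,PARM='LET,LIST'
//OTHERLIB DD  DISP=SHR,DSN=OTHER.PRODUCT.LINKLIB
//DFSMSLIB DD  DISP=SHR,DSN=SYS1.LINKLIB
//SYSLMOD  DD  DISP=SHR,DSN=SYS1.LINKLIB
//SYSPRINT DD  SYSOUT=*
//SYSLIN   DD  *
  INCLUDE OTHERLIB($IGG26DU)
  INCLUDE DFSMSLIB(IGG026DU)
  NAME IGG026DU(R)
/*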

Establishing SMS-related conditions in storage groups and management classes

For your SMS-managed data sets, you must establish a DFSMShsm environment that coordinates the activities of both DFSMShsm and SMS. You can define your storage groups and management classes at one time and can modify the appropriate attributes for DFSMShsm management of data sets at another time.

The storage group contains one attribute that applies to all DFSMShsm functions, the status attribute. DFSMShsm can process volumes in storage groups having a status of ENABLE, DISNEW (disable new for new data set allocations), or QUINEW (quiesce new for new data set allocations). The other status attributes QUIALL (quiesce for all allocations), DISALL (disable all for all data set allocations), and NOTCON (not connected) prevent DFSMShsm from processing any volumes in the storage group so designated. Refer to z/OS DFSMSdfp Storage Administration for an explanation of the status attribute and how to define storage groups.

Writing an ACS routine that directs DFSMShsm-owned data sets to non-SMS-managed storage

Programming Interface Information

DFSMShsm must be able to direct allocation of data sets it manages to its owned storage devices so that backup versions of data sets go to backup volumes, migration copies go to migration volumes, and so forth. DFSMShsm-owned DASD volumes are not SMS-managed. If SMS were allowed to select volumes for DFSMShsm-owned data sets, DFSMShsm could not control which volumes were selected. If SMS is allowed to allocate the DFSMShsm-owned data sets to a volume other than the one selected by DFSMShsm, DFSMShsm detects that the data set is allocated to the wrong volume and fails the function being performed. Therefore, include a filter routine (similar to the sample routine in Figure 16) within your automatic class selection (ACS) routine that filters DFSMShsm-owned data sets to non-SMS-managed volumes. For information on the SMS-management of DFSMShsm-owned tape volumes, see Chapter 10, “Implementing DFSMShsm tape environments,” on page 189.

End Programming Interface Information

/***********************************************************************/
/* SAMPLE ACS ROUTINE THAT ASSIGNS A NULL STORAGE CLASS TO             */
/* DFSMSHSM-OWNED DATA SETS INDICATING THAT THE DATA SET SHOULD NOT BE */
/* SMS-MANAGED.                                                        */
/***********************************************************************/
/* */
PROC &STORCLAS
  SET &STORCLAS = 'SCLASS2'
  FILTLIST &HSMLQ1 INCLUDE('DFHSM','HSM')
  FILTLIST &HSMLQ2 INCLUDE('HMIG','BACK','VCAT','SMALLDS','VTOC',
                           'DUMPVTOC','MDB')
  IF &DSN(1) = &HSMLQ1 AND
     &DSN(2) = &HSMLQ2 THEN
     SET &STORCLAS = ''
END
/* */

Figure 16. Sample ACS Routine that Directs DFSMShsm-Owned Data Sets to Non-SMS-Managed Storage

The high-level qualifiers for &HSMLQ1 and &HSMLQ2 are the prefixes that you specify with the BACKUPPREFIX (for backup and dump data set names) and the MIGRATEPREFIX (for migrated copy data set names) parameters. If you do not specify prefixes, specify the user ID from the UID parameter of the DFSMShsm startup procedure (shown in topic “Starter set example” on page 109). These prefixes and how to specify them are discussed in the z/OS DFSMShsm Storage Administration.
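A matching ARCCMDxx sketch follows; the prefix value HSM is illustrative and corresponds to the UID=HSM value in the sample startup procedure in Figure 13:

SETSYS MIGRATEPREFIX(HSM)
SETSYS BACKUPPREFIX(HSM)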

Directing DFSMShsm temporary tape data sets to tape

Programming Interface Information

It is often efficient to direct tape allocation requests to DASD when the tapes being requested are for temporary data sets. However, DFSMShsm’s internal naming conventions request temporary tape allocations for backup of DFSMShsm control data sets. Therefore, it is important to direct DFSMShsm tape requests to tape.

End Programming Interface Information

If your ACS routines direct temporary data sets to DASD, DFSMShsm allocation requests for temporary tape data sets should be allowed to be directed to tape as requested (see the sample ACS routine in Figure 17 on page 73). To identify temporary tape data sets, test the &DSTYPE variable for “TEMP”, and test the &PGM variable for “ARCCTL”.


/***********************************************************************/
/* SAMPLE ACS ROUTINE THAT PREVENTS DFSMSHSM TEMPORARY (SCRATCH TAPE)  */
/* TAPE REQUESTS FROM BEING REDIRECTED TO DASD.                        */
/***********************************************************************/
 :
 :
/***********************************************************************/
/* SET FILTLIST FOR PRODUCTION DATA SETS                               */
/***********************************************************************/
FILTLIST EXPGMGRP INCLUDE('ARCCTL')
 :
 :
/***********************************************************************/
/* FILTER TEMPORARY (SCRATCH TAPE) TAPE REQUESTS INTO DFSMSHSM         */
/* REQUESTS AND NON-DFSMSHSM REQUESTS. SEND DFSMSHSM REQUESTS TO TAPE  */
/* AS REQUESTED. SEND NON-DFSMSHSM REQUESTS TO DASD.                   */
/***********************************************************************/
IF (&DSTYPE = 'TEMP' && &UNIT = &TAPE_UNITS)
  THEN DO
    IF (&PGM ^= &EXPGMGRP) THEN DO
      SET &STORCLAS = 'DASD'
      WRITE '******************************************************'
      WRITE '* NON-DFSMSHSM TEMPORARY DATA SET REDIRECTED TO DISK *'
      WRITE '******************************************************'
    END
    ELSE DO
      WRITE '************************************************'
      WRITE '* DFSMSHSM TEMPORARY DATA SET DIRECTED TO TAPE *'
      WRITE '************************************************'
    END
  END

Figure 17. Sample ACS Routine That Prevents DFSMShsm Temporary Tape Requests from Being Redirected to DASD

Establishing the ARCCMDxx member of a PARMLIB

At DFSMShsm startup, DFSMShsm reads the ARCCMDxx parameter library (PARMLIB) member that is pointed to by the DFSMShsm startup procedure or is found in the MVS concatenated PARMLIB data sets.

An ARCCMDxx member consisting of DFSMShsm commands that define your site’s DFSMShsm processing environment must exist in a PARMLIB data set. (The PARMLIB containing the ARCCMDxx member may be defined in the startup procedure.) An example of the ARCCMDxx member can be seen starting at “Starter set example” on page 109.

Modifying the ARCCMDxx member

In most cases, adding a command to the ARCCMDxx member provides an addendum to any similar command that already exists in the member. For example, the ARCCMDxx member that exists from the starter set contains a set of commands with their parameters. You can remove commands that do not meet your needs from the ARCCMDxx member and replace them with commands that do meet your needs.

ARCCMDxx member for the starter set

The ARCCMDxx member provided with the starter set is written to accommodate any system, so some commands are intentionally allowed to default and others specify parameters that are not necessarily optimal. Because the starter set does not provide an explanation of parameter options, we discuss the implications of choosing SETSYS parameters in this section.


Issuing DFSMShsm commands

DFSMShsm commands can be issued from the operator’s console, from a TSO terminal, as a CLIST from a TSO terminal, as a job (when properly surrounded by JCL) from the batch reader, or from a PARMLIB member. DFSMShsm commands can be up to 1024 bytes long. The z/OS DFSMShsm Storage Administration explains how to issue the DFSMShsm commands and why to issue them.

Implementing new DFSMShsm ARCCMDxx functions

If you have DFSMShsm running with an established ARCCMDxx member, for example ARCCMD00, you can copy the ARCCMDxx member to a member with another name, for example, ARCCMD01. You can then modify the new ARCCMDxx member by adding and deleting parameters.

To determine how the new parameters affect DFSMShsm’s automatic processes, run DFSMShsm in DEBUG mode with the new ARCCMDxx member. See “Debug mode of operation for gradual conversion to DFSMShsm” on page 384 and the z/OS DFSMShsm Storage Administration for an explanation of running DFSMShsm in DEBUG mode.
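As a sketch of that flow (the member suffix 01 and the console command are illustrative), place SETSYS DEBUG at the top of the new member and restart DFSMShsm against it:

/* IN ARCCMD01: RUN AUTOMATIC FUNCTIONS IN DEBUG MODE */
SETSYS DEBUG

Start DFSMShsm with S DFSMSHSM,CMD=01 and review the logged activity; when you are satisfied with the results, remove SETSYS DEBUG (or specify SETSYS NODEBUG) and restart to resume normal processing.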

Defining storage administrators to DFSMShsm

As part of defining your DFSMShsm environment, you must designate storage administrators and define their authority to issue authorized DFSMShsm commands. The authority to issue authorized commands is granted either through RACF FACILITY class profiles or the DFSMShsm AUTH command.

Because DFSMShsm operates as an MVS-authorized task, it can manage data sets automatically, regardless of their security protection. DFSMShsm allows an installation to control the authorization of its commands through the use of either RACF FACILITY class profiles or the AUTH command.

If the RACF FACILITY class is active, DFSMShsm always uses it to protect all DFSMShsm commands. If the RACF FACILITY class is not active, DFSMShsm uses the AUTH command to protect storage administrator DFSMShsm commands. There is no protection of user commands in this environment.

The RACF FACILITY class environment

DFSMShsm provides a way to protect all DFSMShsm command access through the use of RACF FACILITY class profiles. An active RACF FACILITY class establishes the security environment.

An individual, such as a security administrator, defines RACF FACILITY class profiles to grant or deny permission to issue individual DFSMShsm commands.

For more information about establishing the RACF FACILITY class environment, see “Authorizing and protecting DFSMShsm commands in the RACF FACILITY class environment” on page 173.
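The sketch below is a hedged example of such profiles; the group name STGADMIN is hypothetical, and the STGADMIN.ARC.** pattern assumes the DFSMShsm command resource names described in the referenced topic, so verify the exact profile names there:

RDEFINE FACILITY STGADMIN.ARC.** UACC(NONE)
PERMIT STGADMIN.ARC.** CLASS(FACILITY) ID(STGADMIN) ACCESS(READ)
SETROPTS CLASSACT(FACILITY)
SETROPTS RACLIST(FACILITY) REFRESH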

The DFSMShsm AUTH command environment

If you are not using the RACF FACILITY class to protect all DFSMShsm commands, the AUTH command is used to protect DFSMShsm-authorized commands.

To prevent unwanted changes to the parameters that control all data sets, commands within DFSMShsm are classified as authorized and nonauthorized.


Authorized commands can be issued only by a user specifically authorized by a storage administrator. Generally, authorized commands can affect data sets not owned by the person issuing the command and should, therefore, be limited to only those whom you want to have that level of control.

Nonauthorized commands can be issued by any user, but they generally affect only those data sets for which the user has appropriate security access. Nonauthorized commands are usually issued by system users who want to manage their own data sets with DFSMShsm user commands.

DFSMShsm has two categories of authorization: USER and CONTROL.

If you specify AUTH U012345 DATABASEAUTHORITY(USER), then user U012345 can issue any DFSMShsm command except the command that authorizes other users.

If you specify AUTH U012345 DATABASEAUTHORITY(CONTROL), then DFSMShsm gives user U012345 authority to issue the AUTH command to authorize other users. User U012345 can then issue the AUTH command with the DATABASEAUTHORITY(USER) parameter to authorize other storage administrators who can issue authorized commands.

Anyone can issue authorized commands from the system console, but they cannot authorize other users. The ARCCMDxx member must contain an AUTH command granting CONTROL authority to a storage administrator. That storage administrator can then authorize or revoke the authority of other users as necessary. If no AUTH command grants CONTROL authority to any user, no storage administrator can authorize any other user. If the ARCCMDxx member does not contain any AUTH command, authorized commands can be issued only at the operator’s console.
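A minimal ARCCMDxx sketch of that setup follows; the user IDs are hypothetical:

/* GRANT ONE STORAGE ADMINISTRATOR CONTROL AUTHORITY */
AUTH SYSADM1 DATABASEAUTHORITY(CONTROL)
/* AUTHORIZE A SECOND STORAGE ADMINISTRATOR FOR AUTHORIZED COMMANDS */
AUTH STGADM1 DATABASEAUTHORITY(USER)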

Defining the DFSMShsm MVS environment

You define the MVS environment to DFSMShsm when you specify:
v The job entry subsystem
v The amount of common service area storage
v The sizes of cell pools
v Operator intervention in DFSMShsm automatic operation
v Data set serialization
v Swap capability of DFSMShsm’s address space
v Maximum secondary address space

Each of the preceding tasks relates to a SETSYS command in the ARCCMDxx member.

Figure 18 on page 76 is an example of the commands that define an MVS environment:


/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DEFAULT MVS ENVIRONMENT      */
/***********************************************************************/
/*
SETSYS JES2
SETSYS CSALIMITS(MAXIMUM(100) ACTIVE(90) INACTIVE(30) MWE(4))
SETSYS NOREQUEST
SETSYS USERDATASETSERIALIZATION
SETSYS NOSWAP
SETSYS MAXABARSADDRESSSPACE(1)
/*

Figure 18. Sample SETSYS Commands That Define the Default MVS Environment

Specifying the job entry subsystem

As part of defining your MVS environment to DFSMShsm, you must identify the job entry subsystem (JES) at your site as either JES2 or JES3 by specifying the SETSYS JES2|JES3 command in the ARCCMDxx member. The ARCCMDxx member is located in a PARMLIB.

JES3 considerations

When you implement DFSMShsm in a JES3 environment, you must observe certain practices and restrictions to ensure correct operation:
v For a period of time after the initialization of JES3 and before the initialization of DFSMShsm, all JES3 locates will fail. To reduce this exposure:
  – Start DFSMShsm as early as possible after the initialization of JES3.
  – Specify the SETSYS JES3 command as early as possible in the startup procedure and before any ADDVOL commands.
v Specify JES3 during DFSMShsm startup when DFSMShsm is started in a JES3 system. This avoids an error message being written when DFSMShsm receives the first locate request from the JES3 converter/interpreter.
v Depend on the computing system catalog to determine the locations of data sets.
v Do not allocate the control data sets and the JES3 spool data set on the same volume because you could prevent DFSMShsm from starting on a JES3 local processor.
v All devices that contain volumes automatically managed or processed by DFSMShsm must be controlled by JES3. All volumes managed by DFSMShsm (even those managed by command) should be used on devices controlled by JES3.
v DFSMShsm must be active on the processing units that use volumes managed by DFSMShsm and on any processing unit where JES3 can issue the locate request for the setup of jobs that use volumes managed by DFSMShsm.

The specification of JES3 places a constraint on issuing certain DFSMShsm commands. When you use JES2, you can issue ADDVOL, DEFINE, and SETSYS commands at any time. When you specify JES3, you must issue ADDVOL commands for primary volumes, DEFINE commands for pools (except aggregate recovery pools), and the SETSYS JES2 or SETSYS JES3 commands in the ARCCMDxx member. In addition, if you are naming tape devices with esoteric names, you must include the SETSYS USERUNITTABLE command in the ARCCMDxx member before the ADDVOL command for any of the tapes that are in groups defined with esoteric names.
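For example, a minimal ARCCMDxx ordering sketch for a JES3 system; the esoteric name CARTS and the volume serials are hypothetical:

SETSYS JES3
SETSYS USERUNITTABLE(CARTS)
/* ADDVOL COMMANDS FOLLOW THE SETSYS COMMANDS ABOVE */
ADDVOL PRIM01 UNIT(3390) PRIMARY
ADDVOL TAPE01 UNIT(CARTS) BACKUP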


If you specify JES3 but the operating system uses JES2, DFSMShsm is not notified of the error. However, DFSMShsm uses the rules that govern pool configuration for JES3, and one or both of the following situations can occur:
v Some ADDVOL, SETSYS, and DEFINE commands fail if they are issued when they are not acceptable in a JES3 system.
v Volumes eligible for recall in a JES2 system might not qualify for the DFSMShsm general pool and, in some cases, are not available for recall in the JES3 system.

When you use DFSMShsm and JES3, the usual configuration is a symmetric configuration. A symmetric configuration is one where the primary volumes are added to DFSMShsm in all processing units and the hardware is connected in all processing units. Because of the dynamic reconfiguration of JES3, you should use a symmetric JES3 configuration.

If your device types are 3490, define the special esoteric names SYS3480R and SYS348XR to JES3. This may only be done after the system software support (JES3, DFP, and MVS) for 3490 is available on all processing units.

The main reason for this is conversion from 3480s, to allow DFSMShsm to convert the following generic unit names to the special esoteric names:
v 3480 (used for output) is changed to SYS3480R for input drive selection. SYS3480R is a special esoteric name that is associated with all 3480, 3480X, and 3490 devices. Any device in this esoteric is capable of reading a cartridge written by a 3480 device.
v 3480X (used for output) is changed to SYS348XR for input drive selection. SYS348XR is a special esoteric name that is associated with all 3480X and 3490 devices. Any device in this esoteric is capable of reading a cartridge written by a 3480X device.

Note:
1. Because of the DFSMShsm use of the S99DYNDI field in the SVC99 parameter list, the JES3 exit IATUX32 is not invoked when DFSMShsm is active.
2. By default, JES3 support is not enabled for DFSMShsm hosts defined using HOSTMODE=AUX. Contact IBM support if you require JES3 support for AUX DFSMShsm hosts. When JES3 for AUX DFSMShsm hosts is enabled, you should start the main DFSMShsm host before starting any AUX hosts and stop all AUX hosts before stopping the main host.

Specifying the amount of common service area storage

Common Service Area (CSA) storage is cross-memory storage (accessible to any address space in the system) for management work elements (MWEs). The SETSYS CSALIMITS command determines the amount of common service area (CSA) storage that DFSMShsm is allowed for its management work elements. The subparameters of the CSALIMITS parameter specify how CSA is divided among the MWEs issued to DFSMShsm. Unless almost all of DFSMShsm’s workload is initiated from an external source, the defaults are satisfactory. Figure 18 on page 76 specifies the same values as the defaults.

One MWE is generated for each request for service that is issued to DFSMShsm. Requests for service that generate MWEs include:
v Batch jobs that need migrated data sets
v Both authorized and nonauthorized DFSMShsm commands, including TSO requests to migrate, recall, and back up data sets


Two types of MWEs can be issued: wait and nowait. A WAIT MWE remains in CSA until DFSMShsm finishes acting on the request. A NOWAIT MWE remains in CSA under control of the MWE subparameter until DFSMShsm accepts it for processing. The NOWAIT MWE is then purged from CSA unless the MWE subparameter of CSALIMITS specifies that some number of NOWAIT MWEs are to be retained in CSA.

Note: If you are running more than one DFSMShsm host in a z/OS image, the CSALIMITS values used are those associated with the host with HOSTMODE=MAIN. Any CSALIMITS values specified for an AUX host are ignored.

Selecting values for the SETSYS CSA command subparameters

DFSMShsm can control the amount of common-service-area (CSA) storage for management work elements (MWEs) whether or not DFSMShsm has been active during the current system initial program load (IPL). When DFSMShsm has not been active during the current IPL, DFSMShsm defaults control the amount of CSA. When DFSMShsm has been active, either the DFSMShsm defaults or SETSYS values control the amount of CSA. The DFSMShsm defaults for CSA are shown in Figure 18 on page 76. The subparameters of the SETSYS CSA command are discussed in the following sections.

Selecting the value for the MAXIMUM subparameter: The MAXIMUM subparameter determines the upper limit of CSA storage for cross-memory communication of MWEs. After this amount of CSA has been used, additional MWEs cannot be stored. The average MWE is 400 bytes. The DFSMShsm default for this subparameter is 100KB (1KB equals 1024 bytes).
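For example, at the 400-byte average, the default MAXIMUM of 100KB holds roughly (100 x 1024) / 400 = 256 concurrent MWEs; if your site routinely has more outstanding requests than that, scale the MAXIMUM value up accordingly.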

Limiting CSA has two potential uses in most data centers: protecting other application systems from excessive CSA use by DFSMShsm, or serving as an early-warning sign of a DFSMShsm problem.

Setting CSALIMIT to protect other applications: Setting CSALIMITs to protect other applications depends on the amount of CSA available in the “steady-state” condition, when you know the amount of CSA left over after the other application is active. This method measures the CSA usage of applications other than DFSMShsm.
1. Run the system without DFSMShsm active.
2. Issue the QUERY CSALIMIT command to determine DFSMShsm’s CSA use.
3. Set the MAXIMUM CSA subparameter to a value less than the “steady-state” amount available for the CSA.
4. Think of DFSMShsm as a critical application with high availability requirements to set the remaining CSALIMITs.

Setting CSALIMIT as an early warning: Setting CSALIMITs as an early warning is different. Rather than measuring the CSA usage of some other application, you measure DFSMShsm’s CSA use. This method uses DFSMShsm CSALIMITS as an alarm system that notifies the console operator if DFSMShsm’s CSA usage deviates from normal.
1. Run the system for a week or two with CSALIMIT inactive or set to a very high value.
2. Issue the QUERY CSALIMIT command periodically to determine DFSMShsm’s CSA use.
3. Identify peak periods of CSA use.
4. Select a maximum value based on the peak, multiplied by a safety margin that is within the constraints of normally available CSA.

Selecting the value for the ACTIVE subparameter: The ACTIVE subparameter specifies the percentage of maximum CSA available to DFSMShsm for both WAIT and NOWAIT MWEs when DFSMShsm is active. Until this limit is reached, all MWEs are accepted. After this limit has been reached, only WAIT MWEs from batch jobs are accepted. The active limit is a percentage of the DFSMShsm maximum limit; the DFSMShsm default is 90%.

Selecting the value for the INACTIVE subparameter: The INACTIVE subparameter specifies the percentage of CSA that is available to DFSMShsm for NOWAIT MWEs when DFSMShsm is inactive. This prevents the CSA from being filled with NOWAIT MWEs when DFSMShsm is inactive.

Both the ACTIVE and INACTIVE CSALIMITs are expressed as percentages of the maximum amount of CSA DFSMShsm is limited to. Both specifications (ACTIVE and INACTIVE) affect the management of NOWAIT MWEs, which are ordinarily a small part of the total DFSMShsm workload.

The DFSMShsm default is 30%. When you start DFSMShsm, this limit changes to the active limit.

Selecting the value for the MWE subparameter: The MWE subparameter specifies the number of NOWAIT MWEs from each user address space that are kept in CSA until they are completed.

The MWE subparameter can be set to 0 if DFSMShsm is solely responsible for making storage management decisions. The benefit of setting the MWE subparameter to zero (the default is four) is that the CSA an MWE consumes is freed immediately after the MWE has been copied into DFSMShsm’s address space, making room for additional MWEs in CSA. Furthermore, if DFSMShsm is solely responsible for storage management decisions, the loss of one or more NOWAIT MWEs (such as a migration copy that is not being deleted) when DFSMShsm is stopped could be viewed as insignificant.

The benefit of setting the MWE subparameter to a nonzero quantity is that MWEs remain in CSA until the function completes, so if DFSMShsm stops, the function is continued after DFSMShsm is restarted. The default value of 4 is sufficient to restart almost all requests; however, a larger value provides for situations where users issue many commands. MWEs are not retained across system outages; therefore, this parameter is valuable only in situations where DFSMShsm is stopped and restarted.

Restartable MWEs are valuable when a source external to DFSMShsm is generating critical work that would be lost if DFSMShsm failed. Under such conditions, an installation would want those MWEs retained in CSA until they had completed.

The decision for the storage administrator is whether to retain NOWAIT MWEs in CSA. No method exists to selectively discriminate between MWEs that should be retained and other MWEs unworthy of being held in CSA. Figure 19 on page 80 shows the three storage limits in the common service area storage.


[Figure 19 depicts MVS memory with the three CSA limits: the maximum limit, the active limit at 90% of the maximum, and the inactive limit at 30% of the maximum.]

Figure 19. Overview of Common Service Area Storage

WAIT and NOWAIT MWE considerations: DFSMShsm keeps up to four NOWAIT MWEs on the CSA queue for each address space. Subsequent MWEs from the same address space are deleted from CSA when the MWE is copied to the DFSMShsm address space. When the number of MWEs per address space falls under four, MWEs are again kept in CSA until the maximum of four is reached.

Table 6 shows the types of requests and how the different limits affect these requests.

Table 6. How Common Service Area Storage Limits Affect WAIT and NOWAIT Requests

Batch WAIT
  DFSMShsm active: If the current CSA storage is less than the maximum limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
  DFSMShsm inactive: If the current CSA storage is less than the maximum limit, the operator is required to either start DFSMShsm or cancel the request.

TSO WAIT
  DFSMShsm active: If the current CSA storage is less than the active limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
  DFSMShsm inactive: The operator is prompted to start DFSMShsm, but the request fails.

NOWAIT
  DFSMShsm active: If the current CSA storage is less than the active limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.
  DFSMShsm inactive: If the current CSA storage is less than the inactive limit, the MWE is added to the queue. Otherwise, a message is issued and the request fails.


A system programmer can use the SETSYS command to change any one of these values. The SETSYS command is described in z/OS DFSMShsm Storage Administration.

Specifying the size of cell pools

DFSMShsm uses cell pools (the MVS CPOOL function) to allocate virtual storage for frequently used modules and control blocks. Cell pool storage used for control blocks is extendable, while cell pool storage used by modules is not. Using cell pools reduces DFSMShsm CPU usage and improves DFSMShsm performance. The DFSMShsm startup procedure specifies the size (in number of cells) of five cell pools used by DFSMShsm.

DFSMShsm is configured with a default size for each cell pool. You can change these sizes by changing the CELLS keyword in the startup procedure for the DFSMShsm primary address space. Typically the default values are acceptable. However, if you run many concurrent DFSMShsm tasks, you may receive an ARC0019I message, which identifies a cell pool that has run out of cells. If you receive this message, you should increase the size of the indicated cell pool by at least the number of cells specified in the message.
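For example, if an ARC0019I message reported that the first cell pool was short by 200 cells, a sketch of the change on the PROC statement (starting from the documented defaults of (200,100,100,50,20); only the first value changes here):

//DFSMSHSM PROC CMD=00,
//         CELLS=(400,100,100,50,20),   ENLARGED FIRST CELL POOL
//         ...                          REMAINING KEYWORDS UNCHANGED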

Related reading:
v “Adjusting the size of cell pools” on page 301
v “DFSMShsm startup procedure” on page 307
v “CELLS (default = (200,100,100,50,20))” on page 311

Specifying operator intervention in DFSMShsm automatic operations

The SETSYS REQUEST|NOREQUEST command determines whether DFSMShsm prompts the operator before beginning its automatic functions.

If you specify SETSYS NOREQUEST, then DFSMShsm begins its automatic functions without asking the operator.

If you specify SETSYS REQUEST, then DFSMShsm prompts the operator for permission to begin its automatic functions by issuing message ARC0505D. You can write code for the MVS message exit IEAVMXIT to respond to the ARC0505D message automatically. The code could query the status of various other jobs in the system and make a decision to start or not to start the DFSMShsm automatic function, based on the workload in the system at the time.

Specifying data set serialization

When DFSMShsm is backing up or migrating data sets, it must prevent those data sets from being changed. It does this by serialization. Serialization is the process of controlling access to a resource to protect the integrity of the resource. DFSMShsm serialization is determined by the SETSYS DFHSMDATASETSERIALIZATION | USERDATASETSERIALIZATION command.

Note: In DFSMS/MVS Version 1 Release 5, the incremental backup function has been restructured in order to improve the performance of that function. The SETSYS DFHSMDATASETSERIALIZATION command disables that improvement.


Only use the SETSYS DFHSMDATASETSERIALIZATION command if your environment requires it. Otherwise, it is recommended that you use the SETSYS USERDATASETSERIALIZATION command.

If you specify SETSYS DFHSMDATASETSERIALIZATION, then DFSMShsm issues a RESERVE command that prevents other processing units from accessing the volume while DFSMShsm is copying a data set during volume migration. To prevent system interlock, DFSMShsm releases the reserve on the volume to update the control data sets and the catalog. After the control data sets have been updated, DFSMShsm reads the data set VTOC entry for the data set that was migrated to ensure that no other processing unit has updated the data set while the control data sets were being updated. If the data set has not been updated, it is scratched. If the data set has been updated, DFSMShsm scratches the migration copy of the data set and again updates the control data sets and the catalog to reflect the current location of the data set. Multivolume non-VSAM data sets are not supported by this serialization option because of possible deadlock situations. For more information about volume reserve serialization, see “DFHSMDATASETSERIALIZATION” on page 265.

If you specify SETSYS USERDATASETSERIALIZATION, then serialization is maintained throughout the complete migration operation, including the scratch of the copy on the user volume. No other processing unit can update the data set while DFSMShsm is performing its operations, and no second read of the data set VTOC entry is required for checking. Also, because no volume is reserved while copying the data set, other data sets on the volume are accessible to users. Therefore, USERDATASETSERIALIZATION provides a performance advantage to DFSMShsm and users in those systems equipped to use it.

You may use SETSYS USERDATASETSERIALIZATION if:
v The data sets being processed are accessible to only a single z/OS image, even if you are running multiple DFSMShsm hosts in that single z/OS image, OR
v The data sets can be accessed from multiple z/OS images, and a serialization product such as GRS is active (such a product is required in a multiple-image environment).

Specifying the swap capability of the DFSMShsm address space

The SETSYS SWAP|NOSWAP command determines whether the DFSMShsm address space can be swapped out of real storage.

If you specify SETSYS SWAP, then the DFSMShsm address space can be swapped out of real storage.

If you specify SETSYS NOSWAP, then the DFSMShsm address space cannot be swapped out of real storage.

Guideline: The NOSWAP option is recommended. DFSMShsm always sets the option to NOSWAP when the ABARS secondary address space is active. In a multisystem environment, DFSMShsm also always sets the option to NOSWAP so that cross-system coupling facility (XCF) functions are available. See Chapter 13, “DFSMShsm in a sysplex environment,” on page 283 for the definition of a multisystem (or a sysplex) environment.

Specifying maximum secondary address space

The SETSYS MAXABARSADDRESSSPACE(number) command specifies the maximum number of aggregate backup and recovery secondary address spaces that DFSMShsm allows to process concurrently. The SETSYS ABARSPROCNAME(name) command specifies the name of the procedure that starts an ABARS secondary address space.
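For example, to allow two concurrent ABARS secondary address spaces started by the sample procedure from Figure 15 (the value 2 is illustrative):

SETSYS MAXABARSADDRESSSPACE(2)
SETSYS ABARSPROCNAME(DFHSMABR)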

Defining the DFSMShsm security environment for DFSMShsm-owned data sets

The SETSYS commands control the relationship of DFSMShsm to RACF and control the way DFSMShsm prevents unauthorized access to DFSMShsm-owned data sets. You can use the following SETSYS commands to define your security environment:
v How DFSMShsm determines the user ID when RACF is not installed and active.
v Whether to indicate that migration copies and backup versions of data sets are RACF protected.
v How DFSMShsm protects scratched data sets.

Figure 20 is an example of a typical DFSMShsm security environment.

/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DFSMSHSM SECURITY ENVIRONMENT*/
/***********************************************************************/
/*
SETSYS NOACCEPTPSCBUSERID
SETSYS NOERASEONSCRATCH
SETSYS NORACFIND
/*

Figure 20. Sample SETSYS Commands to Define the Security Environment for DFSMShsm

DFSMShsm maintains the security of those data sets that are RACF protected. DFSMShsm does not check data set security for:
v Automatic volume space management
v Automatic dump
v Automatic backup
v Automatic recall
v Operator commands entered at the system console
v Commands issued by a DFSMShsm-authorized user

DFSMShsm checks security for data sets when a user who is not DFSMShsm-authorized issues a nonauthorized user command (HALTERDS, HBDELETE, HMIGRATE, HDELETE, HBACKDS, HRECALL, or HRECOVER). Security checking is not done when DFSMShsm-authorized users issue the DFSMShsm user commands. If users are not authorized to manipulate data, DFSMShsm does not permit them to alter the backup parameters of a data set, delete backup versions, migrate data, delete migrated data, make backup versions of data, recall data sets, or recover data sets.

Authorization checking is done for the HCANCEL and CANCEL commands. However, the checking does not include security checking the user’s authority to access a data set. Whether a user has comprehensive or restricted command authority controls whether RACF authority checking is performed for each data set processed by the ABACKUP command. Refer to z/OS DFSMShsm Storage Administration for more information about authorization checking during aggregate backup.

Determining batch TSO user IDs

When a TSO batch job issues a DFSMShsm-authorized command, DFSMShsm must be able to verify the authority of the TSO user ID to issue the command. For authorization checking purposes when processing batch TSO requests, DFSMShsm obtains a user ID as follows:
v If RACF is active, the user ID is taken from the access control environment element (ACEE), a RACF control block.
v If RACF is not active and the SETSYS ACCEPTPSCBUSERID command has been specified, the user ID is taken from the TSO-protected step control block (PSCB). If no user ID is present in the PSCB, the user ID is set to **BATCH*. It is the installation’s responsibility to ensure that a valid user ID is present in the PSCB.
v If RACF is not active and the SETSYS ACCEPTPSCBUSERID command has not been specified, or if the default (NOACCEPTPSCBUSERID) has been specified, the user ID is set to **BATCH* for authorization checking purposes.

If you have RACF installed and active, specify NOACCEPTPSCBUSERID; RACF protects the resources, so the PSCB user ID has no relevance, and the parameter is included only for completeness. However, if your system does not have RACF installed and active, you should use ACCEPTPSCBUSERID.

The NOACCEPTPSCBUSERID parameter specifies how DFSMShsm determines the user ID for TSO submission of DFSMShsm-authorized commands in systems that do not have RACF installed and active.

Specifying whether to indicate RACF protection of migration copies and backup versions of data sets

When DFSMShsm migrates or backs up a data set, it can indicate that the copy is protected by a RACF discrete profile. Such a data set, when its indicator is on, is called RACF-indicated. RACF indication provides protection only for data sets that are RACF-indicated on the level 0 volume and it allows only the RACF security administrator to directly access the migration and backup copies.

For a non-VSAM data set, the RACF indicator is a bit in the volume table of contents (VTOC) of the DASD volume on which the data set resides.


For a VSAM data set, the RACF indicator is a bit in the catalog record. The indicator remains with the data set even if the data set is moved to another system. However, if the data set profile fails to move or is somehow lost, a RACF security administrator must take action before anyone can access the data set.

The SETSYS RACFIND|NORACFIND command determines whether DFSMShsm-owned data sets are RACF-indicated.

If you specify SETSYS RACFIND, then DFSMShsm sets the RACF indicator in the data set VTOC entry for migration copies and backup versions. The RACFIND option is recommended for systems that do not have an always-call environment, do not have generic profiles enabled, but do have RACF discrete data set profiles.

If you specify SETSYS NORACFIND, then DFSMShsm does not perform I/O operations to turn on the RACF indicator for migration copies and backup versions when RACF-indicated data sets are migrated and backed up to DASD.

Before specifying the SETSYS NORACFIND command, ensure that you:
v Define a generic profile for the prefixes of DFSMShsm-owned data sets
v Enable generic DATASET profiles

The preferred implementation is to create an environment in which you can specify the NORACFIND option. Generic profiles enhance DFSMShsm performance because DFSMShsm does not perform I/O operations to turn on the RACF-indicated bit.
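A hedged RACF sketch of that environment follows; the HSM prefix is illustrative and should match your MIGRATEPREFIX, BACKUPPREFIX, or UID values:

SETROPTS GENERIC(DATASET)
ADDSD 'HSM.**' UACC(NONE)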

For a discussion of RACF environments and profiles, refer to z/OS DFSMShsm Storage Administration.

Specifying security for scratched DFSMShsm-owned DASD data sets

Some data sets are so sensitive that you must ensure that DASD residual data cannot be accessed after they have been scratched. RACF has a feature to erase the space occupied by a data set when the data set is scratched from a DASD device. This feature, called erase-on-scratch, causes overwriting of the DASD residual data by data management when a data set is deleted.

If you specify SETSYS ERASEONSCRATCH, then erase-on-scratch processing is requested only for DFSMShsm-owned DASD data sets.

When the ERASEONSCRATCH parameter is in effect, DFSMShsm queries RACF for the erase status of the user’s data set for use with the backup version or the migration copy. If the erase status from the RACF profile is ERASE when the backup version or the migration copy is scratched, the DASD residual data is overwritten by data management. If the erase status from the RACF profile is NOERASE when the backup version or the migration copy is scratched, the DASD residual data is not overwritten by data management.


The ERASEONSCRATCH parameter has no effect on data sets on level 0 volumes on which the RACF erase attribute is supported. The ERASEONSCRATCH parameter allows the erase attribute to be carried over to migration copies and backup versions.

Note: Records making up a data set in a small-data-set-packing (SDSP) data set are not erased. Refer to z/OS DFSMShsm Storage Administration for information about small-data-set-packing data set security.

If you specify SETSYS NOERASEONSCRATCH, then no erase-on-scratch processing is requested for DFSMShsm-owned volumes.

Erase-on-scratch considerations

Before you specify the erase-on-scratch option for integrated catalog facility (ICF) cataloged VSAM data sets that have the ERASE attribute and have backup profiles, consider the following results:
v DFSMShsm copies of ICF cataloged VSAM data sets with the ERASE attribute indicated in the RACF profile are erased with the same erase-on-scratch support as for all other data sets. DFSMShsm does not migrate ICF cataloged VSAM data sets that have the ERASE attribute in the catalog record. The migration fails with a return code 99 and a reason code 2, indicating that the user can remove the ERASE attribute from the catalog record and can specify the attribute in the RACF profile to obtain DFSMShsm migration and erase-on-scratch support of the data set.
v ERASE status is obtained only from the original RACF profile. Backup profiles created by DFSMShsm (refer to z/OS DFSMShsm Storage Administration) are not checked. The original ERASE attribute is saved in the backup version (C) record at the time of backup and is checked at recovery time if the original RACF profile is missing.
v The records in an SDSP data set are not overwritten on recall even if the SETSYS ERASEONSCRATCH command has been specified. When a data set is recalled from an SDSP data set, the records are read from the control interval and returned as a data set to the level 0 volume. When migration cleanup is next performed, the VSAM erase process reformats the control interval but does not overwrite any residual data. Erase-on-scratch is effective for SDSP data sets only when the SDSP data set itself is scratched. Refer to z/OS DFSMShsm Storage Administration for a discussion of protecting small-data-set-packing data sets.

Defining data formats for DFSMShsm operations

Because DFSMShsm moves data between different device types with different device geometries, the format of data can change as it moves from one device to another.

There are three data formats for DFSMShsm operations:
v The format of the data on DFSMShsm-owned volumes
v The blocking of the data on DFSMShsm-owned DASD volumes
v The blocking of data sets that are recalled and recovered

You can control each of these format options by using SETSYS command parameters. The parameters control the data compaction option, the optimum DASD blocking option (see “Optimum DASD blocking option” on page 90), the use of the tape device improved data recording capability, and the conversion option. You can also use DFSMSdss dump COMPRESS for improved tape utilization. Refer to z/OS DFSMShsm Storage Administration for additional information about invoking full-volume dump compression. Figure 21 lists sample SETSYS commands for defining data formats.

/***********************************************************************/
/* SAMPLE DFSMSHSM DATA FORMAT DEFINITIONS                             */
/***********************************************************************/
/*
SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP NOTAPEBACKUP)
SETSYS COMPACTPERCENT(30)
SETSYS OBJECTNAMES(OBJECT,LINKLIB)
SETSYS SOURCENAMES(ASM,PROJECT)
SETSYS OPTIMUMDASDBLOCKING
SETSYS CONVERSION(REBLOCKTOANY)
SETSYS TAPEHARDWARECOMPACT
/*

Figure 21. Sample Data Format Definitions for a Typical DFSMShsm Environment

Data compaction option

The data compaction option can save space on migration and backup volumes by encoding each block of each data set that DFSMShsm migrates or backs up. DFSMShsm compacts data with the Huffman Frequency Encoding compaction algorithm. The compacted output blocks can vary in size. An input block consisting of many least-used EBCDIC characters can be even longer after being encoded. If this occurs, DFSMShsm passes the original data block without compaction to the output routine.

The SETSYS COMPACT command determines whether DFSMShsm compacts each block of data as the data set is backed up or migrated from a level 0 volume. Compaction or decompaction never occurs when a data set moves from one migration volume to another or from one backup volume to another. DFSMShsm does not compact data sets when they are migrated for extent reduction, when they are in compressed format, or during DASD conversion.

If you specify SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP NOTAPEBACKUP), then every block of data that migrates or is backed up to DASD is a candidate for compaction.

When DFSMShsm recalls or recovers a compacted data set, DFSMShsm automatically decodes and expands the data set. DFSMShsm decompacts encoded data even if you later run with SETSYS COMPACT(NONE).

If you do not want a specific data set to be compacted during volume migration or volume backup, invoke the data set migration exit (ARCMDEXT) or the data set backup exit (ARCBDEXT) to prevent compaction of that data set. For more information about the data set migration exit and the data set backup exit, refer to z/OS DFSMS Installation Exits.

Compaction tables

When choosing an algorithm for compacting a data set, DFSMShsm selects either the unique source or object compaction table or selects the default general compaction table. You can identify data sets that you want to compact with unique source or object compaction tables by specifying the low-level qualifiers for those data sets when you specify the SETSYS SOURCENAMES and SETSYS OBJECTNAMES commands.

For generation data groups, DFSMShsm uses the next-to-the-last qualifier of the data set name. DFSMShsm uses the same compaction table for all blocks in each data set. The source compaction table is designed to compact data sets that contain programming language source code. The object compaction table is designed to compact data sets containing object code and is based on an expected frequency distribution of byte values.
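For example, with the commands below (taken from Figure 21; the data set names are hypothetical illustrations), DFSMShsm would compact USERA.PAYROLL.ASM with the source table and USERB.UTIL.LINKLIB with the object table, because their low-level qualifiers match entries in SOURCENAMES and OBJECTNAMES. A generation data set such as USERA.REPORT.ASM.G0001V00 would also match ASM, because DFSMShsm checks the next-to-the-last qualifier:

SETSYS SOURCENAMES(ASM,PROJECT)
SETSYS OBJECTNAMES(OBJECT,LINKLIB)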

Compaction percentage

When compacting a data set during migration or backup, DFSMShsm keeps a running total of the number of bytes in each compacted block that is written to the migration or backup volume. DFSMShsm also keeps a running total of the number of bytes that were in the blocks before compaction. With these values, DFSMShsm determines the space savings value, expressed as a percentage.

                Total Bytes Before Compaction - Total Bytes After Compaction
Space Savings = ------------------------------------------------------------- x 100
                              Total Bytes Before Compaction

DFSMShsm uses the space savings percentage to determine whether it should compact recalled or recovered data sets when they are subsequently backed up or migrated again. You specify this space savings percentage with the SETSYS COMPACTPERCENT command.

In addition, compaction must save at least one track on the DASD migration or backup volume; otherwise, the data set is not eligible for compaction when it is subsequently migrated or backed up.
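As a worked example (the numbers are illustrative only): if a data set's blocks total 1,000,000 bytes before compaction and 650,000 bytes after, the space savings is (1,000,000 - 650,000) / 1,000,000 x 100 = 35%. With SETSYS COMPACTPERCENT(30) in effect, as in the samples in this section, 35% exceeds 30%, so the data set remains a candidate for compaction when it is subsequently migrated or backed up, provided compaction also saves at least one track.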

Note: For SDSP data sets, DFSMShsm considers only the space savings percentage, because small-data-set packing is intended for small user data sets where the space savings is typically less than a track.

If you specify . . .
   SETSYS COMPACT(DASDMIGRATE | TAPEMIGRATE)
Then . . .
   DFSMShsm compacts each record of a data set on a level 0 volume the first time it migrates the data set. During subsequent migrations from level 0 volumes (as a result of recall), DFSMShsm performs additional compaction of the data set only if the percentage of space savings (as indicated from the original backup or migration) exceeds the value specified with the SETSYS COMPACTPERCENT command.

If you specify . . .
   SETSYS COMPACT(DASDBACKUP | TAPEBACKUP)
Then . . .
   DFSMShsm compacts each record of a data set on a level 0 volume the first time it backs up the data set. During subsequent backups (as a result of recovery), DFSMShsm performs additional compaction of the data set only if the percentage of space savings (as indicated by the original backup) exceeds the value specified with the SETSYS COMPACTPERCENT command.

DFSMShsm stores the space savings percentage in the MCDS (MCD record) or the BCDS (MCB record). If the MCD or MCB record is deleted (for example, during migration cleanup or expiration of backup versions), the previously recorded compaction savings is lost and cannot affect whether DFSMShsm compacts the data set during subsequent migrations or backups.

Compaction considerations

Data sets sometimes exist on volumes in a format (compacted or uncompacted) that seems to conflict with the type of compaction specified with the SETSYS command. The following examples illustrate how this occurs.

DFSMShsm compacts data sets only when it copies them onto a DFSMShsm-owned volume from a level 0 volume.

If you specify . . .
   SETSYS TAPEMIGRATION(ML2TAPE) and SETSYS COMPACT(DASDMIGRATE TAPEMIGRATE)
Then . . .
   DFSMShsm compacts data sets that migrate from level 0 volumes, whether they migrate to DASD or directly to migration level 2 tape. DFSMShsm retains the compacted form when it moves data sets from migration level 1 DASD to migration level 2 tape.

If you specify . . .
   SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE)
Then . . .
   DFSMShsm places both compacted and uncompacted data sets on migration level 2 tapes.

If you specify . . .
   SETSYS COMPACT(DASDMIGRATE)
Then . . .
   DFSMShsm compacts any data set migrating to migration level 1 DASD (or migration level 2 DASD, if DASD is used for ML2 volumes).

If you specify . . .
   SETSYS COMPACT(NOTAPEMIGRATE)
Then . . .
   DFSMShsm does not compact data sets that migrate from level 0 volumes directly to migration level 2 tapes. However, data sets migrating from level 1 volumes to level 2 tapes remain compacted; therefore, both compacted and uncompacted data sets can be on the tape.

Similarly, if you are not compacting data sets that migrate to DASD and you are compacting data sets that migrate directly to tape, both compacted and uncompacted data sets can migrate to level 2 tapes. The uncompacted data sets occur because the data sets are not compacted when they migrate to the migration level 1 DASD and the compaction is not changed when they later migrate to a migration level 2 tape. However, data sets migrating directly to tape are compacted.

If you specify . . .
   SETSYS TAPEMIGRATION(DIRECT)
Then . . .
   The DASDMIGRATE or NODASDMIGRATE subparameter of the SETSYS COMPACT command has no effect on DFSMShsm processing.

You can also have mixed compacted and uncompacted backup data sets and they, too, can be on either DASD or tape.

If you specify compaction for data sets backed up to DASD but no compaction for migrated data sets, any data set that migrates when it needs to be backed up is uncompacted when it is backed up from the migration volume.

Similarly, if you specify compaction for migrated data sets but no compaction for backed up data sets, a data set that migrates when it needs to be backed up migrates in compacted form. When the data set is backed up from the migration volume, it is backed up in its compacted form even though you specified no compaction for backup.

Data sets that are backed up to DASD volumes retain their compaction characteristic when they are spilled to tape. Thus, if you are not compacting data sets backed up to tape but you are compacting data sets backed up to DASD, you can have both compacted and uncompacted data sets on the same tapes. Data sets that are compacted and backed up to tape, likewise, can share tapes with uncompacted data sets that were backed up to DASD.

Optimum DASD blocking option

Each DASD device has an optimum block size for storing the maximum amount of DFSMShsm data on each track. When DFSMShsm stores data on its owned DASD devices, the default block size is determined by the device type of each DFSMShsm-owned DASD device, ensuring that the maximum amount of data is stored on each track. For example, all models of 3390 DASD have the same track length and therefore the same optimum block size of 18KB (1KB equals 1024 bytes).

If you specify (not recommended) . . .
   SETSYS NOOPTIMUMDASDBLOCKING
Then . . .
   DFSMShsm uses a block size of 2KB for storing data on its owned DASD.

Data set reblocking

The purpose of reblocking is to make the most efficient use of available space.

If you specify . . .
   SETSYS CONVERSION(REBLOCKTOANY)
Then . . .
   DFSMShsm allows reblocking during recall or recovery to any device type supported by DFSMShsm, including target volumes of the same type as the source volume. This is the only parameter used by DFSMSdss.

Defining DFSMShsm reporting and monitoring

DFSMShsm produces information that can make the storage administrator, the operator, and the system programmer aware of what is occurring in the system.

This information is provided in the form of activity logs, system console output, and entries in the System Management Facility (SMF) logs. You can specify a SETSYS command to control:
v The information that is stored in the activity logs
v The device type for the activity logs
v The messages that appear on the system console
v The type of output device for listings and reports
v Whether entries are made in the SMF logs

Figure 22 on page 91 is an example of the SETSYS commands that define a typical DFSMShsm reporting and monitoring environment.

/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE A TYPICAL DFSMSHSM REPORTING     */
/* AND MONITORING ENVIRONMENT                                          */
/***********************************************************************/
/*
SETSYS ACTLOGMSGLVL(EXCEPTIONONLY)
SETSYS ACTLOGTYPE(DASD)
SETSYS MONITOR(BACKUPCONTROLDATASET(80) -
       JOURNAL(80) -
       MIGRATIONCONTROLDATASET(80) -
       OFFLINECONTROLDATASET(80) -
       NOSPACE NOSTARTUP NOVOLUME)
SETSYS SYSOUT(A 1)
SETSYS SMF
/*

Figure 22. Sample Reporting and Monitoring Environment Definition for Typical DFSMShsm Environment

The activity logs are discussed in detail in Chapter 3, “DFSMShsm data sets,” on page 9.

Controlling messages that appear on the system console

You can control the types of messages that appear at the system console by selecting the options for the SETSYS MONITOR command.

If you specify . . .
   SETSYS MONITOR(MIGRATIONCONTROLDATASET(threshold))
   SETSYS MONITOR(BACKUPCONTROLDATASET(threshold))
   SETSYS MONITOR(OFFLINECONTROLDATASET(threshold))
   SETSYS MONITOR(JOURNAL(threshold))
Then . . .
   DFSMShsm notifies the operator when the corresponding control data set or the journal is becoming full. You specify the threshold (percentage) of the allocated data set space that triggers a message.

If you specify . . .
   SETSYS MONITOR(NOSPACE)
Then . . .
   DFSMShsm does not issue volume space usage messages.

If you specify . . .
   SETSYS MONITOR(NOSTARTUP)
Then . . .
   DFSMShsm does not issue informational messages for startup progress.

If you specify . . .
   SETSYS MONITOR(NOVOLUME)
Then . . .
   DFSMShsm does not issue messages about data set activity on volumes it is processing.

For more information about the SETSYS command, see z/OS DFSMShsm Storage Administration.

Controlling the output device for listings and reports

The SYSOUT parameter controls where lists and reports are printed if the command that causes the list or report does not specify where it is to be printed. The default for this parameter is SYSOUT(A 1).
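For example, assuming (as the SYSOUT(A 1) default suggests) that the first value is the output class and the second is the number of copies, a site that routes DFSMShsm listings to a hypothetical class H and wants two copies of each report could specify:

SETSYS SYSOUT(H 2)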

Controlling entries for the SMF logs

You determine whether DFSMShsm writes System Management Facility (SMF) records to the SYS1.MANX and SYS1.MANY system data sets when you specify the SETSYS SMF or SETSYS NOSMF commands.

If you specify . . .
   SETSYS SMF
Then . . .
   DFSMShsm writes daily statistics records, function statistics records, and volume statistics records to the SYS1.MANX and SYS1.MANY system data sets.

If you specify . . .
   SETSYS NOSMF
Then . . .
   DFSMShsm does not write daily statistics records (DSRs), function statistics records (FSRs), or volume statistics records (VSRs) to the system data sets. For the formats of the DSR, FSR, and VSR records, see z/OS DFSMShsm Diagnosis.

Defining the tape environment

Chapter 10, “Implementing DFSMShsm tape environments,” on page 189, contains information about setting up your tape environment, including discussions of SMS-managed tape libraries, tape management policies, device management policies, and performance management policies.

Defining the installation exits that DFSMShsm invokes

You determine the installation exits that DFSMShsm invokes when you specify the SETSYS EXITON or SETSYS EXITOFF commands. The installation exits can be dynamically loaded at startup by specifying them in your ARCCMDxx member in a PARMLIB.

Note: Examples of the DFSMShsm installation exits can be found in SYS1.SAMPLIB.

If you specify . . .
   SETSYS EXITON(exit,exit,exit)
Then . . .
   The specified installation exits are immediately loaded and activated.

If you specify . . .
   SETSYS EXITOFF(exit,exit,exit)
Then . . .
   The specified installation exits are immediately disabled and the storage is freed.

If you specify . . .
   SETSYS EXITOFF(exit1), modify and link-edit exit1, and then specify SETSYS EXITON(exit1)
Then . . .
   DFSMShsm replaces the original exit1 with the newly modified exit1.
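For example, the following sequence (a sketch that uses the CD exit identifier for ARCCDEXT, the only exit activated in the samples later in this section) replaces a modified exit without stopping DFSMShsm:

SETSYS EXITOFF(CD)
/* MODIFY AND LINK-EDIT THE ARCCDEXT LOAD MODULE */
SETSYS EXITON(CD)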

z/OS DFSMS Installation Exits describes the installation exits and what each exit accomplishes.

Controlling DFSMShsm control data set recoverability

The DFSMShsm journal data set records any activity that occurs to the DFSMShsm control data sets. By maintaining a journal, you ensure that damaged control data sets can be recovered by processing the journal against the latest backup copies of the control data sets.

If you specify . . .
   SETSYS JOURNAL(RECOVERY)
Then . . .
   DFSMShsm waits until the journal entry has been written to the journal before it updates the control data sets and continues processing.

If you specify . . .
   SETSYS JOURNAL(SPEED)
Then . . .
   DFSMShsm continues with its processing as soon as the journaling request has been added to the journaling queue. (Not recommended.)


For examples of data loss and recovery situations, refer to z/OS DFSMShsm Storage Administration.

Defining migration level 1 volumes to DFSMShsm

Whether you are implementing space management or availability management, you need migration level 1 volumes. Migration processing requires migration level 1 volumes as targets for data set migration. Backup processing requires migration level 1 volumes to store incremental backup and dump VTOC copy data sets. They may also be used as intermediate storage for data sets that are backed up by data set command backup.

Fast Replication backup requires migration level 1 volumes to store Catalog Information Data Sets. They may also be used as intermediate storage for data sets that are backed up by data set command backup.

Ensure that you include the ADDVOL command specifications for migration level 1 volumes in the ARCCMDxx member located in a PARMLIB, so that DFSMShsm recognizes the volumes at each startup. If ADDVOL commands for migration level 1 volumes are not in the ARCCMDxx member, DFSMShsm does not recognize that they are available unless you issue an ADDVOL command at the terminal for each migration level 1 volume. Figure 23 shows sample ADDVOL commands for adding migration level 1 volumes to DFSMShsm control.

/***********************************************************************/
/* SAMPLE ADDVOL COMMANDS FOR ADDING MIGRATION LEVEL 1 VOLUMES TO      */
/* DFSMSHSM CONTROL                                                    */
/***********************************************************************/
/*
ADDVOL ML1001 UNIT(3390) -
   MIGRATION(MIGRATIONLEVEL1 -
   SMALLDATASETPACKING) THRESHOLD(90)
ADDVOL ML1002 UNIT(3390) -
   MIGRATION(MIGRATIONLEVEL1 -
   SMALLDATASETPACKING) THRESHOLD(90)
ADDVOL ML1003 UNIT(3390) -
   MIGRATION(MIGRATIONLEVEL1 -
   NOSMALLDATASETPACKING) THRESHOLD(90)

Figure 23. Example ADDVOL Commands for Adding Migration Level 1 Volumes to DFSMShsm Control

Parameters for the migration level 1 ADDVOL commands

The following example shows the parameters used with the MIGRATIONLEVEL1 parameter:

ADDVOL ▌1▐ML1001 ▌2▐UNIT(3390) -
   ▌3▐MIGRATION(MIGRATIONLEVEL1 -
   SMALLDATASETPACKING) THRESHOLD(90)

v ▌1▐ - The first parameter of the ADDVOL command is a required positional parameter that specifies the volume serial number of the volume being added to DFSMShsm. In Figure 23, migration level 1 volumes are identified by volume serial numbers that start with ML1.
v ▌2▐ - The second parameter of the ADDVOL command is a required parameter that specifies the unit type of the volume. For our example, all ML1 volumes are 3390s.
v ▌3▐ - The third parameter is a required parameter that specifies that the volume is being added as a migration volume. This parameter has subparameters that specify the kind of migration volume and the presence of a small-data-set-packing (SDSP) data set on the volume. If you specify SMALLDATASETPACKING, the volume must contain a VSAM key-sequenced data set to be used as the SDSP data set. See “DFSMShsm small-data-set-packing data set facility” on page 51 for details about how to allocate the SDSP data set. The number of SDSP data sets defined must be at least equal to the maximum number of concurrent volume migration tasks that could be executing in your complex. Additional SDSPs are recommended for recall processing and ABARS processing, and in case some SDSPs become full during migration.

v The THRESHOLD parameter in our ADDVOL command examples specifies the level of occupancy that signals the system to migrate data sets from migration level 1 volumes to migration level 2 volumes. If you want DFSMShsm to do automatic migration from level 1 to level 2 volumes, you must specify the occupancy thresholds for the migration level 1 volumes.

Note:
1. Automatic secondary space management determines whether to perform level 1 to level 2 migration by checking whether any migration level 1 volume has an occupancy that is equal to or greater than its threshold. DFSMShsm migrates all eligible data sets from all migration level 1 volumes to migration level 2 volumes.
2. If the volume is being defined as a migration level 1 OVERFLOW volume, the threshold parameter is ignored. Use the SETSYS ML1OVERFLOW(THRESHOLD(nn)) command to specify the threshold for the entire OVERFLOW volume pool.
3. If you are adding volumes in an HSMplex environment, and those volumes will be managed by each host in the HSMplex, be sure to issue the ADDVOL command on each system that will manage the volume.

For more information about level 1 to level 2 migration, see z/OS DFSMShsm Storage Administration.

In specifying the threshold parameter, you want to maintain equal free space on all of your migration level 1 volumes. If you use different device types for migration level 1 volumes, you must calculate the appropriate percentages that will make the same amount of free space available on each device type. For example, if you have a mixture of 3390 models 1 and 2, you might specify 88% for model 1 (92M) and 94% for model 2 (96M).

Using migration level 1 OVERFLOW volumes for migration and backup

An optional OVERFLOW parameter of the ADDVOL command lets you specify that OVERFLOW volumes are to be considered for backup or migration to migration level 1 when both of the following are true:
v The data set you are migrating or backing up is larger than a given size, as specified on the SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) command.
v DFSMShsm cannot allocate enough space on a NOOVERFLOW volume by selecting either the least used volume or the volume with the most free space.

Note that DFSMShsm will use OVERFLOW ML1 volumes for the following backup functions:
v Inline backup
v HBACKDS and BACKDS commands
v ARCHBACK macro for data sets larger than dssize K bytes

You can specify the OVERFLOW parameter as follows:

ADDVOL ML1003 UNIT(3390) -
   MIGRATION(MIGRATIONLEVEL1 OVERFLOW)

Related reading: For more information about the ADDVOL command and the SETSYS command, see z/OS DFSMShsm Storage Administration.

User or system data on migration level 1 volumes

Migration level 1 volumes, once defined to DFSMShsm, are known and used as DFSMShsm-owned volumes. That expression implies, among other things, that when DFSMShsm starts using such a volume, it determines the space available and creates its own record of that free space. For reasons of performance, DFSMShsm maintains that record as it creates and deletes its own migration copies, backup versions, and so on; DFSMShsm does not keep scanning the VTOC to see what other data sets may have been added or deleted.

Restrictions: Your installation can store certain types of user or system data on migration level 1 volumes as long as you keep the following restrictions in mind:
v Such data sets cannot be SMS-managed, because these volumes cannot be SMS-managed.
v Once such a data set is allocated, do not change its size during a DFSMShsm startup.
v Do not request DFSMShsm to migrate or (except perhaps as part of a full dump of such a volume) back up such data sets.

Given that you maintain these restrictions, you can gain certain advantages by sharing these volumes with non-DFSMShsm data:
v A given amount of level 1 storage for DFSMShsm can be spread across more volumes, reducing volume contention.
v Because only one SDSP data set can be defined per volume, the number of such data sets can be increased.

Defining the common recall queue environment

DFSMShsm supports an HSMplex-wide common recall queue (CRQ). This CRQ balances recall workload across the HSMplex. This queue is implemented through the use of a coupling facility (CF) list structure. For an overview of the CRQ environment, refer to the z/OS DFSMShsm Storage Administration.
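For example, a host joins (and, if it is the first to connect, causes allocation of) a CRQ whose five-character base name is TEST1 with the following command, which also appears in the sample commands under “Defining common SETSYS commands” later in this chapter:

SETSYS COMMONQUEUE(RECALL(CONNECT(TEST1)))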

Updating the coupling facility resource manager policy for the common recall queue

The CRQ function requires that an HSMplex resides in a Parallel Sysplex® configuration. To fully utilize this function, allocate the list structure in a CF that supports the system-managed duplexing rebuild function. Before DFSMShsm can use the common recall queue, the active coupling facility resource management (CFRM) policy must be updated to include the CRQ definition. You can use the following information (see Table 7 on page 96) to define the CRQ and update the CFRM policy:

Table 7. Information that can be used to define the CRQ and update the CFRM policy

Requirements:
v The structure name that must be defined in the active CFRM policy is SYSARC_basename_RCL, where basename is the base name specified in SETSYS COMMONQUEUE(RECALL(CONNECT(basename))). basename must be exactly five characters.
v The minimum CFLEVEL is eight. If the installation indicates that the structure must be duplexed, the system attempts to allocate the structure on a CF with a minimum of CFLEVEL=12.
v DFSMShsm does not specify size parameters when it connects to the CRQ. Size parameters must be specified in the CFRM policy. Refer to “Determining the structure size of the common recall queue” for a list of recommended structure sizes and the maximum number of concurrent recalls.
v Because the list structure implements locks, the CF maintains an additional cross-system coupling facility (XCF) group in relation to this structure. Make sure that your XCF configuration can support the addition of another group.

Recommendations:
v When implementing a CRQ environment, all hosts sharing a unique queue should be within the same SMSplex, have access to the same catalogs and DASD, and have common RACF configurations. The system administrator must ensure that all hosts connected to the CRQ are capable of recalling any migrated data set that originated from any other host that is connected to the same CRQ.
v Nonvolatility is recommended, but not required. For error recovery purposes, each host maintains a local copy of each recall MWE that it places on the CRQ.
v CF failure independence is strongly recommended. For example, do not allocate the CRQ in a CF that is on the same processor as a z/OS image running a DFSMShsm host that uses that CRQ.

Useful information:
v Each CRQ is contained within a single list structure.
v A host can only connect to one CRQ at a time.
v DFSMShsm supports the alter function, including RATIO alters, and system-managed rebuilds. DFSMShsm does not support user-managed rebuilds. Note: System-managed rebuilds do not support the REBUILDPERCENT option.
v DFSMShsm supports the system-managed duplexing rebuild function.
v The CRQ is a persistent structure with nonpersistent connections. The structure remains allocated even if all connections have been deleted.

Determining the structure size of the common recall queue

The common recall queue needs to be sized such that it can contain the maximum number of concurrent recalls that may occur. Due to the dynamic nature of recall activity, there is no exact way to determine what the maximum number of concurrent recall requests may be.

Guideline: Use an INITSIZE value of 5120KB and a SIZE value of 10240KB.


A structure of this initial size is large enough to manage up to 3900 concurrent recall requests, with growth up to 8400 concurrent recalls. These values should be large enough for most environments. Table 8 shows the maximum number of recalls that can be contained in structures of various sizes. No structure of less than 2560KB should be used.
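The following job is a minimal sketch of a CFRM policy update that defines a CRQ structure with these guideline sizes. The policy name POLICY1, the coupling facility names CF01 and CF02, and the base name TEST1 (which yields the structure name SYSARC_TEST1_RCL) are illustrative assumptions, not values this manual prescribes:

//DEFCRQ   JOB ,'CFRM POLICY UPDATE'
//STEP1    EXEC PGM=IXCMIAPU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(POLICY1) REPLACE(YES)
    STRUCTURE NAME(SYSARC_TEST1_RCL)
              INITSIZE(5120)
              SIZE(10240)
              PREFLIST(CF01,CF02)
/*

The new policy must then be activated (for example, with SETXCF START,POLICY,TYPE=CFRM,POLNAME=POLICY1) before DFSMShsm connects to the structure.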

Note: The maximum number of recall requests that can be contained within a structure depends on the number of requests that are from a unique migration level 2 tape. The figures shown in Table 8 are based on 33% of the recall requests requiring a unique ML2 tape. If fewer tapes are needed, the structure will be able to contain more recall requests than is indicated.

Table 8. Maximum Concurrent Recalls

Structure Size    Maximum Concurrent Recalls
2560KB            1700
5120KB            3900
10240KB           8400
15360KB           12900

Recognize that the utilization percentage of the common recall queue will be low most of the time, because the average number of concurrent requests is much lower than the maximum number of concurrent requests. To be prepared for a high volume of unexpected recall activity, the common recall queue structure size must be larger than the size needed to contain the average number of recall requests.

Altering the list structure size

DFSMShsm monitors how full a list structure has become. When the structure becomes 95% full, DFSMShsm no longer places recall requests onto the CRQ, but routes all new requests to the local queues. Routing recall requests to the CRQ resumes once the structure drops below 85% full. The structure is not allowed to become 100% full so that requests that are in-process can be moved between lists within the structure without failure. When the structure reaches maximum capacity, the storage administrator can increase the size by altering the structure to a larger size or by rebuilding it. A rebuild must be done if the maximum size has already been reached. (The maximum size limit specified in the CFRM policy must be increased before the structure is rebuilt). You can use the CF services structure full monitoring feature to monitor the structure utilization of the common recall queue.
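To watch how full the structure is becoming, you can display it with the standard XCF operator command; the structure name here assumes the TEST1 base name used in the samples in this section:

D XCF,STR,STRNAME=SYSARC_TEST1_RCL

The display includes the structure size and entry usage, which you can compare against the 85% and 95% thresholds described above.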

How to alter the common recall queue list structure size

Initiate alter processing using the SETXCF START,ALTER command. Altering is a nondisruptive method for changing the size of the list structure. Alter processing can increase the size of the structure up to the maximum size specified in the CFRM policy. The SETXCF START,ALTER command can also decrease the size of a structure to the specified MINSIZE, or default to the value specified in the CFRM policy.
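For example, assuming the structure name SYSARC_TEST1_RCL from the samples in this section, the following operator command requests that the structure be altered to 10240KB (a value that must not exceed the SIZE specified in the CFRM policy):

SETXCF START,ALTER,STRNAME=SYSARC_TEST1_RCL,SIZE=10240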

How to rebuild the common recall queue list structure size

DFSMShsm supports the system-managed duplexing rebuild function. DFSMShsm does not support user-managed rebuilds.


Note: The coupling facility auto rebuild function does not support the use of REBUILDPERCENT. If the system rebuild function is not available because the structure was not allocated on a coupling facility that supports it, and you need to increase the maximum size of the structure or remedy a number of lost connections, you must reallocate the structure.

Perform the following steps to reallocate the structure:
1. Disconnect all the hosts from the structure using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command.
2. Deallocate the structure using the SETXCF FORCE command.
3. Reallocate the structure using the SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command.

Rule: If the intent of the rebuild is to increase the maximum structure size, you must update the CFRM policy before you perform the above steps.
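Put together, the reallocation sequence might look like the following sketch, again assuming the TEST1 base name. The SETSYS commands are DFSMShsm commands (the DISCONNECT must be issued on every connected host); the SETXCF FORCE is an operator command:

SETSYS COMMONQUEUE(RECALL(DISCONNECT))
SETXCF FORCE,STRUCTURE,STRNAME=SYSARC_TEST1_RCL
SETSYS COMMONQUEUE(RECALL(CONNECT(TEST1)))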

Defining the common dump queue environment

DFSMShsm supports an HSMplex-wide common dump queue (CDQ). With CDQ, dump requests are distributed to a group of hosts for processing. This increases the number of available tasks to perform the work and improves throughput by distributing the workload rather than concentrating it on a single host's address space. For an overview of the CDQ environment and a description of how to define it, refer to Common dump queue in z/OS DFSMShsm Storage Administration.


Defining the common recover queue environment

DFSMShsm supports an HSMplex-wide common recover queue (CVQ). With CVQ, volume restore requests are distributed to a group of hosts for processing. This increases the number of available tasks to perform the work and improves throughput by distributing the workload instead of concentrating it on a single host's address space. For an overview of the CVQ environment and how to define it, refer to Common recover queue in z/OS DFSMShsm Storage Administration.

Defining common SETSYS commands

The following example shows typical SETSYS commands for an example system. Each of the parameters in the example can be specified as a separate SETSYS command, with the cumulative effect of a single SETSYS command. This set of SETSYS commands becomes part of the ARCCMDxx member pointed to by the DFSMShsm startup procedure.

/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE MVS ENVIRONMENT             */
/***********************************************************************/
/*
SETSYS JES2
SETSYS CSALIMITS(MAXIMUM(100) ACTIVE(90) INACTIVE(30) MWE(4))
SETSYS NOREQUEST
SETSYS USERDATASETSERIALIZATION
SETSYS NOSWAP
SETSYS MAXABARSADDRESSSPACE(1)
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DFSMSHSM SECURITY           */
/***********************************************************************/
/*
SETSYS NOACCEPTPSCBUSERID
SETSYS NOERASEONSCRATCH
SETSYS NORACFIND
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE DATA FORMATS                */
/***********************************************************************/
/*
SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP NOTAPEBACKUP)
SETSYS COMPACTPERCENT(30)
SETSYS OBJECTNAMES(OBJECT,LINKLIB)
SETSYS SOURCENAMES(ASM,PROJECT)
SETSYS OPTIMUMDASDBLOCKING
SETSYS CONVERSION(REBLOCKTOANY)
SETSYS TAPEHARDWARECOMPACT
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE DFSMSHSM REPORTING AND          */
/* MONITORING ENVIRONMENT                                             */
/***********************************************************************/
/*
SETSYS ACTLOGMSGLVL(EXCEPTIONONLY)
SETSYS ACTLOGTYPE(DASD)
SETSYS MONITOR(BACKUPCONTROLDATASET(80) -
       JOURNAL(80) -
       MIGRATIONCONTROLDATASET(80) -
       OFFLINECONTROLDATASET(80) -
       NOSPACE NOSTARTUP NOVOLUME)
SETSYS SYSOUT(A 1)
SETSYS SMF
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DEFINE THE EXITS DFSMSHSM USES         */
/***********************************************************************/
/*
SETSYS EXITON(CD)
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMANDS THAT DETERMINE DFSMSHSM RECOVERABILITY      */
/***********************************************************************/
/*
SETSYS JOURNAL(RECOVERY)
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMAND TO CONNECT TO A COMMON RECALL QUEUE LIST     */
/* STRUCTURE. TEST1 IS THE BASE NAME OF THE CRQ LIST STRUCTURE        */
/***********************************************************************/
/*
SETSYS COMMONQUEUE(RECALL(CONNECT(TEST1)))
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMAND THAT SPECIFIES DATA SET SIZE AT WHICH AN     */
/* OVERFLOW VOLUME IS PREFERRED FOR MIGRATION OR BACKUP               */
/***********************************************************************/
/*
SETSYS ML1OVERFLOW(DATASETSIZE(2000000))
/*
/***********************************************************************/
/* SAMPLE SETSYS COMMAND THAT SPECIFIES THE THRESHOLD OF ML1 OVERFLOW */
/* VOLUME POOL SPACE FILLED BEFORE MIGRATION TO ML2 DURING SECONDARY  */
/* SPACE MANAGEMENT                                                   */
/***********************************************************************/
/*
SETSYS ML1OVERFLOW(THRESHOLD(80))
/*

