
Tips on how to deploy

IBM Tivoli Asset Discovery for z/OS

TADz Deployment Tips 08 July 2010 Page 1 of 89

Table of Contents

1 Introduction ............................................................ 4
1.1 TADz value overview ................................................... 4
1.2 TADz architecture overview ............................................ 5
2 Setting up TADz in Test ................................................. 7
2.1 Kick Off meeting ...................................................... 7
2.2 Setting up the infrastructure needed to run TADz components ........... 7
2.2.1 Downloading TADz from ShopzSeries ................................... 8
2.2.2 SMPE install TADz ................................................... 9
2.2.3 Registering for TADz support notifications ......................... 11
2.2.4 z/OS customization ................................................. 12
2.2.5 Security ........................................................... 13
2.2.6 DB2 for z/OS customization ......................................... 15
2.2.7 Tivoli Common Reporting (TCR) install .............................. 16
2.3 TADz database install ................................................ 19
2.3.1 Creating TADz JCLLIB & PARMLIB data sets with HSISCUST ............. 19
2.3.2 Creating a TADz database ........................................... 24
2.4 TADz TCR Report Package install ...................................... 25
2.4.1 Downloading the TADz TCR Report Package ............................ 25
2.4.2 Importing the TADz TCR Report Package into TCR ..................... 25
2.4.3 Defining the TADz TCR report Data Source ........................... 27
2.5 Getting familiar with TADz components ................................ 34
2.5.1 TADz Inquisitor .................................................... 34
2.5.2 TADz Inquisitor Data Import ........................................ 36
2.5.3 TADz Match Engine .................................................. 37
2.5.4 TADz Load to Repository ............................................ 40
2.5.5 TADz USS Inquisitor, Import, Match, and Load to Repository ......... 42
2.5.6 TADz Usage Monitor ................................................. 42
2.5.7 TADz Usage Data Import ............................................. 44
2.5.8 TADz SCRT Data Import .............................................. 45
2.5.9 TADz TCR reports ................................................... 46
3 Preparing TADz for Production .......................................... 50
3.1 Starting the roll out of TADz remote components ...................... 50
3.1.1 Change Control ..................................................... 50
3.1.2 z/OS customization ................................................. 50
3.1.3 Security ........................................................... 50
3.2 Populating the Test TADz database with Production data ............... 50
3.2.1 Resetting the TADz Repository ...................................... 50
3.2.2 Considerations for large customers ................................. 51
3.2.3 Downloading the latest Global Knowledge Base (GKB) ................. 53
3.2.4 Inventory data ..................................................... 55
3.2.5 Usage data ......................................................... 57
3.2.6 Browsing raw Usage and Inventory data .............................. 61
3.3 Verifying the TADz data quality ...................................... 63
3.3.1 Verifying Usage data from all LPARs that are populated ............. 63
3.3.2 Verifying IQ data for all Inventories that are populated ........... 63
3.3.3 Verifying product identification quality ........................... 63
3.3.4 Asset Manager review of TADz Asset Reports ......................... 73
4 Setting up TADz in Production .......................................... 75
4.1 z/OS customization ................................................... 75
4.2 Creating a TADz database on a Production/Development DB2 subsystem ... 75
4.3 Copying the Test TADz database to Production/Development ............. 79
4.4 Automating Usage Monitoring .......................................... 81
4.5 Automating Usage Import .............................................. 81
4.5.1 Weekly Usage import ................................................ 81
4.5.2 Daily Usage Import ................................................. 85
4.6 Automating the Match Engine .......................................... 88
4.7 Automating SCRT Import ............................................... 88
4.8 Database Housekeeping ................................................ 88
4.8.1 Database backups ................................................... 88
4.8.2 Database REORG for performance ..................................... 88
4.8.3 TADz Usage Deletion / Summary jobs ................................. 88

1 Introduction

This document contains tips for deploying IBM Tivoli Asset Discovery for z/OS v7.2.

In the rest of this document, the term TADz refers to IBM Tivoli Asset Discovery for z/OS v7.2.

** IMPORTANT ** The first tip is the Table of Contents itself: the structure of the sections reflects the recommended approach for deploying TADz.

This document supplements the information in the Tivoli Asset Discovery for z/OS Administration and Reference manual, which can be downloaded from here: http://publib.boulder.ibm.com/infocenter/tivihelp/v29r1/topic/com.ibm.tivoli.tad4z.doc_old/welcome1.html

If you would like further clarification on what is explained in this document, send an e-mail to [email protected].

1.1 TADz value overview

z/OS products are shared by many users and business units. Consequently, it can be difficult to manage support and license costs effectively. Without TADz, sites have to rely on educated guesswork and local experience to tackle the challenges of managing this shared environment. This approach is particularly ineffective for Service Providers and companies that have made acquisitions. For example, if you don't know what is "normal" in the newly acquired environment, it is hard to keep the environment stable, and even harder to obtain savings by consolidating resources.

The key to effectively managing your z/OS support and license costs is being able to see inside of this shared environment and accurately determine:

 Which products are installed (versions and releases).

 Where the products are installed (machines, LPARs, and libraries).

 Who is using the products (user IDs, job names, and job account codes).

 When the products are being used (trend graphs).

TADz helps you tackle the following scenarios:

 Reducing unexpected outages from z/OS product and application upgrades. For example:
o Seeing who would be impacted by an upgrade.
o Seeing which products a job is using.
o Seeing where different maintenance levels are deployed.
o Seeing what needs to be replicated in your Disaster Recovery systems.

 Getting the best value from your z/OS software budget. For example:
o Understanding product usage trends (very important for contract renewal negotiations).
o Dropping products that are no longer being used.
o Consolidating product versions.
o Consolidating similar products (inherited from company mergers).
o Consolidating product machine/system coverage.
o Subcapacity license optimization.
o Proving to management that your budget is fully utilized, to avoid funding cuts or to justify increases.

 Becoming "audit ready" to avoid costly license compliance violations. For example:
o Conducting product inventory verification.
o Maintaining an audit trail of product use per LPAR.
o Integrating with Tivoli Asset Management for IT, which provides full asset life cycle management (contacts, financial, and procurement).

TADz provides:

 Discovery of IBM products, third party products, and applications.

 Monitoring of product and application usage.

 Interactive web reporting that enables you to see high level trend graphs with easy hyperlink drill down navigation to see where the products are deployed and who is using them.

 Bolt-on integration with Tivoli Asset Management for IT for full asset life cycle management.

1.2 TADz architecture overview

The following diagram illustrates the TADz architecture.

All of the mainframe components shown in a green box above (for example, the Inquisitor) are batch jobs, with the exception of the Usage Monitor, which is run as a Started Task in Production, or as a batch job in Test.

The Repository, IQ, and Knowledge Base are sets of tables in the one TADz database. In DB2 terminology, these are known as different "schemas".

** IMPORTANT ** The TADz database contains two types of data about your environment:

1. Discovery data:
o There is a "TADz Inventory" for each DASD pool.
o The TADz Inquisitor scans DASD to discover information about deployed load modules. The raw output inventory data, often referred to as IQ data, does not contain any product knowledge. At this stage it is not known if the modules are for a product or an application.
o The IQ data is imported into the central TADz database.
o The TADz Match Engine identifies the products that the modules belong to, based on the TADz Knowledge Bases and algorithms that handle environmental factors.
o The TADz Load to Repository job copies matched IQ data to a common Repository that holds data from all "TADz Inventories", as well as usage data.
o Most reports query the Repository set of tables.
o Note that the TADz Inventories (per shared DASD pool) are shown in the TADz TCR Discovery reports. They are NOT shown in the TADz TCR Asset reports, since the Asset reports are designed for Asset Managers, who typically focus on determining which Systems the Product Versions are being used from, and are not concerned about DASD.

2. Usage data:
o Usage data is collected from all z/OS systems.
o The TADz Usage Monitor (UM) is the primary way to collect Usage data. The raw output usage data, often referred to as UM data, does not contain any product knowledge; at this stage it is not known if the modules are for a product or an application.
o The TADz Usage Import component imports the UM data files into the TADz Repository tables, correlating the raw module usage data with the previously matched Inventory data. In turn, product usage is deduced.
o Note that the UM tracks module usage in terms of how often a module is loaded into memory for execution. It does NOT track CPU consumption of products (since this is not possible for the vast majority of products).


2 Setting up TADz in Test

TADz makes a lot more sense after you have had a chance to experiment with it. There are various implementation options that may seem confusing to the first-time user, so it is a good idea to initially set up a Test environment using the default settings as much as possible, as explained in the rest of this document.

TADz deployment will require involvement by the following specialists:

TADz Administrator: Primary administrator for TADz. Some sites have a systems programmer perform this role; other sites have a z/OS-skilled Asset Manager perform it.

z/OS Systems Programmer: Involved in initial set up only (SMPE install and z/OS configuration settings).

DB2 for z/OS Database Administrator (DBA): Involved in initial set up and possibly future database tuning / housekeeping.

Distributed OS Systems Administrator: Involved in initial install of Tivoli Common Reporting.

Security Administrator: Involved in initial set up to grant security access for the TADz components.

Change Manager: Involved in coordinating the roll out of TADz components to Production LPARs.

Asset Manager: Inspects TADz Asset Reports.

The following sections mention if specialist skills are needed to perform the steps in that section. If no specialist is explicitly stated, the TADz Administrator is the person who should perform the steps.

2.1 Kick Off meeting

It is beneficial to have a Kick Off meeting with the various specialists that will be involved, to make sure everyone is aware of each other's role and to set a delivery schedule.

2.2 Setting up the infrastructure needed to run TADz components

The tasks in this section are generally performed by infrastructure specialists, who don’t necessarily need to have detailed knowledge of TADz. TADz specific customizations are explained in later sections.


2.2.1 Downloading TADz from ShopzSeries

To be done by a systems programmer who has access to ShopzSeries

The TADz Product ID is 5698-B39. There are two images that need to be downloaded:

 The TADz SMPE image is downloaded from the CBPDO link.

 The Tivoli Common Reporting DVD (.iso) image is downloaded from the CD/DVD link.


2.2.2 SMPE install TADz

To be done by a systems programmer who does SMPE installs

The SMPE installation process is standard: Receive, Apply, and Accept.

The FMID is HHSI720.

The following Target libraries will be created as part of the SMPE install:

Data set LLQ   Description
SHSIEXEC       TADz REXX code.
SHSIGKB1       TADz Global Knowledge Base data.
SHSIMENU       TADz message templates in English.
SHSIMJPN       TADz message templates in Japanese.
SHSIMOD1       TADz load modules.
SHSIPARM       TADz templates used to populate &HSIINST..PARMLIB by the HSISCUST job (see below).
SHSIPROC       TADz JCL PROCs.
SHSISAMP       TADz templates used to populate &HSIINST..JCLLIB by the HSISCUST job (see below).
SHSITCR1       TADz TCR report package, which gets installed into TCR (note that this is NOT the TCR DVD image).

Make sure all available maintenance is applied. At a minimum, the PTFs listed in the TADz Preventive Service Planning (PSP) bucket should be installed. The PSP bucket can be viewed using the following URL:

http://www14.software.ibm.com/webapp/set2/psearch/search?domain=psp&exp=n&apar=exclude&q=&search1.x=0&search1.y=0&search1=Search&sort=2&pgLen=10&dr=&IBMDropDown=0&IBM4Fax=&Geography=&SWPTFNumber=HHSI720&SWComponentID=&SWProductAlias=&TypeModel=&IBMTask=

At the time of writing this document (April 2010), the following PTFs are available:

PTF              APAR     Description
UA49387          OA29909  TADz V720 GA PTF.
UA50051          OA30187  Automation Server scouting failed with HSIA006E & HSIA010E.
UA50321          OA30605  Usage Import does not honor COMMIT_FREQ TPARAM value.
UA51789          OA31082  Provide DB2 JDBC Enterprise drivers to allow TCR to access the TADz DB2 database on z/OS.
UA52162          OA31381  Drill down problem when system name is less than 4 characters.
UA52359          OA31430  Remove the requirement to use the Storage Group "SGHSIIDX". Allow the use of the default storage group (SYSDEFLT).
UA51790          OA31440  Match Engine loops.
UA52015          OA31566  ERROR: Illegal symbol in Match Engine when DB2 Subsystem specifies DECIMAL=COMMA. SQLCODE = -104.
See OA31566      OA31572  Illegal symbol in Match Engine when DB2 Subsystem specifies DECIMAL=COMMA. SQLCODE = -104.
UA52426          OA31649  Aggregator and Load to Repository don't allow the user to override the Storage Group for Indices of temporary Tables.
UA52678          OA31860  Modify the Product Tagger to issue new message HSIT031S.
UA53239          OA32022  TADz Inquisitor abend S213-04 RC=08.
UA53783 (open)   OA32454  Usage Import message HSIC021S, error code 6813.
UA53667 (open)   OA32326  Inquisitor ABEND S878-10.
UA53877 (open)   OA32436  Delete Inventory job HSISDINV hangs.
UA53345          OA32087  HSISREST, HSISRUSS and HSISRIQF are not using the HSISCUST KBSTORC setting.
Doc              OA32328  TADz documentation states that TCR supports Windows XP when it does not.
UA53233          OA32153  The PROC HSIJMON will fail if it is run as an STC.
UA53533 (open)   OA32270  UMON abend with S0C4 when MODIFY D-C command issued.
UA54004          OA32575  Incorrect module name in the HSISBATR report.
Doc              OA32677  To clarify the function of the index buffer pool overrides.
UA54490          OA32675  HIPER: Usage Import failure - AGGR CC=08.
UA54430          OA32682  SCRT Import error. SQLCODE: -404.
UA54091          OA32716  Update JCL in SHSISAMP to cater for JES3.
UA54709          OA32913  HSITAGP abend 0C4-11.
UA54983          OA32689  Convert SMF data into a usage data format that the Usage Import can process - useful for the proof of concept scenario.
UA54386          OA33060  Add support for JNM=N and UID=N option to TLCMz monitor conversion utility.
Doc              OA33225  DB2 access authority. SQLCODE = -551.


2.2.3 Registering for TADz support notifications

To be done by a systems programmer who installs SMPE maintenance and the TADz Administrator

The TADz Support Portal can be reached using the following URL: http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Asset_Discovery_for_z~OS

Click on the “Create or update your subscription for this product” link to register for notifications.


2.2.4 z/OS customization

To be done by a systems programmer who has authority to customize the Test z/OS

Copy the TADz target libraries (hlq.SHSI* data sets) to the Test system, which must have a DB2 Subsystem available for TADz.

The SHSIMOD1 data set needs to be APF-authorized. This can be done using the SETPROG z/OS command. For example:

SETPROG APF,ADD,DSNAME=TADZ.V720.SHSIMOD1,SMS
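Note that SETPROG only lasts until the next IPL. To make the authorization persist, an equivalent entry can also be added to a PROGxx PARMLIB member (a sketch only; the data set name follows the example above, and your PROGxx member suffix will differ):

```
APF ADD DSNAME(TADZ.V720.SHSIMOD1) SMS
```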

The TADz Usage Monitor dynamically creates two data spaces (one primary, and an extra one for a short time when switching). You might need to increase the MAXCAD setting in the IEASYSxx z/OS PARMLIB member to allow for the 2 extra data spaces.
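For example, the IEASYSxx entry might look as follows (the value 52 is purely illustrative; take your current MAXCAD value and add 2):

```
MAXCAD=52
```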

During this Test phase, you can run the TADz Usage Monitor as a batch job. However, in Production it should be run as a Started Task (STC), and set up to start automatically soon after an IPL. This involves copying the HSIJMON JCL from SHSIPROC to a data set in the JES PROCLIB concatenation. The STC will need to be profiled with a user ID that has:

 Access to create and write to output data sets with the high level qualifier defined in the TADz Usage Monitor DSN setting.

 Read access to the data sets defined in the STC JCL DD statements.
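As a sketch, assigning a user ID to the Usage Monitor STC might look like the following RACF commands (the user ID TADZMON is an illustrative assumption; HSIJMON is the PROC name from SHSIPROC):

```
ADDUSER TADZMON NAME('TADZ USAGE MONITOR') NOPASSWORD
RDEFINE STARTED HSIJMON.* STDATA(USER(TADZMON))
SETROPTS RACLIST(STARTED) REFRESH
```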

The TADz database set up on this Test z/OS will be populated with data gathered from Production LPARs, and will later be copied to the TADz database on a Production/Development system. DASD space will need to be allocated to DB2. Here are some examples from existing TADz customers:

 Example 1 - 5,000 cylinders for a Bank with 9 LPARs on 1 z10:
– 300 cylinders for Global Knowledge Base tables
– 2,200 cylinders for IQ tables (2.5 million modules)
– 2,600 cylinders for Repository tables:
• 800 cylinders for 2.5 million rows of module inventory data
• 1,400 cylinders for 6 million rows of module detail usage covering 2 months
• 200 cylinders for 0.5 million rows of product detail usage covering 6 months

 Example 2 - 17,000 cylinders for a Service Provider with 26 LPARs on 5 z10s:
– 300 cylinders for Global Knowledge Base tables
– 6,600 cylinders for IQ tables (7 million modules)
– 10,000 cylinders for Repository tables:
• 2,000 cylinders for 7 million rows of module inventory data
• 5,500 cylinders for 25 million rows of module detail usage covering 2 months
• 2,400 cylinders for 5 million rows of product detail usage covering 10 months

 These customers could choose to reduce the size by over 50% by:
– Dropping the IQ tables after the Match and Load to Repository steps
– Keeping less detailed usage data in the Repository.

Note that the size of the summary data (how many jobs are accessing a product) is negligible compared to the detail data (the actual job names accessing a product).
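As a rough planning aid, the figures above can be turned into a simple linear estimator. The per-unit ratios below are assumptions derived from the two customer examples, not official TADz sizing rules, so treat the result as a ballpark only:

```python
# Rough DASD sizing sketch for a TADz database, linearly scaled from the two
# customer examples above. The per-unit ratios are assumptions fitted to those
# examples, not official sizing rules.

def estimate_cylinders(modules_millions, usage_rows_millions, product_rows_millions):
    gkb = 300                                    # Global Knowledge Base tables (roughly fixed)
    iq = 900 * modules_millions                  # IQ tables (~900 cyl per million modules)
    inventory = 300 * modules_millions           # Repository module inventory rows
    module_usage = 230 * usage_rows_millions     # module detail usage rows
    product_usage = 440 * product_rows_millions  # product detail usage rows
    return gkb + iq + inventory + module_usage + product_usage

# Example 1 above: 2.5M modules, 6M module usage rows, 0.5M product usage rows
print(round(estimate_cylinders(2.5, 6, 0.5)))   # prints 4900 (the text reports 5,000)
```

The same formula reproduces Example 2 (7M modules, 25M usage rows, 5M product rows) to within a few hundred cylinders of the reported 17,000.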

The DASD is usually allocated to DB2 when the z/OS Systems Programmer profiles a data set high level qualifier to the DASD volumes. The DBA then sets the DB2 Storage Group (TADz HSISCUST settings) to use that data set high level qualifier. Alternatively, the DBA can explicitly define which DASD volumes to use when the DB2 Storage Group is set up (TADz HSISCUST settings).

2.2.5 Security

To be done by a systems programmer or Security Administrator who has authority to grant access on the Test z/OS

Resource: (Optional for Test z/OS) TADz Usage Monitor Started Task name. Example: HSIJMON
Access: (Optional) Started Task user ID.

Resource: (Optional) TADz Automation Server Started Task name. Example: HSIJAUTO
Access: (Optional) Started Task user ID.

Resource: TADz runtime library high level qualifier for SHSIMOD1. Example: TADZ.V720.**
Access: Alter access for the z/OS Sysprog. Read access for the TADz Administrator, DBA, and the TADz Started Task user IDs.

Resource: TADz configuration library high level qualifier for JCLLIB and PARMLIB. Example: TADZ.V720INST.**
Access: Alter access for the z/OS Sysprog, TADz Administrator, and DBA. Read access for the TADz Started Task user IDs.

Resource: TADz database storage group data high level qualifier (VCAT). Example: TADZ.**
To be more precise, the second qualifier should be DSNDB*. For example, if the VCAT setting is TADZ, DB2 will allocate data sets in the following format: TADZ.DSNDB*.**
Refer to "HSISCUST settings that you must define" for more details about the Storage Group setting.
Access: Alter access for the z/OS Systems Programmer, TADz Administrator, DBA, and the DB2 Subsystem.

Resource: TADz Usage Monitor output library high level qualifier. Example: TADZ.UM.**
Access: Alter access for the z/OS Sysprog, TADz Administrator, and the TADz Started Task user IDs.

Resource: TADz Inquisitor output library high level qualifier. Example: TADZ.IQ*.**
Access: Alter access for the z/OS Systems Programmer and TADz Administrator.

Resource: (Optional) Read access to all z/OS load module data sets is required IF the TADz Inquisitor is run in NOAPF mode. This is not necessary if the Inquisitor is run in APF mode, which is the recommended mode.
Access: (Optional) Read access for the TADz Administrator.

Resource: Read access to all USS files, for scanning by the TADz USS Inquisitor. This can be done by granting UID(0) or via the RACF UNIXPRIV class. Example:
 RDEL UNIXPRIV SUPERUSER.FILESYS.**
 RDEF UNIXPRIV SUPERUSER.FILESYS.** UACC(NONE) OWNER(MACNIVE)
 PE SUPERUSER.FILESYS.** CLASS(UNIXPRIV) RESET
 PE SUPERUSER.FILESYS.** CLASS(UNIXPRIV) ID(JKATNIC) ACCESS(READ)
 SETR CLASSACT(UNIXPRIV)
 SETR RACLIST(UNIXPRIV)
 SETR RACLIST(UNIXPRIV) REFR
Access: Read access for the TADz Administrator.

Resource: TADz TCR data source user ID, used to query the TADz DB2 database. For the initial testing, any user ID that has access to the DB2 may be used. For ongoing TCR use, it is recommended that a dedicated user ID is defined.
Access: Define a user ID and password. (The DBA will grant read access within the DB2 Subsystem.)


2.2.6 DB2 for z/OS customization

To be done by a DBA who has authority to customize the DB2 Subsystem

TADz requires DB2 for z/OS on one z/OS LPAR, with the following:

 DB2 Distributed Data Facility (DDF) must be started.

 Call Level Interface (CLI / ODBC) DB2 Plan must be enabled.
o The DBA does this by running DSNTIJCL from DB2 SDSNSAMP to bind the DSNACLI plan.
o ** IMPORTANT ** Depending on the DB2 maintenance level, it is possible for the DSNTIJCL bind job to run successfully and still get SQL error code -805 when running TADz jobs. To fix this problem, DSNTIJCL must be rerun with SQLERROR(CONTINUE) added to the Bind statement for MEMBER(DSNCLIMS). It is okay to do this regardless of the DB2 maintenance level. For example:

BIND PACKAGE (DSNAOCLI) MEMBER(DSNCLIMS) -
     CURRENTDATA(YES) ENCODING(EBCDIC) -
     SQLERROR(CONTINUE)

 REXX DB2 Plan must be enabled.
o The DBA does this by running DSNTIJRX from DB2 SDSNSAMP to bind the DSNREXX plan.

 JDBC DB2 Plan must be enabled.
o Unlike CLI and REXX, the bind cannot be issued from z/OS. There are several ways to do the bind. For example:
 DB2 Connect may be used to issue the JDBC Bind.
 After the TADz TCR report set (HSITCR) has been installed in TCR, it is possible to run the JDBC Bind from the TCR server using this command:

java com.ibm.db2.jcc.DB2Binder -url <db2 url> -user <z/OS userid> -password <pw>

If you are using DB2 version 8, at least one 8K table space must be defined in a TEMP database to support Declared Global Temporary Tables. This is not necessary for DB2 version 9 or above. This can be done with the following SQL:

CREATE DATABASE TEMPDB AS TEMP;
COMMIT;
CREATE TABLESPACE DSN8K01 IN TEMPDB
  USING STOGROUP SYSDEFLT
  PRIQTY 720
  SECQTY 144
  ERASE NO
  BUFFERPOOL BP8K0
  SEGSIZE 4
  CLOSE NO;
COMMIT;


2.2.7 Tivoli Common Reporting (TCR) install

To be done by the System Administrator of the Distributed OS that TCR will be installed on

This section explains how to do a basic TCR install. TADz customization is explained in the later "TADz TCR Report Package install" section, and is generally performed by the TADz Administrator through their browser.

TCR is used by TADz as a report-rendering web interface. The end user connects to the TCR Server through their browser (Internet Explorer or Firefox). The TCR Server queries the TADz database on DB2 for z/OS through JDBC communications. No TADz data (apart from the TADz Report Package) is stored on the TCR Server.

If you already have TCR installed for another Tivoli product, you may want to consider using this for TADz too.

Multiple TCRs can access the TADz database. If your client PCs have enough capacity, it is possible to install TCR on these PCs instead of having a dedicated server. In this case, the end user would use their Browser to log in to their local host instance of TCR.

The TCR DVD ".iso" image downloaded from ShopzSeries can be burnt to a DVD or mounted:

 On Windows, Microsoft has a free tool to mount CD/DVD images. Search the Microsoft website for "VCdControlTool".

 On Unix, the standard mount command can be used to mount the image. For example:
o su -
o mkdir -p /mnt/iso
o mount -o loop downloaded_tcr.iso /mnt/iso

Make sure you meet the following hardware requirements:

 Process memory requirement - 2 GB

 Disk storage requirement - 662 MB

 Processor speed requirement - for best performance, processor speeds should be at least 1 GHz for RISC architectures and 2 GHz for Intel® architectures. Choosing faster processors should result in improved response time, greater throughput, and lower CPU utilization.

Make sure you meet the following software requirements:

 Supported operating systems are (32-bit only):
o Solaris version 9, 10, or 11
o Red Hat Enterprise Linux® version 4 or 5
o Red Hat Enterprise Linux 5 on System z™
o SUSE Linux version 9 or 10
o SUSE Linux 10 on System z
o HP-UX version 11iv2 or 11iv3
o IBM AIX® version 5.3 or 6.1
o Microsoft® Windows® 2003 Server
o Microsoft Windows 2008


 ** IMPORTANT ** If you are installing on Red Hat Enterprise Linux® or SUSE, the compat-libstdc++-33-3.2.3-47.3 package must have previously been installed.

Note: TCR installs a Java Runtime Environment (JRE), and then invokes a Java program to do the rest of the installation. If the compat-libstdc++-33-3.2.3-47.3 package is not already installed, the Java program will not start, and there may not be any error messages.

TCR can be installed through a UI wizard or a non-UI console command.

 On Windows, run launchpad.exe (it will start automatically if a real DVD is used).

 On Linux/Unix, run launchpad.sh

For more information on the TCR installation process, refer to the TCR manual. This is available on the TCR DVD and at the following URL: http://publib.boulder.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=/com.ibm.tivoli.tcr.doc/ttcr_install.html

** IMPORTANT ** The TADz manual, "Chapter 6. Post-installation tasks for Tivoli Common Reporting", explains that DB2 JDBC drivers and license files need to be manually copied to a TCR directory. This step has been made obsolete by TADz APAR OA31082, which embeds the DB2 JDBC drivers within the TADz TCR report package (HSITCR). All other instructions in Chapter 6 still apply, such as transferring and importing the TCR report package HSITCR after its APAR is installed via SMPE.

On Windows, the installation process creates a Service for the TCR Server, but the Service name is "Tivoli Integrated Portal …".

The installation process also defines shortcuts to manually stop and start the TCR Server.

The "Start Tivoli Common Reporting Browser" shortcut will launch your browser (Firefox or Internet Explorer) with the URL for TCR.


Confirm that you can logon to TCR.

Now that TCR is installed, you'll need to let the TADz Administrator know the following:

 The URL to logon to TCR (https://tcrserver:16316/ibm/console/logon.jsp). The URL will be shown at the end of the TCR installation. Alternatively, click the "Start Tivoli Common Reporting Browser" shortcut and then copy the URL.

 The TCR admin user ID and password (for example, tipadmin), which you would have specified during the installation.


2.3 TADz database install

2.3.1 Creating TADz JCLLIB & PARMLIB data sets with HSISCUST

To be done by the DBA who has authority to customize the DB2 Subsystem and the TADz Administrator.

TADz needs to know various local environment settings, such as the DB2 SDSNLOAD data set name. To avoid having to define these in each TADz job, the HSISCUST job in the SHSISAMP data set is used to define all the local settings in a SYSIN stream. This job then generates numerous JCL and PARMLIB members with the local settings defined.

Please refer to the section “Creating post-installation jobs” in chapter 5 of the TADz manual for more details about the settings in HSISCUST. There are settings that must be defined and tuning settings that you can let default. As previously mentioned, it is recommended that you use the defaults as much as possible when setting up the Test database.

TADz Administrators may not always have DB2 experience, and this can be daunting to begin with. Points to keep in mind are:

 After the DBA has helped you create the DB2 database, most of TADz processing simply entails running batch jobs. This is similar to TLCMz batch job processing, except that a database is used instead of PDSE data sets.

 DB2 has the following hierarchical structure:

o DB2 Subsystem
  o DB2 Databases
    o DB2 Table spaces
      o DB2 Tables
        o DB2 Indexes

o TADz uses one DB2 Database within one DB2 Subsystem. If you "drop" (delete) the DB2 Database, the child table spaces are also dropped.

o DB2 Table spaces and DB2 Indexes are maintained by DB2 as separate physical data sets. The Storage Group settings defined in HSISCUST below determine the data set high level qualifiers for these data sets.

o DB2 Databases and Tables are logical constructs, not physical data sets.

o Each DB2 Table belongs to a schema (also known in DB2 as the "creator"). In other words, a schema is a group of tables. TADz uses several schemas. For example, SI7 is the default schema for the TADz repository tables. The schema name + table name is unique per subsystem (not per database). Consequently, when choosing the schema names, you need to make sure that you don't conflict with other schemas that are on the same subsystem. The default names are unlikely to conflict unless there are multiple TADz databases installed on the same subsystem.

o DB2 Connect is a product that you may optionally install on your PC. It has a feature called DB2 Control Center that enables you to browse the TADz tables. Your DBA will be able to help you set up DB2 Connect to access your TADz database.
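Because the schema qualifies every table name, ad-hoc SQL against the TADz database always names the schema explicitly. A minimal sketch, assuming the default Repository schema SI7 and the TINVCTL inventory control table referenced later in this document:

```sql
-- List the Inventory names recorded in the Repository control table.
-- SI7 is the default Repository schema; adjust if your site overrides it.
SELECT FINVNAME
  FROM SI7.TINVCTL;
```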

2.3.1.1 HSISCUST settings that you must define

Each setting is shown with an example value, followed by its description.

SET HSI='TADZ.V720'
Data set high level qualifier for the TADz target libraries that were copied to the Test z/OS after the SMPE install (HLQ for SHSIMOD1).

SET ISP='ISP'
Data set high level qualifier for ISPF data sets (HLQ for SISPSENU).

HSIINST='TADZ.V720INST'
Data set high level qualifier used to generate the &HSIINST..JCLLIB and &HSIINST..PARMLIB data sets, which will contain jobs and parm members based on the settings specified in HSISCUST.

DB2LOAD = 'DB2.V810.SDSNLOAD'
The data set name of the DB2 SDSNLOAD data set. This is used for TADz job STEPLIBs. The DBA will know the data set name to specify. This data set is usually in the DB2 Subsystem STEPLIB too.

DB2EXIT = 'DB2.V810.SDSNEXIT'
The data set name of the DB2 Exit data set. This is used for TADz job STEPLIBs. The DBA will know the data set name to specify. This data set is also usually found in the DB2 Subsystem STEPLIB.

DB2RUN = 'DB2.V810.RUNLIB.LOAD'
The name of the DB2 RUNLIB data set that contains the DSNTIAD module. The DBA will know the data set name to specify.

CEERUN = 'CEE.SCEERUN'
LE SCEERUN data set. This is used for TADz job STEPLIBs.

CBCDLL = 'CBC.SCLBDLL'
C++ SCLBDLL data set. This is used for TADz job STEPLIBs.

TIADPLAN = 'DSNTIA81'
The DB2 plan name used by the DSNTIAD module. The DBA will know the plan name to specify.

DBSSID = 'DE81'
DB2 Subsystem where the TADz database will reside.

LOC = 'DE81LOC'
DB2 location as used by DSNAOINI. You can also use the DB2 DISPLAY DDF command to see the Location value for the subsystem.

SGTABCAT = 'TADZ'
SGTABVOL = '*'
SGBIGCAT = 'TADZ'
SGBIGVOL = '*'
SGIDXCAT = 'TADZ'
SGIDXVOL = '*'
DB2 Storage Group settings for TADz. Example:

CREATE STOGROUP SGHSITAB
VOLUMES ('&SGTABVOL')
VCAT &SGTABCAT;
GRANT USE OF STOGROUP SGHSITAB TO PUBLIC;

Note that VCAT is the data set high level qualifier used by DB2 for TADz table space/index data sets, and in turn, DB2 will need to have Alter access for this HLQ. DB2 will allocate data sets in the format <vcat>.DSNDBC.<db>.** and <vcat>.DSNDBD.<db>.**. Often the VCAT name reflects the DB2 subsystem name, for example DBDE81 for subsystem DE81, and DBDE91 for subsystem DE91. Your z/OS Systems programmer should have already profiled the HLQ to use certain Volume names, or will provide you with a list of Volumes that you must explicitly define here.

DB = 'DBTADZ'
TADz Database name that will be created in the HSISDB01 job.

DBADMIN = 'USER1 USER2'
List of user IDs that will be granted DBADM access to the TADz database. The user ID for the TADz Administrator(s) should be defined here since some TADz jobs drop and create tables.

KBMGMTC = 'PRIMARY'
KBSTORC = 'PRIMARY'
KBVOLS = ''
SMS management and storage class to use when running the Knowledge Base and Filter load jobs (HSISDB03, HSISDB05, HSISDB11). These jobs use the DFDSS utility ADRDSSU to temporarily RESTORE sequential data sets, which are then loaded into DB2 and deleted. Example:

RESTORE DATASET( INCLUDE(HSI$KB.** ) ) -
INDD(INDSS) -
CATALOG -
)SEL &KBMGMTC ^= &Z
MGMTCLAS(&KBMGMTC) -
)ENDSEL
)SEL &KBVOLS ^= &Z
OUTDYNAM(&KBVOLS) -
)ENDSEL
RENAMEUNC( (HSI$KB.**,&HSIINST..**) ) -
)SEL &KBSTORC ^= &Z
STORCLAS(&KBSTORC) -
)ENDSEL
IMPORT -
WAIT(2,2)

ASMGMTC = 'PRIMARY'
ASSTORC = 'PRIMARY'
ASVOLS = ''
SMS management and storage class to define the TADz Automation Server control data set in job HSIASALC. This only needs to be defined if you intend to use the TADz Automation Server. Example:

DEFINE CLUSTER( -
NAME(&HSIINST..HSIASCDS) -
CYLINDERS(5 5) -
)SEL &ASMGMTC ^= &Z
MANAGEMENTCLASS(&ASMGMTC) -
)ENDSEL
)SEL &ASSTORC ^= &Z
STORAGECLASS(&ASSTORC) -
)ENDSEL
)SEL &ASVOLS ^= &Z
VOLUMES(&ASVOLS) -
)ENDSEL
SHR(2,3) KEYS(52 0) RECSZ(96 200) -
)

2.3.1.2 HSISCUST settings that you may let default

Each setting is shown with an example value, followed by its description.

KBCPYPND = 'YES'
This setting governs whether JCL steps should be generated in HSISDB03 to Set No Copy Pending after using the DB2 Load utility. It is STRONGLY recommended that you keep this setting as YES.

GKBSCHMA = 'GKB7'
DB2 table schema name used for the TADz z/OS Global Knowledge Base. This must be unique per DB2 Subsystem.

GKBUSCHM = 'GKU7'
DB2 table schema name used for the TADz USS Global Knowledge Base. This must be unique per DB2 Subsystem.

LKBSCHMA = 'LKB7'
DB2 table schema name used for the TADz z/OS Local Knowledge Base. This must be unique per DB2 Subsystem.

LKBUSCHM = 'LKU7'
DB2 table schema name used for the TADz USS Local Knowledge Base. This must be unique per DB2 Subsystem.

REPSCHMA = 'SI7'
DB2 table schema name used for the TADz Repository. This must be unique per DB2 Subsystem.

FLSCHMA = 'IQF7'
DB2 table schema name used for the TADz IQ filters. This must be unique per DB2 Subsystem.

IQSCHEMAS = 'IQ1 IQ2 IQ3'
List of DB2 table schema names used for the TADz IQ import and match processing. These names must be unique per DB2 Subsystem. In Test, it is easier to use the default names. In Production, most sites will use a name that reflects the shared DASD pools. For example, IQPLEX1, IQPLEX2.

IQWMPRI = 176400
IQWLPRI = 480
IQWMUPRI = 2640
IQWLUPRI = 48
DB2 primary space allocation (PRIQTY) in Kilobytes for the table spaces used to store IQ data. These defaults are good for a small z/OS site. The table spaces are defined with SECQTY=-1, and DB2 will choose the most appropriate secondary size on a sliding scale. For Test, keep the defaults. For Production, your DBA can review the size of the Test table spaces and define a larger value if they want to reduce the number of secondary extents (see later section).

LOGGED = 'Y'
This setting determines whether IQ table processing is recorded to the DB2 log. For DB2 V8 this must be set to Y since DB2 V8 only supports logging. For DB2 V9 it is recommended that you set this to N, but it is okay to set Y.

BGKBTS = 'BP1'
BGKBTS1 = 'BP1'
BGKBIX = 'BP1'
BGKBIX1A = 'BP1'
BGKBIX1B = 'BP1'
BGKBIX1C = 'BP1'
BGKUTS = 'BP1'
BGKUIX = 'BP1'
BLKBTS = 'BP1'
BLKBIX = 'BP1'
DB2 Buffer Pools to use for the TADz table spaces and indexes.


2.3.2 Creating a TADz database

To be done by the DBA who has authority to customize the DB2 Subsystem and the TADz Administrator.

HSISCUST creates &HSIINST..JCLLIB and &HSIINST..PARMLIB data sets based on the local environment settings defined in HSISCUST SYSIN in-stream deck.

The following jobs in JCLLIB must be run in sequence to create and initialize the TADz database. You should not get any return codes greater than 4 in these jobs.

Each member is listed with its description.

HSISDB01
Job to define DB2 storage groups and the database name. This job also grants DBADM access to the user IDs listed in the HSISCUST DBADMIN setting.

HSISDB02
Job to create the DB2 table spaces for the Global Knowledge Base.

HSISDB03
Job to load the Global Knowledge Base (GKB).
NOTE: This job will be rerun in the future to load updated GKBs from IBM.

HSISDB04
Job to create the DB2 table spaces for the z/OS UNIX Global Knowledge Base.

HSISDB05
Job to load the z/OS UNIX Global Knowledge Base.
NOTE: This job will be rerun in the future to load updated GKBs from IBM.

HSISDB06
Job to create the DB2 resources for the Local Knowledge Base and z/OS UNIX LKB.

HSISDB07
Job to create the DB2 resources for the Repository.

HSISDB11
Job to create the DB2 resources for the Inquisitor filters and to load the filters.
NOTE: This job will be rerun in the future to load updated GKBs from IBM.

DI* (DIIQ1)
Jobs (one per IQ schema) to create DB2 resources for the Inquisitor Import.

DX* (DXIQ1)
Jobs (one per IQ schema) to create DB2 resources for the z/OS UNIX Inquisitor Import.

HSISGRNT
Job to grant (non IQ schema) read access to DB2 resources. The authid can be "PUBLIC" to grant global read access. Otherwise, grant each user ID that queries the database. For example, the TCR data source user ID (see Security section) will need to be granted access.

GI* (GIIQ1)
Jobs (one per IQ schema) to grant read access to DB2 resources. The authid should be set the same as it is in the HSISGRNT job.

GX* (GXIQ1)
Jobs (one per IQ schema) to grant read access to DB2 resources for z/OS UNIX. The authid should be the same as it is in the HSISGRNT job.
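As a sketch of what the grant jobs do, a read-access grant for the TCR data source user ID looks like the following. TCRUSER is a placeholder user ID, and SI7.TINVCTL is just one example table taken from this document; the actual object list is generated into the grant jobs for you:

```sql
-- Grant read-only access on one Repository table to the TCR data source
-- user ID (TCRUSER is a placeholder; substitute PUBLIC for global access).
GRANT SELECT ON TABLE SI7.TINVCTL TO TCRUSER;
```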


2.4 TADz TCR Report Package install

To be done by the TADz Administrator

2.4.1 Downloading the TADz TCR Report Package

Binary download the TADz TCR package from the z/OS target library SHSITCR1 to your local PC. For example, to download the member HSITCR:

ftp pthomu1
Connected to pthomu1.perthapc.au.ibm.com.
220-FTPD1 IBM FTP CS V1R11 at PTHOMU1.PERTHAPC.AU.IBM.COM, 04:29:34 on 2010-02-22.
220 Connection will close if idle for more than 60 minutes.
User (pthomu1.perthapc.au.ibm.com:(none)): mpres
331 Send password please.
Password:
230 MPRES is logged on. Working directory is "MPRES.".
ftp> cd 'TADZ.V720.SHSITCR1'
250 The working directory "TADZ.V720.SHSITCR1" is a partitioned data set
ftp> dir
200 Port request OK.
125 List started OK
Name     VV.MM Created    Changed    Size Init Mod Id
HSITCR
250 List completed successfully.
ftp: 140 bytes received in 0.00Seconds 140000.00Kbytes/sec.
ftp> BIN
200 Representation type is Image
ftp> get HSITCR c:\temp\HSITCR.zip
200 Port request OK.
125 Sending data set TADZ.V720.SHSITCR1(HSITCR)
250 Transfer completed successfully.
ftp: 3793282 bytes received in 0.92Seconds 4114.19Kbytes/sec.
ftp> quit
221 Quit command received. Goodbye.

2.4.2 Importing the TADz TCR Report Package into TCR

Use your Browser (Firefox or Internet Explorer) to log on to the TCR Server. The person who installed TCR will need to let you know the TCR URL, user ID, and password.


Click the Reporting branch in the left navigation pane

Click Common Reporting

Right-click the Report Sets branch in the middle pane

Click Import Report Package


In the Import Report Package dialog, click the Browse button and select the HSITCR.zip file you previously downloaded from the SHSITCR1(HSITCR) z/OS data set.

Click the small arrow icon on the right of the Advanced Options section to expand the section. Then check the Overwrite checkbox and specify a Security Set such as "TADz".

Click Import. This will upload the HSITCR.zip from your local PC and install it into the TCR Server.

2.4.3 Defining the TADz TCR report Data Source

Click the “Tivoli Products” Report Set branch.

Click the “Tivoli Asset Discovery for z/OS” Report Set branch.

Click the “Discovery Administrator Reports” Report Set branch.

Right-click the "Installation Verification" Report

Click “Data Sources…”


Click jdbc HSIzREP in the list to select it.

Click the EDIT button.


To determine the JDBC URL, you’ll need to run the HSISTCRR batch job on z/OS.


Copy and paste the URL from the HSISTCRR job output into the TCR JDBC URL field.

Note: It is very IMPORTANT to include the trailing semi-colon in the JDBC URL. Example:

jdbc:db2://demomvs.demopkg.ibm.com:4462/NDCDB201:currentSchema=SI7;

If there is a firewall between the DB2 host and the TCR Server, it is possible that the host name may need to be changed to the external host name. The easiest way to test this is from the TCR Server: open a command window and use the ping command, "ping demomvs.demopkg.ibm.com". If the ping gets a response from the DB2 host, the URL should be okay. Otherwise, you may need to consult with your network support team to determine the appropriate host name to define. It is also possible that the firewall may block the IP port, in which case you need to consult with your network support team to have the port defined to the firewall rules.


As the JDBC URL field is short, it is also handy to paste the URL into the "Additional Comments" field.

In the User ID and Password fields, define a user ID that has read access to the TADz database tables. Refer to the Security section and the Grant jobs in the "Creating a TADz database" section.

Click the SAVE button.


You should be returned to the “Report Data Sources” screen. Click the CANCEL button to return to the Report Sets screen. (This does not cancel what you have just defined; it just closes this screen).

Click the icon on the left of the “Installation Verification” Report title.

If you have defined the settings correctly, you get the following screen. Otherwise, review the error messages and fix accordingly.


2.5 Getting familiar with TADz components

To be done by the TADz Administrator

To test that the end-to-end components are installed correctly, you need to gather the raw inventory/usage data from the Test z/OS, populate it into the TADz database, verify the results in TCR, and optionally reinitialize the database ready for a clean population.

The following JCL members from JCLLIB are used for this Verification test. The "IQ1" member name suffix is based on the IQSCHEMAS name you defined in HSISCUST.

Each member is listed with its description.

ZIQIQ1
Run the TADz Inquisitor to scan DASD and create an IQ zip file.

ZIMIQ1
Run the TADz IQ Importer to import an IQ zip file.

ZMEIQ1
Run the TADz Match Engine to match the IQ against the z/OS Global Knowledge Base.

ZLRIQ1
Run the TADz Load to Repository component to populate the common repository that is used for central reporting of all matched inventory and usage data.

HSISUMON
Run the TADz Usage Monitor to capture usage data and create a UM zip file.

HSISUIMP
Run the TADz Usage Importer to import a UM zip file.

HSISSCRT
Run the TADz SCRT Importer to import SCRT CSV files.

Note: For Production, the ZIMIQ1, ZMEIQ1, and ZLRIQ1 jobs are usually run in a single combined job.

Note also that there are several jobs that process USS inventory data in a similar way to z/OS inventory data. These jobs are prefixed with U instead of Z (UIQIQ1, UIMIQ1, UMEIQ1, and ULRIQ1). The Usage Monitor handles both z/OS and USS.

2.5.1 TADz Inquisitor

The TADz Inquisitor component scans DASD for load modules and creates an “IQ data” zipped output file, which is later imported into the TADz database.

For this initial Test, just scan the DASD on the Test z/OS. Later, when it comes to setting up the database for Production data, it is important to understand what DASD is shared to avoid redundant scanning/processing.

Edit ZIQIQ1 job in JCLLIB:

 Update the JOBCARD as appropriate

 Update the HSIPZIP DD DSN to the name of the output data set

 Update the SYSIN deck to have SCANDIR DA(*).

 Submit


This job may take a few minutes to run in a small Test environment, or up to an hour in environments that share thousands of DASD volumes. You can reduce this time by adding filter criteria. Here are some examples:

SCANDIR DA(*)
Scan all online DASD volumes for z/OS load modules.

SCANDIR DA(DB2.**, CICS.**)
Scan all online DASD volumes for z/OS load module data sets beginning with DB2 or CICS.

SCANDIR DA(DB2.**) CATALOG
Scan cataloged load module data sets that begin with DB2. All online DASD volumes are NOT scanned.

SCANDIR DA(*) +
  VOL(&SYSRES.,&PROD1,&PROD2)
Scan the volumes that are defined in the z/OS system symbols &SYSRES, &PROD1, and &PROD2.

SCANDIR DA(*) VOL(SYSA*,SYSB*)
Scan all online DASD volumes beginning with SYSA or SYSB.

SCANDIR DA(*) +
  STOGROUP(SG) +
  XSTOGROUP(SGTEMP)
Scan all SMS managed volumes in Storage Groups beginning with SG, except volumes in storage group SGTEMP.

SCANDIR DA(SYS*) +
  XDSN(**.OLD, **.BACKUP) +
  VOL(AU*,PRI*) +
  XVOL(AU0010)
Scan all DASD volumes beginning with AU or PRI (except volume AU0010) for data sets beginning with SYS and not ending with .OLD or .BACKUP.

Note:

SCANDIR is the normal scan mode that you should be using. TADz L3 Support may request you to run one of the other scan modes, for example SCANPGM, to gather extra diagnostic information.


If you get RC 16 and message "HSIP069U PROCESSING TERMINATED - PROGRAM IS NOT APF AUTHORIZED", it means the TADz SHSIMOD1 data set is not defined to APF. This can be dynamically defined by the SETPROG z/OS command. Example:

SETPROG APF,ADD,DSN=TADZ.V720.SHSIMOD1,SMS

If you are not able to APF authorize the SHSIMOD1 data set, you can run the Inquisitor in non-APF mode by adding ",NOAPF" to the INQPARM. However, the TADz Usage Monitor (which is run later) only supports being run in APF mode.

Running the Inquisitor in non-APF mode will be slower than in APF mode. In addition, in non-APF mode, the Inquisitor will only be able to scan load module data sets that the user ID has read access to. Whereas, in APF mode, low level APIs are used to scan the DASD and the user ID does not need read access.

The output from the Inquisitor is in a sequential zip format. If you want to browse the contents, refer to the "Browsing raw Usage & Inventory Data" section in this document.

2.5.2 TADz Inquisitor Data Import

The TADz Inquisitor Data Import component imports the “IQ data” zip file output from the TADz Inquisitor.


For this initial Test, the default schema IQ1 is to be used. Later, when it comes to setting up the database for Production data, most sites have an IQ schema for each shared DASD pool that is scanned by the Inquisitor.

Edit ZIMIQ1 job in JCLLIB:

 Update the JOBCARD as appropriate.

 Update the INQDATA DD DSN with the data set name created by the Inquisitor in the ZIQIQ1 job.

 Submit.

For Production, the ZIMIQ1, ZMEIQ1 and ZLRIQ1 jobs are run in a single combined job.

2.5.3 TADz Match Engine

The TADz Match Engine component inspects the imported IQ data and identifies which Products the modules belong to. This is done by referencing the Global Knowledge Base (GKB) and algorithms that handle environmental factors.

Edit ZMEIQ1 job in JCLLIB:

 Update the JOBCARD as appropriate.


Note: The TPRAM deck need not be changed, i.e., HSISCUST has already set the parameters.

 Submit.

For Production, the ZIMIQ1, ZMEIQ1, and ZLRIQ1 jobs are run in a single combined job.

When the Match Engine completes, you can use TCR to review the detailed match results. This is under the "Discovery Administrator Reports" branch – "IQ Database Report".


The Green icon means a “Perfect match”; all of the modules in a data set were matched to products.

The Yellow icon means “Partial match”; a subset of the modules in the data set were matched to products. It is not essential that every module for a product is identified, as usage for the other product modules is enough to track product usage.

No icon means "No match"; no modules in the data set were matched to a product. These are probably application modules, and the TADz Tagger can be used to have the Match Engine identify the application. If the data set came supplied with an IBM or ISV product, it probably means that the product is not yet defined in the GKB. Open a PMR and the TADz team will investigate, and if necessary update the GKB to include the product.

To see more module level details, click on the Module Count hyperlink.


Following is an example module level report for a Partial Match data set:

2.5.4 TADz Load to Repository

The TADz Load to Repository component copies matched IQ data into the central repository (SI7 schema). Modules that are not matched to a product, for example application modules, are not copied to the Repository.

Edit ZLRIQ1 job in JCLLIB:

 Update the JOBCARD as appropriate


 Update the INVNAME setting in the SYSIN deck with the name you want to call this Inventory. This value is shown in the TADz TCR Discovery reports. By default, HSISCUST sets this to be the same as the IQ schema name.

Note: If you forget to change this setting and later want to change it (since you see the value in the Discovery reports), you can do so by issuing an SQL UPDATE for the TINVCTL table. Example:

UPDATE SI7.TINVCTL
SET FINVNAME = 'Test Inventory'
WHERE FINVNAME = 'IQ1';
COMMIT;

 Submit.

For Production, the ZIMIQ1, ZMEIQ1, and ZLRIQ1 jobs are usually run in a single combined job.

After Load to Repository is complete, many of the TCR reports will work.

Notes:

 Data from multiple Inventories is loaded into the same Repository, and the vast majority of the TCR reports query the common Repository.


 Subsequent Inventory Import/Match/Load runs will update the Repository and add any new product libraries that have been discovered. If data sets have been deleted, the scan does not find them, but the Repository might still have the obsolete data. To mark the obsolete data sets as deleted, set REPLACEFULL to Y when running the Load to Repository. This should ONLY be used if the Inquisitor scanned all DASD with no filtering. REPLACEFULL=Y means that the Inventory is complete (not a filtered subset), and anything in the Repository that is not in the Inventory can be marked as deleted. The data (in the Repository) is not physically deleted, in order to allow historical reports to still work.

 The Load to Repository also has a step at the end that generates a TADz Usage Monitor filter deck for the data set high level qualifiers of all known products. This enables the Usage Monitor to exclude monitoring of application modules, improving TADz performance.

2.5.5 TADz USS Inquisitor, Import, Match, and Load to Repository

In the previous sections, you have learned to process z/OS inventory data. There is a similar process for processing USS inventory. These jobs are prefixed with a U instead of Z (UIQIQ1, UIMIQ1, UMEIQ1, and ULRIQ1).

The user ID that runs the TADz USS Inquisitor needs read access to all USS files. This can be done through the UID(0) security setting or by using the UNIXPRIV RACF Resource Class.

2.5.6 TADz Usage Monitor

The TADz Usage Monitor (UM) component captures module Usage and creates output "UM data" in zipped files. UM is usually run as a Started Task in Production, but for this initial test, run it as a Batch job on the same LPAR where you ran the Inquisitor.

Note: The Usage Monitor will only capture module usage for jobs that load the modules whilst it is running. In other words, tasks that are already running before the UM is started will not have Usage recorded unless new workload causes a module to be loaded.

The HSISUMON job in JCLLIB can be used to run the Usage Monitor. It reads a parameter deck from HSISMNPM member in PARMLIB.

 In HSISMNPM, set the DSN parameter to the output data set high level qualifier. The output file will have time-stamped lower level qualifiers.

 In HSISUMON, update the JOBCARD as appropriate and submit.

 Now that the UM is running, invoke a new product to generate some Usage. For example, in ISPF, open a new SDSF screen (you'll need to close SDSF first if it is already open).

 After a short period of gathering Usage (say 5 minutes), stop the Batch job using the z/OS "P" command (/P HSISUMON). This will cause UM to write an output file and shut down. If you cancel UM, an output file will NOT be written and the data space that UM dynamically creates will not be freed. However, if you start UM after it has been cancelled, it will find the old data space (providing an IPL was not done whilst it was down) and write the usage data to an output file immediately, before continuing to capture new Usage.

Alternatively, you can keep HSISUMON running and use the "SWI" command to cause UM to switch data spaces and write an output file. For example:

/F HSISUMON,SWI


2.5.7 TADz Usage Data Import

The TADz Usage Data Import component imports the “UM data” zip file output from the TADz Usage Monitor.

In Production, Automation is generally used to import Usage data from all LPARs, either nightly or weekly. This is discussed in a later section.

Edit HSISUIMP job in JCLLIB:

 Update the JOBCARD as appropriate.

 Update the UIMPDATA with the DSN that HSISUMON created.

 Submit.

The Usage Monitor tracks raw module usage but doesn't have any knowledge about Products. It is the Usage Import that correlates the raw usage data with the Product that was previously matched and loaded into the Repository. If the Usage for a module is not for a known product, it is still imported and associated with the "None" product. If the Match Engine later identifies a product for the module (for example, after a GKB update or via the TADz Tagger), the Load to Repository component will reassign the Usage from the "None" product to the newly identified product.

The Usage Import job HSISUIMP has several steps. You only need to update the first step with the UM data set name:

 Step MNIMPORT imports the UM raw data into the TADz database. Module usage detail is stored per unique combination of module + job name + userid + job account + usage month.

 Step RUNSTAT1 invokes the DB2 RUNSTATS utility for the detail tables populated in the previous MNIMPORT step. This is needed for optimal performance of the next step.

 Step REPMERG aggregates the module level usage data imported in the first MNIMPORT step, to Product Release summary tables for the TCR TADz Discovery reports.

 Step RUNSTAT2 invokes the DB2 RUNSTATS utility for the tables populated in the previous REPMERG step. This is needed for optimal performance of the next step.

 Step AGGR aggregates the Product Release level usage data populated in the REPMERG step, to Product Version summary tables for the TCR TADz Asset Reports.

 Step RUNSTAT3 invokes the DB2 RUNSTATS utility for the tables populated in the previous AGGR step. This is needed for optimal performance of the TCR TADz Asset Reports.

As explained later in the Automate Usage Import section, in Production:

 The TADz HSISZCAT utility should be used to combine Usage files from multiple LPARs into a single file that is imported. This avoids the subsequent steps (RUNSTAT1..RUNSTAT3) being redundantly run for each raw usage file.

 The TADz Usage Deletion (UDEL) utility should be run to keep the database size from growing out of control.

2.5.8 TADz SCRT Data Import

For sites that have System z subcapacity licenses, the Sub-Capacity Reporting Tool (SCRT) is run every month. It reads SMF data and creates a file in comma separated value (CSV) format, which is sent to IBM as part of the subcapacity billing process. The CSV output can be imported into TADz. This enables you to see subcapacity trends (sourced from SCRT data) and drill down to see who is using the products (sourced from TADz UM data).

If you do not use SCRT, the next job can be skipped.

Edit HSISSCRT job in JCLLIB:

 Update the JOBCARD as appropriate.

 Update the CSVIN DD with the DSN that SCRT populated. This can be done using sequential data sets and/or partitioned data sets in a DD concatenation. The data may be in EBCDIC or ASCII.

 If you have multiple LPARs that have the same SMFid, you will need to map the SMFid to a unique System Id (SID) through the SIDMAP DD deck.

 Submit.


2.5.9 TADz TCR reports

Now that you have data in the TADz Repository, you can run all of the TADz TCR reports. They are grouped into four Report Sets:

Asset Reports are designed for Asset Managers, and show information about Product Versions (which have been aggregated up from module level discovery and Usage data).

Discovery Standard Reports are designed for z/OS skilled people. These reports show information about Product Releases that were discovered (through the Match Engine). You can drill down to the module level.

Discovery Advanced Reports provide lower level technical detail.

Discovery Administrator Reports are designed for TADz Administrators.

Note: TCR can be set up to have user IDs with different access. For example, you can profile Asset Managers to be able to see just the Asset Reports and not the Discovery reports.

It is helpful, at this stage, to have a play with the reports.

Click on the icons next to the report title. Some reports will prompt you for parameters. Each Report will be discussed in more detail in a later section.


2.5.9.1 Asset Reports

2.5.9.1.1 Asset – Product Inventory Report

The Asset Product Inventory Report shows a summary of the Product Versions. There are hyperlinks to drill down for more information.


2.5.9.2 Discovery Standard Reports

2.5.9.3 Discovery Advanced Reports


2.5.9.4 Discovery Administrator Reports


3 Preparing TADz for Production

This section explains how you can populate the TADz database installed on a Test z/OS with data collected from your Production systems. Later on, this is copied to a database on a Production/Development system.

3.1 Starting the roll out of TADz remote components

It can take several months to roll out new code to Production LPARs, so this should be kicked off now.

3.1.1 Change Control

This might involve the Change Control Manager, z/OS Systems programmer, Security Administrator, and TADz Administrator.

Depending on the change control procedures used by your site, it might be better to complete this roll out in 2 phases:

1. Roll out TADz target libraries, so the TADz Inquisitor Batch job can be run as soon as possible. (No TADz Started Task yet.)

2. Roll out activation of TADz Started Tasks.

Note: The TADz database is not rolled out yet. This is done later, after it has been loaded with data gathered from Production and the data quality has been verified.

3.1.2 z/OS customization

The changes explained in the “z/OS Customization” section for setting up the Test environment need to be performed on Production LPARs.

3.1.3 Security

The changes explained in the “Security” section for setting up the Test environment need to be performed on Production LPARs.

3.2 Populating the Test TADz database with Production data

3.2.1 Resetting the TADz Repository

By now, you should have populated some Test data into the TADz database during familiarization with the TADz components. If you want to reset the Repository before populating it with data from Production systems, the cleanest and quickest way is to rerun the HSISDB07 job, adding a step to drop the previously defined Repository table spaces. HSISGRNT will also need to be rerun to grant access to the tables.


Some alternatives are:

 Run the TADz Delete Inventory job HSISDINV.

 DROP the database (for example, DROP DATABASE DBTADZ) and rerun all of the HSISDB* jobs to create a fresh database.

3.2.2 Considerations for large customers

It is important to consider what level of data you want to keep in the database; otherwise, your database may grow very large with data that you do not really need. You can significantly reduce the database size by:

 Using transient IQ schemas instead of dedicated IQ schemas, as explained in the Inventory data section. IQ tables can consume a high volume of data, and only one TADz Administrator report refers to these IQ tables. The other 98% of the reports query the Repository tables, and are not impacted if you use transient IQ schemas to save space.

 Aggressively delete detailed module Usage data, as explained in the TADz Usage Deletion / Summary jobs section. For example, KEEPDETAIL=0 KEEPAGGR=12 will keep the database trim, whilst still providing full product level Usage reporting.

One of the strengths of DB2 for z/OS is its ability to handle a large quantity of data, and it is possible to have one central TADz database containing discovery and usage data for all of your LPARs. However, it is often more practical to have several TADz databases.

For example, if you have Data Centers in different countries and use just one central database:

1. A lot of raw usage data needs to be transferred to the central database.

2. The database would be very large and need an LPAR with larger capacity.

3. It is unlikely that a person in one Data Center would be interested in low level data of another Data Center. For example, someone in the USA is probably not interested in which job names in Hong Kong are accessing a module.

A more practical solution might be to have a TADz database for each major Data Center and use TADz Asset Table Mirroring. This structure keeps the detailed data within each Data Center (good for low level technical reporting), and copies the high level asset data to a central database (good for Asset Managers, who often need to see high level product Usage reports across all Data Centers, particularly when arranging worldwide Enterprise License Agreements).

The next diagram illustrates TADz Asset Table Mirroring. For more details, please refer to the TADz manual, Chapter 17 “Deployment for large sites”: http://publib.boulder.ibm.com/infocenter/tivihelp/v29r1/topic/com.ibm.tivoli.tad4z.doc_old/c_deploymentforlargesites.html

Asset Table Mirroring is generally the best practice if you have multiple Data Centers that run independently.

If you prefer not to have a database in each Data Center, another option to consider is having multiple TADz DB2 databases within the same central DB2 Subsystem. Asset Table Mirroring can also be used, but you do not need to unload-transfer-load the asset tables. For example, just use the HSISAM01 job to create a View across all of the TADz databases that are in the subsystem. The advantages of having multiple databases in this way are:

 Enables you to import Usage data in parallel (one import per database, with multiple databases loaded at the same time).

 Easier DB2 housekeeping. For example, DB2 reorganizations, backups, and restores can be done independently and faster.

 Quicker low level reporting.

A single TCR server can be set up to query multiple databases/schemas, similar to how you set up TCR to query the Asset Table Mirror view, which is explained in the TADz manual, Chapter 17 “Deployment for large sites”.


3.2.3 Downloading the latest Global Knowledge Base (GKB)

** IMPORTANT ** The GKB is updated monthly and made available through Fix Central http://www.ibm.com/support/fixcentral . In order to use Fix Central you must have a valid IBM user ID and password. Complete the screen prompts as follows:

1. Product Family - Select Tivoli

2. Product - Select IBM Tivoli Asset Discovery for z/OS

3. Installed Version - Select 7.2.0

4. Platform - Select z/OS

Select all fixes to display.

The format of the fix is 7.2.0 Tiv-TADZ-zOS-LV091201. The last six digits signify the fix level which is in YYMMDD format.
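As a quick illustration of the naming scheme, the fix level date can be decoded with a few lines of script (Python here purely as a sketch; the fix name used is the example from the text above):

```python
import datetime

def fix_level_date(fix_name: str) -> datetime.date:
    """Decode the YYMMDD fix level from a GKB fix name
    such as '7.2.0 Tiv-TADZ-zOS-LV091201'."""
    yymmdd = fix_name[-6:]  # the last six digits are the fix level
    return datetime.datetime.strptime(yymmdd, "%y%m%d").date()

# The example fix level LV091201 decodes to 1 December 2009.
```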

The name of the file that contains the updates is TADZKB.XMI.

Download this file as binary, then upload it to the mainframe into a preallocated data set with the attributes FB 80.

Once uploaded, receive the file: RECEIVE INDATASET(tadzkb.xmi). When prompted for additional information, enter DA(name of file). The name of the file is important, as it is used as input to the KB load jobs. This restores the KB file, making it ready to be used to update the Knowledge Bases.

Once you have restored the KB, you need to run the KB load jobs.

Note: Before you submit any job, ensure that the input file name in the job is changed to the name of the data set into which you received the file.

1. For HSISDB03 in the JCLLIB: uncomment STEP3, which drops the Global Knowledge Base tables. Submit HSISDB03.

2. For HSISDB05 in the JCLLIB: uncomment STEP3, which drops the Global Knowledge Base for z/OS UNIX tables. Submit HSISDB05.

3. For HSISDB11 in the JCLLIB (optional): comment out STEP1, which prevents creation of the table space, and uncomment STEP3, which drops the IQ Filter tables. Submit HSISDB11.

** IMPORTANT ** Once you have installed the updated GKB, you will need to rerun the Match Engine and Load to Repository jobs for each IQ schema. You also need to rerun the Grant jobs to provide read access to the generic TCR user ID.


3.2.4 Inventory data

** IMPORTANT ** Most z/OS LPARs share DASD with other z/OS LPARs. Often, all z/OS LPARs in a Sysplex share all the same DASD. To avoid redundant scanning/matching, it is important to find out from your systems programmers which DASD volumes are shared.

** IMPORTANT ** Read Chapter 2 “Deployment Scenarios” in the TADz manual: http://publib.boulder.ibm.com/infocenter/tivihelp/v29r1/topic/com.ibm.tivoli.tad4z.doc_old/c_deploymentscenarios.html

Run the TADz Inquisitor to scan each DASD pool.

 If the LPAR is stand-alone with no shared DASD, scan all the DASD that is online to the LPAR. Example, SCANDIR DA(*).

 If the LPAR shares all of its DASD with other LPARs in a shared Sysplex, then run the scan from just one of the LPARs in the Sysplex. Example, SCANDIR DA(*).

 If the LPAR has several DASD pools (with one pool dedicated to the LPAR), you will need to use the TADz Inquisitor filter criteria to separate the DASD pools. Example, SCANDIR DA(*) VOL(XY*,H*). This scenario is more complex to administer in TADz, and it can be easier to have one inventory per LPAR for LPARs that have complex DASD sharing. This approach may result in a larger database, as it contains some redundant inventory data, but it is easier to administer and the end user Reports will show the same results.


Note: The Inquisitor PLX setting is different to the Usage Monitor PLX setting. The Inquisitor PLX setting is used by the Load to Repository to check that the data being loaded into an existing inventory was scanned from the same system that was previously scanned.

 PLX=N means the checking is done based on the SMFid.

 PLX=Y means the checking is done based on the Sysplex name.

It is generally more convenient to use PLX=Y.
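The effect of the PLX setting can be sketched as a small routine (hypothetical Python logic for illustration only, not the actual Load to Repository code):

```python
def load_key(plx: str, smfid: str, sysplex: str) -> str:
    """Return the system key that the Load to Repository compares
    when checking an inventory, per the Inquisitor PLX setting."""
    # PLX=Y: match on the Sysplex name; PLX=N: match on the SMFid.
    return sysplex if plx == "Y" else smfid
```

With PLX=Y, scans taken from any system in the same Sysplex map to the same inventory key, which is why it is usually the more convenient setting.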

There are two ways that you can process the “IQ data” in the database:

 Dedicated IQ schema per Inventory (per DASD pool). In HSISCUST you define multiple schemas in the IQSCHEMAS setting, one per DASD pool.

 Transient IQ schema that is reused for all Inventories. In HSISCUST you define only one schema in the IQSCHEMAS setting. The Inquisitor IQ data is imported and matched as normal. The difference is that the Load to Repository job specifies an INVNAME, which distinguishes the DASD pool that was scanned/matched. This approach consumes less DB2 space, but you can only use the TADz TCR IQ Report to see the last IQ processed.

Regardless of whether you use a dedicated or transient IQ schema, it is a good idea to set up a single job for each Inventory that combines the IQ Import + Match Engine + Load to Repository steps. For example, combine the ZIMIQ1 + ZMEIQ1 + ZLRIQ1 jobs, setting the appropriate INVNAME. HSISCOMB in PARMLIB (not JCLLIB) has these jobs combined and can be used as a template.

Most sites scan and match their inventory once a quarter; some sites twice a year, and others weekly. It depends on how frequently product changes are made. Changes are often rolled out by volume (e.g. a new sysres volume). TADz can be run to scan just specified volumes and/or data set names, using wildcards in the search criteria. This makes it possible to align inventory updates with scheduled changes being rolled out.

It can take a long time to roll out TADz code to Production LPARs. To speed up the time it takes to get Product Inventory data, consider:

 If a Test LPAR has access to the same DASD that Production systems use, run the Inquisitor from the Test LPAR to capture the Production Inventory data.

 If Change Control takes longer due to concerns about APF code, run the Inquisitor in non-APF mode.

 If Change Control takes longer due to concerns about possible impact on production workload, run the Inquisitor Batch job out of hours.

 If you have TLCMz already deployed, copy the TLCMz Surveyor data to the Test z/OS and convert it into TADz IQ data format.

o HSISS2D1 converts TLCMz v4.1 and v4.2 Surveyor data.

o HSISS2D2 converts TLCMz v3.2 Surveyor data.


3.2.5 Usage data

The TADz Usage Monitor (UM) is the primary way to gather Usage data. SubCapacity Reporting Tool (SCRT) data can also be imported to supplement the Usage data from UM.

You will need to populate the Inventory data BEFORE you import the Usage data.

It can take a long time to roll out TADz code to Production LPARs, particularly a new Started Task. To speed up the time it takes to get Product Usage data, consider:

 If you have already deployed TLCMz, copy the TLCMz Monitor data to the Test z/OS, and convert it into TADz UM data format. HSISM2D converts TLCMz v3.2, v4.1, and v4.2 Monitor data.

 If you have SMF type 30 data, run the TADz SMF Scanner Component (which was added to TADz 7.2 as enhancement APAR OA32689 in 2Q 2010).

Note: SMF 30 records can only be used to track Usage of the modules named on the Job Step EXEC PGM. Most products have at least one module that is invoked through the Job Step EXEC PGM. However, some products are only internally invoked and, consequently, Usage for these products cannot be tracked through SMF 30 records (for example, products that are invoked from within TSO).


3.2.5.1 Correlating Usage data with Inventory data

** IMPORTANT **

The TADz Usage Monitor PLX=N setting means that when the UM data is imported, it will be associated with the Inventory based on the SMFid where the Inquisitor was run. For example, if UM runs on a system with SMFid “MVS1” and Sysplex “PLEXA”, the PLX=N setting means that when the data is imported it will be associated with Inventory data that was sourced by an Inquisitor scan on “MVS1”.

The TADz Usage Monitor PLX=Y setting means that when the UM data is imported, it will be associated with the Inventory based on the Sysplex where the Inquisitor was run. For example, if UM runs on a system with SMFid “MVS1” and Sysplex “PLEXA”, the PLX=Y setting means that when the data is imported it will be associated with Inventory data that was sourced by an Inquisitor scan on “PLEXA”. This may have been a system with SMFid “MVS1” or “MVS2”.

If an LPAR has multiple Inventories (multiple DASD pools shared with other LPARs), the TADz Usage Monitor PLX setting is not applicable; instead, the HSISINVA job in JCLLIB must be used to explicitly map 1 to 8 inventories to the SMFid.

3.2.5.2 Handling Systems that have a non-unique SMFid

** IMPORTANT **

If you have multiple z/OS systems with the same SMFid, these systems need to be distinguished from each other. This is done through the TADz Usage Monitor SID setting.

3.2.5.3 Optionally excluding application monitoring

With the large amounts of Usage data being collected, it is best practice to monitor only the libraries that are identified in the high-level qualifier listing of the Repository data. This reduces the number of Usage records collected and subsequently imported into the Repository.

The Load to Repository jobs have a step that invokes HSICIHLQ to generate a UM filter deck in &HSIINST..UM.HLQIDS. This filter deck data set can be concatenated to the TADz Usage Monitor HSIZIN DD.


Listed here are some examples that exclude all usage, but include some usage for the specified high-level qualifiers:

XDS(*)
IDS(DB2.*)
IDS(IMS.*)
IDS(CICS.*)
IDS(SYS1.*)

The data being excluded from capture is primarily in-house application modules, which account for the majority of the Usage data collected.
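As a rough sketch of how such a filter deck behaves, the following Python fragment mimics the exclude/include semantics shown above. The rule that IDS include patterns take precedence over XDS exclude patterns is an assumption made for this illustration, as are the sample data set names:

```python
from fnmatch import fnmatch

# XDS(*) excludes everything; IDS(...) re-includes specific HLQs.
EXCLUDES = ["*"]                                   # XDS(*)
INCLUDES = ["DB2.*", "IMS.*", "CICS.*", "SYS1.*"]  # IDS(...) filters

def monitored(dsn: str) -> bool:
    """Return True if usage for this data set would be captured."""
    if any(fnmatch(dsn, pattern) for pattern in INCLUDES):
        return True
    return not any(fnmatch(dsn, pattern) for pattern in EXCLUDES)
```

With these filters, a product library such as DB2.SDSNLOAD is still monitored, while an in-house application library is not.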

** IMPORTANT ** TADz 7.2 was released to Production with &HSIINST..UM.HLQIDS included in the HSIZIN concatenation. This data set is initially populated with XDS(*); that is, it excludes all data sets from being monitored until the Load to Repository adds the include filters. This can be confusing for new users, so APAR OA32153 was released to comment out the &HSIINST..UM.HLQIDS from the HSIZIN concatenation by default.


3.2.6 Browsing raw Usage and Inventory data

When you start gathering data from several different LPARs, it is handy to browse the data to confirm it contains what you expect. TADz UM data and IQ data are both stored in standard zip format by default. You can do a BINARY download of the data to a PC, unzip it (using WinZip or PKZIP), and then browse it.

You can also use the TADz utilities listed below to browse the contents on z/OS.

3.2.6.1 HSIIBRWZ

HSIIBRWZ is shipped in the SHSIEXEC TADz target library. Copy this member to a data set that is in your TSO session SYSPROC or SYSEXEC DD concatenation, and edit it to set the appropriate HLQ for the TADz SHSIMOD1 target library.

From ISPF 3.4, issue HSIIBRWZ against an IQ or UM zip data set.


3.2.6.2 HSIIZIP

HSIIZIP can be used to unzip a TADz UM or IQ data set in a batch job. For example:

//UNZIP    EXEC PGM=HSIZIP,PARM=UNZIP
//STEPLIB  DD DISP=SHR,DSN=TADZ.V720.SHSIMOD1
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,
//            DSN=MPRES.TADZ.UM.OMU1.D2009218.S1253394
//SYSUT2   DD SYSOUT=*


3.3 Verifying the TADz data quality

Now that you have Inventory and Usage data in the database, it is important to verify the data quality.

3.3.1 Verifying Usage data from all LPARs that are populated

Run the Asset Product Use By System report and confirm that you see all of your systems.

3.3.2 Verifying IQ data for all Inventories that are populated

Run the Discovery Standard Reports and confirm all Inventories are shown.

3.3.3 Verifying product identification quality

3.3.3.1 Verifying all expected products have been discovered

The Asset Manager should be able to provide a list or spreadsheet of all z/OS Products that your company is licensed to use. Review the TADz TCR Asset – Product Inventory Report and confirm that all expected licensed products have been identified.

If you see products in the report that are not in the expected list, click the First Observed hyperlink and review the libraries that TADz discovered the product in.


If the libraries don’t look correct for the product, use the TADz Discovery reports to investigate this at the module level.

If this looks like a misidentification, refer to the “Verifying no misidentifications” section. Otherwise, ask the Asset Manager to check the licenses, as this may be a license compliance violation.

If you don’t see expected products in the Product Inventory report, use the TADz Asset – Product Inventory Verification Report to confirm that TADz actually supports discovery for the product. This report prompts for the Vendor through a drop-down list, and shows what was found and not found for that vendor.


If the expected product is not in the Report, open a PMR to receive support for the product. Alternatively, you might have a license for a product that is no longer installed.

If the expected product is not found, it might be that TADz cannot determine the product version. Refer to the “Checking identified products that have an unknown release” section.

3.3.3.2 Verifying no misidentifications

The Match Engine primarily uses the Global Knowledge Base to identify products. It also has various algorithms to tolerate environmental factors, for example, variations due to maintenance being applied to the Product, and remnant modules from old releases which have not been deleted. Occasionally, environmental factors can lead to misidentifications, so it is important that you review the TADz reports and look for them.

If you find that a misidentification has occurred, open a PMR. It might be possible for a fix to be provided quickly through an updated Global Knowledge Base. The TADz design enables the GKB to be updated quickly and robustly, with a 1 to 5 day turnaround.

3.3.3.3 Checking identified products that have an unknown release

In the TADz TCR Discovery Administrator Report Set, run the “Products with unknown release” report. This shows Products that have been identified (based on the module names), but whose Release could not be determined (based on the module names and sizes).


This report shows Product Options (which are generally aligned to FMIDs). The fact that a Product has some Options whose release is not determined is not necessarily a problem, as other Options for the product might have had the release identified.

Here are some common reasons for unknown releases:

 Someone has copied a subset of the modules belonging to a Product from the system data set to a private data set, and there are not sufficient modules in the private data set to determine the Product Release.

 Maintenance has changed more than 20% of the Product modules. To handle this scenario, the GKB is updated to add a “put” level release (for example, v1.5 (1003)). If you suspect this scenario, open a PMR for an updated GKB.

If you click the Libraries hyperlink, you will see more details.


If you click the “Lmod Count” hyperlink, you will see which modules have not had a release identified.

If you click the “All Products” hyperlink, you will see all Products that have been identified in this library.


3.3.3.4 Checking modules with Usage for products that are not identified

The TADz Load to Repository component copies to the Repository only the modules that have been matched to known products. It also has a step at the end that generates a TADz Usage Monitor filter deck for the data set high-level qualifiers of all known products. This enables the Usage Monitor to exclude monitoring of application modules, to improve the performance of TADz.

In the scenario where the Usage Monitor is run without the filters to exclude applications, the UM data is imported into the Repository and the module is added, if it is not already there. In the TADz TCR Discovery Administrator Report Set, run the “Not Identified modules” report to see the modules that were added by the Usage Import process.

If the modules turn out to be actual product modules, then when an updated GKB identifies the product and Load to Repository is run, the modules will be reassigned to the Product. This means that Usage already collected for the modules will immediately be re-linked to the Product.


Click the Libraries hyperlink to see more details about the libraries.

Click the Lmod Count hyperlink to see more details about the modules within the library.


Click the Job Count or User Count hyperlink to see who has been using the module.


3.3.3.5 Sending diagnostic information to IBM TADz support

IBM TADz support might need to view your TADz database to troubleshoot a problem. This is done by running the HSISUNLD job in JCLLIB, which unloads the TADz database and creates a single zip file. This utility is also used with the HSISLOAD utility to copy the data from one DB2 Subsystem to another.

These unloads are structured into groups, so you can conveniently limit what is unloaded:

 Group 1 - High level asset repository tables.

 Group 2 - High/Mid level discovery repository tables; asset detail repository tables.

 Group 3 - Low level (high volume) discovery repository tables.

IBM Support will tell you which group levels they need; usually only groups 1 and 2. Note: Group 3 usually has a high volume of data, so by default it is commented out of the DB2 UNLOAD SYSIN concatenation.


There are four main types of files that your IBM support team might request you to send to troubleshoot a problem.

 HSISUNLD output zip file.

 Inquisitor output “IQ data” zip files.

 Usage Monitor output “UM data” zip files.

 SCRT CSV files.

All of these files are either already zip compressed or small (e.g. the SCRT CSV files). Typically, problem reporting and diagnosis are communicated to TADz Support via a PMR; when data is requested, please follow the procedure described below to send it.

STEPS FOR SENDING FILES:

1) Terse files that originated on the mainframe; zip only PC files.

2) This is important. Rename the file to this naming convention: xxxxx.bbb.ccc.yyyyyy.yyy, where the parts have the following meaning:

xxxxx = PMR number
bbb = Branch office number (second segment of the PMR number)
ccc = Country code (third segment of the PMR number, e.g. USA=000)
yyyyyy.yyy = file name and type (e.g. gatelog.zip)

So, for PMR 11111,222,000 with gatelog.zip, rename the file to 11111.222.000.gatelog.zip

3) ftp to ftp.emea.ibm.com

4) Login as anonymous and use your email address as the password

5) cd /toibm/tivoli

6) bin

7) Upload the file: put <filename>

Please note that, for security, you cannot list the files that have been uploaded; that is, dir and ls will not work.

As long as the file was named correctly (step 2 above), the PMR will automatically be updated to note that the file was uploaded.

For example:

//FTP      EXEC PGM=FTP,
//         PARM='ftp.emea.ibm.com (EXIT TIMEOUT 20'
//SYSPRINT DD SYSOUT=*
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
anonymous
<your email address>
cd /toibm/tivoli
bin
PUT '<filename>'
PUT '<filename>'
QUIT
/*
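The renaming convention in step 2 can be sketched as a small routine (Python used purely for illustration; the PMR string format follows the example above):

```python
def pmr_upload_name(pmr: str, filename: str) -> str:
    """Build the upload file name xxxxx.bbb.ccc.yyyyyy.yyy
    from a PMR number written as 'xxxxx,bbb,ccc'."""
    number, branch, country = pmr.split(",")
    return f"{number}.{branch}.{country}.{filename}"

# For PMR 11111,222,000 with gatelog.zip this yields
# 11111.222.000.gatelog.zip, as in the example above.
```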


3.3.4 Asset Manager review of TADz Asset Reports

By now you should have verified the discovery quality. Asset Managers can start reviewing the TADz Asset Reports. The Product Use by System and Product Use by Machine reports are particularly useful. They both have a hyperlink drill-down to see the product use trends.

TIP: To compare Usage trends between different product versions or systems, instead of clicking the hyperlink drill-down, right-click the hyperlink and select Open in a new Tab. This enables you to have several charts open, and is a convenient way of comparing them by switching between tabs.


Note: You can profile which Report set branches users can see/access in TCR. For example, you can define an Asset Manager user ID that only sees the Asset reports and not the Discovery level reports.


4 Setting up TADz in Production

You now have a TADz database on a Test z/OS, populated with Production data. This section explains how you move the database from the Test z/OS to Production.

** IMPORTANT ** Many users place their Production TADz database on a Development host, not the same host used for Production business workload.

4.1 z/OS customization

To be done by a systems programmer who has authority to customize the Prod/Dev z/OS.

Your TADz libraries should already be deployed on the Prod/Dev z/OS, as explained in the Starting the roll out of TADz remote components section.

DASD needs to be allocated for the TADz database on the Production/Development system. Review the size consumed by the Test database to determine the appropriate initial size required on the Production/Development system. Extra capacity is needed to allow for the accumulation of Usage data over time. The amount of Usage data accumulated is controlled by the TADz Administrator, as explained in the TADz Usage Deletion / Summary jobs section below.

4.2 Creating a TADz database on a Production/Development DB2 subsystem

To be done by the DBA who has authority to customize the DB2 Subsystem and the TADz Administrator.

The table spaces and indexes for the Test TADz database have numerous secondary extents allocated. For Production, it is more efficient to have a primary allocation that is large enough to avoid secondary extents in the initial stage.

To determine the appropriate PRIQTY values, issue the following queries against the Test TADz database DB2 catalog. These can be issued using the HSISBATR job or DB2 Control Center.

SELECT
    CASE WHEN NAME LIKE 'V%' THEN 'HSISSQ17'
         WHEN SUBSTR(NAME,3,1) = 'U' THEN 'D3'||SUBSTR(NAME,4)
         ELSE 'D1'||SUBSTR(NAME,3)
    END AS PARMLIB_MEMBER
  , CASE WHEN NAME LIKE 'V%' THEN 'HSISDB07'
         WHEN SUBSTR(NAME,3,1) = 'U' THEN 'DX'||SUBSTR(NAME,4)
         ELSE 'DI'||SUBSTR(NAME,3)
    END AS JCLLIB_MEMBER
  , NAME AS TABLESPACE
  , CASE WHEN NACTIVE < 0 THEN NULL
         WHEN NACTIVE = 0 THEN 1
         ELSE (NACTIVE * 4)
    END AS PRIQTY
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'DBTADZ'
  AND (NAME IN ('VJOBDATA','VMODULE','VUSEMTD','VSHARE','VAGGR')
       OR NAME LIKE 'W%')
  AND NAME NOT LIKE '%GKB'
  AND NAME NOT LIKE '%GKU'
  AND NAME NOT LIKE '%LKB'
  AND NAME NOT LIKE '%LKU'
  AND NAME NOT LIKE '%FILTR'
ORDER BY PARMLIB_MEMBER, JCLLIB_MEMBER, CREATEDTS ;

SELECT
    CASE WHEN T.TSNAME LIKE 'V%' THEN 'HSISSQ18'
         WHEN SUBSTR(T.TSNAME,3,1) = 'U' THEN 'D4'||SUBSTR(T.TSNAME,4)
         ELSE 'D2'||SUBSTR(T.TSNAME,3)
    END AS PARMLIB_MEMBER
  , CASE WHEN T.TSNAME LIKE 'V%' THEN 'HSISDB07'
         WHEN SUBSTR(T.TSNAME,3,1) = 'U' THEN 'DX'||SUBSTR(T.TSNAME,4)
         ELSE 'DI'||SUBSTR(T.TSNAME,3)
    END AS JCLLIB_MEMBER
  , RTRIM(I.TBCREATOR)||'.'||I.NAME AS INDEX
  , CASE WHEN I.NLEAF < 0 THEN NULL
         WHEN I.NLEAF = 0 THEN 1
         ELSE (I.NLEAF * 4)
    END AS PRIQTY
FROM SYSIBM.SYSINDEXES AS I
JOIN SYSIBM.SYSTABLES AS T ON T.CREATOR = I.TBCREATOR
                          AND T.NAME = I.TBNAME
WHERE I.DBNAME = 'DBTADZ'
  AND (T.TSNAME IN ('VJOBDATA','VMODULE','VUSEMTD','VSHARE','VAGGR')
       OR T.TSNAME LIKE 'W%')
  AND T.TSNAME NOT LIKE '%GKB'
  AND T.TSNAME NOT LIKE '%GKU'
  AND T.TSNAME NOT LIKE '%LKB'
  AND T.TSNAME NOT LIKE '%LKU'
  AND T.TSNAME NOT LIKE '%FILTR'
ORDER BY PARMLIB_MEMBER, JCLLIB_MEMBER, I.CREATEDTS ;

The NACTIVE and NLEAF columns in the above SQL are the number of active pages in use. The values are multiplied by 4 to determine the PRIQTY in kilobytes, because a page is 4K. DB2 sets these columns when the DB2 RUNSTATS utility is run. The TADz JCL regularly invokes RUNSTATS, so the values should be current.
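The CASE expression in the queries above can be read as a small helper (an illustrative Python sketch of the same arithmetic):

```python
def priqty(nactive: int):
    """Mirror the SQL CASE logic for PRIQTY in kilobytes:
    a negative value means RUNSTATS has never set the column (no answer),
    zero active pages gets a 1 KB minimum, otherwise pages * 4 KB."""
    if nactive < 0:
        return None
    if nactive == 0:
        return 1
    return nactive * 4

# A table space with 2500 active 4K pages needs PRIQTY 10000 (KB).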


To create the TADz database in Production/Development, you follow similar steps to the ones for the Test database. After HSISCUST has created the JCLLIB and PARMLIB data sets for the new DB2 subsystem, and BEFORE you run the jobs to create the database, edit the PARMLIB members listed in the results of the above queries and set the appropriate PRIQTY values. Refer to the DB2 for z/OS Customization and TADz Database Install sections to refresh your memory regarding the steps.


4.3 Copying the Test TADz database to Production/Development

To be done by the TADz Administrator

This involves unloading the Test database to a zip file with the HSISUNLD job, and loading it into the new database with the HSISLOAD job.

Edit member HSISUNLD in your JCLLIB. Since you are loading all the data from your Test database, you need to unload all the tables. In step UNLOAD, uncomment the line with Parmlib member HSISUNL3. Modify the job as necessary and submit it.


Edit member HSISLOAD in your JCLLIB. In step LOAD, uncomment the line with Parmlib member HSISLOD3. Modify the job as necessary and submit it.


4.4 Automating Usage Monitoring

Enable the TADz Usage Monitor Started Task to be automatically started after an IPL on all z/OS LPARs.

If you already have TLCMz deployed, you should retire the TLCMz Monitor at this point. Note: it is okay to run the TLCMz Monitor and the TADz Usage Monitor at the same time, but be aware that redundant information will be gathered.

4.5 Automating Usage Import

** IMPORTANT **

The Usage Monitor produces at least one raw Usage data set each day. These files should be imported out of hours; most users do this on a weekend, others every night. As explained in the TADz Usage Data Import section, the HSISUMP job has a step (MNIMPORT) to import a raw UM file, and several other steps to aggregate the data. Running this job manually in a Test environment is okay, but it should NOT be run separately for each UM file when you are automating regular imports, as this causes a lot of redundant processing in the aggregation steps.

4.5.1 Weekly Usage import

Most customers import the Usage data on weekends using the HSISZCAT job.

The Usage Monitor output DSN prefix includes a system identifier, for example TADZ.UM&SMF. Note: the maximum prefix length permitted is 26 characters, because the Date and Time stamps are appended to the output data set name, for example: TADZ.UMMVS1.D2009295.T2158578
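As an illustration of this naming scheme, the following sketch (hypothetical Python, mirroring the example data set name above) composes an output DSN and enforces the 26-character prefix limit:

```python
import datetime

def um_output_dsn(prefix: str, when: datetime.datetime) -> str:
    """Compose a UM output DSN: prefix + .Dyyyyddd.Thhmmsst
    (Julian date plus a time stamp with tenths of a second,
    as in TADZ.UMMVS1.D2009295.T2158578). The prefix may be at
    most 26 characters so the full name fits in 44 characters."""
    if len(prefix) > 26:
        raise ValueError("UM DSN prefix longer than 26 characters")
    stamp = when.strftime(".D%Y%j.T%H%M%S") + str(when.microsecond // 100000)
    return prefix + stamp
```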

 Define to your site Job Scheduler a job that runs every Saturday at 1:00 am, which runs HSISZCAT on each LPAR to concatenate the daily UM files into a single weekly UM file per system.

If you have remote LPARs that do not have shared DASD with the database host, you need to have an extra job step to transfer the weekly UM data sets to the database host.

• Define to your site Job Scheduler a job that runs every Saturday at 3:00 am (allowing enough of a gap for the HSISZCAT job to run, including a time buffer in case the host is being IPLed), only on the database host, which runs the Usage Import job HSISUIMP. The first step needs to be repeated for each weekly LPAR UM file, followed by the various RUNSTATS and Aggregation steps.

Note: The TLCMz raw monitor output is used by the reporting component and, consequently, it is necessary to maintain an archive. With TADz, the raw Usage data (or ZCATed weekly UM data) is imported into the database and can be discarded after a successful import. Most users still use a GDG to keep several archived copies for each LPAR, in case the database needs to be restored from backup and brought up to date with Usage data collected after the backup time. The GDG is also convenient if IBM Support requests a copy of the raw data to troubleshoot a problem.


Example JCL to allocate the GDGs for archives on the database host:

//DEFGDG EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DEFINE GENERATIONDATAGROUP -
         (NAME(TADZ.UMMVS1.GDG) -
         NOEMPTY -
         SCRATCH -
         LIMIT(5))
  DEFINE GENERATIONDATAGROUP -
         (NAME(TADZ.UMMVS2.GDG) -
         NOEMPTY -
         SCRATCH -
         LIMIT(5))
  DEFINE GENERATIONDATAGROUP -
         (NAME(TADZ.UMMVS3.GDG) -
         NOEMPTY -
         SCRATCH -
         LIMIT(5))
/*

Example JCL for the 1am job on system MVS1, which does NOT have shared DASD with the database system:

//ZCAT EXEC PGM=HSICZCAT,
// PARM='UMDSN(TADZ.UMMVS1),DELETE'
//STEPLIB DD DISP=SHR,DSN=TADZ.V720.SHSIMOD1
// DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=CBC.SCLBDLL
//ZCATOUT DD DISP=(NEW,CATLG),DSN=TADZ.UMMVS1.ZCATOUT,
// UNIT=SYSDA,SPACE=(CYL,(10,50),RLSE),
// RECFM=VB,LRECL=27994,BLKSIZE=27998
//SYSOUT DD SYSOUT=*
//SYSERR DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//*
//DELBAD EXEC PGM=IEFBR14,COND=(0,EQ)
//ZCATOUT DD DISP=(OLD,DELETE),DSN=TADZ.UMMVS1.ZCATOUT
//*
//FTP EXEC PGM=FTP,COND=(0,NE),
// PARM='database.host.ipname (EXIT TIMEOUT 20'
//SYSPRINT DD SYSOUT=*
//OUTPUT DD SYSOUT=*
//NETRC DD DISP=SHR,DSN=&SYSUID..NETRC
//INPUT DD *
QUOTE SITE RECFM=VB LRECL=27994,BLKSIZE=27998
QUOTE SITE CYLINDERS PRIMARY=10 SECONDARY=50
MODE B
TYPE E
PUT 'TADZ.UMMVS1.ZCATOUT'
QUIT
/*
//DELFTPOK EXEC PGM=IEFBR14,COND=(0,NE)
//ZCATOUT DD DISP=(OLD,DELETE),DSN=TADZ.UMMVS1.ZCATOUT

Notes:

• The HSISZCAT DELETE parameter deletes the daily input files ONLY if the output file has been created successfully. For example, an out-of-space problem means the output is not clean and, consequently, the input files are not deleted.

• The ZCATOUT file is allocated in the first Step with DISP=NEW and deleted in the last Step if the transfer is successful. This prevents data from being lost from a previous job that did not complete successfully.

• The FTP will fail if the ZCATOUT data set already exists on the database host. This prevents data from a previous transfer being lost before it has been imported into the database (see the import job below, which deletes the file after it has been successfully imported).

• The NETRC data set contains the user ID and password used by FTP. Since this data set contains a password, you should define a security profile for it to restrict who can browse it. Here is an example of the NETRC data set contents:

MACHINE database.host.ipname LOGIN tadzadm PASSWORD tadzpw

Example JCL for the 1am job on system MVS2, which does have shared DASD with the database system. This is the same as the MVS1 job, without the FTP steps:

//ZCAT EXEC PGM=HSICZCAT,
// PARM='UMDSN(TADZ.UMMVS2),DELETE'
//STEPLIB DD DISP=SHR,DSN=TADZ.V720.SHSIMOD1
// DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=CBC.SCLBDLL
//ZCATOUT DD DISP=(NEW,CATLG),DSN=TADZ.UMMVS2.ZCATOUT,
// UNIT=SYSDA,SPACE=(CYL,(10,50),RLSE),
// RECFM=VB,LRECL=27994,BLKSIZE=27998
//SYSOUT DD SYSOUT=*
//SYSERR DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//*
//DELBAD EXEC PGM=IEFBR14,COND=(0,EQ)
//ZCATOUT DD DISP=(OLD,DELETE),DSN=TADZ.UMMVS2.ZCATOUT
//*

//*

Example JCL for the 3am job on the database system:

//HSISUIMP JOB ,'Run Usage Import',CLASS=A,
// MSGCLASS=X,NOTIFY=&SYSUID
//*
// JCLLIB ORDER=(TADZ.V720INST.JCLLIB,
// TADZ.V720.SHSIPROC)
//*
//* Include JCL symbols
// INCLUDE MEMBER=HSISDB00
//*
//*******************************************************************
//* IMPORT WEEKLY ZCAT'ED USAGE FILES FOR ALL SYSTEMS
//*******************************************************************
//IMPORT PROC IMHLQ=''
//MNIMPORT EXEC HSIJMNIM,HSI='TADZ.V720',COND=(0,NE)
//UIMPDATA DD DISP=OLD,DSN=&IMHLQ..ZCATOUT
//TPARAM DD DISP=SHR,DSN=TADZ.V720INST.PARMLIB(IMTPARAM)
//*** TPARAM DATA SET CONTAINS A COPY OF THE INSTREAM TPARAM
//*** SETTINGS FROM THE MNINPUT STEP IN THE HSISUIMP JOB EG.
//* SSID=DE81
//* DSN=QXPOMU1DE81
//* DATABASE=DBTADZ
//* REPSCHEMA=SI7
//* COMMIT=1000
//*
//ARCHIVE EXEC PGM=IEBGENER,COND=(0,NE)
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
//SYSUT1 DD DISP=OLD,DSN=&IMHLQ..ZCATOUT
//ZCATOUT DD DISP=(NEW,CATLG),DSN=&IMHLQ..GDG(+1),
// UNIT=SYSDA,SPACE=(CYL,(10,50),RLSE),
// RECFM=VB,LRECL=27994,BLKSIZE=27998
//*
//DELOK EXEC PGM=IEFBR14,COND=(0,NE)
//UIMPDATA DD DISP=(OLD,DELETE),DSN=&IMHLQ..ZCATOUT
//*
// PEND
//*
//MVS1 EXEC IMPORT,IMHLQ=TADZ.UMMVS1
//MVS2 EXEC IMPORT,IMHLQ=TADZ.UMMVS2
//MVS3 EXEC IMPORT,IMHLQ=TADZ.UMMVS3
//*
//*******************************************************************
//* AGGREGATE USAGE DATA
//*******************************************************************
//RUNSTAT1 EXEC HSIJRUN,UTILID=UIMPSTA1,COND=(4,LT)
//SYSIN DD *
RUNSTATS TABLESPACE DBTADZ.VSHARE TABLE(ALL) INDEX(ALL)
RUNSTATS TABLESPACE DBTADZ.VJOBDATA TABLE(ALL) INDEX(ALL)
RUNSTATS TABLESPACE DBTADZ.VUSEMTD TABLE(ALL) INDEX(ALL)
/*
//REPMERG EXEC PGM=HSICREPM,REGION=0M,TIME=1440,COND=(4,LT)
//STEPLIB DD DSN=TADZ.V720.SHSIMOD1,DISP=SHR
// DD DSN=DB2V810.DE81.SDSNEXIT,DISP=SHR
// DD DSN=DB2.V810.SDSNLOAD,DISP=SHR
// DD DSN=CEE.SCEERUN,DISP=SHR
// DD DSN=CBC.SCLBDLL,DISP=SHR
//LOG DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//CEEDUMP DD SYSOUT=*
//APPTRACE DD SYSOUT=*
//*SQTRACE DD SYSOUT=*
//DSNAOINI DD DSN=TADZ.V720INST.PARMLIB(HSISCLI),DISP=SHR
//TPARAM DD *
SSID=DE81
DSN=QXPOMU1DE81
DBNAME=DBTADZ
SRCSCHEMA=SI7
REPSCHEMA=SI7
COMMIT=1000
/*
//RUNSTAT2 EXEC HSIJRUN,UTILID=UIMPSTA2,COND=(4,LT)
//SYSIN DD *
RUNSTATS TABLESPACE DBTADZ.VSHARE TABLE(ALL) INDEX(ALL)
/*
//*
//AGGR EXEC HSIJTLAG,HSI='TADZ.V720',COND=(4,LT)
//TPARAM DD *
SSID=DE81
DSN=QXPOMU1DE81
GKBSCHEMA=GKB7
REPSCHEMA=SI7
//*
//RUNSTAT3 EXEC HSIJRUN,UTILID=UIMPSTA3,COND=(4,LT)
//SYSIN DD *
RUNSTATS TABLESPACE DBTADZ.VSHARE TABLE(ALL) INDEX(ALL)
RUNSTATS TABLESPACE DBTADZ.VAGGR TABLE(ALL) INDEX(ALL)
/*

4.5.2 Daily Usage Import

For users who want nightly Usage Import instead of weekly, it is best to use the TADz Automation Server.

• The Usage Monitor output DSN includes a system identifier (TADZ.UM&SMF).

Note: The maximum length permitted is 26 characters, since the Date and Time stamp are appended to the output data set name. For example:

TADZ.UMMVS1.D2009295.T2158578

• On each remote LPAR that does not share DASD with the database host, the TADz Automation Server is set up to FTP the daily UM files to the database host during a set period, for example TIME(0000-0059).

• On the database host, the Automation Server is set up to:

o TIME(0100-0159): run Usage Import for the daily UM files, with job steps:

 MNIMPORT from the HSISUIMP job

 Delete or archive the UM file after import

o TIME(0200-0259): run a job that processes the RUNSTATS and Aggregation steps from HSISUIMP.

The TADz HSIJAUTO Started Task needs to be copied from the TADZ.V720.SHSIPROC data set to a data set in your JES PROCLIB concatenation. The NETRC DD also needs to be added to the JCL. Refer to the Weekly Import section above for an example of the NETRC data set.

//HSIJAUTO PROC HSI='HSI',  Product hlq
// ACDS=''
//*
//*
//STEP1 EXEC PGM=HSIAUTO,TIME=1440,REGION=0M
//STEPLIB DD DSN=&HSI..SHSIMOD1,DISP=SHR
//HSIACNTL DD DSN=&HSIINST..PARMLIB,DISP=SHR
//HSIACDS DD DSN=&ACDS,DISP=SHR
//HSIAMSG DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//OUTPUT DD SYSOUT=*
//INPUT DD UNIT=VIO,SPACE=(CYL,5)
//INTRDR DD SYSOUT=(*,INTRDR)
//NETRC DD DISP=SHR,DSN=&SYSUID..NETRC

Example: TADz Automation Server HSIAPARM member in PARMLIB on the remote LPARs.

FTP(HSISFTP1) TIME(0000-0059) DSN(TADZ.UM*.D*.T*)


Example: TADz Automation Server HSISFTP1 member in PARMLIB on the remote LPARs.

QUOTE SITE RECFM=VB LRECL=27994,BLKSIZE=27998
QUOTE SITE CYLINDERS PRIMARY=10 SECONDARY=50
MODE B
TYPE E
PUT '&DATASETNAME'
QUIT

Example: TADz Automation Server HSIAPARM member in PARMLIB on the database LPAR.

JOB(HSISIMP1) TIME(0100-0159) DSN(TADZ.UM*.D*.T*)
JOB(HSISIMP2) TIME(0200-0259) DSN(TADZ.UM.HSISIMP2)

Example: TADz Automation Server HSISIMP1 member in PARMLIB on the database LPAR.

//*****************************************************
//* IMPORT RAW USAGE DATA
//*****************************************************
//MNIMPORT EXEC HSIJMNIM,HSI='TADZ.V720'
//UIMPDATA DD DISP=OLD,DSN=&DATASETNAME
//TPARAM DD *
SSID=DE81
DSN=QXPOMU1DE81
DATABASE=DBTADZ
REPSCHEMA=SI7
COMMIT=1000
/*
//DELUM EXEC PGM=IEFBR14,COND=(0,NE)
//UIMPDATA DD DISP=(OLD,DELETE),DSN=&DATASETNAME
//*
//*****************************************************
//* TRIGGER AUTOMATION SERVER SO THAT HSISIMP2 CAN RUN
//*****************************************************
//TRIGGER EXEC PGM=IEFBR14,COND=(0,NE)
//HSISIMP2 DD DISP=(MOD,CATLG),DSN=TADZ.UM.HSISIMP2,
// UNIT=SYSDA,SPACE=(TRK,(1))


Example: TADz Automation Server HSISIMP2 member in PARMLIB on the database LPAR.

//************************************************
//* REMOVE AUTOMATION SERVER TRIGGER
//************************************************
//DELTRIG EXEC PGM=IEFBR14
//HSISIMP2 DD DISP=(MOD,DELETE),DSN=TADZ.UM.HSISIMP2,
// UNIT=SYSDA,SPACE=(TRK,(0))
//************************************************
//* AGGREGATE USAGE DATA
//************************************************
//RUNSTAT1 EXEC HSIJRUN,UTILID=UIMPSTA1,COND=(4,LT)
//SYSIN DD *
RUNSTATS TABLESPACE DBTADZ.VSHARE TABLE(ALL) INDEX(ALL)
RUNSTATS TABLESPACE DBTADZ.VJOBDATA TABLE(ALL) INDEX(ALL)
RUNSTATS TABLESPACE DBTADZ.VUSEMTD TABLE(ALL) INDEX(ALL)
/*
//REPMERG EXEC PGM=HSICREPM,REGION=0M,TIME=1440,COND=(4,LT)
//STEPLIB DD DSN=TADZ.V720.SHSIMOD1,DISP=SHR
// DD DSN=DB2V810.DE81.SDSNEXIT,DISP=SHR
// DD DSN=DB2.V810.SDSNLOAD,DISP=SHR
// DD DSN=CEE.SCEERUN,DISP=SHR
// DD DSN=CBC.SCLBDLL,DISP=SHR
//LOG DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//CEEDUMP DD SYSOUT=*
//APPTRACE DD SYSOUT=*
//*SQTRACE DD SYSOUT=*
//DSNAOINI DD DSN=TADZ.V720INST.PARMLIB(HSISCLI),DISP=SHR
//TPARAM DD *
SSID=DE81
DSN=QXPOMU1DE81
DBNAME=DBTADZ
SRCSCHEMA=SI7
REPSCHEMA=SI7
COMMIT=1000
/*
//RUNSTAT2 EXEC HSIJRUN,UTILID=UIMPSTA2,COND=(4,LT)
//SYSIN DD *
RUNSTATS TABLESPACE DBTADZ.VSHARE TABLE(ALL) INDEX(ALL)
/*
//*
//AGGR EXEC HSIJTLAG,HSI='TADZ.V720',COND=(4,LT)
//TPARAM DD *
SSID=DE81
DSN=QXPOMU1DE81
GKBSCHEMA=GKB7
REPSCHEMA=SI7
//*
//RUNSTAT3 EXEC HSIJRUN,UTILID=UIMPSTA3,COND=(4,LT)
//SYSIN DD *
RUNSTATS TABLESPACE DBTADZ.VSHARE TABLE(ALL) INDEX(ALL)
RUNSTATS TABLESPACE DBTADZ.VAGGR TABLE(ALL) INDEX(ALL)
/*


4.6 Automating the Match Engine

Inventory scanning is sometimes done only a couple of times a year and is therefore not worth automating. However, if you want to scan inventory more frequently, the process can be automated.

HSISCOMB in PARMLIB is designed to be run by the TADz Automation Server. It combines the IQ Import, Match Engine, and Load to Repository job steps.

4.7 Automating SCRT Import

For sites that have System z subcapacity licenses, the Sub-Capacity Reporting Tool (SCRT) is run every month. If this process is automated, you should arrange to have the SCRT output data imported into TADz. Otherwise, make sure the person who runs SCRT and sends the output to IBM also sends the data to the TADz Administrator, so it can be imported into TADz manually each month.

4.8 Database Housekeeping

Consult with your DBA about database housekeeping.

4.8.1 Database backups

Backing up your database on a regular basis is imperative. Each site has its own method of backing up data, but it is important that the TADz database is backed up at least monthly, if not weekly, depending on the frequency of updates to the database. If you import your Usage data weekly, it is best to back up the data weekly.

4.8.2 Database REORG for performance

For optimal performance, a DB2 REORG should be performed on a regular basis. Some sites do this weekly, others monthly.

4.8.3 TADz Usage Deletion / Summary jobs

** IMPORTANT **

TADz provides some utilities that will help maintain the databases with respect to space utilization and the Usage data.

The Usage Monitor tracks individual module Usage for each job name, job account code, and user ID for each day. When this is imported into the database, it is maintained at the same granularity, except by month instead of by day.

The data is also aggregated to the Asset level for tracking usage of Product Versions.
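As an illustration of this aggregation step, the following sketch rolls hypothetical module-level records up to product-version usage per month. All module, job, and product names here are invented for the example; in TADz the module-to-product mapping is produced by the Match Engine, not a hard-coded table.

```python
from collections import Counter

# Hypothetical module-level usage records: (module, job name, user ID, month)
records = [
    ("MODA", "PAYJOB1", "USER01", "2010-06"),
    ("MODA", "BILJOB1", "USER02", "2010-06"),
    ("MODB", "PAYJOB1", "USER01", "2010-06"),
]

# Illustrative module-to-product-version mapping (resolved by the Match
# Engine in TADz)
product_of = {"MODA": "Product X V1.2", "MODB": "Product Y V3.1"}

# Aggregate module-level detail up to product-version usage counts per month
product_usage = Counter(
    (product_of[module], month) for module, _job, _user, month in records
)
for (product, month), count in sorted(product_usage.items()):
    print(product, month, count)
```

The point of the sketch: many detail rows collapse into a handful of product-level rows, which is why the aggregated data is so much smaller than the module-level data.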

Large sites can have around 6 million modules deployed and have many thousands of jobs using them every day. Storing this amount of information requires a lot of database space and it will continue to grow unless you keep it under control.

It is the module level Usage data in particular that you need to control. The aggregated product Usage data takes less than 1% of the space that the module level data consumes. While the module level Usage data needs to be imported to determine product Usage by aggregation, you need to consider how long to keep the module level data in the database. You will probably only report on module level Usage data when you are trying to move people off a product and, therefore, two months of detailed Usage data may be adequate.

The TADz Usage Deletion job HSISUDEL has options to control both the module detail and the aggregated data. For example:

o KEEPDETAIL=2 keeps module detail usage data for the current month plus 2 months of history. Anything older is deleted.

o KEEPAGGR=12 keeps aggregated product version usage data for the current month plus 12 months of history (good for trend graphs). Anything older is deleted.

The TADz Usage Deletion job is the primary way to keep the Usage data under control. If you want to keep module level Usage data longer, but with less detail, you can use the TADz Usage Summary job HSISUSUM to replace the user ID and job name granularity with a generic job name (Batch or STC). This lets you keep module Usage counts for a long date range with minimal database size.
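A sketch of how these retention options work out in practice. The date arithmetic below is an assumption based on the descriptions above ("current month plus N months of history"), not TADz's actual implementation:

```python
from datetime import date

def oldest_month_kept(today: date, keep_months: int) -> tuple:
    """(year, month) of the oldest month retained: the current month
    plus keep_months of history, per the KEEPDETAIL/KEEPAGGR descriptions."""
    months = today.year * 12 + (today.month - 1) - keep_months
    return (months // 12, months % 12 + 1)

# KEEPDETAIL=2 run in July 2010: module detail kept from May 2010 onward
print(oldest_month_kept(date(2010, 7, 8), 2))   # (2010, 5)
# KEEPAGGR=12: aggregated data kept from July 2009 onward
print(oldest_month_kept(date(2010, 7, 8), 12))  # (2009, 7)
```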

